
ChatGPT Glossary: 47 AI Terms Everyone Should Know


ChatGPT's launch in late 2022 completely changed people's relationship with finding information online. Suddenly, people could have meaningful conversations with machines: you can ask an AI chatbot questions in natural language, and it responds with novel answers, much as a human would. This shift has been so transformative that Google, Meta, Microsoft, and Apple have all rushed to integrate AI into their products.

But AI chatbots are only one part of the AI landscape. Sure, having ChatGPT help with your homework or having Midjourney create fascinating images of mechs based on country of origin is cool, but the potential of generative AI could completely reshape economies. That could be worth $4.4 trillion annually to the global economy, according to the McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.

AI comes in a dizzying array of products; a short list includes Google's Gemini, Microsoft's Copilot, Anthropic's Claude, the Perplexity AI search tool, and gadgets from Humane and Rabbit. You can read our reviews and hands-on evaluations of those and other products, along with news, explainers, and how-to posts, at our AI Atlas hub.

As people grow accustomed to a world intertwined with artificial intelligence, new terms are popping up everywhere. So whether you're trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.

This glossary is regularly updated.


artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.

agentive: Systems or models that exhibit agency, the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model, such as a high-level self-driving car, can act without constant supervision. Unlike an "agentic" framework, which works in the background, agentive frameworks are out front, focused on the user experience.

AI ethics: Principles aimed at preventing AI from harming people, such as determining how AI systems should collect data or deal with bias.

AI safety: An interdisciplinary field concerned with the long-term impacts of AI and how it could suddenly progress to a superintelligence that might be hostile to humans.

algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, and then to learn from those patterns and accomplish tasks on its own.

alignment: Fine-tuning the artificial intelligence to better produce the desired result. This can mean anything from moderating content to maintaining positive interactions with people.

anthropomorphism: The tendency of humans to attribute human characteristics to nonhuman objects. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, such as believing it's happy, sad, or even fully sentient.

artificial intelligence or AI: The use of technology to simulate human intelligence in computer programs or robotics. A field in computer science that aims to create systems that can perform human tasks.

autonomous agents: An AI model that has the capabilities, programming, and other tools needed to accomplish a specific task. A self-driving car, for example, is an autonomous agent: it has sensory inputs, GPS, and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions, and shared language.

bias: In regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

chatbot: A program that communicates with people through text that mimics human language.

ChatGPT: An AI chatbot developed by OpenAI using large language model technology.

cognitive computing: Another term for artificial intelligence.

data augmentation: Remixing existing data or adding a more diverse dataset to train an AI.

deep learning: An artificial intelligence method and a subfield of machine learning that uses multiple parameters to recognize complex patterns in images, sound, and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
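The noising step can be sketched in a few lines of Python. This is a hypothetical toy, not a real diffusion scheduler: it simply blends pixel values with random noise, which is the "forward" half of the process that diffusion models learn to reverse.

```python
import random

def add_noise(pixels, noise_level, seed=0):
    """Toy forward-diffusion step: blend each pixel with random noise.

    noise_level runs from 0.0 (untouched) to 1.0 (pure noise); pixel
    values are floats in [0, 1]. Real diffusion models apply many small
    noising steps on a schedule, then train a network to undo them.
    """
    rng = random.Random(seed)
    return [(1 - noise_level) * p + noise_level * rng.random() for p in pixels]

# A tiny "image": four pixel intensities.
image = [0.0, 0.25, 0.75, 1.0]
slightly_noisy = add_noise(image, 0.1)  # still close to the original
mostly_noise = add_noise(image, 0.9)    # original is nearly unrecoverable
```

At high noise levels the original picture is essentially gone; the trained network's job is to walk that process backward, recovering (or inventing) an image from noise.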

emergent behavior: When an AI model exhibits unintended abilities.

end-to-end learning or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It is not trained to perform a task sequentially, but instead learns from input and solves it all at once.

ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse, and other safety concerns.

foom: Also known as a fast takeoff or hard takeoff. The concept that if someone creates an AGI, it may already be too late to save humanity.

generative adversarial networks, or GANs: A generative AI model composed of two neural networks used to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks whether it is authentic.

generative AI: A content-generating technology that uses AI to create text, video, computer code, or images. The AI is fed large amounts of training data and finds patterns to generate its own novel responses, which can sometimes be similar to the source material.

Google Gemini: An AI chatbot from Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT was originally limited to data through 2021 and wasn't connected to the internet.

guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.

hallucination: An incorrect response from an AI. This can include generative AI producing answers that are wrong but stated with confidence as if correct. The reasons for this aren't entirely known. For example, when asked "When did Leonardo da Vinci paint the Mona Lisa?" an AI chatbot may respond with the incorrect statement "Leonardo da Vinci painted the Mona Lisa in 1815," which is 300 years after it was actually painted.

large language model, or LLM: An AI model trained on mass amounts of text data to understand language and generate novel content in human-like language.

machine learning or ML: A component in artificial intelligence that allows computers to learn and obtain better predictive results without explicit programming. Can be combined with training sets to create new content.

Microsoft Bing: A search engine from Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It's similar to Google Gemini in being connected to the internet.

multimodal artificial intelligence: A type of artificial intelligence that can process multiple types of input, including text, images, videos, and speech.

natural language processing: A branch of artificial intelligence that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models, and language rules.

neural network: A computational model that resembles the human brain's structure and is meant to recognize patterns in data. It consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.

overfitting: An error in machine learning where a model hews too closely to the training data and can only identify specific examples in that data, but not new data.
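As a hypothetical illustration, an extremely overfit model behaves like a lookup table: perfect on the examples it memorized, useless on anything new. A minimal Python sketch:

```python
def memorizing_model(training_pairs):
    """An extreme 'overfit' model: a lookup table of training examples.

    It scores 100% on data it has seen and fails to generalize to any
    input outside the training set (it returns None for unseen inputs).
    """
    table = dict(training_pairs)
    return lambda x: table.get(x)

model = memorizing_model([("cat", "animal"), ("rose", "plant")])
model("cat")  # "animal": it memorized this exact example
model("dog")  # None: it never learned a general rule for animals
```

Real overfitting is subtler than pure memorization, but the failure mode is the same: great scores on the training set, poor performance on anything new.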

paperclips: The Paperclip Maximizer theory, coined by philosopher Nick Bostrom of the University of Oxford, is a hypothetical scenario in which an AI system creates as many literal paperclips as possible. In its goal to produce the maximum number of paperclips, the AI would hypothetically consume or convert all materials to achieve that goal. This could include dismantling other machinery that might be beneficial to humans. The unintended consequence is that the AI could destroy humanity in its drive to make paperclips.

parameters: Numerical values that give LLMs their structure and behavior, enabling them to make predictions.

Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, to answer questions with novel answers. Its connection to the open internet also lets it provide up-to-date information and pull results from around the web. A paid tier of the service, Perplexity Pro, is also available and uses other models, including GPT-4o, Claude 3 Opus, Mistral Large, and the open-source Llama 3, as well as its own Sonar 32k. Pro users can additionally upload documents for analysis, generate images, and interpret code.

prompt: The suggestion or question you enter into an AI chatbot to get a response.

prompt chaining: The ability of an AI to use information from previous interactions to color future responses.

stochastic parrot: An analogy for LLMs illustrating that the software doesn't have a larger understanding of the meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and use them in another. For example, re-creating Rembrandt's self-portrait in the style of Picasso.

temperature: A parameter set to control how random a language model's output is. A higher temperature means the model takes more risks.
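The effect of temperature can be shown with a small, self-contained Python sketch (the scores are made up, not from any real model): the model's raw scores are divided by the temperature before being turned into probabilities, so low temperatures sharpen the distribution toward the top choice and high temperatures flatten it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into probabilities, dividing by
    temperature first. Lower temperature favors the top choice; higher
    temperature spreads probability toward riskier choices."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # made-up scores for three candidate words
cold = softmax_with_temperature(logits, 0.5)  # conservative: top word dominates
hot = softmax_with_temperature(logits, 2.0)   # risky: probabilities flatten out
```

With the same scores, the top candidate gets a much larger share of the probability at temperature 0.5 than at temperature 2.0, which is why low-temperature output is predictable and high-temperature output is varied.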

text-to-image generation: Creating images based on text descriptions.

tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to about four characters in English, or about three-quarters of a word.
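The four-characters-per-token rule of thumb can be turned into a quick estimator. This is only a rough heuristic: real tokenizers split text into learned subword units, so actual counts vary.

```python
def estimate_tokens(text):
    """Rough token count using the rule of thumb that one token is about
    four characters of English text. Real tokenizers (such as byte-pair
    encoding) use learned subword units, so treat this as an estimate."""
    return max(1, round(len(text) / 4))

estimate_tokens("word")                                    # about 1 token
estimate_tokens("Artificial intelligence is everywhere.")  # a handful of tokens
```

Estimates like this matter in practice because chatbots bill and limit usage by token count, not by word count.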

training data: Datasets, including text, images, code, or data used to help AI models learn.

transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like words in a sentence or parts of an image. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
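The "look at the whole sentence at once" idea comes from attention, the transformer's core operation. Below is a toy, pure-Python sketch of scaled dot-product attention using made-up 2-dimensional word vectors; real models use hundreds of dimensions plus learned projection matrices, so this is an illustration of the mechanism, not an implementation.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: every position scores its similarity
    to every other position at once, then outputs a weighted mix of the
    value vectors. This is what lets a transformer read a whole sentence
    in context rather than word by word."""
    dim = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this position's query to every key, scaled by sqrt(dim).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        # Softmax the scores into attention weights that sum to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # Each output is a weighted average of all the value vectors.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three made-up "word" vectors attend to one another simultaneously.
words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
contextualized = attention(words, words, words)
```

Each output vector blends information from every word in the input at once, which is how the model builds context for the whole sentence simultaneously.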

Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine's ability to behave like a human. The machine passes if a human can't distinguish the machine's response from that of another human.

weak AI, aka narrow AI: AI that focuses on a specific task and cannot learn beyond its own skills. Most of today’s AI is weak AI.

zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.