This article provides an artificial intelligence glossary of essential AI vocabulary and definitions to help readers understand key concepts in the field.
As you embark on a career in artificial intelligence (AI), it's important to become familiar with common industry terms and concepts. This list of AI terms and their definitions provides an overview of key topics in AI and machine learning.
You can use this AI terminology to build your vocabulary when studying for a Professional Certificate, drafting a resume, or interviewing for an AI job. Doing so can make it easier to present your qualifications to employers and have meaningful conversations about AI with others in this field.
AI terminology encompasses the specialized vocabulary of artificial intelligence, describing the technologies, processes, and applications that enable machines to mimic human intelligence. As AI continues to transform sectors such as healthcare, finance, and manufacturing, understanding these terms is crucial for leveraging AI's full potential and staying current with emerging trends.
AI stands for artificial intelligence, the simulation of human intelligence processes by machines or computer systems. AI aims to mimic, and in some tasks surpass, human capabilities such as communication, learning, and decision-making.
AI ethics refers to the issues that AI stakeholders such as engineers and government officials must consider to ensure that the technology is developed and used responsibly. This means adopting and implementing systems that support a safe, secure, unbiased, and environmentally friendly approach to artificial intelligence.
An algorithm is a set of instructions or rules to follow in order to complete a specific task. Algorithms can be particularly useful when you’re working with big data or machine learning. Data analysts may use algorithms to organize or analyze data, while data scientists may use algorithms to make predictions or build models.
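To make the definition concrete, here is a classic example of an algorithm, binary search, which finds an item in a sorted list by repeatedly halving the search range. The function name and data are illustrative, not from any particular library.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # check the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target: # target must be in the upper half
            low = mid + 1
        else:                            # target must be in the lower half
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 23], 12))  # → 3
```

Each step of the procedure is explicit and repeatable, which is exactly what makes it an algorithm rather than an ad hoc calculation.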
An API, or application programming interface, is a set of protocols that determines how two software applications will interact with each other. APIs are typically used from code written in programming languages such as Python, C++, or JavaScript.
Big data refers to the large data sets that can be studied to reveal patterns and trends to support business decisions. It’s called “big” data because organizations can now gather massive amounts of complex data using data collection tools and systems. Big data can be collected very quickly and stored in a variety of formats.
A chatbot is a software application that is designed to imitate human conversation through text or voice commands.
Cognitive computing is often used interchangeably with AI. It's a computerized model that focuses on mimicking human thought processes such as understanding natural language, pattern recognition, and learning. Marketing teams sometimes use this term to eliminate the sci-fi mystique of AI.
Computer vision is an interdisciplinary field of science and technology that focuses on how computers can gain understanding from images and videos. For AI engineers, computer vision allows them to automate activities that the human visual system typically performs.
Data mining is the process of closely examining data to identify patterns and glean insights. Data mining is a central aspect of data analytics; the insights you find during the mining process will inform your business recommendations.
Data science is an interdisciplinary field of technology that uses algorithms and processes to gather and analyze large amounts of data to uncover patterns and insights that inform business decisions.
Deep learning is a machine learning technique that layers algorithms and computing units, or neurons, into what is called an artificial neural network (ANN). Unlike traditional machine learning, deep learning algorithms can improve their results through repetition without human intervention. These deep neural networks take inspiration from the structure of the human brain.
Emergent behavior, also called emergence, is when an AI system shows unpredictable or unintended capabilities that only occur when individual parts interact as a wider whole.
Generative AI is a type of technology that uses AI to create content, including text, video, code, and images. A generative AI system is trained using large amounts of data so that it can find patterns for generating new content.
Guardrails are mechanisms and frameworks designed to ensure that AI systems operate within ethical, legal, and technical boundaries. They prevent AI from causing harm, making biased decisions, or being misused.
Hallucination refers to an incorrect response from an AI system, or false information in an output that is presented as factual information.
Hyperparameters are configuration settings, such as a learning rate or the number of layers in a network, that are chosen before training and control the learning process. They are distinct from model parameters, whose values the learning algorithm itself learns from data.
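A minimal sketch of the distinction: below, the learning rate is a hyperparameter chosen up front, while the weight `w` is the model parameter the algorithm learns. The toy objective function is an assumption made for illustration.

```python
def gradient_descent(learning_rate, steps=50):
    """Minimize f(w) = (w - 3)^2 with plain gradient descent.

    learning_rate is a hyperparameter set before training;
    w is the model parameter whose value is learned.
    """
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)          # derivative of (w - 3)^2
        w -= learning_rate * grad   # step downhill
    return w

print(round(gradient_descent(0.1), 4))  # converges near the optimum w = 3
print(round(gradient_descent(1.1), 4))  # too large a learning rate diverges
```

Choosing a good learning rate (and other hyperparameters) is itself a major part of practical machine learning work, often called hyperparameter tuning.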
Image recognition is the process of identifying an object, person, place, or text in an image or video.
A large language model (LLM) is an AI model that has been trained on large amounts of text so that it can understand natural language and generate human-like text.
Limited memory is a type of AI system that receives knowledge from real-time events and stores it in a database to make better predictions.
Machine learning is a subset of AI in which algorithms learn patterns from data. With machine learning, models can improve over time, becoming increasingly accurate when making predictions or classifications. The field focuses on developing algorithms and models that learn from data and predict trends and behaviors with minimal human assistance.
Natural language processing (NLP) is a type of AI that enables computers to understand spoken and written human language. NLP enables features like text and speech recognition on devices.
A neural network is a deep learning technique designed to resemble the structure of the human brain. It requires large data sets to perform calculations and create outputs, which enables features like speech and vision recognition.
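The building block of such a network is a single artificial neuron: a weighted sum of inputs plus a bias, passed through an activation function. The inputs and weights below are made-up numbers for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation squashes z into (0, 1)

# Two inputs, two learned weights, one bias (all values hypothetical)
print(round(neuron([0.5, 0.8], [0.4, -0.2], 0.1), 4))
```

A full neural network stacks many such neurons into layers and adjusts the weights and biases during training, which is what the large data sets mentioned above are used for.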
Overfitting occurs in machine learning when an algorithm fits its training data too closely, memorizing specific examples rather than learning general patterns, which renders it unable to make accurate predictions on new data.
Pattern recognition is the method of using computer algorithms to analyze, detect, and label regularities in data. This informs how the data gets classified into different categories.
Predictive analytics is a type of analytics that uses technology to predict what will happen in a specific time frame based on historical data and patterns.
Prescriptive analytics is a type of analytics that uses technology to analyze data for factors such as possible situations and scenarios, past and present performance, and other resources to help organizations make better strategic decisions.
A prompt is an input that a user feeds to an AI system in order to get a desired result or output.
Quantum computing is the process of using quantum-mechanical phenomena such as entanglement and superposition to perform calculations. Quantum machine learning runs machine learning algorithms on quantum computers, which can perform certain computations much faster than classical computers.
Reinforcement learning is a type of machine learning in which an agent learns by interacting with its environment, receiving rewards for desirable actions and penalties for undesirable ones. This type of machine learning may be used to develop autonomous vehicles. Common algorithms are temporal difference, deep adversarial networks, and Q-learning.
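The reward-and-penalty loop can be sketched with tabular Q-learning, one of the algorithms named above, on a made-up five-cell corridor where the agent earns a reward for reaching the rightmost cell. The environment and all constants are illustrative assumptions.

```python
import random

# A 5-cell corridor: the agent starts in cell 0; reaching cell 4 earns reward 1.
# Actions: 0 = move left, 1 = move right.
random.seed(0)
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):                       # training episodes
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:      # occasionally explore at random
            action = random.randrange(n_actions)
        else:                              # otherwise act greedily
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # the learned policy moves right, toward the reward
```

No one ever tells the agent the answer; the policy emerges purely from the rewards it experiences, which is the defining trait of reinforcement learning.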
Also known as opinion mining, sentiment analysis is the process of using AI to analyze the tone and opinion of a given text.
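As a toy illustration of the idea, the sketch below scores text against small positive and negative word lists. Real sentiment systems use trained models and far larger lexicons; the word lists and function name here are assumptions for demonstration only.

```python
# Tiny lexicon-based sentiment scorer (illustrative word lists)
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def sentiment(text):
    words = text.lower().replace(".", " ").replace(",", " ").split()
    # Count positive hits minus negative hits to get an overall tone
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it is excellent"))  # → positive
print(sentiment("The service was terrible"))              # → negative
```

Modern approaches replace the hand-built lexicon with a model trained on labeled examples, but the goal, mapping text to a tone or opinion, is the same.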
Structured data is data that is defined and searchable, typically organized into a consistent format such as rows and columns. Because of its tidy formatting, structured data is usually easier to analyze than unstructured data. Examples include phone numbers, dates, and product SKUs.
Supervised learning is a type of machine learning that learns from labeled historical input and output data. It’s “supervised” because you are feeding it labeled information. This type of machine learning may be used to predict real estate prices or find disease risk factors. Common algorithms used during supervised learning are neural networks, decision trees, linear regression, and support vector machines.
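Linear regression, one of the algorithms listed above, can be sketched end to end with closed-form least squares. The labeled examples below (square footage in hundreds versus price in thousands) are fabricated for illustration.

```python
# Labeled training data: inputs xs with known outputs ys
xs = [8.0, 10.0, 12.0, 15.0, 20.0]
ys = [160.0, 200.0, 240.0, 300.0, 400.0]  # here, exactly y = 20x

# Closed-form least-squares fit of y ≈ slope * x + intercept
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    return slope * x + intercept

print(predict(11.0))  # → 220.0, a prediction for an unseen input
```

The "supervision" is the `ys` list: because every training input comes with a known answer, the model can measure and reduce its error.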
A token is a basic unit of text that an LLM uses to understand and generate language. A token may be an entire word or parts of a word.
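A toy word-level tokenizer shows the idea; note that production LLMs use subword schemes such as byte-pair encoding, so a single word may map to several tokens. The regular expression below is a simplification.

```python
import re

def tokenize(text):
    # Split runs of word characters from individual punctuation marks;
    # lowercasing keeps the vocabulary smaller.
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenize("Tokenization isn't magic!"))
# → ['tokenization', 'isn', "'", 't', 'magic', '!']
```

Even this crude splitter shows why token counts differ from word counts: the contraction "isn't" becomes three tokens.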
Training data is the information or examples given to an AI system to enable it to learn, find patterns, and create new content.
Transfer learning is a machine learning technique in which knowledge a model has learned from one task is reused to improve performance on a new, related task.
The Turing test was created by computer scientist Alan Turing to evaluate a machine's ability to exhibit intelligence comparable to that of a human, especially in language and behavior. During the test, a human evaluator judges conversations between a human and a machine. If the evaluator cannot reliably distinguish the machine's responses from the human's, the machine passes the Turing test.
Unstructured data is data that is not organized in any apparent way. In order to analyze unstructured data, you’ll typically need to implement some type of structured organization.
Unsupervised learning is a machine learning type that looks for data patterns. Unlike supervised learning, unsupervised learning doesn’t learn from labeled data. This type of machine learning is often used to develop predictive models and to create clusters. For example, you can use unsupervised learning to group customers based on purchase behavior, and then make product recommendations based on the purchasing patterns of similar customers. Hidden Markov models, k-means, hierarchical clustering, and Gaussian mixture models are common algorithms used during unsupervised learning.
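The customer-grouping example above can be sketched with k-means clustering, one of the algorithms just listed. The 2-D points (standing in for, say, monthly visits and average spend) are fabricated, and the centroid initialization is deliberately naive; real implementations choose starting centroids more carefully.

```python
# Toy k-means (k = 2) on made-up, unlabeled 2-D customer data
points = [(1, 2), (2, 1), (1, 1), (8, 9), (9, 8), (9, 9)]
centroids = [points[0], points[-1]]  # naive initialization

for _ in range(10):  # alternate assignment and update steps
    clusters = [[] for _ in centroids]
    for p in points:
        # assign each point to its nearest centroid (squared distance)
        dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
        clusters[dists.index(min(dists))].append(p)
    # move each centroid to the mean of its assigned points
    centroids = [(sum(p[0] for p in c) / len(c),
                  sum(p[1] for p in c) / len(c)) for c in clusters]

print(clusters)  # two groups emerge without any labels being provided
```

Unlike the supervised example, no correct answers are supplied: the algorithm discovers the two customer groups purely from the structure of the data.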
Voice recognition, also called speech recognition, is a method of human-computer interaction in which computers listen to and interpret human dictation (speech) and produce written or spoken outputs. Assistants such as Apple's Siri and Amazon's Alexa rely on voice recognition.
Step into the innovative world of artificial intelligence with courses designed to enhance your expertise in AI and machine learning. Whether just starting or looking to advance your skills, these courses offer the critical knowledge and tools you need to excel. Take advantage of the opportunity to master AI technologies reshaping industries globally. Explore available AI courses and begin your journey toward becoming an AI expert. Equip yourself to innovate, solve complex challenges, and lead in the era of AI.
Get started today with IBM's AI Foundations for Everyone Specialization, where you'll develop a firm understanding of what is AI, its applications and use cases across various industries. Or, consider Generative AI for Everyone, a course from DeepLearning.AI where you'll learn how to think through the lifecycle of a generative AI project, from conception to launch, including how to build effective prompts.
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.