Learn what artificial intelligence actually is, how it’s used today, and what it may do in the future.
Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems.
Today, the term “AI” describes a wide range of technologies that power many of the services and goods we use every day – from apps that recommend TV shows to chatbots that provide customer support in real time. But do all of these really constitute artificial intelligence as most of us envision it? And if not, then why do we use the term so often?
In this article, you’ll learn more about artificial intelligence, what it actually does, and different types of it. In the end, you’ll also learn about some of its benefits and dangers and explore flexible courses that can help you expand your knowledge of AI even further.
Enroll in AI for Everyone, an online program offered by DeepLearning.AI. In just 6 hours, you'll gain foundational knowledge about AI terminology, strategy, and the workflow of machine learning projects. Your first week is free.
Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).
Although the term is commonly used to describe a range of different technologies in use today, many disagree on whether these actually constitute artificial intelligence. Instead, some argue that much of the technology used in the real world today actually constitutes highly advanced machine learning that is simply a first step toward true artificial intelligence, or artificial general intelligence (AGI).
Yet, despite the many philosophical disagreements over whether “true” intelligent machines actually exist, when most people use the term AI today, they’re referring to a suite of machine learning-powered technologies, such as ChatGPT or computer vision, that enable machines to perform tasks that previously only humans could do, such as generating written content, steering a car, or analyzing data.
Read more: The History of AI: A Timeline of Artificial Intelligence
Though the humanoid robots often associated with AI (think Star Trek: The Next Generation’s Data or Terminator’s T-800) don’t exist yet, you’ve likely interacted with machine learning-powered services or devices many times before.
At the simplest level, machine learning uses algorithms trained on data sets to create models that allow computer systems to perform tasks like making song recommendations, identifying the fastest route to a destination, or translating text from one language to another (a short code sketch after the list below shows this train-then-predict idea in miniature). Some of the most common examples of AI in use today include:
ChatGPT: Uses large language models (LLMs) to generate text in response to questions or comments posed to it.
Google Translate: Uses deep learning algorithms to translate text from one language to another.
Netflix: Uses machine learning algorithms to create personalized recommendation engines for users based on their previous viewing history.
Tesla: Uses computer vision to power self-driving features on its cars.
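For readers curious what “training a model on a data set” looks like in practice, here is a minimal, illustrative sketch using the open-source scikit-learn library. The toy viewing data and genre labels are invented for this example and have nothing to do with any real recommendation engine:

```python
# A minimal sketch of the idea described above: an algorithm is trained on a
# data set, and the resulting model makes predictions about new examples.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.neighbors import KNeighborsClassifier

# Toy "viewing history" features: [hours of sci-fi watched, hours of comedy watched]
watch_history = [[10, 1], [8, 2], [1, 9], [0, 12]]
liked_next_show = ["sci-fi", "sci-fi", "comedy", "comedy"]  # labels observed in the past

# "Training" fits the model to the historical data.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(watch_history, liked_next_show)

# The trained model can now suggest a genre for a new viewer.
print(model.predict([[7, 3]]))  # likely -> ['sci-fi']
```

Real recommendation systems use far larger data sets and more sophisticated models, but the training-then-prediction pattern is the same.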
Read more: Deep Learning vs. Machine Learning: Beginner’s Guide
The increasing accessibility of generative AI tools has made the ability to use them an in-demand skill for many tech roles. If you're interested in learning to work with AI in your career, you might consider a free, beginner-friendly online program like Google's Introduction to Generative AI.
Artificial intelligence is prevalent across many industries. Automating tasks that don't require human intervention saves money and time, and can reduce the risk of human error. Here are a couple of ways AI could be employed in different industries:
Finance industry. Fraud detection is a notable use case for AI in the finance industry. AI's capability to analyze large amounts of data enables it to detect anomalies or patterns that signal fraudulent behavior (the short sketch after this list shows the idea in miniature).
Health care industry. AI-powered robotics could support surgeries close to highly delicate organs or tissue to mitigate blood loss or risk of infection.
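As an illustration of the fraud-detection use case above, here is a small, hedged sketch of anomaly detection using scikit-learn's IsolationForest. The transaction values are made up, and real systems rely on many more features and far more data:

```python
# Illustrative sketch only: flagging unusual transactions with an
# anomaly-detection model, in the spirit of the fraud-detection example above.
from sklearn.ensemble import IsolationForest

# Each row: [transaction amount in USD, hour of day] -- invented sample data.
transactions = [[25, 13], [40, 18], [32, 12], [28, 20], [5000, 3]]

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(transactions)  # -1 marks an outlier, 1 marks normal

for tx, label in zip(transactions, labels):
    # The unusually large late-night transaction is likely to be flagged.
    print(tx, "suspicious" if label == -1 else "looks normal")
```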
Artificial general intelligence (AGI) refers to a theoretical state in which computer systems will be able to achieve or exceed human intelligence. In other words, AGI is “true” artificial intelligence, as depicted in countless science fiction novels, television shows, movies, and comics.
Researchers don’t quite agree on how we would recognize “true” artificial general intelligence if it appeared. However, the most famous approach to identifying whether a machine is intelligent is known as the Turing Test, or Imitation Game, an experiment first outlined by the influential mathematician, computer scientist, and cryptanalyst Alan Turing in a 1950 paper on computer intelligence. There, Turing described a three-player game in which a human “interrogator” communicates via text with another human and a machine and judges who composed each response. If the interrogator cannot reliably identify the human, Turing argued, the machine can be said to be intelligent [1].
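To make the structure of the game concrete, here is a toy Python sketch. The canned replies and the single-question round are invented purely for illustration; this is not a real Turing Test implementation:

```python
# Toy illustration of the imitation game's structure: an interrogator sees
# answers from two unidentified respondents and must guess which is the machine.
import random

def machine_answer(question: str) -> str:
    # Placeholder canned reply standing in for a conversational AI.
    return "I enjoy long walks and a good book."

def human_answer(question: str) -> str:
    # Placeholder canned reply standing in for the human respondent.
    return "Honestly, it depends on my mood."

def imitation_game_round(question: str) -> None:
    respondents = [("machine", machine_answer), ("human", human_answer)]
    random.shuffle(respondents)  # hide which respondent is which
    print(f"Interrogator asks: {question}")
    for label, (_, answer) in zip(("A", "B"), respondents):
        print(f"Respondent {label}: {answer(question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = "A" if respondents[0][0] == "machine" else "B"
    print("Correct guess." if guess == actual else "The machine passed this round.")

imitation_game_round("What do you do to relax?")
```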
To complicate matters, researchers and philosophers also can’t quite agree whether we’re beginning to achieve AGI, whether it’s still far off, or whether it’s impossible altogether. For example, while a recent paper from Microsoft Research and OpenAI argues that GPT-4 is an early form of AGI, many other researchers are skeptical of these claims and argue that they were just made for publicity [2, 3].
Regardless of how far we are from achieving AGI, you can assume that when someone uses the term artificial general intelligence, they’re referring to the kind of sentient computer programs and machines that are commonly found in popular science fiction.
Read more: Artificial General Intelligence vs. AI
When researching artificial intelligence, you might have come across the terms “strong” and “weak” AI. Though these terms might seem confusing, you likely already have a sense of what they mean.
Strong AI is essentially AI that is capable of human-level, general intelligence. In other words, it’s just another way to say “artificial general intelligence.”
Weak AI, meanwhile, refers to the narrow use of widely available AI technology, like machine learning or deep learning, to perform very specific tasks, such as playing chess, recommending songs, or steering cars. Also known as artificial narrow intelligence (ANI), weak AI is essentially the kind of AI we use daily.
Read more: Machine Learning vs. AI: Differences, Uses, and Benefits
As researchers attempt to build more advanced forms of artificial intelligence, they must also begin to formulate more nuanced understandings of what intelligence, and even consciousness, precisely mean. In their attempt to clarify these concepts, researchers have outlined four types of artificial intelligence.
Here’s a summary of each AI type, according to Professor Arend Hintze of Michigan State University [4]:
Reactive machines are the most basic type of artificial intelligence. Machines built in this way don’t possess any knowledge of previous events but instead only “react” to what is before them in a given moment. As a result, they can only perform certain advanced tasks within a very narrow scope, such as playing chess, and are incapable of performing tasks outside of their limited context.
Machines with limited memory possess a limited understanding of past events. They can interact more with the world around them than reactive machines can. For example, self-driving cars use a form of limited memory to make turns, observe approaching vehicles, and adjust their speed. However, machines with only limited memory cannot form a complete understanding of the world because their recall of past events is limited and only used in a narrow band of time. (A brief code sketch after the four types contrasts reactive and limited-memory behavior.)
Machines that possess a “theory of mind” represent an early form of artificial general intelligence. In addition to being able to create representations of the world, machines of this type would also have an understanding of other entities that exist within the world. This type of machine does not yet exist.
Machines with self-awareness are theoretically the most advanced type of AI and would possess an understanding of the world, of others, and of themselves. This is what most people mean when they talk about achieving AGI. Currently, this is a far-off reality.
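As a rough illustration of the difference between the first two types, here is a small, invented Python sketch: the reactive function considers only the current input, while the limited-memory class also keeps a short buffer of recent observations. The driving scenario and thresholds are made up for this example:

```python
# Conceptual sketch contrasting a reactive agent (decides only from the current
# observation) with a limited-memory agent (also consults recent observations).
from collections import deque

def reactive_brake(distance_to_obstacle_m: float) -> bool:
    # Reacts only to what is in front of it right now.
    return distance_to_obstacle_m < 10.0

class LimitedMemoryDriver:
    def __init__(self, history_size: int = 5):
        self.recent_speeds = deque(maxlen=history_size)  # short-lived memory

    def should_brake(self, distance_to_obstacle_m: float, obstacle_speed_mps: float) -> bool:
        self.recent_speeds.append(obstacle_speed_mps)
        # Uses recent history to judge whether the vehicle ahead is slowing down.
        obstacle_is_slowing = (
            len(self.recent_speeds) > 1 and self.recent_speeds[-1] < self.recent_speeds[0]
        )
        return distance_to_obstacle_m < 10.0 or obstacle_is_slowing

driver = LimitedMemoryDriver()
print(reactive_brake(8.0))              # True: obstacle is close right now
print(driver.should_brake(30.0, 20.0))  # False: far away, no history yet
print(driver.should_brake(30.0, 12.0))  # True: history shows the car ahead slowing
```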
Generative AI is a kind of artificial intelligence capable of producing original content, such as written text or images, in response to user inputs, or "prompts." Many generative AI tools are built on large language models (LLMs): complex deep learning models trained on vast amounts of data that you can interact with using everyday language rather than technical jargon.
Generative AI is becoming increasingly common in everyday life, powering tools such as ChatGPT, Google Gemini, and Microsoft Copilot. While other kinds of machine learning models are well suited to narrow, repetitive tasks, generative AI can produce unique outputs in response to user inputs, allowing it to react dynamically in real time. This makes it particularly useful for powering interactive programs like virtual assistants, chatbots, and recommendation systems.
That said, while generative AI may produce responses that make it seem like self-aware AI, the reality is that its responses are the result of statistical analysis rather than sentience.
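To make "interacting with a model using ordinary language" concrete, here is a small, illustrative sketch using the open-source Hugging Face transformers library. The tiny distilgpt2 model is chosen only so the example can run locally; it is far less capable than the commercial tools named above:

```python
# Hedged example of prompting a small open-source text-generation model.
# Requires the transformers library and a backend such as PyTorch installed;
# the model weights are downloaded the first time this runs.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```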
Read more: Generative AI Examples and How the Technology Works
AI has a range of applications with the potential to transform how we work and live. While many of these transformations are exciting, like self-driving cars, virtual assistants, or wearable devices in the healthcare industry, they also pose many challenges.
It’s a complicated picture that often summons competing images: a utopia for some, a dystopia for others. The reality is likely to be much more complex. Here are a few of the possible benefits and dangers AI may pose:
| Potential Benefits | Potential Dangers |
| --- | --- |
| Greater accuracy for certain repeatable tasks, such as assembling vehicles or computers. | Job loss due to increased automation. |
| Decreased operational costs due to greater efficiency of machines. | Potential for bias or discrimination as a result of the data set on which the AI is trained. |
| Increased personalization within digital services and products. | Possible cybersecurity concerns. |
| Improved decision-making in certain situations. | Lack of transparency over how decisions are arrived at, resulting in less than optimal solutions. |
| Ability to quickly generate new content, such as text or images. | Potential to create misinformation, as well as inadvertently violate laws and regulations. |
These are just some of the ways that AI presents both benefits and dangers to society. When using new technologies like AI, it’s best to keep a clear mind about what it is and isn’t. With great power comes great responsibility, after all.
Read more: AI Ethics: What It Is and Why It Matters
Artificial Intelligence is quickly changing the world we live in. If you’re interested in learning more about AI and how you can use it at work or in your own life, consider taking one of these courses or specializations on Coursera today:
For a quick overview of AI, take DeepLearning.AI's AI For Everyone course. There, you'll learn what AI can realistically do and not do, how to spot opportunities to apply AI to problems in your own organization, and what it feels like to build machine learning and data science projects.
To build job-ready AI skills to enhance your career, enroll in the IBM AI Foundations for Everyone Specialization. Learn foundational AI concepts, explore AI tools and services, and engage with AI environments through hands-on projects.
To learn how AI can address complex real-world problems, explore DeepLearning.AI’s AI For Good Specialization. Here, you’ll build skills combining human and machine intelligence for positive real-world impact using AI in a beginner-friendly, three-course program.
1. UMBC. “Computing Machinery and Intelligence by A. M. Turing,” https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf. Accessed December 19, 2024.
2. arXiv. “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” https://arxiv.org/abs/2303.12712. Accessed December 19, 2024.
3. Wired. “What’s AGI, and Why Are AI Experts Skeptical?,” https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained/. Accessed December 19, 2024.
4. GovTech. “Understanding the Four Types of Artificial Intelligence,” https://www.govtech.com/computing/understanding-the-four-types-of-artificial-intelligence.html. Accessed December 19, 2024.
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.