The History of AI: A Timeline of Artificial Intelligence

Written by Coursera Staff

In recent years, the field of artificial intelligence (AI) has undergone rapid transformation. Learn more about its development from the 1950s to the present.

AI technologies now operate far faster than humans can and generate once unthinkable creative output, including text, images, and video, to name just a few of the developments that have taken place.

The speed at which AI continues to expand is unprecedented, and to appreciate how we got to this present moment, it’s worthwhile to understand how it first began. AI has a long history stretching back to the 1950s, with significant milestones in nearly every decade. In this article, we’ll review some of the major events that occurred along the AI timeline.

The beginnings of AI: 1950s

In the 1950s, computing machines essentially functioned as large-scale calculators. In fact, when organizations like NASA needed the answer to a specific calculation, such as the trajectory of a rocket launch, they typically turned to human “computers”: teams of women tasked with solving those complex equations [1].

Long before computing machines became the modern devices they are today, a mathematician and computer scientist envisioned the possibility of artificial intelligence. This is where AI's origins really begin. 

Alan Turing

At a time when computing power was still largely reliant on human brains, the British mathematician Alan Turing imagined a machine capable of advancing far past its original programming. To Turing, a computing machine would initially be coded to work according to that program but could expand beyond its original functions.  

At the time, Turing lacked the technology to prove his theory because computing machines had not advanced that far, but he’s credited with conceptualizing artificial intelligence before the field had a name. He also devised a way to assess whether a machine can think on par with a human, which he called “the imitation game” and which is now better known as “the Turing test.”

Dartmouth conference

During the summer of 1956, Dartmouth College mathematics professor John McCarthy invited a small group of researchers from various disciplines to participate in a summer-long workshop focused on investigating the possibility of “thinking machines.”

The group believed, “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” [2]. Due to the conversations and work they undertook that summer, they are largely credited with founding the field of artificial intelligence.

John McCarthy

During that summer conference, held two years after Turing’s death, McCarthy coined the term that would come to define the study of human-like machines. In outlining the purpose of the workshop, he described its subject as “artificial intelligence,” and the name stuck.

Laying the groundwork: 1960s-1970s

The early excitement that came out of the Dartmouth Conference grew over the next two decades, with early signs of progress coming in the form of a realistic chatbot and other inventions. 

ELIZA

Created by the MIT computer scientist Joseph Weizenbaum in 1966, ELIZA is widely considered the first chatbot. It was intended to simulate a therapist by turning users’ statements back into questions that prompted further conversation, a technique modeled on Rogerian psychotherapy.

Weizenbaum believed this rather rudimentary back-and-forth would demonstrate how simplistic machine intelligence still was. Instead, many users came to believe they were talking to a human professional. In a research paper, Weizenbaum explained, “Some subjects have been very hard to convince that ELIZA…is not human.”
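To get a feel for how this worked, here is a minimal Python sketch of the same idea: pattern matching plus pronoun reflection. The rules, reflections, and wording below are illustrative inventions for this article, not Weizenbaum’s original DOCTOR script.

```python
import re

# Illustrative reflection table and rules; not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    # Turn the user's statement back into a question, as ELIZA did.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```

Even a handful of rules like these can feel surprisingly conversational, which helps explain why ELIZA’s users were so easily convinced.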

Shakey the Robot

Between 1966 and 1972, the Artificial Intelligence Center at the Stanford Research Institute (SRI) developed Shakey the Robot, a mobile robot system equipped with sensors and a TV camera, which it used to navigate different environments. The objective in creating Shakey was “to develop concepts and techniques in artificial intelligence [that enabled] an automaton to function independently in realistic environments,” according to a paper SRI later published [3].

While Shakey’s abilities were rather crude compared to today’s developments, the robot helped advance elements in AI, including “visual analysis, route finding, and object manipulation” [4].

American Association for Artificial Intelligence founded

After the Dartmouth Conference in the 1950s, AI research began springing up at venerable institutions like MIT, Stanford, and Carnegie Mellon. The instrumental figures behind that work needed opportunities to share information, ideas, and discoveries. To that end, the International Joint Conference on AI was held in 1977 and again in 1979, but a more cohesive society had yet to arise.  

The American Association for Artificial Intelligence was formed in 1979 to fill that gap. The organization focused on establishing a journal in the field, holding workshops, and planning an annual conference. The society has since evolved into the Association for the Advancement of Artificial Intelligence (AAAI) and is “dedicated to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines” [5].

AI winter

In 1973, the applied mathematician Sir James Lighthill published a critical report on academic AI research, claiming that researchers had essentially over-promised and under-delivered when it came to the potential intelligence of machines. His condemnation resulted in stark funding cuts.

The period between the late 1970s and early 1990s became known as an “AI winter” (a term first used in 1984), referring to the gap between expectations for AI and the technology’s shortcomings.

Early AI excitement quiets: 1980s-1990s

The AI winter that began in the 1970s continued throughout much of the following two decades, despite a brief resurgence in the early 1980s. It wasn’t until the progress of the late 1990s that the field gained more R&D funding to make substantial leaps forward. 

First driverless car 

Ernst Dickmanns, a scientist working in Germany, invented the first self-driving car in 1986. Technically a Mercedes van outfitted with a computer system and sensors to read the environment, the vehicle could drive only on roads without other cars or passengers.

Deep Blue

In 1996, IBM had its computer system Deep Blue, a chess-playing program, compete against then-world chess champion Garry Kasparov in a six-game match. Deep Blue won only one of the six games, but the following year it won the rematch. In fact, it took only 19 moves to win the final game.

Deep Blue didn’t have the functionality of today’s generative AI, but it could process information far faster than the human brain. In one second, it could evaluate 200 million potential chess positions.

AI growth: 2000-2019

With renewed interest in AI, the field experienced significant growth beginning in 2000. 

Kismet 

You can trace the research for Kismet, a “social robot” capable of identifying and simulating human emotions, back to 1997, but the project came to fruition in 2000. Created in MIT’s Artificial Intelligence Laboratory and helmed by Dr. Cynthia Breazeal, Kismet contained sensors, a microphone, and programming that outlined “human emotion processes.” All of this helped the robot read and mimic a range of feelings. 

"I think people are often afraid that technology is making us less human,” Breazeal told MIT News in 2001. “Kismet is a counterpoint to that—it really celebrates our humanity. This is a robot that thrives on social interactions” [6].

NASA rovers

Mars passed unusually close to Earth in 2003, and NASA took advantage of that shorter journey by sending two rovers, named Spirit and Opportunity, to the red planet; both landed in early 2004. Each was equipped with AI that helped it traverse Mars’ difficult, rocky terrain and make decisions in real time rather than relying on human assistance to do so.

IBM Watson

Many years after IBM’s Deep Blue program beat the world chess champion, the company created another competitive computer system in 2011 that would go on to play the hit US quiz show Jeopardy!. In the lead-up to its televised debut, Watson, built on IBM’s DeepQA technology, was fed data from encyclopedias and from across the internet.

Watson was designed to understand natural language questions and respond accordingly, an ability it used to beat two of the show’s most formidable all-time champions, Ken Jennings and Brad Rutter.

Siri and Alexa 

During a presentation about its iPhone product in 2011, Apple showcased a new feature: a virtual assistant named Siri. Three years later, Amazon released its proprietary virtual assistant named Alexa. Both had natural language processing capabilities that could understand a spoken question and respond with an answer. 

Yet they still have limitations. Known as “command-and-control systems,” Siri and Alexa are programmed to understand a lengthy list of requests but cannot answer anything that falls outside their purview.

Geoffrey Hinton and neural networks

The computer scientist Geoffrey Hinton began exploring the idea of neural networks (AI systems built to process data in a manner loosely modeled on the human brain) while working on his PhD in the 1970s. But it wasn’t until 2012, when he and two of his graduate students won the ImageNet image-recognition competition with a deep neural network, that the tech industry saw how far neural networks had progressed.

Hinton’s work on neural networks and deep learning (the process by which an AI system learns from vast amounts of data to make accurate predictions) has been foundational to AI capabilities such as natural language processing and speech recognition. The excitement around his work led him to join Google in 2013. He eventually resigned in 2023 so that he could speak more freely about the dangers of creating artificial general intelligence.
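To make the idea concrete, here is a small, illustrative NumPy sketch of what a neural network computes: layers of weighted sums passed through nonlinear functions. The layer sizes and random weights are assumptions chosen for demonstration; in deep learning, those weights are adjusted from large amounts of data (via gradient descent) rather than left random.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 4 inputs -> 8 hidden units -> 3 output scores.
# Sizes and random weights are purely illustrative.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def relu(x):
    return np.maximum(0.0, x)      # simple nonlinearity

def softmax(x):
    e = np.exp(x - x.max())        # numerically stable exponentials
    return e / e.sum()

def forward(x):
    # One forward pass: weighted sums followed by nonlinearities,
    # loosely mimicking layers of interconnected "neurons."
    hidden = relu(x @ W1 + b1)
    return softmax(hidden @ W2 + b2)

x = rng.normal(size=4)             # stand-in for pixel or audio features
print(forward(x))                  # three probabilities that sum to 1
```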

Sophia citizenship 

Robotics made a major leap forward from the early days of Kismet when the Hong Kong-based company Hanson Robotics created Sophia in 2016, a “human-like robot” capable of facial expressions, jokes, and conversation. Thanks to her innovative AI and ability to interface with humans, Sophia became a worldwide phenomenon and regularly appeared on talk shows, including late-night programs like The Tonight Show.

Complicating matters, Saudi Arabia granted Sophia citizenship in 2017, making her the first artificially intelligent being to be given that right. The move generated significant criticism among Saudi Arabian women, who lacked certain rights that Sophia now held.

AlphaGo

The ancient game of Go is considered straightforward to learn but incredibly difficult, bordering on impossible, for any computer system to master, given the vast number of potential positions. It’s “a googol times more complex than chess” [7]. Despite that, AlphaGo, an artificial intelligence program created by the AI research lab Google DeepMind, beat Lee Sedol, one of the best players in the world, in 2016.

AlphaGo combines neural networks with advanced search algorithms and was trained to play Go using a method called reinforcement learning, strengthening its abilities over the millions of games it played against itself. When it bested Sedol, it proved that AI could tackle problems once considered insurmountable.
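To illustrate the self-play idea on a deliberately tiny scale, the sketch below uses simple tabular reinforcement learning on a toy game (a 10-stick version of Nim). It is only an analogy for the training loop; AlphaGo’s actual method pairs deep neural networks with Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Toy self-play setup: 10 sticks, players alternately take 1-3,
# and whoever takes the last stick wins.
Q = defaultdict(float)               # Q[(sticks_left, action)] -> estimated value
ALPHA, EPSILON, GAMES = 0.1, 0.2, 50_000

def choose(sticks):
    actions = [a for a in (1, 2, 3) if a <= sticks]
    if random.random() < EPSILON:                        # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(sticks, a)])    # otherwise exploit

for _ in range(GAMES):
    sticks, history = 10, []
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))                 # remember every move
        sticks -= action
    reward = 1.0                                         # the last mover won
    for state, action in reversed(history):              # credit moves, alternating sides
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

best = {s: max([a for a in (1, 2, 3) if a <= s], key=lambda a: Q[(s, a)])
        for s in range(1, 11)}
print(best)  # typically learns to leave the opponent a multiple of 4 sticks
```

The agent improves with no human examples at all: the only feedback is whether a completed game was won or lost, which is the core of the self-play approach.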

AI surge: 2020-present

The AI surge in recent years has largely come about thanks to developments in generative AI, or the ability of AI models to generate text, images, and videos in response to text prompts. Unlike past systems that were coded to respond to a set list of inquiries, generative AI models learn from vast collections of materials (documents, photos, and more) drawn from across the internet.

OpenAI and GPT-3

The AI research company OpenAI built a generative pre-trained transformer (GPT) architecture that became the foundation for its early language models GPT-1 and GPT-2, which were trained on large volumes of internet text. Even with that amount of training data, their ability to generate distinctive text responses was limited.

Instead, it was the large language model (LLM) GPT-3 that created a growing buzz when it was released in 2020 and signaled a major development in AI. GPT-3 has 175 billion parameters, far exceeding the 1.5 billion of GPT-2.

DALL-E 

An OpenAI creation released in 2021, DALL-E is a text-to-image model. When users prompt DALL-E with natural language text, the program responds by generating realistic, editable images. The first iteration of DALL-E used a 12-billion-parameter version of OpenAI’s GPT-3 model.

ChatGPT released 

In 2022, OpenAI released the AI chatbot ChatGPT, which interacted with users in a far more realistic way than previous chatbots thanks to its foundation in OpenAI’s GPT-3.5 models, which were trained on vast amounts of text to improve their natural language abilities.

Users prompt ChatGPT for all kinds of help, such as writing code or resumes, beating writer’s block, or conducting research. Unlike earlier chatbots, it can also ask follow-up questions and recognize inappropriate prompts.
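The same kind of prompting can also be done programmatically. The snippet below is a minimal sketch that assumes the openai Python SDK (pip install openai) and an OPENAI_API_KEY environment variable; the model name is a placeholder, and details may vary with the SDK version you use.

```python
# Minimal sketch of prompting a GPT-style chat model via the openai SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Suggest three opening lines for a cover letter."},
    ],
)
print(response.choices[0].message.content)
```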

Keep reading: How to Write ChatGPT Prompts: Your Guide

Generative AI grows

2023 was a milestone year for generative AI. Not only did OpenAI release GPT-4, which again built on its predecessor’s capabilities, but Microsoft integrated ChatGPT into its Bing search engine and Google released its own chatbot, Bard.

GPT-4 generates far more nuanced and creative responses than its predecessors and can handle an increasingly vast array of tasks, such as passing a simulated bar exam.

Learn more on Coursera

For a quick, one-hour introduction to generative AI, consider enrolling in Google Cloud’s Introduction to Generative AI. Learn what it is, how it’s used, and why it is different from other machine learning methods.

To get deeper into generative AI, you can take DeepLearning.AI’s Generative AI with Large Language Models course and learn the steps of an LLM-based generative AI lifecycle. This course is best if you already have some experience coding in Python and understand the basics of machine learning.

Article sources

1. Britannica. “Early business machines.” https://www.britannica.com/technology/computer/Early-business-machines. Accessed October 25, 2024.


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.