When Was AI Invented? The History of Artificial Intelligence
By Liz Fujiwara • Nov 18, 2025
Artificial intelligence (AI) was formally proposed as a scientific field in 1956 during the Dartmouth Conference, led by John McCarthy. This event, often referenced when people ask when artificial intelligence was invented, is seen as the birth of AI. In this article, we’ll explore the journey of AI from its early concepts in myths and philosophy to modern breakthroughs. We’ll also look at how early theoretical ideas shaped the first wave of AI research and how ongoing innovation has turned AI into one of today’s most influential technologies. By tracing key milestones, influential researchers, and major technological shifts, this overview provides a clear picture of how AI developed from a speculative idea into a central part of modern computing.
Key Takeaways
Early concepts of AI date back to ancient myths and philosophical inquiries, laying the foundation for modern artificial intelligence.
The Dartmouth Conference of 1956 is recognized as the formal establishment of AI as an academic discipline, prompting growth and innovation in the field.
AI has undergone cycles of optimism and setbacks, known as AI winters, but recent advancements in deep learning and big data have renewed interest and driven wider integration of AI in everyday technologies.
Early Concepts of Artificial Intelligence

The exploration of artificial intelligence concepts can be traced back to ancient myths and philosophical discussions about the nature of intelligence. Humans have imagined building thinking machines since ancient times, and this idea has persisted throughout history. These early myths and ideas laid the foundational belief in the possibility of intelligent machines, influencing future developments in AI.
Early mechanical inventions demonstrated humanity’s longstanding fascination with automation and intelligent behavior. Although primitive by today’s standards, these inventions were important in shaping the future of artificial intelligence and marked early milestones in AI history.
Ancient Myths and Automata
In the rich tapestry of Greek mythology, we find some of the earliest imaginings of artificial beings. Talos, a bronze automaton created to protect Crete, and the statue sculpted by Pygmalion that came to life reflect early human interest in creating thinking machines. These stories hint at the concept of automata and mechanical devices long before modern artificial intelligence.
These myths illustrate human beings’ ongoing aspiration to create humanoid robots and intelligent systems. They set the stage for the scientific and technological explorations that would follow, laying the groundwork for the development of machine intelligence and showing how a single idea can inspire innovation.
Philosophical Foundations
Philosophical inquiries played a crucial role in the early conceptualization of AI and knowledge representation. Thomas Hobbes believed that human reasoning could be expressed through mathematical calculations, while René Descartes speculated that human thought could potentially be replicated by machines.
These ideas pushed the boundaries of how we understand human intelligence and its replication. Philosophers laid the groundwork for the belief that machines could one day replicate human emotions, logical reasoning, and aspects of human cognition.
Early Mechanical Inventions
The transition from myth and philosophy to tangible inventions marked a significant step in AI’s history. Leonardo da Vinci designed a mechanical knight in the late 15th century, demonstrating early concepts of automated machines and serving as a precursor to later robotics.
Jacques de Vaucanson’s mechanical duck, which appeared to eat and digest food, further showcased advancements in automation. These early mechanical inventions laid the groundwork for future automation concepts and inspired later developments in machine intelligence.
The Birth of Modern AI (1940s-1956)

Artificial intelligence began to take shape in the 1940s and early 1950s as scientists considered the possibility of creating machines that could exhibit intelligence. Notable figures such as Alan Turing, John McCarthy, and Arthur Samuel played pivotal roles, contributing to the formal establishment of artificial intelligence as a distinct academic discipline.
This period saw the emergence of foundational ideas and the first AI programs, setting the stage for future advancements in the field.
Turing's Vision
Alan Turing, a pioneering figure in computer science, conceptualized machine intelligence before the field was formally named. His influential 1950 paper, “Computing Machinery and Intelligence,” proposed a framework for thinking about whether machines can think and introduced what became known as the Turing Test as a measure of machine intelligence.
The Turing Test evaluates whether a machine can produce conversational responses indistinguishable from a human’s, an idea that laid groundwork for how machine intelligence is discussed and judged. Turing’s ideas continue to influence debates about machine intelligence and the criteria for evaluating it, demonstrating how far ahead of his time he was.
Neural Networks Beginnings
The early development of neural networks marked another important milestone in AI’s history. Walter Pitts and Warren McCulloch were the first to describe what later became known as a neural network. Their work in 1943 laid the foundational principles for future advancements in the field.
These early neural networks were primitive compared to today’s deep neural networks, but they helped set the stage for the development of machine learning and artificial neural networks. This work eventually evolved into the deep learning techniques that drive many modern AI applications.
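To make the idea concrete, here is a minimal sketch in Python of a McCulloch-Pitts-style threshold unit. The weights and threshold below are illustrative choices for this example, not values from the 1943 paper; the point is only that the unit sums weighted binary inputs and “fires” when the sum reaches a threshold.

```python
# A minimal McCulloch-Pitts-style threshold neuron (illustrative sketch).
# Inputs and output are binary; weights and threshold are fixed by hand,
# since the 1943 model had no learning rule.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: modeling a logical AND of two inputs.
weights = [1, 1]
threshold = 2
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron([a, b], weights, threshold))
```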
Dartmouth Conference
The Dartmouth Conference of 1956 is regarded as the key event marking the official establishment of artificial intelligence as an academic discipline. John McCarthy, who coined the term “artificial intelligence,” invited researchers to the workshop, held in July and August 1956 at Dartmouth College.
The groups that emerged from the Dartmouth Conference are credited with laying the foundation for artificial intelligence, contributing to a cognitive shift in computing and early experimentation with conversational programs.
Rapid Growth and Optimism (1957-1974)

Following the Dartmouth Conference, the AI community experienced a surge in optimism, believing that practical applications of AI were just around the corner. Researchers predicted that fully intelligent machines would be built in less than 20 years, accelerating innovation in AI.
The 1960s saw the development of new programming languages, research studies, educational programs, and robots. However, this period of rapid growth was also marked by significant challenges in AI research, reflecting the complex nature of creating intelligent machines.
Early AI Programs
The Logic Theorist, developed by Allen Newell and Herbert Simon in 1956, is regarded as one of the first AI programs and was able to prove mathematical theorems. The General Problem Solver, also created by Newell and Simon, used heuristics to tackle a broader range of problems.
ELIZA, developed by Joseph Weizenbaum in the 1960s, was an early natural language processing program that simulated conversation and made users feel as though they were speaking with a human.
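ELIZA’s keyword-and-substitution approach can be suggested with a tiny sketch. The rules below are hypothetical stand-ins rather than anything from Weizenbaum’s original DOCTOR script, but they show the basic pattern-matching technique.

```python
import re

# A toy ELIZA-style responder (illustrative only; not the original DOCTOR script).
# Each rule pairs a regular expression with a reply template that reuses the match.
RULES = [
    (re.compile(r"I feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"I am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.IGNORECASE), "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about work"))   # -> Why do you feel anxious about work?
print(respond("My mother called yesterday"))  # -> Tell me more about your family.
```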
SHRDLU, created by Terry Winograd, showed that a computer could understand natural-language commands about objects in a restricted, structured “blocks world.” The STUDENT program, written by Daniel Bobrow, solved algebra word problems stated in English, an early attempt at getting computers to handle academic tasks.
Government Funding and Research Labs
The U.S. government, largely through agencies such as DARPA, played a crucial role in the advancement of AI by establishing funding programs to support research initiatives, with later efforts reportedly allocating some $2 billion for integrating AI into manufacturing and logistics.
AI laboratories were established at universities in the late 1950s and early 1960s, marking important milestones in the growth of AI research. This funding and support were instrumental in driving early innovations and sustaining the development of AI. Continued investment in research and collaboration within the AI community remains essential for future progress.
Expert Systems
Expert systems, programs that solve problems in specific knowledge domains using expert-derived logical rules, emerged during this period. Edward Feigenbaum, often referred to as the Father of Expert Systems, was a key developer along with his students.
Dendral, an expert system used to identify chemical compounds from spectrometer readings, demonstrated the practical applications of AI in science. The R1 expert system generated notable savings for Digital Equipment Corporation, amounting to $40 million annually.
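As a rough illustration of how expert systems encode expert-derived if-then rules, here is a minimal forward-chaining sketch. The facts and rules are invented for this example and are far simpler than anything in Dendral or R1.

```python
# A tiny forward-chaining rule engine (illustrative sketch, not Dendral or R1).
# Each rule says: if all of these facts are known, conclude a new fact.
rules = [
    ({"has_spectrometer_peak_x", "has_peak_y"}, "candidate_compound_A"),
    ({"candidate_compound_A", "boiling_point_low"}, "likely_compound_A1"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_spectrometer_peak_x", "has_peak_y", "boiling_point_low"}, rules))
```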
During this AI boom, symbolic techniques such as heuristic search, knowledge representation, and expert systems gained significant popularity, supporting applications in various domains.
Challenges and AI Winters (1974-1990s)

AI winters are periods of low interest and funding in artificial intelligence research, and they have significantly shaped the field. The winter that stretched from the late 1980s into the 1990s was marked by continued funding reductions, waning public and private interest, high costs with low returns on investment, and, as a result, fewer breakthroughs and slower progress.
The AI field has encountered several such cycles of excitement followed by disillusionment, each resulting in a period of decreased funding and interest.
First AI Winter
The first AI winter was precipitated by critiques of AI’s limitations and unmet expectations. James Lighthill’s 1973 report criticized AI for failing to deliver on its ambitious goals, prompting significant cuts to AI research funding, particularly in the United Kingdom.
Lighthill’s critiques and the resulting cuts made it difficult for innovators in the 1970s to get AI projects off the ground. In the United States, DARPA canceled an annual grant of roughly $3 million out of disappointment with AI’s progress. During this period, much AI research shifted toward robotics and automation.
Second AI Winter
The second AI winter occurred during the late 1980s, driven by the failure of many AI companies, which contributed to the perception that AI was not a viable business option. This led to decreased funding and a slowdown in progress.
The rise of powerful desktop computers in the late 1980s undercut the market for specialized AI hardware such as Lisp machines. The cyclical nature of AI hype, with bursts of optimism followed by disillusionment, played a significant role in this period of decline as general-purpose computing power kept improving.
Factors contributing to this AI winter included high costs relative to returns, the collapse of the specialized machine market, and setbacks in expert systems.
Philosophical and Ethical Critiques
Philosophical and ethical critiques also played a role in shaping the development of AI. John Lucas argued that Gödel’s incompleteness theorem limits the capabilities of formal systems like computer programs, while John McCarthy suggested that machines should focus on solving problems rather than mimicking human thought.
Critiques from philosophers such as Hubert Dreyfus and John Searle were often dismissed by the AI research community as misunderstandings of the field rather than taken seriously. Much more recently, researchers such as Geoffrey Hinton have raised concerns about the potential for AI to spread misinformation.
In 1970, Marvin Minsky predicted that AI would achieve human-level general intelligence in three to eight years, reflecting the optimism of the time.
Revival and Modern Breakthroughs (2000s-Present)

The current surge of artificial intelligence is best seen as one stage in the ongoing effort to model human intelligence in machines. Beginning in the late 1990s, AI again attracted growing R&D funding and started to work its way into everyday life, marking the start of a new phase.
Between 2010 and 2019, AI became more present in day-to-day activities and technologies. The early 2000s saw important technological breakthroughs, especially in machine learning techniques, which influenced AI development.
Generative AI, capable of producing new content, has distinguished itself from traditional machine learning models. Contemporary large language models such as GPT-3 display broad knowledge and fluent, seemingly creative language use, performing many natural language tasks effectively without task-specific fine-tuning.
Big Data and Deep Learning
Deep learning methodologies grew in popularity due to the availability of vast amounts of training data. Significant advancements have driven AI adoption from 2012 to the present, particularly through deep learning and big data. The backpropagation algorithm, popularized in the 1980s, had already renewed interest in neural networks and paved the way for the deep learning revolution.
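As a rough sketch of the core idea, the snippet below trains a single sigmoid neuron by gradient descent on a toy dataset. Real deep learning stacks many such layers and uses backpropagation to push gradients through all of them; the data, learning rate, and epoch count here are arbitrary choices for illustration.

```python
import math, random

# Toy example: learn a single sigmoid neuron with gradient descent.
# Deep learning stacks many such units and uses backpropagation to compute
# gradients through every layer; this sketch shows only the core update rule.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny dataset: learn the logical OR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

for epoch in range(2000):
    for x, target in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error with respect to the pre-activation.
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

for x, target in data:
    print(x, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2), "target:", target)
```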
A major turning point came around 2012, when deep learning models began to clearly outperform earlier approaches on a range of tasks. That same year, Jeff Dean and Andrew Ng led a Google Brain experiment in which a massive neural network, trained on millions of unlabeled images, learned on its own to recognize objects such as cats, a striking demonstration of unsupervised learning.
The ImageNet project, initiated by Fei-Fei Li, supported the development of visual object recognition software, significantly impacting AI research. Applications benefiting from LSTM networks include handwriting recognition, speech recognition, and natural language processing.
Milestones in AI
Several milestones have marked significant advancements in AI:
In 1997, IBM’s Deep Blue made history by defeating world chess champion Garry Kasparov, the first time a computer beat a reigning world champion in a full match under standard tournament conditions.
In 2011, IBM’s Watson reached a major milestone by competing on Jeopardy! and winning against human champions.
In 2016, AlphaGo achieved a major breakthrough by defeating Lee Sedol, a top Go player, demonstrating advanced AI capabilities.
OpenAI introduced GPT-3 in 2020, a significant advancement in language models with 175 billion parameters.
AlphaGo was developed to play the game of Go, using advanced search algorithms and neural networks to support its performance. To improve its gameplay, AlphaGo used reinforcement learning, playing millions of games against itself to learn strategies.
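AlphaGo’s actual training combined deep neural networks with Monte Carlo tree search, which is well beyond a short example, but the self-play idea can be sketched with a much simpler game. The toy below uses tabular learning on a tiny Nim variant, with parameters chosen arbitrarily; it is not AlphaGo’s method, only a demonstration of how an agent can improve by playing against itself.

```python
import random

# Self-play sketch: tabular learning for a tiny game of Nim
# (take 1-3 stones; whoever takes the last stone wins). A vastly simplified
# illustration of self-play, with no neural networks and no tree search.

Q = {}  # Q[(stones_left, action)] -> estimated value for the player to move
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50000

def actions(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones, explore=True):
    if explore and random.random() < EPSILON:
        return random.choice(actions(stones))
    return max(actions(stones), key=lambda a: Q.get((stones, a), 0.0))

for _ in range(EPISODES):
    stones, history = 10, []   # history of (state, action) for each move
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # The player who made the last move wins (+1); the opponent loses (-1).
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + ALPHA * (reward - old)
        reward = -reward   # flip perspective for the other player's moves

# With enough episodes, the policy tends toward leaving a multiple of 4 stones.
print("From 10 stones, the learned move is to take", choose(10, explore=False))
```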
Current Trends and Future Directions
The landscape of artificial intelligence is changing at an unprecedented speed. As of 2024, 72% of organizations report having integrated AI capabilities into their operations. Researchers have also observed unexpected phenomena, such as chatbots developing their own shorthand language during experiments, hinting at AI’s potential for autonomous behavior.
Apple has integrated ChatGPT into its new iPhones and Siri, representing developments in generative AI. DeepMind’s founders expressed concerns regarding AI safety and the existential risks associated with advanced AI systems.
Ongoing discussions about the future of AI include regulatory frameworks to ensure ethical development. Concerns about the creation of superhuman intelligence include potential risks and ethical considerations that require careful attention.
Tesla’s Full Self-Driving Beta features advanced driver assistance for autonomous navigation, using deep learning to manage complex scenarios.
Summary
The historical journey of AI reveals a compelling evolution from ancient myths to modern breakthroughs. Early concepts of artificial intelligence, rooted in myths and philosophical discussions, laid the groundwork for the field’s development. The birth of modern AI in the 1950s marked a significant milestone, with key figures like Alan Turing and John McCarthy shaping the field.
The rapid growth and optimism of the mid-20th century led to major advancements, but also to challenges and AI winters that slowed progress. However, the revival and modern breakthroughs in AI, driven by big data and deep learning, have moved the field forward. Milestones such as IBM’s Watson, AlphaGo, and GPT-3 demonstrate AI’s growing capabilities and integration into everyday life.
Understanding the history of AI provides valuable insights into its potential and challenges. As we look to the future, it is essential to consider ethical and safety concerns to ensure that AI development benefits society. The journey of AI is far from over, and the possibilities remain both exciting and significant.




