Who Invented AI? The Pioneers Who Created Artificial Intelligence

By Liz Fujiwara

Nov 18, 2025

Illustration of a human and a robot facing each other with a waveform between them.

Artificial Intelligence, or AI, was largely conceptualized by British mathematician Alan Turing, often called the father of AI, and American computer scientist John McCarthy. Turing’s pioneering work on machine intelligence and the concept of the Turing Test laid the early theoretical groundwork for the field, and it was McCarthy who coined the term “artificial intelligence” in 1956, establishing AI as a formal academic discipline.

In this article, we explore their contributions and the historical milestones that shaped the field of AI, tracing how early theories about machine reasoning evolved into a rapidly advancing area of research. This introduction sets the stage for understanding how foundational ideas became the basis of one of the most important technologies of the 21st century.

Key Takeaways

  • Alan Turing and John McCarthy are pivotal figures in the history of AI, with Turing recognized as the father of machine intelligence and McCarthy coining the term “artificial intelligence.”

  • The early developments in AI, including the Logic Theorist and the exploration of artificial neural networks, demonstrated the potential of machines to learn and adapt, laying the foundation for modern AI.

  • Modern AI advancements are driven by big data and deep learning breakthroughs, leading to practical applications across various industries and highlighting the importance of ethical AI development.

The Visionaries Behind AI

Portraits of AI pioneers including Alan Turing and John McCarthy, who invented AI.

The journey of artificial intelligence began with a few visionary minds who dared to dream of creating thinking machines. These pioneers laid the foundation for what would become one of the most transformative fields in technology. Understanding their contributions helps us appreciate the ambitious visions, from early ideas of artificial humans to today’s intelligent systems, that have driven AI research and development.

Among these early visionaries, Alan Turing stands out as a key figure. Widely regarded as the father of artificial intelligence and machine learning, Turing profoundly influenced the field. His contributions include:

  • Groundbreaking ideas on machine intelligence

  • The famous Turing Test, which set the stage for future AI research

  • The concept of a machine capable of expanding beyond its original programming

  • The idea of imitating human reasoning

  • The goal of simulating intelligent behavior indistinguishable from that of a human

Another pivotal figure in AI’s history is John McCarthy, who is credited with coining the term “artificial intelligence” at the Dartmouth Conference in 1956. This conference marked the formal establishment of AI as an academic discipline and brought together researchers who would lead the AI movement. McCarthy’s contributions extended beyond the conference; he developed LISP, a programming language that became essential for AI research and remains widely used today.

These early leaders in AI were characterized by visionary courage, exploring new technological frontiers. Their contributions laid the groundwork for the development of intelligent systems and set the stage for the advancements that followed.

Next, we examine the lives and achievements of Alan Turing and John McCarthy, two of the most influential figures in AI history.

Alan Turing: The Father of Machine Intelligence

Alan Turing’s name is synonymous with the birth of artificial intelligence. A British mathematician, Turing laid the foundation for modern AI with his seminal work in the mid-20th century. His 1950 paper, “Computing Machinery and Intelligence,” introduced the concept of machines that could potentially think, sparking the imagination of future AI researchers.

Turing is best known for the Turing Test, a measure of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. This test became a standard benchmark for evaluating computer intelligence and continues to influence AI research today. Turing’s ideas on machine learning and artificial neural networks were forward-thinking, proposing that machines could learn from experience and adapt their behavior accordingly.

Turing’s legacy extends beyond theoretical contributions. His work during World War II on breaking the Enigma code demonstrated the practical applications of machine intelligence, showing the potential of computing machinery to solve complex problems. Turing’s ideas and pursuit of creating thinking machines have left a lasting mark on the history of artificial intelligence.

John McCarthy: Coining “Artificial Intelligence”

John McCarthy’s contributions to artificial intelligence are substantial. As one of the key organizers of the Dartmouth Conference in 1956, McCarthy played a crucial role in establishing AI as a distinct academic discipline. It was at this conference that he introduced the term “artificial intelligence,” setting the direction for future research in the field.

McCarthy’s impact on AI extended beyond terminology. His contributions include:

  • Developing LISP, a programming language specifically designed for AI research

  • Creating a language with flexibility and features that made it widely used for AI programming

  • Advancing natural language understanding, supporting AI’s ability to interpret and generate human language

Beyond his technical contributions, McCarthy believed that machines could emulate human intelligence. His work laid the foundation for symbolic AI, a branch of AI focused on using symbols and rules to represent and manipulate knowledge. McCarthy’s efforts have had a lasting influence on the development of intelligent systems and continue to inspire AI researchers worldwide.

Early Developments in AI

An illustration depicting early AI developments and technologies.

The early development of artificial intelligence was marked by significant milestones that laid the groundwork for the field. These initial advancements demonstrated the potential of AI and set the stage for future research and innovation. From the creation of the first AI programs to the development of artificial neural networks, these early achievements were pivotal in shaping the history of AI.

One of the most notable early developments in AI was the creation of the Logic Theorist, considered the first AI program. Developed by Allen Newell and Herbert A. Simon, the Logic Theorist was designed to mimic human reasoning and solve mathematical problems. This program showcased the potential of machines to perform logical reasoning and solve complex problems, marking a significant step forward in AI research.

Another important development in the early history of AI was the exploration of artificial neural networks. Key milestones include:

  • In 1943, Warren McCulloch and Walter Pitts laid the groundwork for neural networks with a mathematical model of artificial neurons, inspired by the human brain’s structure and function.

  • In 1951, Marvin Minsky built the Stochastic Neural Analog Reinforcement Calculator (SNARC), which advanced the field by demonstrating that a machine could learn and adapt based on input data.

  • In 1958, Frank Rosenblatt developed the Perceptron, an early model that learned to recognize patterns by adjusting the weights of its connections.

These early milestones in AI research were driven by the collaborative efforts of scientists from diverse disciplines, including mathematics, psychology, and engineering. The Dartmouth Conference in 1956 established AI as an academic field, bringing together researchers to advance intelligent systems.

The Logic Theorist: First AI Program

The Logic Theorist, developed by Allen Newell and Herbert A. Simon, is widely recognized as the first AI program. This computer program was designed to demonstrate how machines could solve mathematical problems by mimicking human reasoning. The Logic Theorist highlighted AI’s potential by representing problems as symbolic expressions and applying logical rules to solve them.
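
The original program was written in a 1950s list-processing language and searched for proofs of theorems from Principia Mathematica, but the core idea, representing statements as symbols and deriving new ones by applying inference rules, can be illustrated with a deliberately simple, hypothetical Python sketch (the rule format and the forward_chain function below are invented for the example, not taken from the original program):

# Toy illustration of rule-based symbolic reasoning, not the actual Logic Theorist.
# Facts are symbols; rules are (premise, conclusion) pairs representing implications.
def forward_chain(facts, rules, goal):
    """Repeatedly apply modus ponens until the goal is derived or nothing new follows."""
    known = set(facts)
    changed = True
    while changed and goal not in known:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)  # premise and premise -> conclusion give conclusion
                changed = True
    return goal in known

# Example: from the fact "p" and the rules p -> q and q -> r, derive "r".
print(forward_chain({"p"}, [("p", "q"), ("q", "r")], "r"))  # True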

Newell and Simon presented the Logic Theorist at a meeting in autumn 1956, outlining its capabilities and potential applications in AI research. The program’s success marked a significant milestone in the history of artificial intelligence, showing that machines could emulate aspects of human intelligence.

The Logic Theorist’s ability to perform logical reasoning laid the foundation for subsequent AI programs and inspired further research in mathematical logic and human problem-solving abilities.

Artificial Neural Networks

The development of artificial neural networks was a significant advancement in the early history of AI. Inspired by the human brain’s structure and function, researchers sought to create machines that could learn and adapt based on input data. The foundational work of Warren McCulloch and Walter Pitts in 1943 introduced the concept of artificial neural networks.

One of the earliest models of artificial neural networks was the Perceptron, developed by Frank Rosenblatt. The Perceptron was designed to recognize patterns by adjusting the weights of its connections based on input data, enabling it to learn from experience. While single-layer neural network models like the Perceptron were limited to solving linearly separable problems, they represented a meaningful step forward in the development of machine learning and artificial intelligence.
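
To make the learning rule concrete, here is a minimal Python sketch of a perceptron trained on the logical AND function, which is linearly separable. It illustrates the principle of nudging connection weights whenever the output is wrong; it is not Rosenblatt’s original implementation, and the data, learning rate, and function names are invented for the example:

# Minimal perceptron sketch: learn the logical AND of two inputs.
# Weights are adjusted toward the correct answer whenever the prediction is wrong.
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            error = target - pred
            # Perceptron update rule: adjust weights in proportion to the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# AND is linearly separable, so a single-layer perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    print((x1, x2), step(w[0] * x1 + w[1] * x2 + b), "expected", target)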

Another early advancement was the Stochastic Neural Analog Reinforcement Calculator, or SNARC, built by Marvin Minsky in 1951 to simulate learning processes in the brain using reinforcement learning techniques.

These early neural networks laid the foundation for more advanced models and algorithms, such as the backpropagation algorithm, which would later influence the field of deep learning. The exploration of artificial neural networks in the early days of AI research demonstrated the potential of machines to emulate human learning and adaptation, paving the way for future advancements.

Milestones in AI Research

Milestones in AI research, including robots and chess computers.

The history of artificial intelligence is marked by significant milestones that have shaped the field and pushed the boundaries of what machines can achieve. These breakthroughs have demonstrated AI’s potential in various domains, from robotics to complex strategic games. Understanding these milestones helps us appreciate the progress made in AI research and its impact on our world.

One of the earliest and most notable milestones in AI research was the development of Shakey the Robot. Developed in the late 1960s, Shakey was the first mobile robot capable of reasoning about its actions and making decisions based on its environment. Equipped with sensors and a TV camera, Shakey could navigate, analyze its surroundings, and manipulate objects, demonstrating the potential of AI in robotics.

Another landmark achievement in AI research was IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997. Deep Blue’s victory was a pivotal moment that influenced the perception of AI’s potential. The machine’s ability to evaluate roughly 200 million chess positions per second, combined with its strategic capabilities, showed AI’s power in complex domains. This victory shifted public perception toward accepting AI as a serious contender in strategic and intellectual tasks.

These milestones in AI research highlight the field’s progress and the potential of intelligent systems to perform complex tasks. From Shakey the Robot’s advancements in robotics to Deep Blue’s triumph in chess, these achievements paved the way for further innovations and inspired continued research in artificial intelligence.

Shakey the Robot

Shakey the Robot, developed in the late 1960s, represented an important advancement in AI and robotics. As the first mobile robot capable of reasoning about its actions, Shakey could make decisions based on its environment, a notable achievement at the time. Its capabilities included:

  • Perceiving its environment through sensors and a TV camera

  • Navigating its surroundings

  • Analyzing visual data

  • Manipulating objects

These features demonstrated the potential of AI in creating autonomous systems.

Shakey’s development integrated various AI technologies, including computer vision, navigation, and object manipulation, as well as control systems. This combination of capabilities showed the potential of intelligent systems to perform complex tasks and interact with the physical world.

Shakey’s success paved the way for future advancements in robotics and AI, encouraging researchers to explore new possibilities in creating autonomous machines.

Deep Blue vs. Garry Kasparov

In 1997, IBM’s Deep Blue made history by defeating Garry Kasparov, the reigning world chess champion. This event was a turning point in AI, showcasing the strategic capabilities of intelligent systems. Deep Blue’s advanced search algorithms and significant computational power, capable of evaluating roughly 200 million chess positions per second, allowed it to outmaneuver a human chess grandmaster.

Deep Blue’s victory had a meaningful impact on public perception of AI. It demonstrated that machines could excel in complex, strategic games once thought to be exclusive to human intelligence. This achievement highlighted AI’s potential to address intricate problems and encouraged further research in the field.

The success of Deep Blue marked a major milestone in the history of artificial intelligence, establishing its place in strategic and intellectual tasks.

The AI Winters

A visual representation of the AI winters, highlighting challenges in AI development.

The journey of artificial intelligence has not been without its challenges. Periods of intense optimism and breakthroughs were followed by phases of disillusionment and reduced funding, known as AI winters. These cycles significantly impacted the progress of AI research and development, highlighting the difficulties in meeting high expectations and the complexities of creating intelligent systems.

The first AI winter occurred between 1974 and 1980, triggered in part by Sir James Lighthill’s critical 1973 report, which highlighted the limitations and over-promises of AI research. The report led to significant cuts in government funding and a marked slowdown in AI research. Researchers had also largely dismissed philosophical critiques about the limits of formal systems, and the widening gap between expectations and results eroded support and interest in the field.

The second AI winter, which ran roughly from 1987 to 1993, was marked by several factors:

  • The collapse of the LISP machine market

  • A lack of consensus on the reasons for AI’s previous failures

  • The fragmentation of AI research into subfields

  • Economic bubble patterns that contributed to the decline in interest and funding

As desktop digital computers became more powerful and affordable, the market for specialized AI hardware dwindled, leading to decreased investments and slower advancements in AI. These AI winters serve as reminders of the challenges faced by researchers and the cyclical nature of technological progress.

First AI Winter (1974-1980)

The first AI winter was a period of reduced interest and funding in artificial intelligence from 1974 to 1980. This downturn was largely triggered by Sir James Lighthill’s critical 1973 report, which highlighted the limitations and over-promises of AI research. In its wake, government funders such as the National Research Council sharply cut support for AI work at major research centers including MIT, Stanford, CMU, and Edinburgh, causing a slowdown in AI advancements.

AI researchers had largely dismissed philosophical critiques about the limitations of formal systems, and as the gap between ambitious promises and actual results widened, support and interest in the field declined. The first AI winter underscored the difficulties of meeting high expectations and the complexities involved in developing intelligent systems.

This period of stagnation served as a reminder of the challenges faced by AI researchers and the need for realistic goals and sustained government funding.

Second AI Winter (1987-1993)

The second AI winter was marked by:

  • The collapse of the LISP machine market

  • A lack of consensus on the reasons for AI’s previous failures

  • The dwindling market for specialized AI hardware as desktop computers became more powerful and affordable

  • Decreased investment and slower progress in AI

  • Fragmentation of AI research into subfields, which contributed to the decline in interest and funding

Within a few years, high costs and disappointing returns led to a sharp reduction in AI investment, producing a second AI winter that lasted roughly from 1987 to 1993. This period highlighted the cyclical nature of technological progress and the challenges researchers face in maintaining sustained support and interest in AI.

Despite these setbacks, the lessons learned during the AI winters helped create a more measured approach to AI research and development.

Modern AI: From Big Data to Deep Learning

Modern AI technologies including big data and deep learning.

The resurgence of artificial intelligence in the 21st century has been driven by significant advancements in big data, machine learning, and deep learning techniques. These modern developments have transformed AI from a theoretical concept to a practical tool with widespread applications across various industries. The availability of large datasets and powerful computing hardware has played a crucial role in this transformation.

The early 2000s marked a significant growth period for AI, supported by advancements in computer hardware and the availability of large training data. Researchers like Geoffrey Hinton pioneered work on artificial neural networks, laying the foundation for the rise of deep learning. In 2012, Hinton and his students showcased their neural network research at the ImageNet competition, achieving important breakthroughs in visual object recognition.

Convolutional neural networks (CNNs), developed by Yann LeCun, have played a major role in the advancement of deep learning, particularly in image recognition tasks. These networks have influenced the field by enabling machines to reach high levels of accuracy in various applications, from natural language processing to speech recognition. The success of deep learning techniques has led to a surge in common-use AI tools, driving further innovations and applications.

Organizations that have integrated AI and information technology capabilities are more likely to explore generative AI’s potential, further expanding what intelligent systems can achieve. Modern AI’s ability to analyze large datasets in real-time and support decision-making processes has made it an essential tool in industries ranging from finance to healthcare.

Big Data and Machine Learning

The evolution of powerful computing hardware and access to extensive training data have significantly advanced the capabilities of AI since the early 2000s. Machine learning, a subset of AI, has become essential in analyzing large datasets in real-time and supporting decision-making processes. The combination of big data and machine learning has enabled organizations to gain valuable insights and make informed decisions.

Access to extensive training data and improved computational speed have played a crucial role in the success of machine learning. This synergy has allowed for the development of more accurate and efficient AI models capable of performing complex tasks previously associated only with human intelligence.

The integration of big data and machine learning has helped transform AI into a practical tool with widespread applications across various industries.

Deep Learning Breakthroughs

Deep learning has been a major development in artificial intelligence. The backpropagation algorithm allowed deep neural networks to learn effectively from data, enabling significant advancements in machine learning. Convolutional neural networks (CNNs), designed by Yann LeCun, have been especially influential in deep learning’s growth, particularly in image recognition tasks.
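
At its core, backpropagation applies the chain rule layer by layer: run a forward pass, measure the error at the output, then propagate gradients backward and adjust each layer’s weights. The sketch below is a small, hypothetical NumPy example rather than code from any of the work cited here; it trains a tiny two-layer network on XOR, a problem a single-layer perceptron cannot solve:

import numpy as np

# Minimal backpropagation sketch: a tiny two-layer network learns XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output
lr = 1.0

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule from the output layer back toward the input
    d_out = (out - y) * out * (1 - out)   # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient propagated to the hidden layer
    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred, 2))  # should approach [[0], [1], [1], [0]]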

The ImageNet project, initiated by Fei-Fei Li and her team, was essential in advancing visual object recognition. The project’s large-scale dataset and annual competition drove improvements in deep learning algorithms, leading to breakthroughs in various applications. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio’s contributions to deep learning earned them the Turing Award in 2018.

These developments have allowed AI to reach high accuracy levels in tasks such as natural language processing, speech recognition, and image recognition. The success of deep neural networks has driven further innovations, making AI an important tool in many industries. Advances in deep learning techniques continue to expand what intelligent systems can achieve, paving the way for future developments.

Current Trends and Future Directions

As artificial intelligence continues to evolve, new trends and future directions are emerging that will shape the field for years to come. One major trend is the rapid adoption of generative AI tools, which are now used regularly by one-third of organizations in at least one business function. Generative AI’s impact is expected to be significant, particularly in industries like technology and finance.

Nearly 40% of companies expect to increase their investments in AI due to advances in generative AI. Tools such as GPT-3 and DALL-E have demonstrated broad capabilities, enabling users to generate text and images from simple prompts, and newer systems extend this to video. The integration of generative AI technologies into products by companies like Microsoft and Google has made advanced AI capabilities accessible to wider audiences.

Another important area in the future of AI is ethical AI and responsible development. As AI systems become more powerful, concerns about potential risks and unintended consequences have increased. Academic research has expanded in this area, emphasizing the need for ethical considerations and responsible development practices. A widely publicized 2023 open letter even called for pausing training of AI systems more powerful than GPT-4 for at least six months, underscoring how seriously some researchers take these risks.

The future of AI holds significant promise, with new advancements and applications on the horizon. As we continue exploring intelligent systems, it is essential to balance innovation with ethical considerations. 

Generative AI and Large Language Models

Generative AI and large language models have seen rapid adoption and have significantly influenced many industries. Examples include:

  • GPT-3 (2020), which demonstrated extensive generative text capabilities

  • DALL-E, which produces images from text descriptions

  • Sora, which supports text-to-video generation

In 2023, companies like Microsoft and Google integrated generative AI technologies into their products, further expanding these tools’ reach. These developments have increased investments in AI, with nearly 40% of companies planning to expand their AI budgets due to generative AI’s potential.

The rise of generative AI represents a new era in AI development, with promising possibilities for future applications.

Ethical AI and Responsible Development

As artificial intelligence advances, ethical considerations and responsible development practices have become increasingly important. Potential risks associated with advanced AI systems highlight the need for caution and thoughtful planning. Academic research has expanded on these concerns, emphasizing the need for fairness, transparency, and accountability in AI development.

One widely discussed proposal, a 2023 open letter signed by prominent researchers and technologists, called for pausing training of AI systems more powerful than GPT-4 for at least six months, giving researchers time to address potential risks and plan for responsible deployment.

The importance of ethical AI and responsible development continues to grow as AI systems become more integrated into daily life. By prioritizing responsible practices, we can ensure that AI technologies support society while reducing potential risks.

Introducing Fonzi: Revolutionizing AI Hiring

In the rapidly evolving field of artificial intelligence, finding top-tier talent is crucial for companies looking to stay ahead of the curve. Fonzi, a specialized marketplace for AI engineering talent, is transforming the hiring process by connecting companies with elite AI engineers and virtual assistants efficiently and effectively. This platform offers a unique solution to the challenges of hiring skilled AI professionals, ensuring that companies can find strong candidates quickly and reliably.

Fonzi’s platform is designed to curate and match high-caliber AI engineering professionals with leading tech organizations. Fonzi focuses on top-tier, pre-vetted AI engineers to ensure that companies access the best talent. The recurring hiring event, Match Day, further supports the process, making it easier for companies to find and hire skilled professionals.

One of the key features of Fonzi is its structured evaluation and data-driven matching processes, which include:

  • Conducting structured assessments to identify top candidates

  • Utilizing technology to detect fraudulent applications and reduce bias

  • Ensuring a high-signal evaluation process that supports compatibility between candidates and companies

These features lead to successful hires and satisfied clients.

Choosing Fonzi offers several advantages for companies looking to hire AI talent:

  • Accelerates the hiring process

  • Provides a scalable solution

  • Improves the quality of new hires

  • Reduces time-to-fill roles

  • Streamlines the hiring experience with a focus on transparency and speed

  • Allows companies to make informed hiring decisions rapidly

By accessing pre-vetted talent and receiving tailored offers, organizations can hire efficiently and effectively, making Fonzi a strong choice for AI recruitment.

What is Fonzi?

Fonzi is a specialized marketplace designed to match high-quality AI engineering talent with companies looking for skilled professionals in the field. The platform curates top-tier, pre-vetted AI engineers, ensuring that companies can find strong candidates efficiently and effectively. By focusing on elite talent, Fonzi addresses the challenges of hiring skilled AI professionals and provides a clear solution to the recruitment process.

One of the standout features of Fonzi is its recurring hiring event, Match Day. This event allows companies to connect with highly qualified AI engineers quickly and reliably. Fonzi’s curated approach ensures that companies have access to strong talent, streamlining the hiring process and making it easier to find and hire skilled professionals in artificial intelligence.

How Fonzi Works

Fonzi’s platform utilizes:

  • Structured evaluations and data-driven matching processes to support compatibility between candidates and companies

  • High-signal assessments to identify top candidates

  • Technology to detect fraudulent applications and reduce bias, supporting fair hiring practices

Unlike black-box AI tools or traditional job boards, Fonzi’s transparent and rigorous evaluation process ensures that companies can make informed and reliable hiring decisions.

Summary

The history of artificial intelligence is a fascinating journey marked by visionary pioneers, groundbreaking milestones, periods of stagnation, and modern advancements. From the foundational work of Alan Turing and John McCarthy to the development of the first AI programs and neural networks, the early days of AI laid the groundwork for future innovations. Significant milestones, such as Shakey the Robot and Deep Blue’s victory over Garry Kasparov, demonstrated the potential of AI in various domains.

Current trends, such as the rise of generative AI and the focus on ethical AI and responsible development, will influence the future of artificial intelligence. The integration of advanced AI technologies into everyday products and services highlights the ongoing evolution and potential of intelligent systems. As we continue to explore the possibilities of AI, it is important to balance innovation with ethical considerations and responsible development practices.

In this dynamic landscape, platforms like Fonzi are transforming the hiring process for AI talent, ensuring that companies can find qualified candidates efficiently and effectively. By connecting top-tier, pre-vetted AI engineers with leading organizations, Fonzi addresses the challenges of AI recruitment and provides a clear solution for companies looking to stay ahead in the evolving field of artificial intelligence.

FAQ

Who is the real creator of AI?

There is no single inventor. Alan Turing provided the theoretical groundwork for machine intelligence, while John McCarthy coined the term “artificial intelligence” and helped establish AI as an academic discipline at the 1956 Dartmouth Conference.

What was the Dartmouth Conference, and why is it significant?

Held in 1956, the Dartmouth Conference is where McCarthy introduced the term “artificial intelligence.” It brought together the researchers who would lead the field and marked the formal establishment of AI as an academic discipline.

What are some early milestones in AI research?

Early milestones include the Logic Theorist (widely considered the first AI program), the McCulloch-Pitts model of artificial neurons, Minsky’s SNARC, Rosenblatt’s Perceptron, Shakey the Robot, and Deep Blue’s 1997 victory over world chess champion Garry Kasparov.

What are AI winters, and how did they impact AI research?

AI winters were periods of reduced funding and interest, roughly 1974-1980 and 1987-1993, triggered by unmet expectations, critical reports such as the Lighthill report, and the collapse of the market for specialized AI hardware. They slowed research but encouraged more realistic goals.

How does Fonzi revolutionize AI hiring?

Fonzi is a curated marketplace that matches pre-vetted AI engineers with companies through structured evaluations, data-driven matching, and a recurring hiring event called Match Day, helping organizations hire faster while improving candidate quality.