A Brief History of AI
It's important to understand the technology's history in order to see its future
So I wrote a book. It was about A.I. But, it turns out, it was pretty blah. That's sort of the problem with a topic like this: critically important but hard to get folks engaged. It was so average that even my closest friends couldn't bring themselves to read through it. Fortunately, I didn't call in any favors.
But… there was some really good content in there. So, as I go back to the drawing board, I want to share some of it here and salvage what I can from the initial train wreck.
The book started off with a softball: how AI's history helps explain its modern-day context. The opening chapter was titled "A Brief History of AI." Admittedly, it wasn't the most original title, but it was just a chapter.
Ch. 1, A Brief History of AI
AI is as much aspiration as it is technology, and its brief history, along with its various fathers, mothers, godfathers, and godmothers, largely reflects this fact. While it's easy to get caught up in the data and code, AI has always been and likely will always be about the urge to teach machines to think. This is a key concept. To understand AI is to understand mankind's historical urge to turn machines into automata and then into independent agents.
Throughout history, humanity has harbored a deep-seated desire to endow machines with the ability to think and reason, a pursuit that has evolved over the centuries. The earliest inklings of this ambition can be traced back to the ancient Greeks, particularly the works of Aristotle, whose writings on syllogisms and deductive reasoning laid the groundwork for the concept of automata capable of logical thought. This fascination with automata and self-operating machines persisted into the medieval era, when inventors like Al-Jazari crafted mechanical devices that showcased rudimentary decision-making, albeit without any comprehensive understanding of cognition.
The modern drive to teach machines to think found its roots in the twentieth century with seminal figures like Alan Turing. Turing's visionary work, from his 1936 formulation of the Turing machine through his wartime codebreaking to his 1950 proposal of what became the Turing test, established a theoretical framework for artificial intelligence. However, the practical realization of machine thinking only became feasible in the latter half of the twentieth century, with the advent of computers and the emergence of pioneers like John McCarthy and Marvin Minsky. McCarthy's creation of the LISP programming language and Minsky's co-founding of the MIT AI Lab marked significant milestones in the quest to endow machines with intelligence. Over the years, this ambition has driven innovations in fields such as neural networks, deep learning, and natural language processing, bringing us closer to the realization of truly thinking machines.
The field of artificial intelligence (AI) has been shaped by several pioneering figures, often referred to as the "fathers of AI." While AI's development was a collaborative effort involving many researchers, a few individuals stand out for their significant contributions. Here are some of the key figures considered the original fathers of AI:
Alan Turing: Turing is a foundational figure in the development of AI. His concept of the Turing machine and his work on the "imitation game" (now known as the Turing test) laid essential theoretical groundwork for AI and machine learning.
John McCarthy: McCarthy is credited with coining the term "artificial intelligence" in his 1955 proposal for the 1956 Dartmouth workshop and was a central figure in the development of the AI field. He also developed the LISP programming language, which played a critical role in early AI research.
Marvin Minsky: Minsky was a co-founder of the MIT AI Lab and made significant contributions to AI research, including work on neural networks and robotics. His book, Perceptrons, co-authored with Seymour Papert, was influential in shaping the field.
Herbert A. Simon and Allen Newell: Simon and Newell collaborated on the development of the Logic Theorist, one of the earliest AI programs. They also introduced the concepts of "problem-solving" and "heuristic search" to AI.
Arthur Samuel: Samuel is known for his work on computer-game-playing programs, particularly his checkers-playing program. He laid the foundation for machine learning and reinforcement learning.
John von Neumann: von Neumann made notable contributions to the theory of computing and the development of digital computers. His work on the von Neumann architecture and his involvement in early computer design greatly influenced the development of AI systems.
Claude Shannon: Shannon's groundbreaking work in information theory and digital circuit design had a profound impact on AI. His insights into the nature of information and logical circuitry provided a solid foundation for the development of AI systems.
These individuals, among others, played pivotal roles in the inception and early development of AI as a field of study. Their contributions have had a lasting impact on the evolution of artificial intelligence.
Before there was Bard or ChatGPT or the various AI applications currently consuming headlines, there was a consistent effort to teach machines to be more like people. And this effort has largely been defined by two distinct epochs—the Dartmouth Summer and the various AI Winters. Both are critical to understand if you want to know the roots of modern AI.
The Dartmouth Summer—AI Was Sort Of Born Out Of A Lark
Most credit a specific moment in time with giving birth to AI as a modern movement. In the mid-1950s, John McCarthy, a young and ambitious computer scientist, had a vision to explore the possibility of creating intelligent machines. McCarthy, who would go on to create the LISP programming language, was captivated by the idea of programming computers to perform tasks that required human-like intelligence, such as problem-solving, reasoning, and learning.
At the time, McCarthy was teaching at Dartmouth College, where he saw an opportunity to bring his vision to life. He believed that the academic environment of Dartmouth, with its access to computers and collaboration potential, could serve as the perfect setting for an ambitious research project. McCarthy was determined to gather a group of like-minded individuals to tackle the challenges of artificial intelligence. Per Prof. Wooldridge’s book:
“In 1955, McCarthy submitted a proposal to the Rockefeller Institute in the hope of obtaining funds to organize a summer school at Dartmouth College. If you are not an academic, the idea of ‘summer schools’ for adults may sound a little strange, but they are a well-established and fondly regarded academic tradition even today. The idea is to bring together a group of researchers with common interests from across the world and give them the opportunity to meet and work together for an extended period. They are held in summer because, of course, teaching has finished for the year, and academics have a big chunk of time without lecturing commitments. Naturally, the goal is to organize the summer school in an attractive location, and a lively program of social events is essential. When McCarthy wrote his funding proposal for the Rockefeller Institute in 1955, he had to give a name to the event, and he chose artificial intelligence.”
Ironically, McCarthy was said to regret having used the term "artificial intelligence"; had he known what his meetup would create, he reportedly would have preferred "computational intelligence." But some things are fated. It's hard to envision the same ominous collective interest in a field called "CI."
Beginning in 1955, McCarthy, along with fellow researchers Marvin Minsky, Nathaniel Rochester, and Claude Shannon, laid the groundwork for the Dartmouth Summer Research Project on Artificial Intelligence. McCarthy played a crucial role in conceptualizing the event, defining its goals, and setting the research agenda. He was also central to securing funding from the Rockefeller Foundation, which was instrumental in making the workshop a reality.
McCarthy's leadership and passion for the field of artificial intelligence were infectious, and he managed to attract a remarkable group of participants to the Dartmouth Workshop. These participants included some of the brightest minds in computer science, mathematics, and related disciplines, who were eager to explore the possibilities of AI.
Under McCarthy's guidance, the Dartmouth Workshop took place in the summer of 1956, and it was this event that put the newly coined term "artificial intelligence" on the map. The workshop laid the foundation for formalized AI research, setting the stage for the development of AI as a distinct field of study. McCarthy's vision and leadership in organizing this seminal event played a pivotal role in shaping the future of artificial intelligence, and his contributions continue to be celebrated in the history of AI research.
AI Winters—We’ve Seen AI Boom-and-Bust Cycles Before
While AI may have been born during a summer, it's better known for its various winters. AI has experienced three commonly recognized "winter" periods, in which long, frenzied stretches of hype and expectation were followed by slumps of forlornness bordering on despair.
Per the History of Data Science:
“Hype surrounding AI has peaked and waned over the years as the abilities of the technology are overestimated and reevaluated. The peaks, or AI summers, see innovation and investment. The troughs, or AI winters, experience reduced interest and funding.”
The "AI winters" refer to three periods of disillusionment and reduced funding for artificial intelligence research during the latter half of the twentieth century. These were challenging times for AI research and development, characterized by unrealized expectations and decreased support. Here's a brief history of each:
The First AI Winter (1974-1980): The first AI winter was a consequence of over-optimistic expectations in the early years of AI research. In the 1960s and early 1970s, AI was portrayed as a field that could achieve human-level intelligence. However, progress was slower than anticipated, and many early AI projects failed to deliver on their promises. Funding agencies, disheartened by the lack of tangible results, began to cut back on AI research funding. The infamous Lighthill Report in the United Kingdom, authored by Sir James Lighthill, criticized the field for its unrealistic goals. This led to a significant reduction in support for AI research during this period, and many AI research laboratories were shut down. Interest would eventually regain momentum, only to give way to a second AI winter beginning in 1987.
The Second AI Winter (1987-1993): The second AI winter occurred due to a combination of factors, including the high expectations set by expert systems, a prominent area of AI research in the 1980s. Expert systems were rule-based programs designed to emulate human expertise in specific domains. While they showed promise, they often failed to live up to their lofty expectations. Funding agencies, once again disillusioned by the lack of progress, began cutting AI research funding. This period also coincided with economic recessions in various countries, which further reduced financial support for AI projects. The AI community, however, continued its work in the face of these challenges, and research would once again gain steam under the banner of the Internet.
The Third AI Winter (1997-2001): The third AI winter was bound up with the dot-com boom and the bubble's eventual collapse in 2000-2001. During the boom, a surge in investment in internet-related technologies diverted attention and resources away from AI research. Additionally, some AI projects, particularly those related to natural language understanding and autonomous robotics, were not progressing as rapidly as expected. The disappointment in these areas led to reduced funding for AI once again. Several AI startups and companies went bankrupt, and public perception of AI declined. Despite this, some researchers continued their work in AI and began laying the foundations for future breakthroughs.
Following the third AI winter, the field of artificial intelligence underwent a transformation. Researchers began focusing on more practical and achievable goals, which led to advancements in machine learning, neural networks and data-driven approaches. This shift in strategy eventually resulted in significant breakthroughs and successes in AI, such as the development of deep learning and the popularization of AI applications in various domains.
In the twenty-first century, AI has experienced a resurgence, with increased interest, funding, and practical applications. AI technologies are now integrated into everyday life, powering virtual assistants, autonomous vehicles, recommendation systems, and healthcare applications. The successes of AI, including AlphaGo's victory over a human champion and advancements in natural language processing, have rekindled enthusiasm and investment in the field.
The AI winters serve as a historical reminder of the importance of setting realistic expectations and goals in research. While these periods were challenging, they also provided valuable lessons about the need for perseverance, adaptability, and a focus on solving real-world problems. The field of AI has now matured, with a more balanced approach, and is poised to continue making remarkable advancements in the years to come.
This naturally raises the question of whether we're now heading toward a fourth AI winter. It remains to be seen. Some argue it's inevitable, while others contend this most recent epoch is different: AI is now a more grounded field with more legitimate knowledge under its belt.