
Unraveling the History of Artificial Intelligence: From Fiction to Reality.

AI! AI! AI! It's all anyone talks about nowadays, and rightfully so. I personally think AI is one of the greatest innovations of the 21st century, if not the greatest. Actually, no, that's the selfie stick, but AI is right on its tail! AI has taken everyone by storm, and thousands of new developments are appearing even as you read this article. Because of this massive novelty train, a lot of people find it hard to keep up with it all, and it's genuinely raising concerns: "Am I going to lose my job to AI?" "Does AI belong to the aliens (Alien Invention)?" "When does the robot invasion start?" This is the first of my articles on AI, and as with most things, I wanted to start at the root. So instead of running through all the recent updates on AI, I am going to take a step back and delve into the history of AI and how it came to be, with only a brief explanation of what it actually is.


Artificial intelligence, also known as AI, is a field within computer science that concentrates on developing systems capable of carrying out tasks that generally require human intelligence. These tasks include understanding natural language, identifying patterns, making decisions, and learning from past experience. The objective of AI systems is to imitate human-like cognitive functions such as reasoning, problem-solving, and perception, albeit through algorithms and data instead of biological processes. AI technology is advancing rapidly and has already made significant impacts in domains such as healthcare, finance, transportation, and entertainment, fundamentally transforming the way we work and live. In essence, we humans, the laziest yet most intelligent species on planet Earth, want to create an even more intelligent species so that we can be even lazier. Well, until the AI revolt starts!
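To make "learning from past experience" a little more concrete, here is a deliberately tiny Python sketch: a one-nearest-neighbour classifier that labels a new data point by copying the label of the most similar example it has already seen. The pet-measurement scenario and the numbers are made up purely for illustration; real AI systems are, of course, far more elaborate.

```python
# A tiny, hypothetical illustration of "learning from past experience":
# a 1-nearest-neighbour classifier labels a new point by copying the
# label of the most similar example it has already seen.
from math import dist

# "Past experience": (height_cm, weight_kg) -> species. Made-up numbers.
examples = [
    ((25.0, 4.0), "cat"),
    ((30.0, 6.0), "cat"),
    ((60.0, 25.0), "dog"),
    ((70.0, 30.0), "dog"),
]

def predict(point):
    """Return the label of the stored example closest to `point`."""
    _, label = min(examples, key=lambda ex: dist(ex[0], point))
    return label

print(predict((28.0, 5.0)))   # -> cat
print(predict((65.0, 28.0)))  # -> dog
```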


Artificial intelligence has experienced significant growth and advancement in the 21st century; however, its origins can be traced back to the early 1900s. You may be curious to know who introduced this groundbreaking concept to the world. Surprisingly, it was not a scientist but the author L. Frank Baum, renowned for "The Wonderful Wizard of Oz." His "heartless" Tin Man was one of the early fictional characters that familiarized the world with the idea of artificially intelligent, human-like machines.



In the 1950s, a group of scientists, mathematicians, and philosophers emerged who fully embraced the concept of artificial intelligence (AI) as an essential part of their intellectual world. One notable figure among them was Alan Turing, a brilliant polymath from Britain who extensively explored the mathematical possibility of AI. Turing put forward the idea that humans use available information and logical reasoning to solve problems and make decisions, leading to the question: why couldn't machines do the same? This idea formed the basis for his influential 1950 paper, "Computing Machinery and Intelligence," where he discussed the creation of intelligent machines and methods for evaluating their cognitive abilities.






If many intellectuals in the 1950s were enthusiastic about the concept of AI, why did it take so long for us to see functional models? The truth is, back then, talk was cheap and theories were all they had. But it wasn't entirely their fault. There were several reasons for the delay.

Firstly, computers needed to change in a fundamental way. Before 1949, computers lacked a key prerequisite for intelligence: they could execute commands, but they couldn't store them. In other words, they could be told what to do, but they couldn't remember what they had done.


Moreover, computing was extremely expensive. In the early 1950s, leasing a computer could cost as much as $200,000 per month. This meant that only prestigious universities and large technology companies could afford to explore this new field.


Finally, to secure funding, researchers needed a demonstrable proof of concept and the backing of influential figures; only then could they convince funders that pursuing machine intelligence was a worthwhile endeavor.


These factors combined to create a situation where AI remained largely theoretical until more recent times. It took advancements in computer technology, along with increased affordability and accessibility, to bring AI from the realm of ideas into practical application.

As we reflect on the delayed progress of AI, it's important to recognize that the obstacles faced by those early pioneers were significant. Despite the challenges, their passion and vision laid the groundwork for the AI advancements we enjoy today.


Fret not, because just five years after Turing's paper, the initial proof of concept arrived with the creation of the Logic Theorist by Allen Newell, Cliff Shaw, and Herbert Simon. This project, funded by the RAND (Research and Development) Corporation, aimed to mimic human problem-solving abilities. Widely recognized as the first artificial intelligence program, it made its debut at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956. The event was organized by John McCarthy and Marvin Minsky, who envisioned a collaborative effort bringing together leading researchers from various fields for an extended discussion on artificial intelligence. It was at this gathering that McCarthy introduced the term "artificial intelligence" itself. Although the conference did not meet all of McCarthy's expectations, since there was no consensus on standard methods and attendees came and went as they pleased, it did solidify the belief that AI was achievable. This event played a crucial role in shaping the direction of AI research for the following twenty years.


(Photo: Allen Newell, left, and Herbert Simon, right)


During this period, which was marked by both failures and successes, the field of AI experienced significant advancements, bringing humanity one step closer to unraveling its mysteries. This progress was made possible by the remarkable improvements in computer technology, including increased storage capacity, enhanced processing speed, affordability, and accessibility. Simultaneously, researchers made notable strides in developing more sophisticated machine learning algorithms and gaining a deeper understanding of their practical applications.


Early demonstrations, such as Newell and Simon's General Problem Solver and Joseph Weizenbaum's ELIZA, showed promise in problem-solving and natural language interpretation, respectively. These achievements, combined with advocacy from leading researchers, notably the attendees of the DSRPAI, persuaded government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language and process large volumes of data.
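For a taste of how ELIZA-style "conversation" worked under the hood, here is a minimal Python sketch of the basic trick: match a keyword pattern, then reflect the user's own words back inside a canned template. This is not Weizenbaum's actual DOCTOR script; the rules and responses below are invented purely for illustration.

```python
# A toy, hypothetical ELIZA-style responder: match a keyword pattern,
# then echo the captured words back inside a response template.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    """Return the first matching template, filled with the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am feeling a bit lost"))    # How long have you been feeling a bit lost?
print(respond("I need a vacation"))          # Why do you need a vacation?
print(respond("The weather is nice today"))  # Please, go on.
```

A handful of rules like these is enough to produce a surprisingly convincing exchange, which is exactly why ELIZA made such an impression despite understanding nothing at all.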

This wave of progress fueled a sense of optimism within the AI community, with high expectations set for the future.


In 1970, Marvin Minsky expressed optimism to Life Magazine, stating his belief that machines would achieve human-level general intelligence within a span of three to eight years. However, despite laying the groundwork, the task of attaining natural language processing, abstract reasoning, and self-awareness proved to be a formidable challenge, with a significant journey still lying ahead.


As the initial hype around AI cleared, numerous challenges came to light. The biggest was the lack of computational power needed to achieve anything meaningful: computers at the time simply could not store or process enough information. Communicating in natural language, for example, requires knowing a great many words and the countless ways they can be combined. Hans Moravec, then a doctoral student working under McCarthy, noted that "computers were still millions of times too weak to exhibit any form of intelligence." As patience dwindled, so did funding, and research slowed to a crawl for about a decade.


In the 1980s, artificial intelligence experienced a resurgence thanks to two key factors: an expanded toolkit of algorithms and a fresh wave of funding. John Hopfield and David Rumelhart popularized neural-network learning techniques, the forerunners of what we now call "deep learning," which allowed computers to learn from experience. At the same time, Edward Feigenbaum introduced expert systems, programs that mimic the decision-making of a human expert: the system asks a specialist how to respond in a given situation, stores those responses as rules, and then lets non-experts draw on that knowledge for advice. Expert systems quickly found widespread use across different industries.
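To give a flavour of the "learning from experience" idea associated with Hopfield, here is a minimal Python sketch of a Hopfield network: a few patterns are memorized in a weight matrix using a simple Hebbian rule, and a corrupted pattern is cleaned up by repeatedly updating the state until it settles. The patterns and sizes below are invented for illustration, and this toy is nowhere near the scale of the systems of that era, let alone modern deep learning.

```python
# A minimal, hypothetical Hopfield-network sketch: store patterns with a
# Hebbian rule, then recover a stored pattern from a corrupted copy.
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian learning: average of outer products, no self-connections."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)
    return weights / patterns.shape[0]

def recall(weights: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Repeatedly apply the update rule until the state stops changing."""
    for _ in range(steps):
        new_state = np.where(weights @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Two 8-unit patterns of +1/-1 values (chosen to be orthogonal).
stored = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                   [1, -1, 1, -1, 1, -1, 1, -1]])
weights = train(stored)

# Flip two units of the first pattern and let the network repair it.
noisy = stored[0].copy()
noisy[0] *= -1
noisy[1] *= -1
print(recall(weights, noisy))  # -> [ 1  1  1  1 -1 -1 -1 -1]
```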


The Japanese government played a significant role in advancing AI during this period through the Fifth Generation Computer Project (FGCP). They provided substantial financial backing to expert systems and other AI initiatives, allocating a total of $400 million between 1982 and 1990. The goals of the FGCP included revolutionizing computer processing, implementing logic programming, and enhancing artificial intelligence. Although many of these ambitious objectives were not fully achieved, the project served as an inspiration to a new generation of talented engineers and scientists.


However, funding for the FGCP was eventually discontinued, and AI once again faded from the spotlight. Despite this setback, the era's advances were not wasted: the expanded algorithmic toolkit and the influx of funding fueled innovation and laid the foundation for what came next.

Overall, the 1980s marked a significant period in AI history, characterized by breakthroughs in machine learning techniques and the widespread adoption of expert systems, with the Japanese government's FGCP playing a crucial role in driving research and development. Though the project eventually ended, its impact in inspiring a new generation of engineers and scientists cannot be overstated. AI's prominence may have waned temporarily, but the groundwork laid during this period set the stage for the advances that followed.


Ironically, even without much government funding or public attention, AI thrived. The 1990s and 2000s saw a string of landmark achievements. In 1997, IBM's Deep Blue, a chess-playing computer program, famously defeated the reigning world chess champion Garry Kasparov. The widely publicized match marked the first time a computer beat a reigning world champion at chess, and it showcased real progress in AI-driven decision-making. That same year, Dragon Systems released speech recognition software for Windows, another major step forward in interpreting spoken language. It began to seem that there was no problem machines couldn't tackle; even human emotion became fair game with Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.




So, what led to this sudden surge in artificial intelligence? It turns out the old bottleneck of computer storage and speed, which had stalled progress three decades earlier, was no longer a concern. Moore's Law, the observation that the number of transistors on a chip (and with it, memory and processing power) doubles roughly every two years, had finally caught up with our needs and in many cases surpassed them. That is how Deep Blue managed to beat Garry Kasparov in 1997, and how Google's AlphaGo defeated Chinese Go champion Ke Jie in 2017. These examples also reveal the cyclical nature of AI research: we push AI capabilities to the limit of our current computational power, storage and processing speed alike, and then wait for Moore's Law to catch up again.
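To put that growth in perspective, here is a quick back-of-the-envelope calculation in Python. It assumes the commonly quoted doubling period of roughly two years; the resulting figures are illustrative orders of magnitude, not measured hardware data.

```python
# Rough illustration of exponential growth under Moore's observation,
# assuming capacity doubles about every two years (an approximation).
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times capacity multiplies over the given number of years."""
    return 2 ** (years / doubling_period)

# From the mid-1950s Dartmouth era to Deep Blue's 1997 match: ~40 years.
print(f"1957 -> 1997: about {growth_factor(1997 - 1957):,.0f}x")  # ~1,048,576x
# From Deep Blue to AlphaGo's 2017 win over Ke Jie: another 20 years.
print(f"1997 -> 2017: about {growth_factor(2017 - 1997):,.0f}x")  # ~1,024x
```

Even if the doubling period stretches out, the compounding is staggering, which is why problems that looked hopeless in the 1960s eventually became tractable.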


We are now living in the era of "big data," where we can collect volumes of information far too large for any individual to process. Artificial intelligence has already proven extremely useful for putting this data to work in industries like technology, banking, marketing, and entertainment. Even if the algorithms themselves don't improve much, the sheer quantity of data and raw computing power allow AI to learn through brute force. And while there are signs that Moore's Law may be slowing down a little, the growth in data collection shows no sign of losing momentum. Breakthroughs in computer science, mathematics, and neuroscience may yet offer ways around the ceiling that Moore's Law imposes.


I hope this article was informative and that you learned something new about the history of AI. That said, I feel I have only reached the tip of the iceberg with what I want to share, so keep an eye out for updates: I will be releasing another article on AI in which I properly explain what it is and share my opinions on it. Thank you for reading, and have a great day!

