Looking back at 80 years of AI development, these five historical lessons are worth learning.
Author: Gil Press
Compiled by: Felix, PANews
On July 9, 2025, NVIDIA became the first publicly traded company to reach a market value of $4 trillion. What comes next for NVIDIA and the volatile AI field?
Although prediction is difficult, there is a wealth of data available. At the very least, it can help clarify where, how, and why past predictions failed to materialize. That data is history.
What lessons can be learned from the 80-year development of artificial intelligence (AI), a period in which funding has ebbed and flowed, research approaches have varied widely, and public sentiment has swung between curiosity, anxiety, and excitement?
The history of AI began in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published a paper on mathematical logic. In "A Logical Calculus of the Ideas Immanent in Nervous Activity," they speculated about idealized, simplified networks of neurons and how such networks could perform simple logical operations by transmitting or failing to transmit impulses.
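To make the idea concrete, here is a minimal sketch in Python of the kind of idealized threshold unit McCulloch and Pitts described: a unit "fires" (outputs 1) only when the weighted sum of its binary inputs reaches a threshold, which is already enough to implement simple logic gates. The weights and thresholds below are illustrative assumptions, not values taken from the original paper.

```python
def mp_neuron(inputs, weights, threshold):
    """Idealized McCulloch-Pitts unit: output 1 ("fire") if the weighted sum
    of binary inputs reaches the threshold, otherwise output 0."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Simple logic gates built from single threshold units (illustrative weights).
def AND(a, b): return mp_neuron([a, b], weights=[1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], weights=[1, 1], threshold=1)
def NOT(a):    return mp_neuron([a],    weights=[-1],   threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
    print(f"NOT 0 = {NOT(0)}, NOT 1 = {NOT(1)}")
```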
When the paper appeared, Ralph Lillie, who was pioneering the field of organizational chemistry, described the work of McCulloch and Pitts as endowing "logic and mathematical models" with "reality" in the absence of "experimental facts." Later, when the paper's hypotheses failed empirical verification, Jerome Lettvin of MIT noted that although neurology and neurobiology had ignored the paper, it had inspired a group of "those destined to become enthusiasts of a new field (now known as AI)."
In fact, the McCulloch and Pitts paper inspired "connectionism," the specific variant of AI that dominates today, formerly called "deep learning" and recently rebranded simply as "AI." Although this approach bears no relation to how the brain actually works, the statistical analysis methods underpinning it, "artificial neural networks," are often described by AI practitioners and commentators as "mimicking the brain." Authorities such as top AI practitioner Demis Hassabis claimed in 2017 that McCulloch and Pitts' fictional account of how the brain works, together with similar studies, "continue to lay the groundwork for contemporary deep learning research."
Lesson One: Be wary of conflating engineering with science, confusing science with speculation, and equating science with papers full of mathematical symbols and formulas. Above all, resist the temptation of the "we are as gods" illusion: the belief that humans are no different from machines and that humans can create human-like machines.
This stubborn and pervasive arrogance has been a catalyst for the tech bubbles and the periodic frenzy of AI over the past 80 years.
This inevitably brings to mind artificial general intelligence (AGI): the idea that machines will soon possess human-like or even superhuman intelligence.
In 1957, AI pioneer Herbert Simon declared: "There are now machines that can think, learn, and create." He also predicted that within ten years a computer would become the world chess champion. In 1970, another AI pioneer, Marvin Minsky, confidently stated: "Within three to eight years, we will have a machine with the general intelligence of an average human being... Once the computers get control, we might never get it back. We would survive at their sufferance. If we're lucky, they might decide to keep us as pets."
The anticipation surrounding the imminent arrival of general AI was extraordinary, and it even shaped government spending and policy. In 1981, Japan allocated $850 million to its Fifth Generation Computer Systems project, which aimed to develop machines that think like humans. In response, the U.S. Defense Advanced Research Projects Agency, after a long "AI winter," planned in 1983 to fund AI research again, with the goal of developing machines that could "see, hear, speak, and think like humans."
It took governments around the world about a decade and billions of dollars not only to gain a sober understanding of artificial general intelligence (AGI) but also to recognize the limitations of traditional AI. By 2012, connectionism had finally triumphed over the other AI schools of thought, and a new wave of predictions about the imminent arrival of AGI swept the globe. In 2023, OpenAI claimed that superintelligent AI, "the most impactful technology humanity has ever invented," could arrive within this decade and "could lead to the disempowerment of humanity or even human extinction."
Lesson Two: Be wary of shiny new things. Examine them carefully, cautiously, and wisely; they may not be significantly different from earlier speculation about when machines will possess human-like intelligence.
One of the "godfathers" of deep learning, Yann LeCun, once stated: "To enable machines to learn as efficiently as humans and animals, we are still missing some key elements, we just don't know what they are yet."
For many years, artificial general intelligence (AGI) has been said to be "just around the corner," thanks to the "first-step fallacy." Machine translation pioneer Yehoshua Bar-Hillel, one of the first to discuss the limits of machine intelligence, observed that many people assume that if a computer is shown to do something that until recently was thought impossible, even if it does it badly, then only further technical development is needed for it to do the task perfectly; just wait patiently and it will eventually happen. But Bar-Hillel warned as early as the mid-1950s that this is not the case, and reality has repeatedly proven him right.
Lesson Three: The distance from being unable to do something to doing it poorly is usually much shorter than the distance from doing it poorly to doing it very well.
In the 1950s and 1960s, many people fell for the first-step fallacy because of the ever-increasing processing speed of the semiconductors powering computers. As hardware advanced each year along the reliable upward trajectory of Moore's Law, it was widely assumed that machine intelligence would develop in lockstep with the hardware.
However, beyond ever-improving hardware, AI development entered a new stage with two new ingredients: software and data collection. Starting in the mid-1960s, expert systems (computer programs that encode the knowledge of a specific domain) shifted the focus to capturing and programming knowledge about the real world, especially the knowledge and heuristics of domain experts. Expert systems grew increasingly popular, and by the 1980s an estimated two-thirds of the Fortune 500 companies were using the technology in their daily business activities.
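To give a sense of what such systems looked like, here is a minimal, hypothetical sketch of a rule-based expert system in Python: domain knowledge is captured as if-then rules elicited from an expert, and a forward-chaining loop applies them to known facts until nothing new can be inferred. The rules and facts below are invented for illustration and are far simpler than the production systems of the era.

```python
# Minimal forward-chaining rule engine in the spirit of 1980s expert systems.
# Rules and facts are hypothetical examples, not taken from any real system.
RULES = [
    ({"engine_cranks", "no_spark"},   "ignition_fault"),
    ({"ignition_fault"},              "check_spark_plugs"),
    ({"engine_silent", "lights_dim"}, "battery_flat"),
    ({"battery_flat"},                "recharge_or_replace_battery"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion to the fact base, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    # Derives 'ignition_fault' and then 'check_spark_plugs' from the symptoms.
    print(infer({"engine_cranks", "no_spark"}))
```

The sketch also hints at the maintenance problem discussed below: every new piece of expertise means hand-writing and reconciling more rules.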
However, by the early 1990s, the boom had collapsed completely. Numerous AI startups went out of business, and major companies froze or canceled their AI projects. As early as 1983, expert-system pioneer Ed Feigenbaum had identified the "key bottleneck" that would lead to their demise: scaling up the knowledge acquisition process, "a very cumbersome, time-consuming, and expensive process."
Expert systems also struggled with knowledge maintenance: the constant need to add and update rules made them difficult and costly to keep running. They also exposed how far thinking machines fell short of human intelligence. They were "brittle," making absurd mistakes when confronted with unusual inputs, unable to transfer their expertise to new domains, and lacking any understanding of the world around them. At the most fundamental level, they could not learn from examples, experience, or the environment the way humans do.
Lesson Four: Initial success, such as widespread adoption by enterprises and government agencies and substantial public and private investment, does not necessarily create a lasting "new industry," even after ten or fifteen years. Bubbles often burst.
Through the booms, busts, hype, and setbacks, two distinctly different approaches to AI development competed for the attention of academia, public and private investors, and the media. For more than forty years, rule-based, symbolic AI dominated. But example-based, statistically driven connectionism, the other major approach, briefly flourished and attracted great attention in the late 1950s and again in the late 1980s.
Before the revival of connectionism in 2012, AI research and development was driven primarily by academia. Academia is characterized by the prevalence of dogma (so-called "normal science"), and the choice between symbolic AI and connectionism was treated as either-or. In 2019, Geoffrey Hinton devoted much of his Turing Award lecture to the hardships he and a handful of deep learning enthusiasts had endured at the hands of mainstream AI and machine learning scholars. Hinton also pointedly belittled reinforcement learning and the work of his colleagues at DeepMind.
Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left), largely in response to the success of OpenAI, whose approach also incorporated reinforcement learning as a component of its AI development. Reinforcement learning's two pioneers, Andrew Barto and Richard Sutton, received the Turing Award in 2025.
However, there is currently no sign that DeepMind, OpenAI, or the many "unicorn" startups dedicated to artificial general intelligence (AGI) are focusing on anything beyond the prevailing paradigm of large language models. Since 2012, the center of AI development has shifted from academia to the private sector, yet the entire field remains fixated on a single research direction.
Lesson Five: Don't put all your AI "eggs" in one "basket."
There is no doubt that Jensen Huang is an outstanding CEO and NVIDIA is an exceptional company. More than a decade ago, when the AI opportunity suddenly emerged, NVIDIA seized it quickly: its chips, originally designed for the efficient rendering of video games, offer parallel processing capabilities well suited to deep learning computation. Huang remains vigilant, telling employees, "Our company is only 30 days away from bankruptcy."
Beyond staying vigilant (remember Intel?), the lessons of AI's 80-year history may also help NVIDIA navigate the ups and downs of the next 30 days or 30 years.
Related Reading: An Overview of 10 AI Companies and Models Defining the Current AI Revolution