Since its inception, artificial intelligence (AI) has cycled through several booms and busts that we would do well to avoid repeating. These AI “springs” and “winters” are not unlike market psychology. During rallies (AI “springs”), the “bears” warn the exuberant “bulls” that a recession (an AI “winter”) is imminent, while the bulls, riding the gravy train, scoff at their doubters. We want to prevent these cycles of extremes because if the pendulum swings away from the current phase of rosy promises and frenzied funding, AI will reenter an environment in which confidence in, and funding for, anything AI-related withers, setting the field back yet again.

Melanie Mitchell, a professor at the Santa Fe Institute, navigated such a pendulum swing after completing graduate school, when she was advised to leave “artificial intelligence” off her resume to improve her career prospects. Those of us still cutting our teeth in the AI field can learn much from AI boom-and-bust-cycle veterans like Mitchell.

In “Why AI is Harder Than We Think,” Mitchell argues that “our limited understanding of the nature and complexity of intelligence itself” is one reason (of several) that AI research has repeated its peaks and valleys. Without a solid grasp of intelligence—the very phenomenon we seek to engineer—we’re apt to underestimate how tough it actually is to engineer, find some meager successes, overestimate our results, promise the world, underdeliver, and then start the cycle anew. Mitchell’s diagnosis holds up well throughout AI’s history, but especially so at the very start.

Underestimate Intelligence at Your Own Peril

Perhaps out of hubris, perhaps out of the academic equivalent of “hold my beer, I’ve got an idea” enthusiasm, perhaps a mix of both, the proposal for AI’s first conference was ambitious, to say the very least. The famed 1956 Dartmouth Summer Project, which coined the phrase “artificial intelligence” and largely birthed the AI field, set forth the modest goal of figuring out “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The proposal’s authors assured the Rockefeller Foundation, the project’s funding body, that “a carefully selected group of scientists” could make “a significant advance in one or more of these problems” within “a summer.”

To say AI was born from an underappreciation of the complexity of human intelligence might be an understatement.

Given that AI was a brand spanking new field, shouldn’t we forgive the early AI pioneers’ initial exuberance? Sure. But AI remained rife with confident pronouncements that artificial general intelligence (AGI) was just around the next bend. For example, in a 1970 Life Magazine interview—after he’d had more than ample time to gauge how tough AI was turning out to be—Marvin Minsky predicted that a machine with human-level intelligence could be cranked out within a mere “three to eight years.” Mitchell places some of the blame for AI research’s seemingly default overconfidence on our tendency to mistake successes in narrow AI for steps toward AGI, when this is often not the case. Again, this stems from our limited understanding of intelligence itself. Is our intelligence mostly symbol manipulation or pattern recognition? Is it modular, a dynamic system, something else? We simply don’t yet understand the whole of human intelligence, and yet we’re trying to craft it in machines.

Entrenched Ideas

If insufficiently understanding intelligence itself weren’t enough to throw us off course, AI research has also suffered from idea entrenchment. In AI’s various epochs, dominant schools of thought tended to overshadow exploration beyond what was in fashion. Early AI, also called symbolic AI, primarily utilized logic-based systems of symbols. Often referred to as a “top-down” approach, symbolic AI glossed over how the brain actually instantiates symbols, opting instead to focus on harnessing and manipulating them. Without deep learning’s pattern recognition prowess, imagine using pure logic to create an autonomous driving system. You would need to hardcode everything: what a squirrel, child, or ball is; how each moves; and how a vehicle should respond if that squirrel, child, or ball (or some combination thereof) nears the road. Even designing a computer vision application that reliably recognizes a static ball using only symbols and logic is non-trivial. Now repeat this for all the moving objects and situations involved in driving and you’ll get an idea of why symbolic AI fell from its throne. As with autonomous driving, anticipating and hardcoding every relevant situation is intractable in most domains. But dominant ideas die hard.
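
To make that intractability concrete, here is a hypothetical sketch (in Python) of what the hand-coded, symbolic approach might look like. The object list, distance thresholds, and rules below are invented purely for illustration; they aren’t drawn from any real driving system.

```python
# Hypothetical hand-coded rules in the spirit of symbolic AI.
# Every object, behavior, and response must be enumerated by a human;
# nothing here is learned from data.

KNOWN_OBJECTS = {
    "ball":     "rolls; a child often follows it",
    "child":    "unpredictable; may dart into the road",
    "squirrel": "erratic; may reverse direction",
    # ...and every other object a car could plausibly encounter.
}

def respond(object_type: str, distance_m: float, approaching_road: bool) -> str:
    """One hand-written branch per anticipated situation."""
    if object_type not in KNOWN_OBJECTS:
        return "no rule exists; behavior undefined"  # the core problem
    if object_type == "ball" and approaching_road:
        return "brake: a child may chase the ball"
    if object_type == "child" and distance_m < 30:
        return "brake hard"
    if object_type == "squirrel" and distance_m < 10:
        return "slow down"
    return "continue"

print(respond("ball", distance_m=12.0, approaching_road=True))
```

Every new object or situation demands another rule, and combinations of objects multiply the branches. That combinatorial explosion is exactly what learning-based pattern recognition sidesteps.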

While symbolic AI was still prominent, AI luminaries Marvin Minsky and Seymour Papert proved in their 1969 book Perceptrons that Frank Rosenblatt’s “perceptron” (a precursor to today’s deep neural networks) had serious limitations because a single-layer perceptron cannot compute an “exclusive OR” (XOR) operation, a takedown that some scientists considered a competitive “hatchet job” aimed at securing funding for Minsky and Papert’s preferred symbolic AI approaches. Later work (along with hardware advancements and increased data availability) revealed that adding hidden layers and training them with backpropagation grants artificial neural networks gobs of utility, but Minsky and Papert’s skepticism likely discouraged some researchers from experimenting further with perceptrons, possibly delaying neural network research.
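
The XOR limitation, and the later fix, is easy to see in code. Below is a minimal, self-contained NumPy sketch (mine, not from Mitchell’s paper or the original debate) that trains a classic single-layer perceptron and a tiny one-hidden-layer network with backpropagation on XOR. The hidden-layer size, learning rate, iteration counts, and random seed are arbitrary illustrative choices, and exact results can vary with initialization.

```python
import numpy as np

# XOR is not linearly separable, so no single-layer perceptron can fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

rng = np.random.default_rng(0)

# --- Single-layer perceptron (Rosenblatt-style update rule, no hidden layer) ---
w, b = rng.normal(size=2), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (yi - pred) * xi
        b += yi - pred
perceptron_preds = (X @ w + b > 0).astype(float)
print("perceptron accuracy on XOR:", np.mean(perceptron_preds == y))  # never reaches 1.0

# --- One hidden layer, trained with backpropagation ---
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # 4 hidden units (arbitrary choice)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0
for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)                          # forward pass
    out = sigmoid(h @ W2 + b2).ravel()
    grad_logit = ((out - y) / len(X)).reshape(-1, 1)  # cross-entropy gradient at the output
    grad_W2, grad_b2 = h.T @ grad_logit, grad_logit.sum(axis=0)
    grad_h = (grad_logit @ W2.T) * (1.0 - h ** 2)     # backpropagate through tanh
    grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(axis=0)
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
mlp_preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(float)
print("hidden-layer network accuracy:", np.mean(mlp_preds == y))  # typically 1.0
```

With no hidden layer, a linear decision boundary can classify at most three of XOR’s four points correctly; adding even a small hidden layer lets backpropagation learn the nonlinear boundary the problem requires.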

Now fast forward to the 2010s. It wasn’t long after deep learning had gotten a few notches on its belt that some of its luminaries became similarly dismissive of non-deep-learning approaches. Concluding their 2015 Nature article, deep learning heavyweights Yann LeCun, Yoshua Bengio, and Geoffrey Hinton stated that new deep learning paradigms ought to “replace rule-based manipulation of symbolic expressions by operations on large vectors.” Strong convictions and swollen egos can spur entrenchment in any field, of course, but given that AI is still so nascent, its cyclical nature remains especially vulnerable to the influence of these human factors.

The Funding Teeter Totter

And then we have to contend with that ever-present existential threat to academic research and commercial pursuits alike: finding funding. When AI kicked off as a field, computers had recently proved their mettle in WWII, cracking codes and calculating ballistics. Unsurprisingly, as the Cold War crept into the 1950s, the US and UK governments were willing to invest in AI. Because government money was AI’s primary funding source, early AI research predictably revolved around military applications. And because American and British defense departments wanted better capabilities for translating Soviet documents, much early AI effort was directed at machine translation (MT).

But government funding abruptly dried up in the wake of scathing independent reviews: the US government-commissioned ALPAC report (1966) and the UK Science Research Council’s Lighthill report (1973). Each criticized its government’s funding-to-results ratio. On the British side, James Lighthill took aim at “common sense,” conjecturing that it would remain elusive to machines. On the American side, ALPAC crunched some numbers and concluded that human translators were, at the time, more cost effective, more accurate, and more plentiful than their machine counterparts. Consequently, winter set in. For approximately twenty years afterwards, researchers largely avoided MT, which shows that not only can heavyweight researchers unduly sway research directions; the government, through its deep pockets, can as well.

By the early 1980s, not wanting to be left behind by Japan’s ambitious 5th Generation Project, the US Defense Advanced Research Projects Agency (DARPA) decided to generously fund AI research again. Another hope-filled AI spring sprang up, but by the waning days of the ’80s—again, lacking the results researchers had promised—Uncle Sam turned the funding faucet off; the teeter totter thumped back to the ground, ushering in another winter that lasted from the early 1990s until the 2010s, when deep learning catalyzed a new AI spring. Thankfully, AI is no longer reliant upon government funding; Silicon Valley, venture capital, and industry have entered the fray, investing heaps into AI. Solely from a funding standpoint, another winter seems unlikely.

How Can We Do Better?

A dose of Stoicism’s dichotomy of control—ignoring what’s beyond our control while focusing on what we can control—might help us here. While we can’t do much about large systemic issues like government funding, evolving geopolitical landscapes, AI heavyweights’ egos, or entrenched ideas, as individual AI students, researchers, hobbyists, professors, and business leaders, we can refuse to be purveyors of AI hype. Confidence is fine—few will invest in a founder or fund an academic who doesn’t firmly believe in their own ideas—but sobriety toward new, untested ideas is even better if we seek steady progress. But how can we do this in a field prone to hype?

To remain even-keeled, we can test our AI models extensively to understand their limitations, and then communicate those limitations plainly. We can also release prototypes of our products for the public to test (and remain skeptical of “breakthrough” AI models that lack a prototype the public can play around with). We also ought to think much harder about the very thing we’re trying to conjure from silicon: intelligence itself. To do this, we might need to delve further into linguistics, developmental psychology, cognitive neuroscience, philosophy of mind, or other areas. Finally, we should remain open to proposals for new paradigms and closed to idea entrenchment. When AI was still an embryonic field, its springs and winters were likely unavoidable, but AI is maturing now, and we ought to do better. Drawing on lessons learned from past AI cycles, let’s focus on what’s within our locus of control to avoid future winters.

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.
