From ENIAC to NVIDIA: The Epic Saga of the AI Hardware Revolution
For the past couple of years, AI has been a leading topic of conversation in both the tech world and mainstream media. Advances in AI technology and the release of AI tools that have captured the public's interest have resulted in a growing awareness of what AI is capable of and the good it can do. While AI-powered tools and products are now common knowledge, the more complex technologies and hardware used behind the scenes are not as well known.
AI hardware includes the devices and components that power the processing capabilities of AI systems, allowing them to process and analyze large amounts of data. This includes GPUs, TPUs, and NNPs, among others. Without these core components, it would be impossible to build the various AI models and systems that we have today. Hardware architectures such as the von Neumann, dataflow, and Harvard architectures are also important in creating viable AI models.
The AI hardware we use today has come a long way from its earliest versions. The first programmable, general-purpose computer, ENIAC (Electronic Numerical Integrator And Computer), was a remarkable invention completed in 1945; it ran on over 17,000 vacuum tubes, resulting in a computer that filled an entire room. While we have far more efficient and powerful hardware now, it is important to understand the history of AI hardware and how we got to where we are today.
Early developments in AI hardware
Before the introduction of modern computer systems, there were simpler machines and models that laid the foundation for more complex ones. Described by Alan Turing in 1936, the Turing machine was a simple mathematical model used to reason about the computation of real numbers. Turing machines can, in principle, implement any computer algorithm and do everything a real computer can do, but they are abstract models rather than practical devices and would be far too slow for real-world applications. Despite those limitations, the Turing machine is still regarded as a foundational model of computability and of computer science theory.
In 1942, Dr. John W. Mauchly and J. Presper Eckert Jr. started designing what would become the world's first electronic general-purpose computer at the Moore School of Electrical Engineering at the University of Pennsylvania. Sponsored by the US Army's Ballistic Research Laboratory, the ENIAC was originally designed to calculate artillery firing tables and consisted of an IBM card reader, a card punch, 1,500 associated relays, and about 18,000 vacuum tubes. Although ENIAC's design itself did not directly shape many subsequent machines, its development influenced the direction of computing for the next decade.
One of ENIAC's greatest contributions to computer hardware was a document describing a new and improved successor to ENIAC. First Draft of a Report on the EDVAC, written by John von Neumann, described a design architecture for an electronic computer consisting of a processing unit, a control unit, external mass storage, memory, and input and output mechanisms. This design is now known as the von Neumann architecture, and it gave subsequent computers the ability to store a set of instructions in memory alongside the data they operate on. Today, the von Neumann architecture still underpins sophisticated computer systems, which rely on it to provide a machine-independent way to manipulate executable code.
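To make the stored-program idea concrete, here is a deliberately tiny Python sketch of a machine in which instructions and data share the same memory and a control loop fetches and executes instructions one at a time. The instruction names and memory layout are invented purely for illustration and have nothing to do with the EDVAC's actual order code.

```python
# Toy stored-program machine: instructions and data live in one memory,
# and a loop fetches, decodes, and executes instructions in sequence.
memory = [
    ("LOAD", 7),    # copy the value at address 7 into the accumulator
    ("ADD", 8),     # add the value at address 8
    ("STORE", 9),   # write the accumulator back to address 9
    ("HALT", None), # stop the machine
    None, None, None,
    5,              # address 7: first operand
    37,             # address 8: second operand
    0,              # address 9: result goes here
]

accumulator = 0
program_counter = 0

while True:
    op, addr = memory[program_counter]   # fetch
    program_counter += 1
    if op == "LOAD":                     # decode and execute
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "HALT":
        break

print(memory[9])  # prints 42
```

Because the program is just data in memory, changing the machine's behavior means rewriting a few memory cells rather than rewiring hardware, which is exactly the flexibility the stored-program design introduced.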
Computer hardware evolution
Throughout the 1940s and 1950s, vacuum tubes were the primary switching components of the computers being developed. Building on Lee De Forest's invention of the triode vacuum tube in 1906, engineers used tubes in all kinds of electronic devices, including radios, X-ray machines, and televisions. But the tubes were expensive and large, and a computer needed thousands of them to run. They also consumed a great deal of electricity, sparking a rumor that whenever ENIAC was switched on, the lights in Philadelphia dimmed. The all-around inefficiency of vacuum tubes meant that a replacement was urgently needed.
On November 16, 1953, Tom Kilburn, Richard Grimsdale, and Douglas Webb demonstrated a prototype transistorized computer to an audience at the University of Manchester. The transistor had been invented by scientists at Bell Telephone Laboratories while researching the behavior of crystals as semiconductors. Those scientists, John Bardeen, William Shockley, and Walter Brattain, went on to receive the Nobel Prize in Physics in 1956, as transistors revolutionized the computing industry. By the 1960s, vacuum tubes had been almost completely replaced by transistors, which were significantly smaller and more durable than their predecessors.
The invention of the transistor paved the way for even smaller alternatives to vacuum tubes. In 1958, Jack Kilby, an electrical engineer, came up with the idea for the integrated circuit, a single device containing multiple interconnected components. The integrated circuit, also known as the microchip, is much faster, cheaper to produce, and more efficient than either the vacuum tube or the discrete transistor. As technology has advanced, integrated circuits have become far smaller and faster than when they were first developed. Today, a small chip can contain billions of transistors, a trend captured by Moore's law, the observation that the number of transistors on an integrated circuit doubles roughly every two years.
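As rough arithmetic, that doubling rule compounds very quickly. The short sketch below projects transistor counts forward from the roughly 2,300 transistors of the Intel 4004 (1971), which is used here only as a familiar reference point; real chips follow the curve only approximately.

```python
# Moore's law as simple arithmetic: if the transistor count doubles roughly
# every two years, the count after t years is start_count * 2 ** (t / 2).
start_year, start_count = 1971, 2_300   # Intel 4004 as a reference point

for year in range(1971, 2031, 10):
    projected = start_count * 2 ** ((year - start_year) / 2)
    print(f"{year}: ~{projected:,.0f} transistors")
```

Running this shows the projection climbing from a few thousand transistors in the 1970s to tens of billions by the 2020s, which is the right order of magnitude for today's largest chips.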
Neural networks and processors
By the 1960s, most large-scale computer system architectures had already been developed, and mainframe computers were being used as servers. Mainframes are essentially data servers that can process large amounts of data and billions of simple calculations in real time. These machines handled large-scale data processing and laid important groundwork for the data-hungry AI systems we know today.
In the 1950s, computer scientists began to contemplate the possibility of simulating the neural networks of the brain. Towards the end of the decade, Bernard Widrow and Marcian Hoff, two researchers at Stanford, developed a model that could detect binary patterns and predict the next bit. They went on to develop MADALINE, a model that used adaptive filters to eliminate echoes on phone lines and the first neural network applied to a real-world problem. After years of research, with ups and downs along the way (there was a period when the success of von Neumann-style computing drew attention and funding away from neural network research), the first multilayered network was eventually developed in 1975. By the 2010s, interest in neural networks had surged again, and in 2012 the deep neural network AlexNet was created.
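For a sense of how simple those early learning rules were, here is a minimal sketch in the spirit of Widrow and Hoff's ADALINE: a single linear unit trained with the least-mean-squares (delta) rule. The toy task and learning rate below are illustrative choices, not a reconstruction of their original experiments.

```python
import numpy as np

# A minimal ADALINE-style linear neuron trained with the LMS (delta) rule.
# Toy task: output +1 if the first bit of a 4-bit pattern is 1, else -1.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)   # random binary patterns
y = np.where(X[:, 0] == 1, 1.0, -1.0)                  # linearly separable target

w, b, lr = np.zeros(4), 0.0, 0.05

for _ in range(20):                      # a few passes over the data
    for x_i, y_i in zip(X, y):
        output = np.dot(w, x_i) + b      # linear activation
        error = y_i - output
        w += lr * error * x_i            # LMS / delta-rule weight update
        b += lr * error

preds = np.where(X @ w + b >= 0, 1.0, -1.0)
print("training accuracy:", np.mean(preds == y))
```

The entire "network" here is one weight vector and a bias, yet the same error-driven update idea, scaled up through many layers and far more parameters, is what modern deep learning builds on.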
Graphics processing units (GPUs) trace their roots to the graphics chips used in 1970s arcade system boards, long before they were applied to general-purpose computing. By the 1980s, they were being used in graphics cards and terminals, and in 1986 the TMS34010, the first programmable graphics processor chip, was released. In 2009, three Stanford University researchers, Rajat Raina, Anand Madhavan, and Andrew Ng, published the paper Large-scale Deep Unsupervised Learning using Graphics Processors, detailing the use of GPUs in machine learning. After the paper was published, GPUs were increasingly adopted for training neural networks on large datasets, including the models that would become today's large language models. GPUs now handle large-scale machine learning workloads and have contributed significantly to the progress of AI development. High-end GPUs such as the NVIDIA Quadro RTX 8000 and the NVIDIA Titan RTX help power large language models like ChatGPT and are, in many ways, the foundation of modern artificial intelligence.
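To show what GPU-based training looks like in practice today, here is a minimal sketch using PyTorch (an assumed choice of framework for illustration; the article does not prescribe one). The pattern the 2009 paper helped popularize is straightforward: place the model and data on the GPU, and the same training loop runs there, only much faster.

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small fully connected network, moved onto the chosen device.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real training data.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()            # gradients are computed on the GPU if present
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```

The code itself is identical whether it runs on a CPU or a GPU; the speedup comes entirely from the hardware executing the same matrix operations massively in parallel.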
In May 2016, Google announced that it had created its own processing unit, the Tensor Processing Unit (TPU), built specifically for machine learning. According to Google, its TPUs delivered better-optimized performance per watt and required fewer transistors per operation. According to Google engineer Norm Jouppi, TPUs could process 100 million photos on Google Photos every day.
Other AI hardware
Application-specific integrated circuits (ASICs) are computer chips designed for a specific purpose or application, such as AI and machine learning processing. In the 1970s, when ASICs were first being developed, the computing industry was flooded with general-purpose circuits that were inefficient and slow for specialized tasks. Although ASICs were in use earlier, the introduction of complementary metal-oxide-semiconductor (CMOS) technology created a pathway for their commercialization. Today, ASICs are used for AI applications because they can be tailored to specific tasks that demand performance and speed that general-purpose circuits cannot provide.
In the late 1980s, Steve Casselman proposed an experiment to build a computer with more than 600,000 reprogrammable gates. This work contributed to the development of the field-programmable gate array (FPGA), a logic device that can be programmed with a circuit design and then "emulates" that circuit. An FPGA can be reprogrammed in a few hundred milliseconds, but the emulated circuit runs more slowly than it would if it were implemented directly in an ASIC. This trade-off makes FPGAs attractive as a shortcut, since designing and fabricating an ASIC takes a long time.
Conclusion
AI hardware has come a long way from thousands of vacuum tubes occupying an entire room. Today, AI hardware is faster, more powerful, and far more efficient than it was 50 years ago, giving us plenty of confidence in the continued development and evolution of AI in the coming years.