Five Must-Follow AI Researchers: Household names in ML
Dozens of papers on advances in AI are uploaded every day, from new machine learning algorithms for short-term stock price forecasting to deep reinforcement learning frameworks for automated cryptocurrency trading. It can be difficult to keep track of current advances in the field, and even harder to build a foundational understanding of the AI world at large.
We’ve come to help.
Here are just five of the many scientists and researchers who have broken barriers and pioneered foundational work in the field.
🇨🇦 Geoffrey Hinton
Geoffrey Hinton, often called the "godfather of deep learning," is a renowned Canadian computer scientist and researcher. He is best known for his significant contributions to the development of artificial neural networks, specifically deep neural networks. One of his most influential papers, “Learning Representations by Back-propagating Errors” (published in Nature in 1986), popularized the backpropagation algorithm, which computes the gradient of the loss function with respect to the network’s weights by working backwards from the last layer. This paper laid the foundation for modern deep learning techniques, including the use of multi-layered neural networks for tasks such as image and speech recognition. In 2018, he shared the Turing Award with Yoshua Bengio and Yann LeCun for making deep neural networks an important part of computing.
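To make the idea concrete, here is a minimal sketch of backpropagation on a toy two-layer network in plain NumPy. The layer sizes, data, and learning rate are invented for illustration, not taken from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 features, scalar regression target.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Two-layer network: 3 -> 5 -> 1.
W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 1))

for step in range(100):
    # Forward pass.
    h = np.tanh(X @ W1)        # hidden activations
    y_hat = h @ W2             # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule, starting from the last layer.
    d_yhat = 2 * (y_hat - y) / len(X)    # dLoss/dy_hat
    dW2 = h.T @ d_yhat                   # gradient for the output layer
    d_h = d_yhat @ W2.T                  # error propagated back to hidden layer
    dW1 = X.T @ (d_h * (1 - h ** 2))     # tanh'(z) = 1 - tanh(z)^2

    # Gradient descent update.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(f"final loss: {loss:.4f}")
```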
During his research on backpropagation, Hinton also co-invented and popularized Boltzmann machines with Terrence J. Sejnowski. Boltzmann machines are stochastic, unsupervised neural networks in which every node is connected to every other node; learning adjusts the connection weights so that the network’s activity comes to reflect the statistics of its training data. More recently, Hinton proposed the Forward-Forward algorithm, inspired by how neural activity in the brain works, which replaces the forward and backward passes of backpropagation with two forward passes: one on real (“positive”) data and one on corrupted (“negative”) data. The algorithm aims to address shortcomings of backpropagation, which requires perfect knowledge of the forward computation in order to compute derivatives and is difficult to reconcile with how learning appears to happen in the brain. In 2012, Hinton and two of his graduate students, Alex Krizhevsky and Ilya Sutskever, won the ImageNet challenge, where programs compete to classify objects within the large visual database, with AlexNet, a convolutional neural network that achieved a top-5 error rate of 15.3%.
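To give a flavor of the Forward-Forward idea described above, here is a single-layer sketch in PyTorch. “Goodness” here is the sum of squared activations; the threshold, data, and training loop are assumptions for illustration, and autograd is used only locally within the one layer, so this is a rough sketch of the idea rather than Hinton’s exact recipe:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

layer = torch.nn.Linear(10, 20)
opt = torch.optim.SGD(layer.parameters(), lr=0.05)
theta = 2.0  # goodness threshold (assumed value)

# Toy "positive" (real) and "negative" (corrupted) inputs.
x_pos = torch.randn(32, 10)
x_neg = torch.randn(32, 10)

for step in range(200):
    # Two forward passes through the same layer -- no backward pass
    # through any other layer is needed.
    g_pos = F.relu(layer(x_pos)).pow(2).sum(dim=1)  # goodness of positive data
    g_neg = F.relu(layer(x_neg)).pow(2).sum(dim=1)  # goodness of negative data

    # Local objective: push goodness above theta for positive data
    # and below theta for negative data.
    loss = F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()

    opt.zero_grad()
    loss.backward()  # gradients stay local to this single layer
    opt.step()
```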
Hinton is currently a professor emeritus at the University of Toronto. He worked at Google as a distinguished researcher on the Google Brain team until early this year, when he departed so he could speak freely in favor of more caution and regulation around AI. Hinton is also the great-great-grandson of George Boole, whose Boolean logic remains foundational to mathematics and computing today.
🌲Fei-Fei Li
Fei-Fei Li is widely recognized for her pioneering work in computer vision, a subfield of AI focused on enabling machines to understand and interpret visual information from the world. She played a pivotal role in the development of large-scale datasets like ImageNet, a database of millions of labeled images used to train and evaluate computer vision algorithms. She and a team of international collaborators also organized the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) from 2010 to 2017, which spurred significant advancements in image classification and object detection and set new benchmarks for AI systems.
Li’s research also contributed to a newer line of work known as natural scene understanding. In traditional computer vision, algorithms typically only classify objects; Li and a Stanford team, however, built software in conjunction with Google that can train itself to identify entire scenes, such as people playing volleyball on a beach.
Li is currently a Professor of Computer Science at Stanford University and co-director of the Stanford Institute for Human-Centered AI (HAI). She is also a member of the National Artificial Intelligence Research Resource (NAIRR) Task Force, which actively advocates for centralized government provision of AI resources. Under a proposed bill, NAIRR would help researchers across the country advance their AI efforts and improve safety in AI research.
In addition to her research, Li co-founded AI4ALL, an organization dedicated to increasing diversity and inclusion in AI by providing education and mentorship to underrepresented groups, particularly women and minorities. She has also worked on practical applications of AI, in areas ranging from healthcare to autonomous vehicles.
💻 Latanya Sweeney
Latanya Sweeney is primarily known for her groundbreaking research and advocacy in data privacy and for establishing the field known as public interest technology. Her work has shed light on the vulnerabilities of personal data and the need for stronger privacy safeguards in the age of big data and AI.
One of her notable contributions is the development of re-identification techniques, which demonstrated how seemingly anonymous datasets could be used to identify individuals with high accuracy. In "Simple Demographics Often Identify People Uniquely," published in 2000, Sweeney analyzed 1990 U.S. Census summary data and found that 87% of the U.S. population reported characteristics that likely made them unique based only on their ZIP code, gender, and date of birth. This work underscored the importance of preserving individual privacy in data sharing and the potential consequences of inadequate data protection measures.
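As a toy illustration of this style of analysis, the sketch below measures what fraction of records in a table are unique on the quasi-identifiers (ZIP code, gender, date of birth). The records here are fabricated; Sweeney’s actual study used census summary data:

```python
import pandas as pd

# Fabricated records for illustration only.
df = pd.DataFrame({
    "zip":    ["02138", "02138", "02139", "02139", "02139"],
    "gender": ["F", "M", "F", "F", "M"],
    "dob":    ["1960-07-15", "1960-07-15", "1985-01-02",
               "1985-01-02", "1971-11-30"],
})

# Count how many records share each (zip, gender, dob) combination.
group_sizes = df.groupby(["zip", "gender", "dob"])["zip"].transform("size")

# Records whose combination appears exactly once are re-identifiable
# if the same three fields appear in any other, non-anonymous dataset.
frac_unique = (group_sizes == 1).mean()
print(f"{frac_unique:.0%} of records are unique on these three fields")
```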
Sweeney also re-identified the hospital records of former Massachusetts governor Bill Weld by linking supposedly anonymized datasets released by the Massachusetts Group Insurance Commission (GIC) with public voter rolls. Her work on re-identification was eventually cited in two U.S. regulations, including HIPAA.
Sweeney is currently Professor of the Practice of Government and Technology at the Harvard Kennedy School and director of the Public Interest Technology Lab. She previously served as Chief Technologist at the FTC. More recently, she has worked on strengthening election safeguards and collaborated on a project to publish Facebook’s internal documents for further research.
🔊Yoshua Bengio
Yoshua Bengio is also known for helping pioneer the deep learning revolution alongside Geoffrey Hinton, with whom he shared the 2018 Turing Award. He has authored over 500 publications, and his work has garnered over 220,000 citations, making him one of the most influential voices in AI research.
One of his most notable contributions is the development of word embeddings, dense vector representations of words that enable machines to capture the meaning and context of words within vast bodies of text. This innovation revolutionized natural language processing and made machine translation, sentiment analysis, and other language-related tasks remarkably more accurate. Bengio’s research in unsupervised learning also extended to generative adversarial networks (GANs), introduced in a 2014 paper he co-authored, which pushed the boundaries of AI’s creative potential. A GAN consists of two networks that train by competing against each other: a generator that produces content and a discriminator that evaluates its quality. Bengio also supported the development of graph attention networks, which improve models’ understanding of highly networked data by focusing on the most relevant connections.
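The core intuition behind word embeddings is that related words point in similar directions in vector space, which we can measure with cosine similarity. The 4-dimensional vectors below are hand-made for illustration; real embeddings are learned from large corpora and typically have hundreds of dimensions:

```python
import numpy as np

# Toy embeddings, invented for illustration.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1, 0.0]),
    "queen": np.array([0.7, 0.7, 0.1, 0.1]),
    "apple": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    """Similarity of direction, ignoring vector length."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```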
Bengio co-founded the Montreal Institute for Learning Algorithms (Mila), a renowned research institute focused on AI, and currently serves as its scientific director. The institute has housed other notable researchers, including Aaron Courville, who co-wrote the seminal textbook Deep Learning with Bengio and Ian Goodfellow. In recent years, Bengio has spoken out about the potential existential risks AI poses.
🇩🇪 Jürgen Schmidhuber
Jürgen Schmidhuber is a German computer scientist primarily known for his contributions to the development of deep learning and artificial neural networks. He is credited with several key innovations in the field, including Long Short-Term Memory (LSTM) networks, a type of recurrent neural network (RNN) introduced in a 1997 paper co-authored with Sepp Hochreiter.
LSTMs are designed to overcome the vanishing gradient problem in traditional RNNs, where gradients shrink as they are propagated back through many timesteps, eventually becoming too small to update the network’s weights. LSTMs have since become a fundamental building block for sequence modeling and have had a profound impact on AI’s ability to handle sequential data. They are used in natural language processing, speech recognition, and other applications that classify and process time-series data, such as handwriting recognition. Everyday applications like Google Translate and Siri have relied on LSTMs.
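Here is a minimal example of running a batch of sequences through an LSTM in PyTorch. The sizes (4 sequences, 8 timesteps, 16 features, 32 hidden units) are arbitrary illustration values:

```python
import torch

lstm = torch.nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

x = torch.randn(4, 8, 16)      # batch of 4 sequences, 8 steps, 16 features
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([4, 8, 32]) -- hidden state at every timestep
print(h_n.shape)     # torch.Size([1, 4, 32]) -- final hidden state
print(c_n.shape)     # torch.Size([1, 4, 32]) -- final cell state, the gated
                     #   memory that preserves information across long spans
```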
Schmidhuber is the co-founder and scientific director of the Swiss AI Lab IDSIA in Lugano, Switzerland, renowned for its research in machine learning and AI. Schmidhuber and the IDSIA team dramatically sped up convolutional neural networks on GPUs, achieving the first superhuman performance in a computer vision contest and contributing greatly to progress in the field of computer vision.
👀 Who Else?
This list is only a brief overview of prominent researchers in artificial intelligence, based on the papers and names I have encountered in my own studies. There are many more fields to cover beyond natural language processing, computer vision, and data privacy, and this is far from an exhaustive list of the researchers within those fields. For example, Vladimir Vapnik co-invented the support vector machine, which served as a foundation for two decades of AI work. As AI merges into nearly every industry, these leading thinkers can provide insight into where the field is headed.