
What is Machine Unlearning, and Why Does It Matter?

By Zian (Andy) Wang · Published Aug 4, 2023 · Updated Jun 27, 2024

The concept of "machine unlearning" is the converse of machine learning: it serves to make a model unlearn, or forget. Unlearning algorithms, applied to previously trained models, force them to expunge specific portions of the training dataset. Machine unlearning began as a response to the "Right to be Forgotten" legislation, a provision of the European Union's General Data Protection Regulation (GDPR).

This regulation primarily focuses on the internet, explicitly dealing with links to or replicas of personal data. Removing website links or references to an individual is relatively straightforward. However, achieving the same within machine learning models is often regarded as an almost impossible task.

Machine learning has infiltrated virtually all areas of modern software development and the internet. Particularly in recent years, models like Midjourney and GPT-4 have amplified the discussions around AI's privacy and security concerns. There have been cases where artists' and writers' works were used in model training without consent—essentially stealing their work. Large Language Models (LLMs) may harbor substantial bias or leak sensitive information, leading to potentially severe consequences.

Efforts have been made to suppress certain types of information in production models, but they tend to falter: despite OpenAI's consistent model enhancements, new ChatGPT jailbreaks are constantly being discovered.

Machine unlearning algorithms hold promise not just in complying with regulations but also in rectifying factually incorrect information within a model. They may even serve as a remedy for LLM hallucinations!

Challenges and Limitations

Despite its promise, machine unlearning remains a nascent and unrefined subfield of machine learning. The apparent solution of removing sensitive data points and retraining the model from scratch seems straightforward but is impractical: retraining a deep learning model is typically expensive and time-consuming, with the training of GPT-4 alone reportedly costing over $100 million. Moreover, removing specific data points could adversely impact the model's performance.

Despite potential methods to excise specific "knowledge" fragments from machine learning models, numerous obstacles persist:

  1. There is no universally accepted metric for determining the effectiveness of machine unlearning. For larger, more complex deep learning models, verifying that a data point has been completely forgotten is almost impossible.

  2. Training in most models is interdependent: each update builds on the ones before it, so all subsequent updates and predictions depend, to some extent, on the data points slated for removal.

  3. Larger deep learning models are effectively black boxes to humans. Quantifying the exact influence of a data point on the model, and how its removal changes the model's overall behavior, is challenging.

Despite these hurdles, many machine unlearning algorithms have been developed for a variety of model types.

Methods and Techniques 

Existing algorithms typically adapt traditional machine-learning methodologies in reverse. They are generally classified into two main categories: exact and approximate unlearning methods.

Exact unlearning methods concentrate on precisely eliminating the impact of individual data points on the model. Techniques in this vein include Reverse Nearest Neighbors (RNN), which finds the nearest neighbors of the data points slated for removal and adjusts the model accordingly, and K-Nearest Neighbors (KNN), which removes data points based on their proximity to their nearest neighbors. Both provide a fine-grained strategy for excluding individual data points; a minimal sketch of the instance-based case follows.
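To make the idea of exact unlearning tangible, consider an instance-based learner such as a KNN classifier, which stores its training points directly. There, exact unlearning amounts to deleting the stored points and refitting: the result is identical to a model that never saw the deleted data. Below is a minimal sketch using scikit-learn; the toy dataset and the forget_idx indices are hypothetical stand-ins, not part of any published method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Toy dataset standing in for training data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Original model trained on the full dataset.
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Indices of the points a user has asked us to "forget" (hypothetical).
forget_idx = np.array([3, 17, 42])

# For an instance-based learner, exact unlearning is just refitting on
# the dataset with those points removed: the resulting model is
# indistinguishable from one that never saw the deleted points.
keep_mask = np.ones(len(X), dtype=bool)
keep_mask[forget_idx] = False
unlearned = KNeighborsClassifier(n_neighbors=5).fit(X[keep_mask], y[keep_mask])

# Predictions near the deleted points may change; elsewhere they usually don't.
print(model.predict(X[forget_idx]))
print(unlearned.predict(X[forget_idx]))
```

The catch, of course, is that this equivalence only holds for models whose training data is stored or cheaply refittable; for large neural networks, refitting is exactly the expensive retraining that unlearning research tries to avoid.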

In contrast, approximate unlearning methods trade exact removal for efficiency. For example, the Local Outlier Factor (LOF) identifies and purges outliers in the dataset to improve the model's performance. Similarly, Isolation Forest (IF) builds an ensemble of random trees, assigns anomaly scores based on how easily points are isolated, and discards those with elevated scores. These methods offer a more efficient alternative to exact unlearning, approximating which data should be expunged or retained rather than precisely reversing each point's influence, as sketched below.
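As a rough illustration of the Isolation Forest flavor of this idea, the sketch below scores training points for anomalousness, drops the most anomalous ones, and refits a downstream model on the retained data. The 5% threshold and the logistic-regression stand-in are arbitrary choices for the example, not part of any specific published method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Score every training point; lower score_samples = more anomalous.
iso = IsolationForest(random_state=0).fit(X)
scores = iso.score_samples(X)

# Drop the most anomalous 5% of points (threshold chosen arbitrarily here).
threshold = np.quantile(scores, 0.05)
keep_mask = scores > threshold

# Refit the downstream model on the retained data only.
model = LogisticRegression(max_iter=1000).fit(X[keep_mask], y[keep_mask])
print(f"Dropped {np.sum(~keep_mask)} of {len(X)} points")
```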

Exploring the concept of "knowledge unlearning" in large language models (LLMs), a fascinating approach was introduced in the paper "Knowledge Unlearning for Mitigating Privacy Risks in Language Models." The authors train LLMs to forget target token sequences by maximizing, rather than minimizing, the negative log-likelihood objective on those sequences, i.e., by performing gradient ascent on the standard language-modeling loss. They demonstrated the effectiveness of this straightforward unlearning technique on smaller GPT-family models.
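In the spirit of that paper, a minimal sketch of gradient-ascent unlearning with a small GPT-2 model might look like the following. The forget_text string, learning rate, and step count are placeholders; a real run would monitor memorization metrics on the forgotten sequences to decide when to stop, rather than using a fixed number of steps.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Small GPT-2 stands in for the larger LMs studied in the paper.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Hypothetical sequence the model should forget.
forget_text = "Jane Doe's phone number is 555-0123."
inputs = tokenizer(forget_text, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for step in range(10):
    outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood of the sequence.
    # Negating it turns gradient descent into gradient ascent, pushing
    # the model to assign *lower* probability to the forgotten tokens.
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: NLL = {outputs.loss.item():.3f}")
```

The appeal of this approach is that it touches only the sequences to be forgotten, avoiding a full retrain; the risk, which the paper also examines, is collateral damage to the model's general capabilities if the ascent runs too long.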

Outlook

With the recent buzz surrounding LLMs and numerous large tech companies venturing into creating their own, ensuring personal information privacy in the digital world is of paramount importance. It might be challenging, and perhaps unnecessary, to filter out every piece of personal information from LLM training data before training. However, it appears feasible to "unlearn" this information afterward without sacrificing performance.

The outlook might not be as straightforward as merely reversing these LLMs' objective functions. Recent research on inverse scaling in LLMs revealed that bigger models are not always better. Larger models are more susceptible to generating "memorized" responses reinforced during training, even when those responses are incorrect or illogical. The same research found that larger models are also more prone to "distractor tasks," where a brief mention of another topic in the prompt diverts the model's response to follow the logic of the irrelevant part.

The good news, though, is that on the other frontier of generative machine learning, diffusion models are already attempting to respect artists' work: with Stable Diffusion 3.0, artists can remove their artwork from the training data.

Further study and development of machine unlearning could lead to the creation of more refined models that respect individual privacy rights while maintaining their utility. It may also give rise to novel applications and innovations within the machine learning field, opening up unexplored areas of research and development. However, the challenges cannot be ignored, and addressing them will be a critical part of developing effective and efficient machine unlearning algorithms.

With all this in mind, machine unlearning stands at a crossroads of great potential and significant challenges. It's an exciting space to watch as the industry evolves and adapts to new demands and expectations. As we continue to embrace the digital age, machine unlearning is likely to play an increasingly vital role in maintaining the balance between artificial intelligence's capabilities and the preservation of privacy and security.

