Most developers today are fine-tuning pre-trained models to develop AI applications. Fine-tuning involves making small adjustments to a pre-trained model to better align it with specific tasks or domains.
This process leverages the foundational knowledge (weights) already embedded in the model to serve specialized applications without requiring extensive retraining from scratch.
In this article, you will learn:
How to assess and select datasets for fine-tuning models.
Popular datasets for fine-tuning and their performance results.
How to Assess and Select Datasets for Fine-Tuning Models
Selecting the right dataset is crucial when fine-tuning models: its quality and relevance directly affect the fine-tuned model's performance. Here are the key steps to help you assess and select datasets for fine-tuning:
Step 1: Domain Relevance
The first step is to ensure the dataset aligns with the target domain by considering the following factors:
Task Alignment: The dataset should contain examples of the tasks you want your fine-tuned model to perform. For example, if you're building a sentiment analysis model for audio recordings, the dataset should consist of labeled examples of audio recordings with associated sentiment labels.
Data Distribution: Assess whether the dataset covers a wide range of scenarios and edge cases within your domain. A diverse dataset helps the model learn robust features and generalize well to unseen examples.
If you're building a speech-to-text model for various audio environments, the dataset should consist of labeled examples of audio recordings from diverse scenarios. Such scenarios include different accents, background noises, and speaking styles to ensure the model learns robust features and generalizes well to unseen examples.
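As a quick, practical check, you can load a candidate dataset and inspect how well its metadata covers your target scenarios. The sketch below assumes a Common Voice release hosted on the Hugging Face Hub with age, gender, and accent columns; the dataset identifier and column names may differ between versions.

```python
# Rough coverage check for domain relevance: how diverse are the recordings?
# Assumes a Common Voice release on the Hugging Face Hub; the dataset ID,
# language code, and column names may differ between versions.
from collections import Counter
from datasets import load_dataset

cv = load_dataset("mozilla-foundation/common_voice_11_0", "en",
                  split="validation", trust_remote_code=True)

for column in ("accent", "gender", "age"):
    if column in cv.column_names:
        counts = Counter(v for v in cv[column] if v)  # skip empty entries
        print(column, counts.most_common(10))

# Sparse or heavily skewed counts suggest the dataset may not generalize
# to the accents, speakers, or recording conditions you care about.
```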
Step 2: Data Quality
Consider the following aspects of a dataset's quality:
Labeling Accuracy: If the dataset is labeled, ensure the labels are accurate and consistent. Noisy or incorrect labels can mislead the model during fine-tuning and degrade performance.
Data Cleanliness: Check the dataset for corruption, such as audio files with missing segments, noise, or incorrect labels. Clean and preprocess the data as needed to ensure its integrity.
For example, if you find audio recordings with background noise or long silent segments, you can apply noise reduction and trim the silence to improve speech quality (see the sketch after this list).
Sample Size: While larger datasets generally lead to better performance, the optimal size depends on the task's complexity and the model architecture. Aim for a dataset that provides sufficient coverage of the target domain and can be computationally managed.
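As a concrete illustration of the cleanup described above, here is a minimal sketch that trims leading and trailing silence and flags clips that are mostly silent, using librosa. The decibel threshold, speech-ratio cutoff, and file paths are placeholders you would tune for your own data.

```python
# Minimal audio-cleaning sketch: trim silence and flag clips that are
# mostly silent. Thresholds and paths are illustrative placeholders.
import librosa
import soundfile as sf

def clean_clip(path, out_path, top_db=30, min_speech_ratio=0.2):
    y, sr = librosa.load(path, sr=16_000)                 # resample to 16 kHz
    trimmed, _ = librosa.effects.trim(y, top_db=top_db)   # cut leading/trailing silence

    # Estimate how much of the clip contains non-silent audio.
    intervals = librosa.effects.split(y, top_db=top_db)
    speech_samples = sum(end - start for start, end in intervals)
    speech_ratio = speech_samples / max(len(y), 1)

    if speech_ratio < min_speech_ratio:
        return None  # mostly silence; review or drop this clip
    sf.write(out_path, trimmed, sr)
    return out_path

clean_clip("clip_0001.wav", "clip_0001_clean.wav")
```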
Step 3: Licensing and Accessibility
Licensing: Verify that the dataset has a clear and permissive license for use in your specific context. Some datasets may have restrictions or require attribution, so complying with the licensing terms is important.
For example, the People's Speech Dataset is available under the Creative Commons Attribution-ShareAlike (CC BY-SA) license. The license allows for academic and commercial use but requires users to credit the source and share any derivative works under the same license.
Accessibility: Ensure that the dataset is easily accessible and can be integrated into your fine-tuning pipeline. Consider factors such as data format, storage requirements, and any access restrictions or authentication mechanisms.
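On the accessibility side, gated datasets and very large corpora are common hurdles. The sketch below is one way to handle both, assuming a Hugging Face access token and a gated Common Voice release; the dataset identifier and parameters are illustrative and depend on your datasets version.

```python
# Accessibility sketch: authenticate and stream a gated dataset instead of
# downloading it in full. Dataset ID and token handling are illustrative;
# check the dataset card for its actual access requirements.
from huggingface_hub import login
from datasets import load_dataset

login()  # prompts for a Hugging Face access token (or set HF_TOKEN in the env)

stream = load_dataset("mozilla-foundation/common_voice_11_0", "en",
                      split="train", streaming=True, trust_remote_code=True)
first = next(iter(stream))
print(first["sentence"], first["audio"]["sampling_rate"])
```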
Step 4: Benchmarking and Comparison
To assess a dataset's suitability for fine-tuning, benchmark it and compare its performance against other relevant datasets:
Baseline Performance: Evaluate the performance of the pre-trained model on the dataset without fine-tuning. This provides a baseline to measure the improvement achieved through fine-tuning.
We can determine how well a pre-trained audio model like Wav2Vec2 performs on a new dataset by measuring the word error rate (WER) on its test set before fine-tuning. This baseline gives us a reference point for quantifying how much accuracy the model gains after fine-tuning on that dataset.
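Such a baseline takes only a few lines to compute. The sketch below evaluates a pre-trained Wav2Vec2 checkpoint on a small slice of a test set and reports WER with the evaluate library; the checkpoint, dataset, and slice size are illustrative choices.

```python
# Baseline WER of a pre-trained Wav2Vec2 checkpoint, before any fine-tuning.
# Checkpoint and dataset are illustrative; swap in your own test split.
import torch
import evaluate
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").eval()
wer = evaluate.load("wer")

ds = load_dataset("librispeech_asr", "clean", split="test[:50]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in ds:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["text"])

print("Baseline WER:", wer.compute(predictions=predictions, references=references))
```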
Comparison with Similar Datasets: Compare the dataset's characteristics and performance with other datasets in the same domain. Look for datasets that have been successfully used for similar tasks and have yielded good results.
For example, if you are working with the People’s Speech dataset for speech recognition, you might compare it with other datasets like LibriSpeech or TED-LIUM, which have been successfully used for similar tasks and have yielded good results.
Datasets for Fine-Tuning Models and Their Performance Results
The following sections explore various datasets developers and AI engineers use to fine-tune models. Here are the datasets we’ll review:
Common Voice Dataset.
ImageNet.
Amazon Reviews.
Anthropic HH Golden.
OBELICS.
Common Voice Dataset
Mozilla launched the Common Voice dataset in 2017 as a crowdsourced project to create a free and diverse database for speech recognition. The project invites volunteers to record and validate voice samples, contributing to a multilingual corpus released under the CC0 public domain license.
Common Voice dataset stats
Here are some key stats about the Common Voice dataset:
Contains over 17,000 validated hours of transcribed speech data across 104 languages as of version 12.0.
Includes demographic metadata like age, gender, and accent for many recordings.
Models you can fine-tune with Common Voice
The Common Voice dataset is frequently used to fine-tune speech models such as:
Automatic Speech Recognition (ASR) models.
Speech-to-text (STT) models.
Voice activity detection models.
Speaker diarization models.
Fine-Tuning OpenAI Whisper Model with Common Voice Dataset
The Common Voice dataset can be used to fine-tune OpenAI's Whisper, an ASR model pre-trained on 680,000 hours of multilingual, multitask supervised data.
Fine-tuning Whisper with this dataset can improve its performance in recognizing and transcribing speech across various languages and accents. For instance, fine-tuning Whisper on specific subsets of Common Voice, such as the Hindi or Dhivehi language datasets, has demonstrated improvements in WER.
A fine-tuned version of Whisper small by Sanchit Gandhi reaches a WER of 32.0% after 4,000 training steps. For reference, the pre-trained Whisper small model achieves a WER of 63.5% on the same test set, so fine-tuning yields an absolute improvement of 31.5 percentage points.
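For reference, here is a condensed sketch of that kind of fine-tuning setup with Hugging Face Transformers, loosely following the publicly documented Whisper recipe. The dataset identifier, language, and hyperparameters are illustrative, and a real run needs a GPU plus an evaluation loop, which are omitted here.

```python
# Condensed Whisper fine-tuning sketch (dataset, language, and hyperparameters
# are illustrative). Common Voice is gated: accept its terms on the Hub first.
import torch
from dataclasses import dataclass
from datasets import load_dataset, Audio
from transformers import (WhisperProcessor, WhisperForConditionalGeneration,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

cv = load_dataset("mozilla-foundation/common_voice_11_0", "hi",
                  split="train+validation", trust_remote_code=True)
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz

processor = WhisperProcessor.from_pretrained("openai/whisper-small",
                                             language="Hindi", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

cv = cv.map(prepare, remove_columns=cv.column_names)

@dataclass
class Collator:
    processor: WhisperProcessor
    def __call__(self, features):
        # Pad audio features and token labels separately.
        inputs = self.processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt")
        labels = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
        inputs["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100)  # ignore padding in the loss
        return inputs

args = Seq2SeqTrainingArguments(output_dir="whisper-small-hi",
                                per_device_train_batch_size=16,
                                learning_rate=1e-5, max_steps=4000, fp16=True)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=cv,
                         data_collator=Collator(processor),
                         tokenizer=processor.feature_extractor)
trainer.train()
```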
ImageNet
The ImageNet dataset is a large-scale visual database designed for visual object recognition research. It was curated by Prof. Fei-Fei Li and fellow researchers from Princeton, Stanford, and UNC-Chapel Hill.
ImageNet dataset stats
Total Images: Over 14 million high-resolution images.
Training Images: 1,281,167 images.
Validation Images: 50,000 images.
Test Images: 100,000 images.
Object Classes: 1,000 classes in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) subset.
Models you can fine-tune with ImageNet
Image Classification Models: ResNet, AlexNet, VGG, EfficientNet, Vision Transformers (ViT)
Object Detection Models: YOLO (You Only Look Once), Faster R-CNN, SSD (Single Shot MultiBox Detector)
Self-Supervised Learning Models: DINOv2, SimCLR
The extensive and diverse collection of images has made the dataset a foundational resource for training deep learning models. This has enabled significant breakthroughs in image classification, object detection, and other computer vision tasks.
Fine-Tuning ResNet with ImageNet
Fine-tuning models on the ImageNet dataset has been a standard practice to enhance their performance on specific tasks.
For instance, the ResNet family of CNNs is routinely fine-tuned on ImageNet to achieve state-of-the-art results in image classification; ResNet50 reaches a top-1 accuracy of 75.8% and a top-5 accuracy of 93.5%.
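In practice, the more common workflow runs in the other direction: start from ImageNet-pre-trained weights and fine-tune them on your own labeled images. Below is a minimal PyTorch sketch under that assumption, with a hypothetical folder-per-class dataset and untuned hyperparameters.

```python
# Fine-tuning an ImageNet-pre-trained ResNet50 on a custom dataset laid out as
# one folder per class. Paths, class count, and hyperparameters are examples.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights)                 # ImageNet-pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 5)     # replace the 1,000-class head

train_ds = datasets.ImageFolder("data/train", transform=weights.transforms())
loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```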
Amazon Reviews
The Amazon Reviews dataset is a large and comprehensive collection of customer reviews spanning various product categories on Amazon, including books, electronics, clothing, and more.
Amazon Reviews stats
Total Reviews: Over 571.54 million reviews as of the 2023 version.
Timespan: Reviews range from May 1996 to September 2023.
Number of Users: 54.51 million unique users.
Number of Products: 48.19 million unique products.
Models you can fine-tune with Amazon Reviews
Sentiment Analysis Models: BERT, RoBERTa, DistilBERT, LSTM with Word2Vec or GloVe embeddings
Recommendation Systems: Collaborative Filtering models, Matrix Factorization models, Neural Collaborative Filtering (NCF)
Text Classification Models: Support Vector Machines (SVM), Logistic Regression, Naive Bayes
Aspect-Based Sentiment Analysis Models: Attention-based LSTM, Transformer-based models
Topic Modeling: Latent Dirichlet Allocation (LDA), Non-negative Matrix Factorization (NMF)
Each review includes valuable information like the product ID, user ID, rating (1–5 stars), review text, and timestamp. This dataset is widely used in natural language processing (NLP) tasks (sentiment analysis, text classification) and recommendation systems.
Fine-Tuning Models with Amazon Reviews
One study fine-tuned GPT-3 on a subset of 1 million Amazon reviews and evaluated its performance on sentiment classification. The fine-tuned model achieved an impressive accuracy of 94.2%, outperforming traditional machine learning models like logistic regression and random forests.
Fine-tuned on 10 million Amazon reviews for review summarization, GPT-3 achieves a ROUGE-L score of 0.42, surpassing baseline summarization models.
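GPT-3 is fine-tuned through OpenAI's hosted API, so as an open alternative here is a hedged sketch that fine-tunes DistilBERT (one of the models listed above) for sentiment classification using amazon_polarity, a pre-labeled derivative of Amazon reviews on the Hugging Face Hub. The data slice and hyperparameters are illustrative.

```python
# Sentiment fine-tuning sketch on Amazon-style reviews with DistilBERT.
# amazon_polarity packages Amazon reviews as binary positive/negative labels;
# adapt the column names if you work from the raw 1-5 star review dumps.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

ds = load_dataset("amazon_polarity", split="train[:20000]")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["content"], truncation=True, max_length=256)

ds = ds.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="distilbert-amazon-sentiment",
                         per_device_train_batch_size=16,
                         num_train_epochs=1, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=ds, tokenizer=tokenizer).train()
```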
Anthropic HH Golden
The Anthropic HH Golden dataset is an improved version of Anthropic's original Helpful and Harmless (HH) dataset. It provides high-quality demonstration data to improve the alignment of large language models (LLMs) with human preferences, for example via reinforcement learning from human feedback (RLHF).
Anthropic HH Golden stats
Source: Extends the harmlessness subset of Anthropic's Helpful and Harmless (HH) dataset.
Positive Responses: Rewritten by GPT-4 to improve quality and harmlessness.
Negative Responses: Left unchanged from the original HH dataset.
Data Format: Pairs of texts with "chosen" (positive) and "rejected" (negative) responses.
Models you can fine-tune with Anthropic HH Golden
The dataset was designed to test the Unified Language Model Alignment (ULMA) technique, which improves performance by treating positive and negative samples differently and removing the KL regularizer for positive samples. More broadly, it can be used with any LLM fine-tuned through preference-based methods that consume chosen/rejected pairs.
Fine-Tuning Language Model Alignment with Anthropic HH Golden
The Anthropic HH Golden dataset can be used to fine-tune models to enhance their alignment with human preferences, particularly regarding helpfulness and harmlessness.
According to the GPT-4-based evaluation in the ULMA paper, the method achieved lower perplexity and higher win rates on helpfulness and harmlessness. For example, ULMA fine-tuned on the Golden HH dataset reached a perplexity of 16.93, compared with 18.23 when fine-tuned on the original HH dataset, indicating better model quality and closer alignment with human preferences.
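To make the chosen/rejected format concrete, here is a short sketch that loads a preference dataset of this kind and inspects one pair. The Hub identifier and field names are assumptions based on the dataset description above, so verify them against the dataset card before use.

```python
# Hedged sketch: inspecting a chosen/rejected preference dataset and shaping it
# for preference-based fine-tuning. The Hub path below is an assumption; check
# the dataset card for the exact identifier and split names.
from datasets import load_dataset

hh = load_dataset("Unified-Language-Model-Alignment/Anthropic_HH_Golden",
                  split="train")
example = hh[0]
print(example["chosen"][:200])    # GPT-4-rewritten, preferred response
print(example["rejected"][:200])  # original, dispreferred response

# Most preference-optimization trainers expect records with "prompt",
# "chosen", and "rejected" fields, so a light mapping step is usually all
# that is needed before training.
```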
OBELICS
The OBELICS dataset is an open, massive, and curated collection of interleaved image-text web documents.
OBELICS stats
Total Web Pages: 141 million web pages extracted from Common Crawl.
Associated Images: 353 million images.
Text Tokens: 115 billion text tokens.
Dataset Type: Interleaved image-text documents.
Models you can fine-tune with OBELICS
Multimodal Models: Vision and language models such as IDEFICS-9B and IDEFICS-80B.
Image-Text Retrieval Models: Models that retrieve relevant images based on text queries and vice versa.
Image Captioning Models: Models that generate descriptive captions for images.
Visual Question Answering (VQA) Models: Models that answer questions about the content of images.
Fine-Tuning Multimodal Models with OBELICS
The OBELICS dataset can be used to fine-tune multimodal models such as IDEFICS (Image-aware Decoder Enhanced à la Flamingo with Interleaved Cross-attentionS), an open vision-and-language model released in 9-billion and 80-billion parameter variants.
The researchers fine-tuned IDEFICS-9B and IDEFICS-80B on OBELICS and evaluated their performance on various multimodal benchmarks. The fine-tuned models achieved competitive results, outperforming or matching other large-scale multimodal models trained on image-text pairs.
For instance, on the NLVR2 benchmark for visual reasoning, IDEFICS fine-tuned on OBELICS achieved an accuracy of 80.9%, surpassing models like CLIP (77.9%) and ALIGN (79.3%). On the VQAv2 benchmark for visual question answering, IDEFICS scored 73.2%, comparable to the state-of-the-art models at the time of publication.
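Because OBELICS is web-scale, streaming is the practical way to explore or sample it before committing to a fine-tuning run. The sketch below streams a few documents from the Hub; the dataset identifier and field names are assumptions to check against the dataset card.

```python
# Peek at OBELICS without downloading it: stream a few interleaved documents.
# The dataset ID and field names are assumptions; confirm them on the Hub.
from itertools import islice
from datasets import load_dataset

obelics = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

for doc in islice(obelics, 3):
    # Each document interleaves text segments and image references; entries
    # are None where the other modality appears at that position.
    texts = [t for t in doc["texts"] if t]
    images = [i for i in doc["images"] if i]
    print(f"{len(texts)} text segments, {len(images)} images")
```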
Conclusion
This article has explored why fine-tuning matters and how to assess and select datasets for different use cases. We highlighted several datasets for fine-tuning, including Common Voice, ImageNet, Amazon Reviews, Anthropic HH Golden, and OBELICS, along with benchmark results from models fine-tuned on them.
Here is a quick summary of the key results:
Common Voice: Whisper small fine-tuned on the Hindi subset improved WER from 63.5% to 32.0%.
ImageNet: ResNet50 reaches 75.8% top-1 and 93.5% top-5 accuracy.
Amazon Reviews: fine-tuned GPT-3 reached 94.2% sentiment classification accuracy and a 0.42 ROUGE-L for summarization.
Anthropic HH Golden: ULMA fine-tuning lowered perplexity to 16.93, versus 18.23 on the original HH dataset.
OBELICS: IDEFICS scored 80.9% on NLVR2 and 73.2% on VQAv2.
Frequently Asked Questions
Why is fine-tuning AI models important?
Fine-tuning AI models is crucial because it lets you customize pre-trained models for specific needs without building a new model from scratch. This process improves the model's accuracy and efficiency on particular tasks or datasets, making advanced AI technology more accessible and applicable across industries.
What are the challenges associated with choosing the right dataset for fine-tuning AI models?
Choosing the right dataset for fine-tuning AI models involves several challenges, including ensuring data quality and relevance, managing resource constraints, and addressing potential biases. High-quality data must be free from noise and inconsistencies and representative of the real-world scenarios the model will encounter.
How can I improve the performance of my fine-tuned model?
Increasing the dataset size, improving the data quality, adjusting the hyperparameters, and using data augmentation techniques can all help a fine-tuned model perform better. Regular evaluation and iterative model refinement based on performance metrics are crucial for achieving optimal results.