Getting accurate transcriptions from a high volume of recorded phone calls and meetings can be a costly and downright frustrating experience for companies. That’s why we rebuilt the speech stack from the ground up, ditching traditional, brittle methods for end-to-end deep learning.
Using deep learning through the entire speech-to-text lifecycle allows us to build custom-trained speech models unique to each customer we serve. A custom speech model created through training, rather than keyword boosting or tuning, provides companies with a higher level of accuracy on the data they care about — their unique vocabularies, customer accents, product names, and acoustic environments. With more accurate transcriptions, enterprises can deliver customer experiences that not only demonstrate compliance but differentiate them from the competition. The value created by our end-to-end deep learning approach is why our customers have switched from off-the-shelf and hybrid models offered by legacy and Big Tech competitors. It’s also why we recently raised a $12M Series A funding round.
“Deepgram’s custom-trained speech models are an integral part of our workflows and have been key in analyzing over one million recruiting voice calls,” said Dennis Evanson, Compliance & QA of Randall Reilly. “The level of accuracy we have seen using Deepgram’s custom models is unprecedented and a significant boost compared to any other ASR platform we have encountered. Since deploying Deepgram, our team has unlocked unutilized voice datasets and has seen accuracies greater than 90% on virtually everything that we do.”
We have our sights set on being the de facto speech company, and to do that we need to broaden access to our technology and make our platform as easy as possible for engineers, developers, data scientists, and other users to adopt. That’s why we’re allowing users to sign up for a free account on Deepgram MissionControl. With MissionControl, users can train custom models with their own unique datasets.
Deepgram MissionControl guides you through a three-step process to fully train and deploy custom speech models.
- DataFactory: Prepare your voice data
Getting higher accuracy on the data you care about starts with the data you use to train your model. It’s the age-old saying, “Garbage in, garbage out.” That’s why the first thing you’ll want to do is upload voice data that is representative of what you expect in your production system or application. With DataFactory, you’ll be guided through uploading and labeling your audio data to create training-ready datasets. Whether you already have labels or need to create some, DataFactory lets you pair existing labels with your audio or label it yourself on the spot. Sign up for a free account and take advantage of 10 minutes of free professional data labeling to kickstart your training process.
- ModelForge: Train a custom model
Once your datasets are prepared, you’re ready to train. Click over to ModelForge to custom-train your own model. You’ll be able to give your model a name, select a base model, and add the datasets you’d like your model to learn from. It’s that simple.
- SpeechEngine: Deploy your model for transcription at scale
Now you get to do what you came here to do: Deploy a world-class transcription model at scale! Conduct a model comparison to test two models head-to-head on an audio file of your choice. Use the API to apply your custom-trained speech recognition model to your own product, call center, or conversational AI workflows.
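As a rough illustration of that last step, the sketch below builds a transcription request that selects a custom-trained model by name. The endpoint path, the `model` query parameter, the `Token` authorization scheme, and the model ID are all assumptions for illustration, not the documented contract — check Deepgram’s API reference for the exact details.

```python
# Hypothetical sketch of applying a custom-trained model via a
# transcription API. Endpoint, parameter names, and auth header
# format are assumptions; consult the official API docs.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"       # placeholder credential
MODEL_ID = "my-custom-model"   # hypothetical ID of your trained model


def build_request(audio_url: str) -> urllib.request.Request:
    """Build (but do not send) a transcription request for one recording."""
    params = urllib.parse.urlencode({"model": MODEL_ID})
    return urllib.request.Request(
        f"https://api.deepgram.com/v1/listen?{params}",
        data=json.dumps({"url": audio_url}).encode("utf-8"),
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("https://example.com/call-recording.wav")
print(req.full_url)
# In production you would send it, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       transcript = json.load(resp)
```

The request is constructed separately from sending it so you can inspect or log exactly what your workflow submits before wiring it into a call center or conversational AI pipeline.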
Speech recognition has been a persistent problem for enterprises, but it doesn’t have to be. With MissionControl, we’re thrilled to offer custom-trained speech models so that any team can continuously unlock value from their voice data. With the power of training at your fingertips, it’s never been simpler to improve transcription accuracy on your audio data and see results quickly and consistently at scale.
Your free MissionControl account includes the following:
- 20 hours of automatic speech recognition per month: Get automatic transcripts from your own custom-trained models or take advantage of our pre-trained models.
- 10 minutes of professional data labeling: Access the best data labeling designed for speech recognition. With your free credits, it’s easy to create training-ready datasets with the data of your choice.
- 2 training-ready datasets: Don’t have audio data on hand? No sweat, we’ve prepared a couple of training-ready datasets so that anyone can train a model.
- 2 custom models: Automatically train your own custom models to excel on the audio data of your choosing.
- 1 cloud deployment: Run your very own custom end-to-end deep learning model in the Deepgram cloud and see the results!