Introducing Nova: The world’s most powerful speech-to-text model →

The most powerful speech-to-text API.

Unmatched accuracy. Blazing fast. Enterprise scale. Hands-down the best price. Everything developers need to build with confidence and ship faster.

Start building AI-powered voice experiences.

NASA: First All Female Space Walk

[Speaker 0:] And, Jessica, Christina, we are so proud of you. I’m gonna do great today. We’ll be waiting for you here in a couple hours when you get home. I’m gonna hand you over to Stephanie now.

[Speaker 1:] Have a great great EVA. Drew, thank you so much. It’s been our pleasure working with you this morning, and working on getting my EV hat open. and I can report. It’s opened and stowed. Thank you, Drew. Thank you so much.

[Speaker 2:] Tika. On your GCMs, Take your power switches to bat, stagger switch throws, and expect a warning tone.

[Speaker 3:] Final steps before they begin the space launch. Copy. Check display switch functional. Tracy, how important is this this regarding it? There is Sounds like seems like a lot to remember on your own. Absolutely.

[Speaker 2:] Take power eighty one eighty two, two switches to off, o f f. And Christina and just could have enough work with their hands and feet and their brain outside that it really helps to have someone like Stephanie. New powerboat off. DCMs. This connect your SCUs from your DCMs and stow the SCUs in the pouch. So not only does Stephanie

[Speaker 3:] Thirty eight AM central time. A little ahead of schedule about twelve minutes, but That gets us started on today’s historic spacewalk. Andrew Morgan there. He’s been wishing the crew luck. He’s being made in pouch and DCM cover clothes.

[Speaker 2:] Copy. You need to.

Podcast: Deep Learning’s Effect on Science

[Speaker 0:] Yeah. I mean Welcome to the AI Show. I’m Scott Stephenson, cofounder of Deepgram. With me is Jeff Ward, a k a Susan. He’s a navy pilot, acclaimed dad joke We’ve never had you. Give a dad joke. We need to do that. Acclaimed dad joke writer. Yeah. Well, okay.

[Speaker 1:] Knock knock

[Speaker 0:] Who’s there?

[Speaker 1:] Spell

[Speaker 0:] Spell who?

[Speaker 1:] W H O

[Speaker 0:] Oh, good one. Tensor. That’s a real good one. He’s also an AI scientist at Deepgram on the AI show, we talk about all things AI. What is it? What can you do with it? How does it affect you? Where is it going? We’re live and ready to answer your questions. Comment on YouTube and Twitch or Tweet at Deepgram AI to join in. Today, we’re asking the question. Our big question How is machine learning or deep learning affecting science?

[Speaker 1:] Actually, I’m asking the question of you.

[Speaker 0:] Good. I’m ready to answer. What’s the question?

[Speaker 1:] For those that do not know, Scott here has a little bit of a science background

[Speaker 0:] a little bit

[Speaker 1:] and a little bit of machine learning and science background

[Speaker 0:] That’s true. Yeah.

[Speaker 1:] So so, Scott, can you at least give us a just give us the the the ten thousand foot overview of of a little bit of what you’ve done?

[Speaker 0:] Ten ten thousand foot overview is I’d have a PhD in particle physics, and I was

[Speaker 1:] Yes, sir. Yes.

[Speaker 0:] So doctor Scott But I was searching for dark matter, deep underground, in a government controlled region of China, basically a James Bond lair.

[Speaker 1:] I like it.

[Speaker 0:] Yep. We had to design the experiment and build the experiment, operate the experiment, take data, analyze the data, write a paper, you know. So this is what you do in experimental particle physics. And we did that searching for dark matter. Mhmm. And we did it with lots of computers, servers, CPUs, things like that, lots of copper, plastic, liquid Xenon cryogenic stuff, and the CPUs were used to do data analysis, and we were using, like, boosted decision trees and neural networks and other standard, like, statistics based cuts in order to figure out Was it a dark matter particle or not? So tons of signal signal noise search space. Yeah. Yeah

Call Center: Upgrade Service

[Speaker 0:] Thank you for calling premier phone service. This call may be recorded for quality and training purposes. My name is Beth, and I’ll be assisting you. How are you today?

[Speaker 1:] I’m pretty good. Thanks. How are you?

[Speaker 0:] I’m doing well. Thank you. May I have your name?

[Speaker 1:] Yeah. Sure. My name’s Tom Idol.

[Speaker 0:] Can you spell that last name for me?

[Speaker 1:] Yeah. Yeah. i d l e.

[Speaker 0:] Okay. l e at the end. I was picturing it idle, like American Idol, i b o l.

[Speaker 1:] Yeah. That that happens a lot. It’s not really a common name.

[Speaker 0:] Okay, mister Idol. How can I help you today?

[Speaker 1:] Yeah. I need some information on upgrading my service plan.

[Speaker 0:] Sure. I can absolutely help you with that today. Can you tell me what plan you have currently?

[Speaker 1:] I think it’s a silver plan. Let me get my classes so I can read this. Yeah. Yeah. It’s the silver plan.

[Speaker 0:] Okay. Alright. Silver plan. And how many people do you have on your plan right now?

[Speaker 1:] Three. I’ve got my brother, Billy, my mom cat, and I guess I count too. So yeah. That’s three.

[Speaker 0:] Great. And how can I help you with your plan today, sir? Oh, you can call me, Tom. There’s no date for this, sir.

[Speaker 0:] I’m sorry, Tom. It’s just an old habit. How can I help you with your plan?

[Speaker 1:] Well, on my plan right now, I can only have three people on it, and I’m wanting to add more. So I’m wondering if I can switch my plan up or upgrade it somehow.

[Speaker 0:] And how many more people are you wanting to add to your plan?

[Speaker 1:] Well, here’s the thing. I need to add three more people so far. I wanted to add my friend Margaret, my daughter, Anna, and my son Todd.

[Speaker 0:] Alright? We do have a few options that support six users. One is our gold, the other is our platinum plan.

[Speaker 1:] Okay. So how much are those gonna cost me?

[Speaker 0:] Well, the gold plan is


Do more with voice

Deepgram is a comprehensive AI transcription foundation plus the understanding features you need to make your data readable and actionable by humans…or machines.
View Product Overview
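As a rough illustration of what building on the API looks like, here is a minimal sketch of a pre-recorded transcription request in Python. It only constructs the request rather than sending it; the endpoint path, the `model` and `smart_format` parameters, and the response shape noted in the comment reflect common usage, but you should confirm current parameter names and options against the official API reference.

```python
import json
import urllib.request

# Placeholder key -- substitute your own before sending anything.
DEEPGRAM_API_KEY = "YOUR_API_KEY"


def build_transcription_request(audio_url: str, model: str = "nova") -> urllib.request.Request:
    """Build (but do not send) a POST to the hosted /v1/listen endpoint,
    pointing the service at an audio file by URL."""
    endpoint = f"https://api.deepgram.com/v1/listen?model={model}&smart_format=true"
    body = json.dumps({"url": audio_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Token {DEEPGRAM_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_transcription_request("https://example.com/spacewalk.wav")
# Sending it with urllib.request.urlopen(req) would return JSON whose
# transcript typically lives under results.channels[0].alternatives[0].
```

From there, the returned JSON can be fed into whatever downstream analytics or search your application needs.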

More speed. More human. More more.

Transcribe an hour of pre-recorded audio in about 8 seconds.
The fastest real-time transcription speeds for human-like conversational AI experiences, real-time analytics, and enablement.
Over 30 languages and dialects to choose from with more rapidly being added. Over 100 languages supported for translation.
40+ file types
Over 40 different audio formats and encodings supported, including MP3, MP4, MP2, AAC, WAV, FLAC, PCM, M4A, Ogg, Opus, and WebM.
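For the real-time side, a streaming connection is typically opened over a WebSocket with the audio encoding declared up front. The sketch below only assembles such a URL; the parameter names (`encoding`, `sample_rate`, `language`) follow common streaming-API conventions and are assumptions here, so check the docs for the exact set your model supports.

```python
from urllib.parse import urlencode


def build_streaming_url(language: str = "en", sample_rate: int = 16000) -> str:
    """Assemble a hypothetical real-time transcription WebSocket URL,
    declaring the raw-PCM encoding of the audio that will be streamed."""
    params = urlencode({
        "model": "nova",
        "language": language,
        "encoding": "linear16",  # raw 16-bit PCM; compressed formats are also supported
        "sample_rate": sample_rate,
    })
    return f"wss://api.deepgram.com/v1/listen?{params}"


url = build_streaming_url()
```

A client would then open a WebSocket to this URL, stream audio chunks, and receive interim and final transcripts as JSON messages.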

“Deepgram gives me so much trust, confidence, and relief…so I can focus on building my product.”

View Case Study

See what developers are building with Deepgram 🔥

10,000+ years of audio data have been transcribed with Deepgram.

The Deepgram API covers the languages we need (and then some), integrates easily with our audio source, is accurate enough, and delivers results quickly. The documentation made it easy to design our code, and the very helpful support engineers were quick to respond to questions and to help us debug our initial efforts.

The speed and accuracy of Deepgram API is the best I have seen.

We provide Fraud Detection services to the insurance industry using intelligent and compliant AI-driven Digital Speech DNA solutions over Blockchain. Using Deepgram allowed us to process a large volume of data quickly and accurately. In addition, Deepgram has the ability to detect different accents which improved the overall accuracy of our scoring module.

The Best Audio Transcription Service in the Wild!

I have been using Deepgram’s API for a couple of months now, and I am beyond impressed with the accuracy. It is so much better than other voice recognition services that I have tried in the past. I love that it supports so many languages, which is perfect for me because I work with clients worldwide. The best part is that its API is pretty intuitive, which means it doesn’t require any training, which saves me tons of time. I would recommend this to anyone who needs a speech-to-text service!

The low latency of the response with high accuracy from the websocket connection is the most distinguishing feature from other providers. If this feature was not there then it’s yet another Speech to Text service. I really love the community around it and the team which is driving it, kudos to the DevRel team.

We have tested a number of transcription APIs, and Deepgram has consistently come out as the most accurate for our use case, whilst offering a nice Python interface for batch operations. The API schemas are also excellent.

Great speech-to-text results in seconds.

As a software developer, there is plenty to like about Deepgram – complete and easy to follow documentation; an easy-to-use API that allows for quick language-independent implementation; great follow-up support; multiple models including one specifically for telephone-based dictation; not only one of the best but also one of the least expensive speech recognition services available; and a generous number of free credits provided at sign-up – plenty for experimentation and testing of your application.

The ease of use! The simple but powerful APIs make it so quick and easy to start creating something. Not only were the tools very easy to use but they were also incredibly fast and accurate. I came across no transcription issues when using the product despite testing it in noisy and non-optimal conditions. And the results were almost instantaneous. Other tools I had looked at were either very restrictive or not very accurate, so it was refreshing to find an SDK that gave the flexibility to do whatever I want without compromising on speed and accuracy.

An Automated Speech API with Intuitive Documentation

My favorite part about using Deepgram was the ease of learning. The API documentation is complete and intuitive, and the tutorials in the console left me feeling confident that I could use the API and SDK in either Node or Python projects.