
If you’re just getting started with Deepgram’s live streaming transcription API, learning how to work with WebSockets and real-time audio can be tricky. That’s why we’re releasing a new open-source project: the streaming test suite!

The streaming test suite is designed to ensure you can stream basic audio to Deepgram before you begin building custom integrations. It also provides some sample code that may be helpful when creating your own integration.

The commented Python code demonstrates how to stream input from your microphone or a WAV file to Deepgram, and receive transcriptions back from our real-time endpoint. Once you’re up and running with the streaming test suite, you’ll have validated a few important points:

  1. Your API key works.

  2. You can connect to Deepgram’s API.

  3. You can stream audio to Deepgram.

  4. You can receive transcriptions for the audio.
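The steps above can be sketched in a short script. This is a minimal illustration, not the test suite itself: it assumes the third-party `websockets` package, and the query parameters (`encoding`, `sample_rate`, `channels`) and JSON response shape follow Deepgram’s documented streaming API, which may change over time.

```python
import asyncio
import json
import wave

DEEPGRAM_URL = "wss://api.deepgram.com/v1/listen"


def build_url(encoding: str, sample_rate: int, channels: int) -> str:
    """Build the streaming endpoint URL with the audio parameters
    Deepgram needs to decode raw audio."""
    return (
        f"{DEEPGRAM_URL}?encoding={encoding}"
        f"&sample_rate={sample_rate}&channels={channels}"
    )


async def stream_wav(path: str, api_key: str) -> None:
    """Stream a linear16 WAV file to Deepgram and print transcripts."""
    import websockets  # third-party: pip install websockets

    with wave.open(path, "rb") as wav_file:
        url = build_url("linear16", wav_file.getframerate(), wav_file.getnchannels())
        async with websockets.connect(
            url, extra_headers={"Authorization": f"Token {api_key}"}
        ) as ws:

            async def sender():
                # Send ~100 ms of audio per message, paced like real time.
                frames_per_chunk = wav_file.getframerate() // 10
                while True:
                    chunk = wav_file.readframes(frames_per_chunk)
                    if not chunk:
                        break
                    await ws.send(chunk)
                    await asyncio.sleep(0.1)
                # Tell the server we're done sending audio.
                await ws.send(json.dumps({"type": "CloseStream"}))

            async def receiver():
                async for message in ws:
                    response = json.loads(message)
                    alternatives = response.get("channel", {}).get("alternatives", [])
                    if alternatives and alternatives[0].get("transcript"):
                        print(alternatives[0]["transcript"])

            await asyncio.gather(sender(), receiver())


if __name__ == "__main__":
    # Hypothetical file path and key -- substitute your own.
    asyncio.run(stream_wav("sample.wav", "YOUR_DEEPGRAM_API_KEY"))
```

If the connection opens and transcripts print, you’ve confirmed all four points at once: the key, the connection, the outbound audio, and the inbound transcriptions.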

If the script encounters any errors along the way, it’ll print a useful error message that gives you the context you need to debug the issue yourself or to reach out to our support team.
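One pattern for producing that kind of actionable error message is mapping common failure codes to hints. The helper below is purely illustrative (the function name and the specific code-to-hint mapping are assumptions, not the test suite’s actual behavior):

```python
def explain_connection_error(status_code: int) -> str:
    """Map common WebSocket handshake failures to actionable hints
    (hypothetical mapping for illustration)."""
    hints = {
        401: "API key was rejected -- double-check the key in the "
             "Authorization header.",
        403: "The API key doesn't have permission for this endpoint.",
    }
    return hints.get(
        status_code,
        f"Unexpected status {status_code}; include this code when "
        "contacting support.",
    )


print(explain_connection_error(401))
```

Surfacing the failed status code (rather than a bare stack trace) is what makes the error message useful both for self-debugging and for a support ticket.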

After you successfully run the streaming test suite, you’ll be ready to integrate Deepgram with more complex audio sources. Happy building!

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.
