Have you ever wondered how to do live voice-to-text transcription with Python? We’ll use FastAPI and Deepgram to achieve our goal in this article.
FastAPI is a new, innovative Python web framework gaining popularity because of its modern features like support for concurrency and asynchronous code. Deepgram uses AI speech recognition to do real-time audio transcription, and we’ll be using our Python SDK.
If you want to jump ahead, the final code for this project is located here on GitHub.
Getting Started
Before we start, it’s essential to generate a Deepgram API key to use in our project. To grab our key, we can go here. For this tutorial, we’ll be using Python 3.10, but Deepgram supports some earlier versions of Python as well. We’ll also need to set up a virtual environment to hold our project; we can read more here about virtual environments and how to create one.
Install Dependencies
Create a directory to store all of our project files and, inside of it, create a virtual environment. Ensure the virtual environment is activated, as described in the article linked in the previous section, so that all of the dependencies get installed inside it.
For a quick reference, here are the commands we need to create and activate our virtual environment:
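On macOS or Linux, they look like the following sketch (the folder name is just an example, and on Windows the activation command is venv\Scripts\activate instead):

```bash
# Create a project folder and move into it
mkdir live-transcription
cd live-transcription

# Create and activate the virtual environment
python3 -m venv venv
source venv/bin/activate
```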
The first thing we need to install is FastAPI and its dependencies. Next, we need the Deepgram Python SDK. To do this, we’ll run a couple of installs from the terminal.
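A sketch of those commands: fastapi and uvicorn run the server, while jinja2 and python-dotenv cover the template rendering and .env loading we’ll do later. The code in this tutorial matches the pre-3.0 Deepgram Python SDK, so we pin below that version:

```bash
pip install fastapi uvicorn jinja2 python-dotenv
pip install "deepgram-sdk<3.0"
```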
Create a FastAPI Application
Let’s get a starter FastAPI application up and running that renders an HTML page so that we can progress on our live transcription project.
Create a file called main.py inside our project, along with a templates folder containing an HTML file called index.html. The main.py file will hold our Python code, while index.html holds our HTML markup.
If we start our development server from the terminal with uvicorn main:app --reload, the index.html page renders in the browser.
Add Deepgram API Key
Our API key will allow access to Deepgram. Let’s create a .env file to store the key. Make sure to add this file to our .gitignore so the key stays hidden when we push our code to GitHub.
In our .env file, add an environment variable with our Deepgram API key, which we can grab here.
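The variable name is our own choice; it just has to match what we read in main.py:

```
DEEPGRAM_API_KEY=your-deepgram-api-key
```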
The code below shows how to load our key into the project and access it in main.py.
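Here’s a sketch, assuming python-dotenv and the pre-v3 SDK’s Deepgram client class (the dg_client name is ours):

```python
# main.py
import os

from deepgram import Deepgram
from dotenv import load_dotenv

load_dotenv()  # read DEEPGRAM_API_KEY from .env into the environment

dg_client = Deepgram(os.getenv('DEEPGRAM_API_KEY'))
```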
Get Mic Data From Browser
Our next step is to get the microphone data from the browser, which will require a little JavaScript.
Use this code inside the <script></script> tag in index.html to access data from the user’s microphone.
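Something along these lines: we ask the user for mic access, check that the browser can record webm audio, and create a MediaRecorder around the stream. The WebSocket code from the next sections will live inside this callback so it can use mediaRecorder:

```javascript
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  if (!MediaRecorder.isTypeSupported('audio/webm')) {
    return alert('Browser not supported');
  }

  // Capture the mic stream as webm-encoded audio chunks
  const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });

  // The WebSocket code from the following sections goes here.
});
```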
If you want to learn more about working with the mic in the browser, please check out this post.
WebSocket Connection Between Server and Browser
We’ll need to work with WebSockets in our project. We can think of WebSockets as a connection between a server and a client that stays open and allows sending continuous messages back and forth.
The first WebSocket connection is between our Python server that holds our FastAPI application and our browser client.
In our FastAPI web server code, we need to create a WebSocket endpoint that listens for client connections. In the main.py file, add the following code:
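A sketch of that endpoint follows; the /listen route name is our own choice, and the browser code later has to point at the same path:

```python
# main.py
from fastapi import WebSocket

@app.websocket("/listen")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()

    try:
        while True:
            # Audio bytes streamed from the browser arrive here;
            # the next section forwards them to Deepgram.
            data = await websocket.receive_bytes()
    except Exception as e:
        raise Exception(f'Could not process audio: {e}')
    finally:
        await websocket.close()
```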
This code accepts a WebSocket connection between the server and the client. As long as the connection stays open, we keep receiving bytes, waiting until a message arrives from the client. If anything goes wrong, we raise an exception, and once the server and client finish sending messages to one another, we close the connection.
In index.html, this code opens a WebSocket connection from the browser to the server and, once connected, starts streaming the recorded audio.
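A sketch, assuming ws://localhost:8000/listen (matching the server endpoint we defined above) and a 250 ms timeslice as a reasonable chunk size; this sits inside the getUserMedia callback from earlier so it can see mediaRecorder:

```javascript
const socket = new WebSocket('ws://localhost:8000/listen');

socket.onopen = () => {
  document.querySelector('#status').textContent = 'Connected';

  // Stream each chunk of recorded audio to the server as it becomes available
  mediaRecorder.addEventListener('dataavailable', (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  });
  mediaRecorder.start(250); // emit an audio chunk every 250 ms
};
```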
WebSocket Connection Between Server and Deepgram
We need to establish a connection between our central FastAPI server and Deepgram to get the audio and do our real-time transcription. Add this code to our main.py file.
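Here’s the endpoint from before, updated so each chunk of browser audio gets forwarded to Deepgram (process_audio is defined in the next section):

```python
@app.websocket("/listen")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()

    try:
        # Open the connection to Deepgram and register the transcript handler
        deepgram_socket = await process_audio(websocket)

        while True:
            data = await websocket.receive_bytes()
            deepgram_socket.send(data)  # forward the raw audio to Deepgram
    except Exception as e:
        raise Exception(f'Could not process audio: {e}')
    finally:
        await websocket.close()
```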
We’re defining a variable called deepgram_socket, which calls a user-defined function, process_audio, to open the connection to Deepgram. While the connection between the server and the browser stays open, we wait for messages from the browser and send the audio data on to Deepgram.
Next, let’s create our functions to process the audio, get the transcript from that audio and connect to Deepgram. In our main.py, add this code.
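A sketch of those functions, assuming the pre-v3 Python SDK, where the live-transcription socket exposes registerHandler along with CLOSE and TRANSCRIPT_RECEIVED events; the punctuate and interim_results options are our own choices:

```python
from typing import Callable, Dict

async def process_audio(fast_socket: WebSocket):
    async def get_transcript(data: Dict) -> None:
        if 'channel' in data:
            transcript = data['channel']['alternatives'][0]['transcript']
            if transcript:
                # Relay the transcript back to the browser
                await fast_socket.send_text(transcript)

    deepgram_socket = await connect_to_deepgram(get_transcript)
    return deepgram_socket

async def connect_to_deepgram(transcript_received_handler: Callable[[Dict], None]):
    try:
        socket = await dg_client.transcription.live({'punctuate': True, 'interim_results': False})
        socket.registerHandler(socket.event.CLOSE, lambda c: print(f'Connection closed with code {c}.'))
        socket.registerHandler(socket.event.TRANSCRIPT_RECEIVED, transcript_received_handler)
        return socket
    except Exception as e:
        raise Exception(f'Could not open socket: {e}')
```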
The process_audio function takes fast_socket, the open connection between the client and the FastAPI server, as an argument. Inside it, we connect to Deepgram and pass in the get_transcript function, which pulls the transcript out of each response and sends it back to the client.
The connect_to_deepgram function opens a socket connection to Deepgram, registers a handler that fires when the connection closes, and registers another that receives incoming transcription objects.
Lastly, in our index.html, we need to handle the incoming data with the events below. Notice they get logged to our console. If you want to know more about what these events do, check out this blog post.
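Something like this, added after the socket.onopen handler from earlier; the #transcript element comes from our starter index.html:

```javascript
socket.onmessage = (message) => {
  const received = message.data;
  if (received) {
    console.log({ event: 'onmessage', received });
    // Append each transcript to the page as it arrives
    document.querySelector('#transcript').textContent += ' ' + received;
  }
};

socket.onclose = () => {
  console.log({ event: 'onclose' });
};

socket.onerror = (error) => {
  console.log({ event: 'onerror', error });
};
```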
Let’s run our application and start getting real-time transcriptions. From the terminal, run uvicorn main:app --reload and pull up our localhost on port 8000, http://127.0.0.1:8000/. If we haven’t already, allow access to our microphone. Start speaking, and a transcript should appear on the page.
Congratulations on building a real-time transcription project with FastAPI and Deepgram. You can find the code here with instructions on how to run the project. If you have any questions, please feel free to reach out to us on Twitter at @DeepgramDevs.
If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub Discussions.