
Automatically Censor Profanity with Node.js

By Kevin Lewis
Published Nov 4, 2021
Updated Jun 13, 2024

Here at Deepgram we run GRAM JAM - a series of internal hackathons where Deepgrammers build cool projects using our own API. Sometimes the projects lead to product improvements, sometimes they get a laugh, and other times they are just super useful. This blog post is based on one of those projects - Bleepgram - built by the very interdisciplinary team of Faye Donnelley, Mike Stivaletti, Conner Goodrum, Claudia Ring, and Anthony Deschamps.

Sometimes we all let "unprovoked or unintended utterances" slip out of our mouths, and often it's the job of an editor to go through recordings and overlay a bleep so no one has to hear the original word. Historically this has been a manual process, but with Deepgram's Speech Recognition API we can work to censor them automatically.

If you want to look at the final project code you can find it at https://github.com/deepgram-devs/censor-audio.

Before We Start

You will need:

  • Node.js installed on your machine.

  • A Deepgram API key.

  • An audio file containing speech you want to censor.

Create a new directory and navigate to it with your terminal. Run npm init -y to create a package.json file and then install the following packages:
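Based on the dependencies used later in this post, that install command would look something like this:

```sh
npm install @deepgram/sdk profane-words ffmpeg-static
```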

Create an index.js file, and open it in your code editor.

Preparing Dependencies

At the top of your file require these packages:
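Here's a sketch of those requires - the variable names are my own, and the Deepgram import assumes version 1 of the Node.js SDK:

```js
const fs = require("fs");
const { exec } = require("child_process");
const profaneWords = require("profane-words");
const ffmpegPath = require("ffmpeg-static");
const { Deepgram } = require("@deepgram/sdk");
```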

  • fs is the built-in file system module for Node.js. It is used to read and write files which you will be doing a few times throughout this post.

  • exec allows us to fire off terminal commands from our Node.js script.

  • profane-words exports an array of, perhaps unsurprisingly, profane words.

  • ffmpeg-static includes a version of FFmpeg in our node_modules directory, and requiring it returns the file path.

FFmpeg is a terminal-based toolkit for developers to work with audio and video files, which can include some quite complex manipulation. We'll be using exec to run it.

Initialize the Deepgram client:
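Assuming version 1 of the SDK, where the client is constructed directly with your API key:

```js
// Replace with your own Deepgram API key
const deepgram = new Deepgram("YOUR_DEEPGRAM_API_KEY");
```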

Creating a Main Function

Since Node.js 14.8 you can use await at the top level, outside of an asynchronous function, as long as your file is an ES module. For this blog post I'll assume that's not the case, so we'll create a main() function for our logic to sit in:
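A minimal version of that wrapper might look like this:

```js
const main = async () => {
  // All of the code in the following sections goes here
};

main();
```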

Get Transcript and Profanity

Inside of our main() function get a transcript using the Deepgram Node.js SDK, and then find the profanities:
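Here's a sketch of that step, assuming a local file called input.wav and version 1 of the SDK's pre-recorded transcription method:

```js
// Read the audio file and send it to Deepgram for transcription
const file = {
  buffer: fs.readFileSync("input.wav"),
  mimetype: "audio/wav",
};
const response = await deepgram.transcription.preRecorded(file, {
  punctuate: true,
});

// Every word in the transcript comes back with start and end timings
const words = response.results.channels[0].alternatives[0].words;

// Keep only the words that appear in the profane-words list
const bleeps = words.filter((word) =>
  profaneWords.includes(word.word.toLowerCase())
);
console.log(bleeps);
```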

The bleeps variable will contain every transcript word that appears in the profane-words list, along with its timings. Test this code by running node index.js in your terminal and you should see those word objects logged as an array.

Once you have done this, remove the console.log() statement.

Determine Clean Audio Timings

Next, we want the inverse start and end times - where the audio is 'clean' and doesn't need bleeping. Add this to the main() function:
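One way to build that list is to walk through the profanities in order, closing a clean section where each one starts and opening the next where it ends:

```js
// Build the list of 'clean' sections between profanities
const clean = [];
let sectionStart = 0;
for (const bleep of bleeps) {
  clean.push({ start: sectionStart, end: bleep.start });
  sectionStart = bleep.end;
}
// The final clean section runs from the last profanity to the end of the audio
clean.push({ start: sectionStart, end: words[words.length - 1].end });
console.log(clean);
```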

Run this again with node index.js and you should see an array of start and end times covering each clean stretch of audio.

FFmpeg Complex Filters

FFmpeg allows complex manipulation of audio files, and works by chaining smaller manipulations known as filters. We pass in audio by a variable name, do something, and export a new variable which we can then further chain. This might feel complex, so let's talk through what we will do.

  1. Take the original audio file and drop the volume to 0 during times where we have profanity.

  2. Generate a constant beep with a sine wave.

  3. Make the constant beep end when the final profanity finishes.

  4. Drop the volume of the beep to 0 whenever there is no profanity.

  5. Mix the beep and the vocals into one final track which at any point in time will have a beep or vocals - never both.

In our main() function let's do this with code. Starting with dropping the volume wherever we have profanity:
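A sketch of that first filter, using FFmpeg's volume filter with timeline editing (enable) to mute the original input [0] during each profanity:

```js
// Mute the original audio during every profanity and label the
// result [dippedVocals] for later parts of the filter to use
const dippedVocals = `[0]volume=0:enable='${bleeps
  .map((bleep) => `between(t,${bleep.start},${bleep.end})`)
  .join("+")}'[dippedVocals]`;
```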

dippedVocals will now look something like:
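With two profanities at illustrative timings (your values will come from your own audio), the string would be something like:

```
[0]volume=0:enable='between(t,1.26,1.58)+between(t,4.97,5.2)'[dippedVocals]
```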

This takes the provided file (which here is [0]), sets the volume to 0 between the provided times, and makes this altered version available to later parts of the filter as [dippedVocals].

Delete the dippedVocals variable and create a filter array which contains all parts of our complex filter, with the value of dippedVocals as the first item, and then join it into a valid string for FFmpeg:
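Here's a sketch of that array - the 800Hz beep frequency is an arbitrary choice, and the clean array from earlier drives when the beep is muted:

```js
const filter = [
  // 1. Original vocals ([0]), muted during every profanity
  `[0]volume=0:enable='${bleeps
    .map((bleep) => `between(t,${bleep.start},${bleep.end})`)
    .join("+")}'[dippedVocals]`,
  // 2 & 3. A constant sine-wave beep that stops when the final profanity ends
  `sine=f=800:d=${bleeps[bleeps.length - 1].end}[beep]`,
  // 4. Mute the beep during the clean sections of audio
  `[beep]volume=0:enable='${clean
    .map((section) => `between(t,${section.start},${section.end})`)
    .join("+")}'[dippedBeep]`,
  // 5. Mix the dipped vocals and the dipped beep into one output track
  `[dippedVocals][dippedBeep]amix=inputs=2[output]`,
].join(";");
```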

That's all five steps above built into one complex filter. The final filter looks like this:
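With the same illustrative timings as before, the joined string comes out as one long line:

```
[0]volume=0:enable='between(t,1.26,1.58)+between(t,4.97,5.2)'[dippedVocals];sine=f=800:d=5.2[beep];[beep]volume=0:enable='between(t,0,1.26)+between(t,1.58,4.97)+between(t,5.2,7.43)'[dippedBeep];[dippedVocals][dippedBeep]amix=inputs=2[output]
```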

Yeah. We did it in an array for a reason.

Create Censored File

The very final step is to actually run FFmpeg via exec with the above filter. Add this line to the bottom of your main() function:
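A sketch of that command, assuming the same input.wav file and writing the censored audio to output.wav:

```js
// Run FFmpeg with the complex filter and map the mixed [output] stream to a new file
exec(
  `${ffmpegPath} -i input.wav -filter_complex "${filter}" -map "[output]" output.wav`,
  (err) => {
    if (err) console.error(err);
  }
);
```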

And run your script with node index.js. Once completed, your output.wav file should be your original audio with the profanity automatically bleeped out.

Wrapping Up

A transcript is not always the final step in a project - you can use the structured data returned by Deepgram to do further processing or analysis, as demonstrated by this post. I hope you found it interesting.

The complete project is available at https://github.com/deepgram-devs/censor-audio and if you have any questions please feel free to reach out on Twitter - we're @DeepgramAI.

If you have any feedback about this post, or anything else around Deepgram, we'd love to hear from you. Please let us know in our GitHub discussions.
