We’ve recently adopted a new approach for how our ASR models predict punctuated outputs. Previously, punctuation was handled in a post-processing step. Now, with our new model architecture, punctuation tokens are represented explicitly and incorporated directly into model training. While this is a significant evolution of our models, the implementation is entirely internal, so no client-side changes are needed to start benefiting from these improvements.

When enabled, punctuation is returned at both the transcript and word level.
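As a minimal sketch of what enabling this looks like: the Deepgram pre-recorded speech-to-text endpoint accepts a `punctuate` query parameter. The helper function below just composes the request URL; the endpoint path is real, but the surrounding request flow (API key, audio payload) is a placeholder you'd fill in from your own setup.

```python
# Minimal sketch: building a Deepgram transcription request URL with
# punctuation enabled. The actual POST (audio payload, Authorization
# header with your API key) is omitted here.
from urllib.parse import urlencode

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def build_transcription_url(punctuate: bool = True) -> str:
    """Compose the request URL; punctuation is toggled via a query flag."""
    params = {"punctuate": str(punctuate).lower()}
    return f"{DEEPGRAM_URL}?{urlencode(params)}"

print(build_transcription_url())
```

The response then carries punctuated text in both the full transcript and the per-word entries, so no separate post-processing pass is needed on your side.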

Read more about including punctuation and capitalization in your transcripts in the Deepgram Documentation.

Stop building workarounds for STT systems that don't work.

Start Free

Talk to an expert