We’ve recently adopted a new approach to how our ASR models predict punctuated outputs. Previously, punctuation was added in a post-processing step. With our new model architecture, punctuation tokens are represented explicitly and incorporated directly into model training. While this is a significant evolution of our models, the implementation is entirely internal, so no client-side changes are needed to benefit from the improvements.
When punctuation is enabled, results appear at both the transcript and the word level.
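As a rough sketch of what that means in practice, a punctuated response might look like the following. The field names (`transcript`, `words`, `start`, `end`) are illustrative assumptions, not the actual API schema: the key point is that punctuation marks are attached to the word-level entries themselves, so the word list and the full transcript stay consistent.

```python
# Hypothetical response shape -- field names are illustrative only,
# not the actual API schema.
response = {
    "transcript": "Hello, world. How are you?",
    "words": [
        {"word": "Hello,", "start": 0.00, "end": 0.40},
        {"word": "world.", "start": 0.45, "end": 0.90},
        {"word": "How", "start": 1.20, "end": 1.45},
        {"word": "are", "start": 1.50, "end": 1.65},
        {"word": "you?", "start": 1.70, "end": 2.00},
    ],
}

# Because punctuation lives on the word-level entries, rejoining the
# words reproduces the punctuated transcript exactly.
rejoined = " ".join(w["word"] for w in response["words"])
print(rejoined)
```

Under this assumed shape, clients that already consume word-level output pick up punctuation for free, which matches the note above that no client-side changes are required.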