Deepgram released a new version of its on-premises solution.

On-Premises Release 221031: Docker Hub Images

  • deepgram/onprem-api:1.72.2

  • deepgram/onprem-engine:3.37.8

  • deepgram/onprem-license-proxy:1.2.2

  • deepgram/onprem-billing:1.4.0

  • deepgram/onprem-metrics-server:2.0.0

Changes

  • Deepgram On-premises users can now choose between Deepgram’s Base and Enhanced models in an ASR request via the tier query parameter: tier=base selects the Base model, and tier=enhanced selects the Enhanced model (a request sketch appears after this list).

    • tier works in conjunction with the detect_language query parameter.

    • For users whose Enhanced models do not include the “*-enhanced” suffix in the model name, the use of the tier parameter is required.

    • Models may still be invoked directly via the model UUID without the use of tier.

  • Deepgram On-premises deployments now support the following Understanding features (with the accompanying Understanding model deployed on-prem and the requisite configuration changes):

    • Topic Detection identifies the most important and relevant topics referenced in speech within the audio. Enable it with the following query parameters (a request sketch appears after this list):
      detect_topics=true&punctuate=true

      • This requires adding the following section to the api.toml file:

        [features]
        topic_detection = true

    • Note that these Understanding features require the punctuate=true parameter in the ASR request; if you do not include it explicitly, the system adds it implicitly.

  • Deepgram On-premises now supports the all-new “CloseStream” WebSocket message for closing your live audio streams (a minimal streaming sketch appears below). Please see the New Methods for Closing Streams changelog post for more information, or refer to the API documentation for Transcribing Live Streaming Audio.
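
Examples

As an illustration of the tier parameter, here is a minimal sketch of a pre-recorded transcription request in Python using the requests library. The endpoint URL, port, and audio file name are placeholders, not part of this release; substitute the values for your own on-premises deployment.

  import requests

  # Hypothetical on-premises API endpoint; replace host and port with your deployment's values.
  ONPREM_API_URL = "http://localhost:8080/v1/listen"

  # tier=enhanced selects the Enhanced model; use tier=base for the Base model instead.
  params = {"tier": "enhanced", "punctuate": "true"}

  with open("example.wav", "rb") as audio:
      response = requests.post(
          ONPREM_API_URL,
          params=params,
          headers={"Content-Type": "audio/wav"},
          data=audio,
      )

  print(response.json())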
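
For Topic Detection the request shape is the same; only the query parameters change. A minimal sketch, again assuming a placeholder on-premises endpoint and audio file:

  import requests

  # Hypothetical on-premises API endpoint; replace host and port with your deployment's values.
  ONPREM_API_URL = "http://localhost:8080/v1/listen"

  # detect_topics requires punctuation; punctuate=true is added implicitly if you omit it.
  params = {"detect_topics": "true", "punctuate": "true"}

  with open("example.wav", "rb") as audio:
      response = requests.post(
          ONPREM_API_URL,
          params=params,
          headers={"Content-Type": "audio/wav"},
          data=audio,
      )

  print(response.json())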
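
For live streaming, CloseStream is a JSON text frame sent over the same WebSocket connection that carries your audio. Here is a minimal sketch using the Python websockets library; the streaming URL and audio encoding parameters are assumptions standing in for your on-premises deployment's values.

  import asyncio
  import json

  import websockets

  # Hypothetical on-premises streaming endpoint and audio parameters; adjust for your deployment.
  ONPREM_WS_URL = "ws://localhost:8080/v1/listen?encoding=linear16&sample_rate=16000"

  async def stream_and_close(audio_chunks):
      async with websockets.connect(ONPREM_WS_URL) as ws:
          # Send audio as binary frames.
          for chunk in audio_chunks:
              await ws.send(chunk)

          # Signal the end of the stream with the CloseStream text message.
          await ws.send(json.dumps({"type": "CloseStream"}))

          # Drain the remaining transcription results until the server closes the socket.
          async for message in ws:
              print(message)

  # Example usage with placeholder raw audio bytes:
  # asyncio.run(stream_and_close([b"..."]))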

We welcome your feedback; please share it with us at Product Feedback.
