Technology Partner
Enterprise Voice AI, Native to AWS.
Real-time speech-to-text, text-to-speech, and voice agents that run inside your AWS stack: Amazon Connect, SageMaker, EC2, Bedrock and more.
For background on the partnership at scale, see AWS and Deepgram: The Foundation That Makes Voice AI Scale.
Deepgram purchases draw down on your existing AWS commit. Nova-3 STT, Aura-2 TTS, and the Voice Agent API are available through AWS Marketplace, and AWS credits can be applied for enterprise procurement.
Amazon Connect. Deepgram is a productized speech provider inside Amazon Connect contact flows. For contact center teams running Connect, Nova-3 replaces or augments Connect's native STT for live transcription and voicebot use cases, delivering a 30% average WER improvement over alternative STT options. The integration also extends to Amazon Lex for IVR and virtual voice agents.
Amazon SageMaker. Deepgram STT and TTS are productized for SageMaker via dedicated Model Packages on AWS Marketplace, so customers can deploy real-time speech inference inside their own VPC. See Deployment options below for the SageMaker deployment path.
Amazon Bedrock. Deepgram's Voice Agent API integrates natively with Amazon Bedrock as the LLM layer, giving enterprises a single procurement and inference surface for both Deepgram speech models and the Bedrock-hosted LLMs (Claude, Titan, AI21) they pair with. Two integration paths are supported: native via the built-in aws_bedrock provider type, and a proxy-server option for advanced cases.
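As a sketch of the native path, a Voice Agent Settings message might pair Deepgram listen/speak models with a Bedrock-hosted LLM via the built-in aws_bedrock provider type. The provider type comes from this page; the surrounding field names and model IDs are illustrative assumptions, so check Deepgram's Voice Agent API reference for the exact schema:

```json
{
  "type": "Settings",
  "agent": {
    "listen": { "provider": { "type": "deepgram", "model": "nova-3" } },
    "think": {
      "provider": {
        "type": "aws_bedrock",
        "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "region": "us-east-1"
      }
    },
    "speak": { "provider": { "type": "deepgram", "model": "aura-2-thalia-en" } }
  }
}
```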
Managed API. Deepgram's hosted API, consumed directly from any AWS workload. The fastest path for teams that want to start shipping today.
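From a Lambda function or an EC2 service, calling the hosted API is a single authenticated HTTPS request. The sketch below builds the request pieces without sending them; the endpoint and Token auth scheme are Deepgram's documented REST API, while the model and feature flags shown are illustrative choices:

```python
# Sketch: calling Deepgram's hosted /v1/listen endpoint from an AWS workload.
import os

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def build_transcription_request(api_key: str, model: str = "nova-3"):
    """Return (url, headers, params) for a pre-recorded transcription call."""
    headers = {
        "Authorization": f"Token {api_key}",
        "Content-Type": "audio/wav",  # match the audio payload you send
    }
    params = {"model": model, "smart_format": "true"}
    return DEEPGRAM_URL, headers, params

# In a Lambda handler you would then POST the audio bytes, e.g.:
#   requests.post(url, headers=headers, params=params, data=audio_bytes)
url, headers, params = build_transcription_request(
    os.environ.get("DEEPGRAM_API_KEY", "demo")
)
```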
PrivateLink and VPC Endpoints. Managed Deepgram with no public-internet exposure. Audio routes from the customer's VPC to Deepgram's service endpoints over the AWS private backbone, suitable for security-conscious enterprises and regulated workloads.
Amazon SageMaker. Deepgram STT and TTS models deployed inside the customer's own VPC using SageMaker Model Packages. Subscribe to Deepgram in AWS Marketplace, stand up a SageMaker Endpoint, and tune environment variables for concurrency, interim results, and streaming behavior. This is the middle ground between fully managed Deepgram and full self-hosted operations: the data plane stays in the customer's account, while SageMaker handles the deployment and scaling surface.
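The subscribe-and-deploy steps above can be sketched as the three request payloads you would hand to boto3's SageMaker client (create_model, create_endpoint_config, create_endpoint). The environment-variable names here are hypothetical placeholders, not Deepgram's documented knobs; the real ones come from the Marketplace listing's docs:

```python
# Sketch of the SageMaker deployment path for a Marketplace Model Package.
def build_sagemaker_requests(model_package_arn: str, role_arn: str,
                             endpoint_name: str = "deepgram-nova-3"):
    """Return payloads for boto3 sagemaker create_model,
    create_endpoint_config, and create_endpoint calls."""
    create_model = {
        "ModelName": endpoint_name,
        "ExecutionRoleArn": role_arn,
        "PrimaryContainer": {
            "ModelPackageName": model_package_arn,
            # Hypothetical tuning knobs for concurrency / streaming behavior:
            "Environment": {
                "MAX_CONCURRENT_STREAMS": "48",
                "INTERIM_RESULTS": "true",
            },
        },
        "EnableNetworkIsolation": True,  # keep the data plane in your VPC
    }
    create_endpoint_config = {
        "EndpointConfigName": f"{endpoint_name}-config",
        "ProductionVariants": [{
            "VariantName": "primary",
            "ModelName": endpoint_name,
            "InstanceType": "ml.g5.xlarge",  # GPU instance; size to your load
            "InitialInstanceCount": 1,
        }],
    }
    create_endpoint = {
        "EndpointName": endpoint_name,
        "EndpointConfigName": f"{endpoint_name}-config",
    }
    return create_model, create_endpoint_config, create_endpoint

m, cfg, ep = build_sagemaker_requests(
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/deepgram-stt",
    "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)
```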
Self-hosted on AWS. For customers who need full control of the runtime, Deepgram's self-hosted models deploy on EC2, EKS, ECS, and across the G5, G6, and G6e instance families with production-grade operations. Each GPU instance can run many concurrent real-time streams. For regulated industries, this means a single AWS account becomes the home for the entire voice stack: contact center, healthcare workflows, media transcription, and voice agents.
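For the self-hosted path, a single GPU node (for example an EC2 G5 or G6 instance) typically runs the Deepgram containers side by side. The compose fragment below is a minimal sketch only: image names, ports, and variables are placeholders, and the real artifacts and license configuration come from Deepgram's self-hosted deployment guide.

```yaml
# Illustrative compose file for one GPU node -- placeholders, not real images.
services:
  deepgram-api:
    image: <deepgram-self-hosted-api-image>
    ports:
      - "8080:8080"
    environment:
      DEEPGRAM_API_KEY: "${DEEPGRAM_API_KEY}"  # license/auth for self-hosted
  deepgram-engine:
    image: <deepgram-self-hosted-engine-image>
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # one GPU serves many concurrent streams
              capabilities: [gpu]
```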
If you are evaluating voice AI on AWS, the fastest path is the Marketplace listing. For enterprise terms, ISV Accelerate co-sell, or specific Connect, SageMaker, or Bedrock integrations, deepgram.com/contact-us is the direct route.
Procurement and partnership
Amazon Connect + Lex
Amazon SageMaker
Amazon Bedrock
AWS Lambda + serverless
Self-hosted on AWS
Demos, case studies, comparison
Contact

Media Transcription
Contact Centers
Conversational AI
Looking to use Deepgram + AWS?
Talk to an Expert