Companies embedding ElevenLabs into production voice products face a compliance question that typically surfaces three days before an enterprise deal closes: what data does ElevenLabs retain by default, and what does it take to change that?
The answer matters financially. Under GDPR, administrative fines can reach up to €20 million or 4% of annual global turnover, whichever is higher (GDPR Article 83). Under Illinois BIPA, private plaintiffs can seek liquidated damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation (740 ILCS 14/20).
This article breaks down what ElevenLabs collects, how retention differs by plan tier, and where B2B2B companies face compliance gaps.
Key Takeaways
- Free and Growth plan users have training data use turned on by default; opting out requires a manual toggle and only applies prospectively.
- Zero Retention Mode, the primary technical control for data minimization, is exclusively available to Enterprise tier customers.
- Voice cloning workflows are excluded from Zero Retention Mode entirely, creating a gap for biometric data compliance.
- ElevenLabs positions itself as a data processor, shifting primary consent and classification obligations to integrators.
- Character-based billing logs per-request data that increases your compliance surface area at scale.
What ElevenLabs Collects When You Use Its API
ElevenLabs collects voice inputs, generated outputs, and account metadata across all plan tiers. What happens to that data depends heavily on which tier you're on and whether you've taken deliberate steps to limit retention.
Voice Data and Biometric Classification
The ElevenLabs Privacy Policy classifies voice data using conditional language: it "may constitute biometric data under applicable laws" when processed for identity verification. The repeated "may" rather than "does" means ElevenLabs hasn't taken a definitive legal position. For B2B2B companies, this pushes the classification burden onto you. Your legal team needs to independently determine whether your use case triggers biometric data protections under Illinois BIPA, CCPA, or GDPR Article 9.
Default Retention and API Logging
Every TTS API call returns response headers that include x-character-count, request-id, current-concurrent-requests, and maximum-concurrent-requests. Your compliance team needs to integrate multiple data sources to build a complete audit trail.
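One practical way to handle this is to pull the compliance-relevant headers into a structured audit record as each response comes back. The header names below are the ones listed above; the function name, record shape, and tenant keying are illustrative assumptions, not an official SDK feature.

```python
# Sketch: collect the compliance-relevant TTS response headers into a
# structured audit record. Header names come from the API responses
# described above; everything else here is an illustrative assumption.

AUDIT_HEADERS = [
    "x-character-count",
    "request-id",
    "current-concurrent-requests",
    "maximum-concurrent-requests",
]

def extract_audit_record(headers: dict, tenant_id: str) -> dict:
    """Keep only the headers the audit trail needs, keyed by tenant."""
    record = {"tenant_id": tenant_id}
    for name in AUDIT_HEADERS:
        # HTTP header lookups are case-insensitive in most clients;
        # check both casings so the audit schema stays stable.
        record[name] = headers.get(name) or headers.get(name.title())
    return record
```

Writing these records to a dedicated audit store (rather than scattering them across application logs) is what lets you answer "what did we send, when, and for whom" in a single query later.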
The ElevenLabs Trust Center describes a 30-day post-deletion backup retention period, but active data retention periods aren't clearly specified in a single place. In practice, assume two separate timelines: how long data is kept while your account is active, and how long residual copies remain after you request deletion.
How ElevenLabs Uses Your Data for Model Training
In non-enterprise tiers, your audio can be used to improve ElevenLabs' models unless you proactively opt out. The toggle applies prospectively only—data already submitted may have been included in training sets—and it only prevents model training use, not security monitoring, analytics, or metadata handling.
Enterprise customers get the inverse default: training data use is off unless they explicitly opt in, with access to custom DPAs, BAAs, and Zero Retention Mode. Even so, ElevenLabs retains rights to process data for service delivery, security and fraud prevention, usage analytics, and legal compliance independent of the training toggle.
Data Handling Differences Across Plan Tiers
Your plan tier effectively determines your data governance model. For compliance-driven deployments, the gap between Growth and Enterprise is operational, not cosmetic.
Free and Growth: Shared Infrastructure, Shared Risk
Free and Growth users operate under standard Privacy Policy defaults: training on by default, no Zero Retention Mode, no BAA availability, standard post-deletion retention.
Enterprise: Tighter Controls, But Not Complete Isolation
Enterprise customers can negotiate DPAs, BAAs, data residency, and access to Zero Retention Mode. One gap worth noting in many vendor DPAs: subprocessor-change notification is often described as "reasonable" without a hard number of days. Push for a specific window (commonly 30 to 90 days) and a current subprocessor list.
Zero Retention Mode covers text-to-speech, Voice Changer, Speech to Text, and conversational agents—activated per-request via enable_logging=false. Voice cloning, dubbing, studio, and music generation are excluded entirely.
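Because the flag is per-request, a thin request builder can both set it and refuse to route sensitive data into excluded products. `enable_logging` is the parameter named above; the product names, payload shape, and function are a sketch, not ElevenLabs' SDK.

```python
# Sketch: apply Zero Retention Mode's per-request flag and fail closed
# for products excluded from it. Product identifiers are illustrative.

ZRM_COVERED = {"text_to_speech", "voice_changer", "speech_to_text", "agents"}

def build_request(product: str, payload: dict, sensitive: bool) -> dict:
    if sensitive and product not in ZRM_COVERED:
        # Voice cloning, dubbing, studio, and music generation are
        # excluded from Zero Retention Mode, so sensitive data must
        # never be routed to them.
        raise ValueError(f"{product} is not covered by Zero Retention Mode")
    params = dict(payload)
    if sensitive:
        params["enable_logging"] = False  # per-request, not account-wide
    return params
```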
The B2B2B Problem: When Your Users' Voices Are the Data
If you're a platform company embedding ElevenLabs into your product, your users' audio is what actually gets processed. ElevenLabs positions itself as a data processor, making you the data controller. You bear primary responsibility for biometric consents, collection notices, deletion requests, and establishing lawful bases for processing—none of which you can delegate through a vendor contract.
PHI and HIPAA Exposure
ElevenLabs' HIPAA posture is conditional: submitting protected health information without both an executed BAA and Zero Retention Mode active is prohibited, and BAAs are Enterprise-only. If you're on a Growth plan building a healthcare-adjacent product, keep PHI out of your audio and text payloads, or use a vendor that signs a BAA for your tier. Companies like Vida Health require BAAs as a baseline, not an upsell.
BIPA and Biometric Risk
Illinois BIPA carries uncapped statutory damages and a private right of action. ElevenLabs provides no BIPA-specific implementation guidance and no built-in consent mechanism you can simply turn on. Features like voice cloning enrollment, speaker-specific customization, or user-level personalization can look close to "unique identification" depending on how your system is designed—and in a B2B2B model, that ambiguity becomes a sales blocker when downstream customers ask you to prove what you're collecting and retaining.
If Illinois exposure is plausible, treat biometric compliance as an engineering requirement, not a legal footnote. At a minimum:
- Define in writing whether any part of your pipeline creates or stores a reusable voice-derived identifier—a voice cloning embedding or persistent speaker representation both qualify.
- Separate your consent flows. A general product consent is rarely persuasive in a biometric review; make biometric collection distinct, explicit, and logged.
- Publish a retention schedule and destruction workflow you can actually execute operationally, including vendor deletion steps and backup timelines.
- Ensure you can satisfy end-user deletion requests end to end, including data held at processors and subprocessors.
- Treat voice cloning as its own risk domain: isolated storage, audited enrollment flows, and geography-based blocking for regulated tenants. Many teams disable cloning for regulated customers entirely rather than trying to paper over the gap contractually.
This is the kind of checklist enterprise security teams expect in a DPIA-style review, even when they don't call it that.
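The consent item in the checklist above can be made concrete as a distinct, logged record. The field names here are illustrative assumptions; the point is that biometric consent is captured separately from general product consent, tied to a stated retention period, and auditable.

```python
# Sketch of a separate, logged biometric consent record, per the
# checklist above. Field names and purposes are illustrative.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class BiometricConsent:
    user_id: str
    purpose: str         # e.g. "voice_clone_enrollment" -- assumed label
    retention_days: int  # must match your published destruction schedule
    collected_at: str

def record_biometric_consent(user_id: str, purpose: str,
                             retention_days: int) -> dict:
    consent = BiometricConsent(
        user_id=user_id,
        purpose=purpose,
        retention_days=retention_days,
        collected_at=datetime.now(timezone.utc).isoformat(),
    )
    # In production this would go to an append-only store; here we
    # just return the serialized record.
    return asdict(consent)
```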
Architectural Workarounds When You Can't Use Zero Retention Mode
If you're not on Enterprise, or your workflow touches products excluded from Zero Retention Mode, you can still reduce privacy exposure by changing what you send, what you log, and how you separate identity from content.
Minimize What You Send
For many B2B2B products, the most sensitive content isn't the voice itself—it's the text you feed into TTS. Consider server-side templating with substitution tokens, rendering the fully personalized string only inside your own trust boundary. Where you must send dynamic text, redact obvious identifiers (names, account numbers, addresses) before the request, and keep a reversible mapping only in your own system with tight retention.
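The redaction step can be sketched as a pre-request filter that swaps identifiers for tokens and keeps the reversible mapping inside your own system. The regex here assumes numeric account identifiers of 8 to 16 digits; real deployments need patterns matched to their own data.

```python
# Sketch: redact obvious identifiers before text reaches the TTS API,
# keeping the reversible mapping locally under tight retention.
# The pattern (8-16 digit account numbers) is an illustrative assumption.

import re

ACCOUNT_RE = re.compile(r"\b\d{8,16}\b")

def redact_for_vendor(text: str) -> tuple[str, dict]:
    """Replace identifiers with tokens; return text plus the local map."""
    mapping = {}
    def _swap(match):
        token = f"{{ID_{len(mapping)}}}"
        mapping[token] = match.group(0)
        return token
    return ACCOUNT_RE.sub(_swap, text), mapping

safe_text, local_map = redact_for_vendor(
    "Your balance on account 12345678 is ready."
)
# safe_text is what crosses the trust boundary; local_map never leaves
# your system and should carry its own short retention policy.
```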
Decouple End-User Identity From Vendor Telemetry
ElevenLabs response headers like request-id are easy to accidentally join to user identity in your observability stack. Log vendor request identifiers only at a tenant or session level, and keep any per-user mapping in a separate, access-restricted system with a shorter retention policy. This reduces the blast radius if internal logs are subpoenaed or pulled into a broad DSAR export.
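A minimal sketch of that separation: the observability path records only session-scoped data, while the request-to-user linkage lives in a distinct, restricted store. The in-memory structures stand in for whatever logging and storage systems you actually use.

```python
# Sketch: log vendor request IDs at session level only; keep the
# per-user mapping in a separate store with shorter retention and
# tighter access. Store structures are illustrative stand-ins.

observability_log: list[dict] = []  # broad access, longer retention
identity_map: dict[str, str] = {}   # restricted access, short retention

def log_vendor_call(request_id: str, session_id: str, user_id: str) -> None:
    # The observability stack sees only the session-scoped record...
    observability_log.append({"request_id": request_id,
                              "session": session_id})
    # ...while the user linkage is confined to the restricted store.
    identity_map[request_id] = user_id

log_vendor_call("req_abc", "sess_1", "user_9")
```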
ElevenLabs' character-based billing creates per-request artifacts—character counts, request IDs, concurrency metadata—that accumulate fast. At 10 million characters per month, you're generating millions of billing-relevant events subject to the provider's retention policies.
The operational issue is correlation. When your internal logs store request-id alongside user IDs or message text, you've built a bridge between vendor telemetry and end-user identity. That changes what you must document and defend in DSAR and deletion workflows, and spreads identifiers into any observability tools that ingest vendor headers. Treat billing metadata like security logs: restricted access, short retention, and clear separation between tenant-level aggregation and per-user traces.
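The short-retention half of that advice can be enforced with a scheduled scrub that strips per-request identifiers past the window while preserving billable aggregates. The record shape and 30-day window are illustrative assumptions.

```python
# Sketch: expire request-level identifiers from billing/telemetry
# records past a short retention window, keeping the aggregates.
# Record shape and the 30-day window are illustrative.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def scrub_expired(records: list[dict], now: datetime) -> list[dict]:
    scrubbed = []
    for r in records:
        r = dict(r)  # don't mutate the caller's records
        if now - r["ts"] > RETENTION:
            # Drop the correlation key; keep the billing aggregate.
            r.pop("request_id", None)
        scrubbed.append(r)
    return scrubbed
```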
Runtime-based providers like Deepgram bill by audio duration rather than character count, which reduces the need to keep granular text-derived billing artifacts. Deepgram's text-to-speech and Voice Agent API also support on-premises and private cloud deployment, keeping voice data inside your own security boundary. Contact centers like Five9 and Sharpen use this approach to keep compliance obligations contained.
Choosing the Right Deployment Configuration
Your compliance posture comes down to tier, workflow, and whether you can enforce minimization controls in code. Before signing, validate the operational reality.
Questions to Ask Before You Sign
- Request the current subprocessor list with names, locations, and certifications—if it's not publicly available, treat that as a signal.
- Ask for a specific notification window in days (not "reasonable notice") for subprocessor changes; 30 to 90 days is standard.
- Clarify active data retention periods, not just post-deletion retention, since the 30-day backup window only starts after you initiate deletion.
- Get written confirmation of which metadata fields persist under Zero Retention Mode.
- Confirm BAA availability and any model restrictions for PHI handling if HIPAA applies.
- Verify whether your specific use case triggers biometric data classification under the jurisdictions where your users live—don't rely on the vendor's conditional language to answer that question for you.
Reducing Risk in Production
If you proceed with ElevenLabs at Enterprise, set enable_logging=false on every API call handling sensitive data—this needs to be enforced in code, not just policy, because the parameter is per-request. Combine data residency selection with Zero Retention Mode, since residency alone doesn't prevent cross-border access for support and moderation workflows. Avoid voice cloning for any data subject to biometric protections. Build your own BIPA consent framework rather than relying on ElevenLabs' conditional classification language—a dedicated consent flow that's explicit, logged, and tied to a specific retention schedule will hold up far better in a security review than a general terms-of-service acknowledgment.
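"Enforced in code, not just policy" can mean a single chokepoint wrapper that fails closed: no sensitive request leaves the process without the flag. `enable_logging` is the parameter named above; the wrapper, error type, and injected `send` callable are illustrative assumptions.

```python
# Sketch: a chokepoint that refuses to dispatch sensitive requests
# unless enable_logging=False is set. send() stands in for whatever
# HTTP client you use; the wrapper itself is illustrative.

class RetentionPolicyError(RuntimeError):
    pass

def tts_call(send, payload: dict, *, sensitive: bool):
    if sensitive and payload.get("enable_logging") is not False:
        # Fail closed: a sensitive payload without the per-request
        # flag never leaves the process.
        raise RetentionPolicyError(
            "sensitive payload requires enable_logging=False"
        )
    return send(payload)
```

Routing every TTS call through one function like this is also what makes the policy testable in CI, instead of depending on each call site remembering the flag.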
Ready to benchmark an alternative without committing to a data handling model you haven't fully vetted? Start in Console with $200 in free credits—no credit card required.
FAQ
The bottom line: ElevenLabs' privacy terms and technical controls flow through you as the integrator, so most compliance outcomes depend on your tier and your architecture.
Does ElevenLabs' Privacy Policy Apply to End Users of Apps Built on ElevenLabs?
Yes, but indirectly. Your end users are data subjects; you own the notice, consent, and DSAR burden. "Cancelling" a plan and "deleting" data are often separate actions operationally—confirm what gets deleted, what stays in backups, and who can request it.
Can ElevenLabs' Moderation Team Access My Audio Even After I've Opted Out of Training?
Potentially. Training opt-out isn't the same as operational access controls. Assume safety review, abuse investigation, and support debugging can still require access to content and metadata, even under Enterprise terms. Design your payloads so sensitive identifiers are minimized before they reach ElevenLabs.
Is ElevenLabs HIPAA Compliant on Standard Paid Plans?
Treat HIPAA support as Enterprise-only unless you have a signed BAA and a documented no-retention posture. If you can't get those terms, keep PHI out of the audio and text you send entirely.
How Does ElevenLabs' Data Residency Work?
Residency constrains where primary processing occurs, but doesn't automatically prevent cross-border access for support or moderation. Ask who can access content, from where, under what approvals, and what gets logged when that access happens. Residency selection should be combined with Zero Retention Mode—not treated as a standalone control.
In a B2B2B Product, Who Should Own the ElevenLabs Account?
If you run the account, you typically stay the controller and must flow obligations downstream. If your enterprise customer owns the account, you may reduce controller exposure but trade it for harder integration and multi-tenant operations. Choose intentionally.