“We will never use your conversations to train our AI.”
That’s what Anthropic said when it launched Claude. For two years, that promise set Claude apart. While ChatGPT trained on every conversation by default and Google Gemini merged your chats with your entire digital footprint, Anthropic held the line: your data stayed private. Period.
Then, in September 2025, everything changed.
Anthropic sent an email to millions of users with a simple popup: “Updates to Consumer Terms and Policies.” A large black “Accept” button. And below it, in smaller text, a toggle switch—pre-set to “On”—giving the company permission to use your conversations to train future AI models. For up to five years.
If you clicked Accept without reading carefully, you just agreed to let Anthropic feed every chat, every coding session, every idea you’ve shared with Claude into its training pipeline. The 30-day data deletion? Gone. The privacy-first positioning? Reversed.
Most users never noticed. The change was announced in a blog post buried in Anthropic’s news feed. No headlines. No alarm bells. Just a quiet policy shift that redefined what “privacy” meant for one of the most trusted AI assistants on the market.
Here’s what makes this complicated: even after the change, Claude is still more private than most alternatives. The training is opt-in, not automatic. The controls are clearer than ChatGPT’s. Enterprise customers still get bulletproof protection. But the promise that made Anthropic different—that your conversations would never become training data—is gone.
This guide explains what changed in September 2025, what data Anthropic collects, how the company uses it, and how Claude’s privacy compares to ChatGPT and other AI tools. It also covers compliance (GDPR, SOC 2, HIPAA), enterprise protections, privacy settings you can control, and whether you should trust Claude with confidential information. Because the answer depends on which Claude you’re using—and whether you read the fine print.
Let’s start with the basics.
Like all AI chatbots, Claude collects information to operate the service. According to Anthropic’s privacy policy, the company collects:
Account information
Email address, name (if provided)
Payment information (processed through third-party payment providers)
Account preferences and settings
Conversation data
Every message you send to Claude
Claude’s responses
Files you upload (documents, images, code)
Feedback you provide (thumbs up/down ratings)
Technical information
Device type, browser, operating system
IP address and general location (if you enable location services)
Usage patterns (when you use Claude, feature usage)
Error logs and diagnostic data
Third-party integration data: If you connect Claude to other services (Google Drive, Slack, etc.), Claude can access data from those integrations based on the permissions you grant.
What’s different about Claude? According to independent privacy analyses, Claude’s data collection is more limited than competitors like Google Gemini, which integrates deeply with your broader Google ecosystem (search history, Gmail, YouTube activity).
But this is the critical point: The conversation data is what matters most for privacy. Everything you type into Claude – work documents, personal questions, code, ideas – gets collected. The question is what happens to it next.
This is where the September 2025 policy change matters. How Anthropic uses your data depends on your account type:
Before September 2025: Anthropic did NOT use conversations for training. Data was retained for 30 days, then deleted.
After October 8, 2025 (enforcement date): You must choose:
Option 1: Opt out (Privacy Mode). Your conversations are retained for 30 days and are never used for training.
Option 2: Opt in (Help Improve Claude). Your conversations may be used to train future Claude models and are retained for up to 5 years.
Whichever you choose, Anthropic also uses data for providing responses, safety monitoring, improving its infrastructure, and complying with legal requirements.
Even if you opt out of training, flagged conversations (those that violate usage policies) may be reviewed by human moderators and retained for up to 2 years for safety purposes.
Anthropic’s privacy policy is at anthropic.com/legal/privacy. Here’s what you need to know from it:
On data training (as of October 2025):
“For users of our consumer products (Claude Free, Pro, and Max, and when using Claude Code with these accounts) we may use your chats and coding sessions to improve Claude, if you choose to allow us to.”
Translation? Training is opt-in, not automatic. But you must actively make a choice.
“By default, Anthropic employees cannot access your conversations with Claude.”
According to section 11 of Anthropic’s privacy policy, EU/EEA users are protected by the GDPR. Anthropic uses standard contractual clauses for international data transfers. You have the right to access your data, request deletion, export your data, object to processing, and correct inaccurate information.
California users are protected by the CCPA, which grants similar rights.
According to independent privacy evaluations, Anthropic’s policy is clearer than ChatGPT’s, but it still has gaps.
This is the comparison most people want. Here’s how Claude and ChatGPT handle privacy, based on their current policies as of March 2026:
The winner here is Claude. While both offer opt-outs, Claude forces an explicit choice rather than defaulting to training. See what Tom’s Guide says and read TechCrunch’s analysis.
This one is a tie. ChatGPT’s indefinite storage is worse, but Claude’s 5-year retention (if opted in) is aggressive.
This one is a tie. Both offer temporary/incognito modes and independent memory toggles with similar functionality.
The winner here is Claude for its clearer policy on access restrictions.
This one is a tie. Both offer strong enterprise privacy. Claude Team is cheaper at entry level ($25/month vs. ChatGPT’s $60+ minimum).
While Anthropic built its reputation on “no-training” privacy, the September 2025 policy shift (enforceable from October 2025) fundamentally changed the service. Anthropic introduced a “forced-choice” gate: users were required to actively choose whether to “Help Improve Claude” (opt-in) or maintain their privacy (opt-out) to continue using the service.
Check your settings now: many users clicked through these pop-ups quickly and may have unknowingly accepted the “On” position, which extends data retention from 30 days to 5 years for training purposes.
When Anthropic announced the policy change, it faced significant criticism:
Critics at The Decoder and TechCrunch weighed in, and a Medium analysis summed up the consensus:
“Claude is still more cautious than OpenAI or Google.”
But it “no longer holds the hard line of refusing training by default.”
Apple Intelligence is now the “only true privacy absolutist.”
Bottom line? Claude went from “no training ever” to “training if you agree.” That’s a major retreat. But it’s still more privacy-protective than ChatGPT, which has trained on conversations by default from the start.
The short answer is, it depends on your account type.
Do not use consumer Claude for client confidential information, proprietary code or trade secrets, regulated data (health, financial), or anything you are legally obligated to protect.
This is because even if you opt out of training, there is no Data Processing Agreement protecting the data, 30-day retention still creates exposure, flagged content may be reviewed by human moderators, and you are likely violating your company’s data policies.
This AMST Legal analysis says, “Small businesses using Pro accounts face the same data training exposure as Free users. The biggest question is whether companies realize that they are now training Claude AI with their data.”
Claude Team and Enterprise accounts offer commercial terms under which your conversations are never used for training, along with admin controls for your organization.
Claude Enterprise additionally offers a Data Processing Agreement, optional Zero Data Retention, SOC 2 Type II assurances, audit logging, custom retention policies, and the option of a HIPAA Business Associate Agreement.
Real-world enterprise deployments include Cleveland Clinic (healthcare), JPMorgan Chase (finance), and Kirkland & Ellis (legal); details appear in the Enterprise section below.
But even with Enterprise, some data should stay out of any cloud AI: classified information, existential trade secrets, and anything that must remain on air-gapped systems.
“Shadow AI” happens when employees use personal accounts on Claude, ChatGPT, Gemini, Perplexity, or any other consumer AI tool for work tasks without their company knowing. This is dangerous because an employee might use their personal account to draft client emails, analyze company data, or debug proprietary code. When they signed up for that account, they accepted consumer terms, not enterprise protections. Depending on their settings, that company data could be retained for months or years and used to train AI models.
The company has no visibility into what was shared, no contractual protections covering that data, and no way to claw it back.
According to this analysis: “Individual employees accepted terms independently. They unknowingly bound their organizations to training consent. Corporate data entered pipelines without authorization or oversight.”
This is why many companies now block consumer AI tools entirely and require employees to use only approved enterprise accounts.
Here’s how to control your privacy in Claude:
Go to claude.ai and sign in.
Click your profile icon (top right).
Select “Settings”.
Go to “Privacy” section.
Find “Help improve Claude” toggle.
Turn it OFF.
Confirm.
This only affects future conversations. Past chats (before you opted out) may already be in training pipelines.
Individual chats: open the conversation, click the three-dot menu, and select “Delete conversation.”
All chats: go to Settings > Privacy > “Delete conversation history” and confirm.
Deleted chats are not used for training, but they may be retained in backups for 30 days before permanent deletion.
It is worth noting that Claude’s Incognito chats are the only exception to the training rule. Conversations started in this mode are never used for model improvement, regardless of whether your global ‘Help Improve Claude’ toggle is on or off.
If you want to know where Claude stands on privacy compliance, here’s the rundown:
According to Anthropic’s Trust Center, Claude has SOC 2 Type II certification and maintains GDPR and CCPA compliance programs.
For EU/EEA users:
Standard Contractual Clauses: Anthropic uses EU-approved Standard Contractual Clauses (SCCs) for data transfers outside the European Economic Area.
Data subject rights: Under the GDPR you can access your data, request deletion, export your data (portability), object to processing, and correct inaccurate information.
Legal basis for processing: Anthropic processes your data on three legal grounds set out in its privacy policy.
To exercise your rights, send an email to privacy@anthropic.com.
According to independent legal analysis, the September 2025 opt-in UI may violate the GDPR:
“These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) … making it likely that Anthropic will soon draw the attention of privacy regulators.”
There was no regulatory action as of March 2026, but GDPR compliance remains contested.
Consumer accounts: Free, Pro, and Max accounts are NOT HIPAA compliant.
Enterprise accounts: Enterprise accounts can be configured for HIPAA compliance with a signed Business Associate Agreement (BAA), Zero Data Retention mode, additional security controls for protected health information, and audit logging for compliance tracking.
For example, Cleveland Clinic deployed Claude across 120 hospitals after Anthropic achieved SOC 2 Type II certification and signed a HIPAA Business Associate Agreement.
FedRAMP: FedRAMP certification is in progress as of March 2026 to enable US government use of Claude.
PCI DSS: Claude is not PCI DSS certified, as it is not designed for payment processing.
For regional compliance details, see the Anthropic Privacy Center and this compliance comparison.
Even with enterprise accounts, follow these practices:
Never share these in Claude:
Social Security Numbers, passport numbers, driver’s licenses
Credit card numbers, bank account details
Health records (unless using HIPAA-compliant Enterprise + BAA)
Passwords, API keys, access tokens
Client confidential information
Proprietary code with trade secrets
Personal information of others without consent
Use redaction for examples (a minimal code sketch follows below):
Instead of:
“Draft an email to john.smith@company.com about Project Phoenix budget overruns”
Use:
“Draft an email to [CLIENT] about [PROJECT] budget concerns”
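Teams that route prompts through scripts or internal tools can automate this step. Below is a minimal sketch of that redaction in Python; the scrub_prompt helper, the regex patterns, and the placeholder labels are illustrative assumptions, not anything Anthropic provides, and real PII detection needs far more care than a few regexes.

```python
import re

# Illustrative patterns only; production PII detection needs a dedicated tool.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN-style numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),          # card-like digit runs
    (re.compile(r"\bProject\s+[A-Z][a-z]+\b"), "[PROJECT]"),    # internal code names
]

def scrub_prompt(text: str) -> str:
    """Replace obvious identifiers with placeholders before the prompt leaves your machine."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Draft an email to john.smith@company.com about Project Phoenix budget overruns"
print(scrub_prompt(prompt))
# -> Draft an email to [EMAIL] about [PROJECT] budget overruns
```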
For sensitive work:
Use Enterprise account with Zero Data Retention.
Run Claude via API with ZDR agreement.
Deploy via AWS Bedrock or Google Vertex AI (data stays in your VPC; see the sketch after this list).
Enable audit logging.
Conduct regular access reviews.
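As a rough illustration of the Bedrock route from the list above, here is what invoking Claude through AWS Bedrock with boto3 can look like, so traffic is governed by your own AWS agreement; the region and model ID are assumptions you would swap for whatever your organization has enabled.

```python
import json
import boto3

# Bedrock runtime client in your own AWS account; region is illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID is an assumption; use whichever Claude model your organization has enabled.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize this redacted incident report: [DETAILS]"}
    ],
}

# The request is handled under your AWS contract rather than consumer claude.ai terms.
response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

The same pattern applies to Google Vertex AI with its own SDK; the point is that requests run inside your cloud environment instead of a consumer account.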
Read up on this Claude Code security guide.
Regular privacy hygiene:
Review privacy settings monthly.
Delete old conversations you don’t need.
Check which integrations have access.
Monitor for unauthorized account access.
Use strong authentication (MFA when available).
For maximum privacy, if you absolutely cannot accept any cloud AI risk:
Use open-source models (Llama, Mistral) running locally.
Self-host with air-gapped infrastructure.
Don’t use cloud AI at all.
Pro tip for 2026: For those seeking the highest privacy without an Enterprise contract, the Anthropic API is actually stricter than the web interface. As of September 14, 2025, standard API log retention was reduced from 30 days to just 7 days, and data is never used for training. It’s a flat policy with no exceptions.
Organizations needing longer retention for auditing can opt into 30 days via their Data Processing Addendum. Zero Data Retention (ZDR) is available for qualifying Enterprise customers. This 7-day retention makes Claude API one of the most privacy-protective in the industry.
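For illustration, a direct API call using Anthropic’s official Python SDK looks like the sketch below; the model name is an assumption, and the API key comes from the Anthropic Console (set as ANTHROPIC_API_KEY) rather than your claude.ai login.

```python
# pip install anthropic
import anthropic

# Reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Model name is illustrative; use whichever Claude model your account can access.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Review this redacted contract clause: [CLAUSE]"}
    ],
)

print(message.content[0].text)
```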
If your organization needs Claude, the Enterprise tier offers substantially better privacy:
Data protection: your data is never used for training (guaranteed by contract), with custom retention policies and optional Zero Data Retention.
Access controls: admin controls over who in your organization can use Claude and how.
Security: SOC 2 Type II certification and comprehensive audit logging.
Pricing: custom Enterprise pricing; Claude Team starts at $25 per user per month.
Compare this to consumer accounts, where protection depends entirely on your settings and there is no contract backing it up.
Real Enterprise deployments: According to Anthropic’s announced enterprise case studies, Claude Enterprise is in production in healthcare (Cleveland Clinic, deployed across 120 hospitals under a HIPAA BAA), finance (JPMorgan Chase), and legal (Kirkland & Ellis).
Here’s the honest answer:
Consumer Claude (Free/Pro/Max): No
Reasons not to use consumer accounts for confidential information: there is no Data Processing Agreement and therefore no legal protection, 30-day retention still creates exposure, you are likely violating corporate data policies, and flagged content may be reviewed by humans.
Enterprise Claude with proper setup: Conditional yes
Enterprise Claude is safe for confidential information when you have implemented the following safeguards: a formal DPA, Zero Data Retention enabled (if needed), a documented risk assessment, employee training, access controls, and regular audits.
However, even with Enterprise, Claude should still not be used for classified information, existential trade secrets, or anything that must remain on air-gapped systems.
Claude’s privacy is better than most alternatives. According to comparative analyses:
Claude is safer than: ChatGPT, Google Gemini, and Perplexity for default consumer privacy.
Claude is similar to: other major AI vendors at the enterprise tier, where contracts and DPAs do the heavy lifting.
Claude is less safe than: on-device options like Apple Intelligence and open-source models running locally on your own hardware.
Ask yourself:
What’s the worst-case scenario if this data leaks?
Do I have a legal obligation to protect this data?
Am I using the right account type for this use case?
Have I configured privacy settings correctly?
Does my organization have a policy on AI tool usage?
If you answered “catastrophic” to number 1 or “yes” to number 2, don’t use consumer Claude. Consider Enterprise with ZDR, or don’t use cloud AI at all.
Claude’s privacy story changed in September 2025. Anthropic went from “we never train on your data” to “we’ll train on your data if you let us.” That’s a significant retreat from its privacy-first positioning.
But context matters. Claude still offers opt-in rather than default training, 30-day retention if you opt out, restricted employee access, and strong enterprise protections.
For individual users, Claude remains one of the most private mainstream AI chatbots, especially if you opt out of training. Tom’s Guide called it “the clear winner” over ChatGPT, Gemini, and Perplexity.
For businesses, Consumer Claude (Pro/Max) is not appropriate for confidential data. Period. Use Enterprise with a DPA, or don’t use it at all.
For maximum privacy, the only truly private AI is one running locally on your own hardware. Everything else is a trade-off between capability and control.
Review your settings: Go to Settings > Privacy and confirm your training preference.
Delete old conversations: Remove anything you wouldn’t want retained.
If using for work: Check if you should be on Enterprise instead.
Stay informed: Privacy policies change. Anthropic’s did in 2025, and it could change again.
The bottom line is that Claude’s privacy is good relative to alternatives, but it’s not perfect. Use it knowing what you’re trading. And for anything truly sensitive, remember: the safest data is the data you never share.
Claude is safer than many AI alternatives, but “safe” depends on your use case. For casual personal use with training opted out, Claude offers strong privacy protections: 30-day data retention, no training on your conversations, and restricted employee access. For business use, consumer accounts (Free/Pro/Max) are NOT safe for confidential information. You need Enterprise with a Data Processing Agreement. Claude’s privacy is stronger than ChatGPT for individual users but weaker than on-device solutions like Apple Intelligence. Think of it as “safe for what you’d share in a coffee shop, unsafe for what belongs in a vault.”
By default, no. Anthropic’s privacy policy explicitly states that employees cannot access your conversations unless: (1) you explicitly consent (e.g., sharing feedback for debugging), (2) your conversation is flagged for safety violations (abuse, illegal content), or (3) legal compliance requires it (court orders, law enforcement requests). This is more restrictive than ChatGPT’s policy, which simply says OpenAI “may review” conversations without specifying when. However, if your content gets flagged, it may be reviewed by human moderators and retained for up to 2 years.
It depends on your settings. If you opted OUT of training (Settings > Privacy > “Help improve Claude” toggle OFF), your conversations are retained for 30 days for technical/security purposes, then deleted. They are never used to train AI models. If you opted IN, your conversations are retained for up to 5 years and used to train future Claude models. For Enterprise accounts, data is never used for training regardless of settings; it’s governed by your contract. All accounts use data for providing responses, safety monitoring, improving infrastructure, and complying with legal requirements.
Claude is slightly more private than ChatGPT for individual users. Key differences are that ChatGPT trains on conversations by default (you must opt out), while Claude requires you to actively choose. ChatGPT stores conversations indefinitely unless deleted; Claude uses 30-day retention if you opt out, or 5 years if you opt in. Both offer similar privacy controls: temporary/incognito chat modes and memory toggles. ChatGPT’s controls are slightly more independent (you can keep chat history while blocking training), while Claude has clearer employee access restrictions. For enterprise accounts, they’re roughly equivalent: both offer strong protections with proper contracts. When it comes to a winner, it’s Claude for default privacy stance and employee access transparency, and ChatGPT for control flexibility.
Only if you’re using Claude Enterprise with proper safeguards. Consumer accounts (Free/Pro/Max) are NOT appropriate for confidential business information, even if you opt out of training. This is because: (1) No Data Processing Agreement means no legal protection, (2) 30-day retention creates exposure, (3) You’re likely violating corporate data policies, (4) Flagged content may be reviewed by humans. Claude Enterprise is safe for confidential work when you have a formal DPA, Zero Data Retention enabled (if needed), documented risk assessment, employee training, access controls, and regular audits. Even then, don’t use it for classified information, existential trade secrets, or anything requiring air-gapped systems.
Yes, for EU/EEA users. Anthropic uses EU-approved Standard Contractual Clauses (SCCs) for data transfers outside Europe. You have full GDPR rights: access your data, request deletion, data portability, object to processing, and rectify inaccurate information. Email privacy@anthropic.com to exercise these rights. However, the September 2025 opt-in interface has been criticized as a potential GDPR violation; critics called the large “Accept” button with pre-toggled “On” setting a manipulative dark pattern. As of March 2026, no regulatory action has been taken, but GDPR compliance remains contested. Enterprise customers get additional protections through Data Processing Addendums.
For individual conversations: Open the chat > three-dot menu > “Delete conversation.” For all conversations: Settings > Privacy > “Delete conversation history” > Confirm. Account deletion: Contact Anthropic support to request full account deletion. Important caveats are that deleted chats are removed immediately from your interface but retained in Anthropic’s backend for 30 days before permanent deletion. Deleted conversations are never used for training. However, if data was already used for training before deletion, it cannot be removed from the model. For Enterprise accounts, deletion policies are governed by your contract and may differ.
Consumer accounts (Free/Pro/Max) are NOT HIPAA compliant. Do not use them for protected health information (PHI). Claude Enterprise can be configured for HIPAA compliance with: (1) Business Associate Agreement (BAA) signed with Anthropic, (2) Zero Data Retention mode enabled, (3) Additional security controls for PHI protection, (4) Audit logging for compliance tracking. For example, Cleveland Clinic deployed Claude across 120 hospitals after securing a HIPAA BAA and SOC 2 Type II certification. Even with Enterprise, certain features like web search may be disabled under HIPAA configuration. Only use Claude for healthcare data if you have explicit HIPAA Enterprise setup.
It depends. Before September 2025, Anthropic never used consumer conversations for training. After October 8, 2025, you must choose. If you opt OUT (toggle OFF), your conversations are never used for training. If you opt IN (toggle ON), your conversations may be used to train future Claude models and are retained for up to 5 years. Enterprise/API users: Data is never used for training, regardless of settings. Incognito mode: Conversations in incognito mode are never used for training, even if your global setting is “on.” Check your settings: Settings > Privacy > “Help improve Claude” toggle. Many users unknowingly opted in during the September 2025 policy change.
Anthropic’s full privacy policy is at anthropic.com/legal/privacy. Key points are that it:
Collects account info, conversation data, technical data, and third-party integration data.
Uses data for service operation, safety monitoring, training (if you opt in), and legal compliance.
Employees cannot access conversations by default; access requires your consent, a safety flag, or a legal requirement.
Consumer data: 30-day retention (opt-out) or 5-year retention (opt-in). Enterprise data: Never used for training, custom retention via contract.
GDPR/CCPA compliant with standard contractual clauses.
Deleted conversations are not used for training. The policy changed significantly in September 2025 from “never train” to “opt-in training.”
It depends on your account type. Consumer accounts (Free/Pro/Max): Your employer cannot see your conversations unless they have legal access to your device or network. These are personal accounts tied to your email. Enterprise/Team accounts: Yes, if you’re using a company-provided Claude account. Enterprise administrators have access to usage analytics, audit logs, and potentially full conversation history depending on configuration. Your IT department can see what you’re doing. Always assume work-provided tools are monitored. Best practice is to never use work Claude accounts for personal conversations and never use personal Claude accounts for work confidential information.
Yes, Claude Enterprise is significantly more private than Claude Pro.
Claude Pro ($20/month): Claude Pro is a consumer account governed by the standard privacy policy. Your data may be used for training if you opt in, with conversations retained for up to 5 years. There is no Data Processing Agreement protecting your data. If you opt out of training, your data is retained for 30 days. Claude Pro does not provide HIPAA compliance or SOC 2 guarantees for your individual account.
Claude Enterprise (custom pricing): Claude Enterprise never uses your data for training, and this is guaranteed by contract. It includes a Data Processing Agreement that is GDPR-compliant. Zero Data Retention mode is available for maximum privacy. The service is SOC 2 Type II certified with full audit rights for your organization. It can be configured for HIPAA compliance with a Business Associate Agreement. You can set custom retention policies based on your needs. Enterprise includes admin controls and comprehensive audit logging.
The privacy gap between these tiers is massive. Claude Pro is designed for personal use, while Claude Enterprise is built for organizations with serious compliance requirements.