Claude privacy: What Anthropic knows about you (and how it compares)

“We will never use your conversations to train our AI.”

That’s what Anthropic said when it launched Claude. For two years, that promise set Claude apart. While ChatGPT trained on every conversation by default and Google Gemini merged your chats with your entire digital footprint, Anthropic held the line: your data stayed private. Period.

Then, in September 2025, everything changed.

Anthropic sent an email to millions of users with a simple popup: “Updates to Consumer Terms and Policies.” A large black “Accept” button. And below it, in smaller text, a toggle switch—pre-set to “On”—giving the company permission to use your conversations to train future AI models. For up to five years.

If you clicked Accept without reading carefully, you just agreed to let Anthropic feed every chat, every coding session, every idea you’ve shared with Claude into its training pipeline. The 30-day data deletion? Gone. The privacy-first positioning? Reversed.

Most users never noticed. The change was announced in a blog post buried in Anthropic’s news feed. No headlines. No alarm bells. Just a quiet policy shift that redefined what “privacy” meant for one of the most trusted AI assistants on the market.

Here’s what makes this complicated: even after the change, Claude is still more private than most alternatives. The training is opt-in, not automatic. The controls are clearer than ChatGPT’s. Enterprise customers still get bulletproof protection. But the promise that made Anthropic different—that your conversations would never become training data—is gone.

This guide explains what changed in September 2025, what data Anthropic collects, how the company uses it, and how Claude’s privacy compares to ChatGPT and other AI tools. It also covers compliance (GDPR, SOC 2, HIPAA), enterprise protections, privacy settings you can control, and whether you should trust Claude with confidential information. Because the answer depends on which Claude you’re using—and whether you read the fine print.

What we’ll cover:

  • What data does Claude collect?
  • How Anthropic uses your data
  • Claude privacy policy: key points
  • Claude vs ChatGPT privacy comparison
  • Is Claude safe for work and sensitive data?
  • Claude privacy settings and controls
  • Claude AI privacy compliance (GDPR, HIPAA, etc.)
  • How to use Claude more privately
  • Claude enterprise: better privacy for organizations
  • Should you trust Claude with confidential information?
  • Conclusion: final thoughts on Claude privacy

Let’s start with the basics.

What data does Claude collect?

Like all AI chatbots, Claude collects information to operate the service. According to Anthropic’s privacy policy, the company collects:

  1. Account information

  • Email address, name (if provided)

  • Payment information (processed through third-party payment providers)

  • Account preferences and settings

  2. Conversation data

  • Every message you send to Claude

  • Claude’s responses

  • Files you upload (documents, images, code)

  • Feedback you provide (thumbs up/down ratings)

  3. Technical information

  • Device type, browser, operating system

  • IP address and general location (if you enable location services)

  • Usage patterns (when you use Claude, feature usage)

  • Error logs and diagnostic data

  4. Third-party integration data

If you connect Claude to other services (Google Drive, Slack, etc.), Claude can access data from those integrations based on the permissions you grant.

What’s different about Claude? According to independent privacy analyses, Claude’s data collection is more limited than competitors like Google Gemini, which integrates deeply with your broader Google ecosystem (search history, Gmail, YouTube activity).

But this is the critical point: The conversation data is what matters most for privacy. Everything you type into Claude – work documents, personal questions, code, ideas – gets collected. The question is what happens to it next.

How Anthropic uses your data

This is where the September 2025 policy change matters. Here’s how Anthropic uses your data, by account type:

For consumer accounts (Free, Pro, Max):

Before September 2025: Anthropic did NOT use conversations for training. Data was retained for 30 days, then deleted.

After October 8, 2025 (enforcement date): You must choose:

Option 1: Opt out (Privacy Mode)

  • Data retained for 30 days for technical/security purposes
  • NOT used to train Claude models
  • Deleted after 30 days (unless flagged for safety review).

Option 2: Opt in (Help Improve Claude)

  • Data retained for up to 5 years
  • Used to train future Claude models
  • Used to improve safety systems.

For enterprise accounts (Claude for Work, API, Gov, Education):

  • Data is NEVER used for training
  • Retention controlled by contract (30 days standard, Zero Data Retention available)
  • Protected by data processing agreements (DPAs).

Other uses (all account types):

Anthropic uses data for:

  • Providing the service (generating responses)
  • Safety monitoring (detecting abuse, illegal content)
  • Improving infrastructure and features
  • Complying with legal requirements.

Even if you opt out of training, flagged conversations (those that violate usage policies) may be reviewed by human moderators and retained for up to 2 years for safety purposes.

Claude privacy policy: key points

The Claude AI privacy policy is at anthropic.com/legal/privacy. Here’s what you need to know:

What the policy says:

On data training (as of October 2025):

“For users of our consumer products (Claude Free, Pro, and Max, and when using Claude Code with these accounts) we may use your chats and coding sessions to improve Claude, if you choose to allow us to.”

Translation? Training is opt-in, not automatic. But you must actively make a choice.

On data retention:

  • Opt-out: approximately 30 days
  • Opt-in: Up to 5 years
  • Deleted conversations: Not used for training “under any circumstance”

On employee access:

“By default, Anthropic employees cannot access your conversations with Claude.”

Access requires:

  • Your explicit consent (e.g., sharing for debugging)
  • Safety review of flagged content
  • Legal compliance requirements

Geographic protections:

According to section 11 of Anthropic’s privacy policy, EU/EEA users are protected by the GDPR. Anthropic uses standard contractual clauses for international data transfers. You have rights to:

  • Access your data
  • Request deletion
  • Data portability
  • Object to processing.

California users are protected by the CCPA and have similar rights to the GDPR.

What's missing?

According to independent privacy evaluations, Anthropic’s policy is clearer than ChatGPT’s but still has gaps:

  • Its language is vague on “commercially reasonable” security measures.
  • It has limited detail on how long “flagged” content is retained.
  • There are no transparency reports on government data requests.

Claude vs ChatGPT privacy comparison

This is the comparison most people want. Here’s how Claude and ChatGPT handle privacy, based on their current policies (March 2026):

Default privacy stance

ChatGPT (OpenAI):

  • It trains on conversations BY DEFAULT.
  • You must opt out manually.
  • History is saved indefinitely unless you delete it.

Claude (Anthropic):

  • It asks you to choose (opt-in for training).
  • There’s no default – you must decide.
  • Effective October 8, 2025.

The winner here is Claude. While both offer opt-out, Claude forces an explicit choice rather than defaulting to training. See what Tom’s Guide says and read TechCrunch analysis.

Data retention

ChatGPT:

  • Conversations are stored indefinitely (if not deleted).
  • Deleted chats are removed within 30 days.
  • Once training data is used, it cannot be removed from the model.

Claude:

  • Opted out: 30 days retention
  • Opted in: Up to 5 years retention
  • Deleted chats are not used for training.

This one is a tie. ChatGPT’s indefinite storage is worse, but Claude’s 5-year retention (if opted in) is aggressive.

Privacy controls

ChatGPT:

  • You can opt out of training in Settings > Data Controls.
  • ChatGPT offers “Temporary Chat” mode, which prevents conversations from being saved or used for training.
  • You can toggle Memory on or off independently.
  • You can delete individual chats from your history.
  • The controls work independently, meaning you can keep your chat history while blocking training.

Claude:

  • You can opt out of training in Settings > Privacy.
  • Claude offers Incognito mode (ghost icon), launched September 2025: incognito chats don’t save to history and are never used for training.
  • You can delete individual chats from your history.
  • You can delete your entire conversation history at once.

This one is a tie. Both offer temporary/incognito modes and independent memory toggles with similar functionality.

Employee access

ChatGPT:

  • OpenAI states that it may review conversations to improve safety systems.
  • The policy is vague about when or how these reviews happen.

Claude:

  • Anthropic’s policy states that by default, employees cannot access your conversations.
  • Access requires either your explicit consent or a safety flag on the content.
  • Claude’s policy provides more explicit restrictions on employee access.

The winner here is Claude for its clearer policy on access restrictions.

Enterprise privacy

ChatGPT Enterprise:

  • ChatGPT Enterprise does not train on your data.
  • It offers custom retention periods.
  • It includes SSO and admin controls.
  • Pricing ranges from $60 to $200+ per user per month, depending on the plan.

Claude Team & Enterprise:

  • Claude Team and Enterprise do not train on your data.
  • A Data Processing Agreement is standard with all business accounts.
  • Zero Data Retention mode is available for sensitive industries (Enterprise only).
  • Both include SSO and admin controls.
  • Pricing is $25-30 per user per month for Team Standard, $125-150 for Team Premium, and custom pricing for Enterprise.

This one is a tie. Both offer strong enterprise privacy. Claude Team is cheaper at entry level ($25/month vs. ChatGPT’s $60+ minimum).

The 2025 policy pivot

While Anthropic built its reputation on “no-training” privacy, the September 2025 policy shift (enforceable from October 2025) fundamentally changed the service. Anthropic introduced a “forced-choice” gate: users were required to actively choose whether to “Help Improve Claude” (opt-in) or maintain their privacy (opt-out) to continue using the service.

Check your settings now: many users clicked through these pop-ups quickly and may have unknowingly accepted the “On” position, which extends data retention from 30 days to 5 years for training purposes.

When Anthropic announced the policy change, it faced significant criticism:

The Decoder review said:

  • Anthropic uses a “questionable dark pattern” in the opt-in UI, citing the large “Accept” button and the small toggle pre-set to “On”.
  • Critics called it manipulative under GDPR standards.

TechCrunch weighed in with:

  • The change represents “a stunning reversal” from Anthropic’s privacy-first positioning.
  • It reflects competitive pressure; AI companies need training data.

Medium analysis made the point:

“Claude is still more cautious than OpenAI or Google.”

But it “no longer holds the hard line of refusing training by default.”

Apple Intelligence is now the “only true privacy absolutist.”

Bottom line? Claude went from “no training ever” to “training if you agree.” That’s a major retreat. But it’s still more privacy-protective than ChatGPT, which trained from the start.

Is Claude safe for work and sensitive data?

The short answer: it depends on your account type.

Consumer accounts (Free/Pro/Max): NO

Do not use consumer Claude for:

  • Client information
  • Proprietary code or trade secrets
  • Health information (HIPAA data)
  • Financial records
  • Anything you wouldn’t post publicly.

This is because even if you opt out of training:

  • Data is retained for 30 days.
  • Flagged content may be reviewed by humans.
  • There are no legal protections (no DPA or BAA).
  • You’re likely violating your company’s data policies.

This AMST Legal analysis says, “Small businesses using Pro accounts face the same data training exposure as Free users. The biggest question is whether companies realize that they are now training Claude AI with their data.”

Enterprise accounts (Claude Team or Claude Enterprise): YES, with caveats

Claude Team and Enterprise accounts offer:

  • Data Processing Agreements (GDPR-compliant) – both Team and Enterprise
  • No training on your data – both Team and Enterprise
  • SOC 2 Type II audited – both Team and Enterprise
  • SSO and admin controls – both Team and Enterprise

Claude Enterprise additionally offers:

  • Zero Data Retention mode (required for healthcare/finance)
  • HIPAA Business Associate Agreements
  • Custom data retention policies
  • Enhanced compliance features

Some real-world enterprise deployment examples:

  • Cleveland Clinic: Healthcare (120 hospitals)
  • JPMorgan Chase: Finance (80 trading desks)
  • Kirkland & Ellis: Legal (35 global offices)

But even with Enterprise:

  • Claude Enterprise is still cloud-based, meaning your data leaves your infrastructure.
  • It requires proper access controls.
  • You must train employees on what information not to share with Claude.
  • Regular audits are required to ensure compliance.

The “Shadow AI” problem

“Shadow AI” happens when employees use personal accounts (Claude, ChatGPT, Gemini, Perplexity, or any consumer AI tool) for work tasks without their company knowing. This is dangerous because an employee might use a personal account to draft client emails, analyze company data, or debug proprietary code. When they signed up, they accepted consumer terms, not enterprise protections. Depending on their settings, that company data could be retained for months or years and used to train AI models.

The company has:

  • No data processing agreement
  • No control over the data
  • No audit trail
  • No way to enforce compliance.

According to this analysis: “Individual employees accepted terms independently. They unknowingly bound their organizations to training consent. Corporate data entered pipelines without authorization or oversight.”

This is why many companies now block consumer AI tools entirely and require employees to use only approved enterprise accounts.

Claude privacy settings and controls

Here’s how to control your privacy in Claude:

How to opt out of training:

  1. Go to claude.ai and sign in.

  2. Click your profile icon (top right).

  3. Select “Settings”.

  4. Go to “Privacy” section.

  5. Find “Help improve Claude” toggle.

  6. Turn it OFF.

  7. Confirm.

This only affects future conversations. Past chats (before you opted out) may already be in training pipelines.

How to delete conversation history

Individual chats:

  • Open the chat.
  • Click the three-dot menu.
  • Select “Delete conversation”.

All chats:

  • Settings > Privacy.
  • “Delete conversation history”.
  • Confirm.

Deleted chats are not used for training, but they may be retained in backups for 30 days before permanent deletion.

It is worth noting that Claude’s Incognito chats are the only exception to the training rule. Conversations started in this mode are never used for model improvement, regardless of whether your global “Help Improve Claude” toggle is on or off.

Claude AI privacy compliance (GDPR, HIPAA, etc.)

Here’s where Claude stands on privacy compliance:

Security certifications

According to Anthropic’s Trust Center, Claude has:

SOC 2 Type II:

  • Independent audit of security controls
  • Covers availability, confidentiality, processing integrity
  • Report available under NDA for enterprise customers

ISO/IEC 27001:2022:

  • International information security standard
  • Full certification as of 2025

ISO/IEC 42001:2023:

  • AI-specific management system standard
  • Shows commitment to responsible AI governance

GDPR compliance

For EU/EEA users:

Standard Contractual Clauses: Anthropic uses EU-approved Standard Contractual Clauses (SCCs) for data transfers outside the European Economic Area.

Data subject rights: You can exercise the following rights under GDPR:

  • You have the right to access your personal data.
  • You have the right to request deletion of your data.
  • You have the right to data portability.
  • You have the right to object to processing.
  • You have the right to rectification of inaccurate data.

Legal basis for processing: Anthropic processes your data based on three legal grounds:

  • Contract performance, which allows Anthropic to provide Claude’s services to you.
  • Legitimate interests, which covers service improvement and security.
  • Consent, which applies when you opt in to allow your data to be used for training.

To exercise your rights, send an email to privacy@anthropic.com.

GDPR concerns

According to independent legal analysis, the September 2025 opt-in UI may violate the GDPR:

“These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) … making it likely that Anthropic will soon draw the attention of privacy regulators.”

There was no regulatory action as of March 2026, but GDPR compliance remains contested.

HIPAA compliance

Consumer accounts: Consumer accounts (Free, Pro, and Max) are NOT HIPAA compliant.

Enterprise accounts: Enterprise accounts can be configured for HIPAA compliance with the following features:

  • Anthropic provides a Business Associate Agreement (BAA) for covered entities.
  • Zero Data Retention mode ensures data is not stored beyond immediate processing.
  • Additional security controls are implemented to protect PHI (protected health information).
  • Audit logging tracks all data access and usage for compliance purposes.

For example, Cleveland Clinic deployed Claude across 120 hospitals after Anthropic achieved SOC 2 Type II certification and signed a HIPAA Business Associate Agreement.

Other compliance frameworks

FedRAMP: FedRAMP certification is in progress as of March 2026 to enable US government use of Claude.

PCI DSS: Claude is not PCI DSS certified, as it is not designed for payment processing.

Regional compliance:

  • Claude is compliant with the California Consumer Privacy Act (CCPA).
  • Claude is compliant with Brazil’s Lei Geral de Proteção de Dados (LGPD) using Standard Contractual Clauses.
  • Claude is compliant with UK GDPR.

Learn more at the Anthropic Privacy Center and in this compliance comparison.

How to use Claude more privately

Even with enterprise accounts, follow these practices:

  1. Never share these in Claude:

  • Social Security Numbers, passport numbers, driver’s licenses

  • Credit card numbers, bank account details

  • Health records (unless using HIPAA-compliant Enterprise + BAA)

  • Passwords, API keys, access tokens

  • Client confidential information

  • Proprietary code with trade secrets

  • Personal information of others without consent

  2. Use redaction for examples:

Instead of:

“Draft an email to john.smith@company.com about Project Phoenix budget overruns”

Use:

“Draft an email to [CLIENT] about [PROJECT] budget concerns”
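The redaction habit can even be automated. Here’s a minimal sketch (the patterns and `redact` helper are illustrative, not part of any Anthropic tooling; real PII detection needs a vetted library) that masks common identifiers before a prompt leaves your machine:

```python
import re

# Illustrative patterns only; production PII detection needs a vetted library.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

prompt = "Draft an email to john.smith@company.com about the 123-45-6789 case"
print(redact(prompt))
# -> Draft an email to [EMAIL] about the [SSN] case
```

Run anything sensitive through a filter like this before pasting it into any cloud AI tool; placeholders such as [CLIENT] and [PROJECT] can be swapped back into Claude’s output afterwards.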

  3. For sensitive work:

  • Use Enterprise account with Zero Data Retention.

  • Run Claude via API with ZDR agreement.

  • Deploy via AWS Bedrock or Google Vertex AI (data stays in your VPC).

  • Enable audit logging.

  • Conduct regular access reviews.

Read up on this Claude Code security guide.

  4. Regular privacy hygiene:

  • Review privacy settings monthly.

  • Delete old conversations you don’t need.

  • Check which integrations have access.

  • Monitor for unauthorized account access.

  • Use strong authentication (MFA when available).

  5. For maximum privacy:

If you absolutely cannot accept any cloud AI risk:

  • Use open-source models (Llama, Mistral) running locally.

  • Self-host with air-gapped infrastructure.

  • Don’t use cloud AI at all.

Pro tip for 2026: For those seeking the highest privacy without an Enterprise contract, the Anthropic API is actually stricter than the web interface. As of September 14, 2025, standard API log retention was reduced from 30 days to just 7 days, and data is never used for training. It’s a flat policy with no exceptions.

Organizations needing longer retention for auditing can opt into 30 days via their Data Processing Addendum. Zero Data Retention (ZDR) is available for qualifying Enterprise customers. This 7-day retention makes Claude API one of the most privacy-protective in the industry.
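For teams weighing the API route, a call is straightforward. The sketch below assumes the official `anthropic` Python SDK (`pip install anthropic`) and an `ANTHROPIC_API_KEY` environment variable; the model id is an illustrative placeholder, not a recommendation:

```python
import os

def build_request(prompt: str) -> dict:
    """Assemble Messages API parameters (values here are illustrative)."""
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model id
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Summarize our retention obligations in one sentence.")

if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # official SDK: pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(**req)
    print(response.content[0].text)
else:
    print("No API key set; request would use model", req["model"])
```

Note that the 7-day retention window covers Anthropic’s logs; anything your own application logs is your responsibility.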

Claude Enterprise: better privacy for organizations

If your organization needs Claude, the Enterprise tier offers substantially better privacy:

What you get:

Data protection:

  • Data Processing Agreement (GDPR-compliant)
  • No training on your data (contractually guaranteed)
  • Zero Data Retention option (no storage beyond immediate processing)
  • Configurable retention policies
  • Data residency options (EU, US, Asia-Pacific)

Access controls:

  • SSO via SAML 2.0 or OIDC
  • Role-based access control
  • Admin dashboards
  • Domain capture (automatic enrollment)
  • Audit logging

Security:

  • SOC 2 Type II certified
  • Encryption in transit and at rest
  • Network isolation options (AWS Bedrock, Google Vertex AI)
  • Vulnerability scanning
  • Incident response procedures

Pricing:

  • Claude Team: $30/user/month (5-user minimum)
  • Claude Enterprise: Custom pricing (typically starts ~$60-100/user/month for larger deployments)

Compare this to:

  • ChatGPT Enterprise: $60-200+/user/month
  • Gemini Enterprise: $30/user/month (with Google Workspace).

Real Enterprise deployments:

According to Anthropic’s published enterprise case studies:

Healthcare (Cleveland Clinic):

  • 120 hospitals
  • HIPAA BAA + SOC 2 Type II removed procurement barriers
  • Clinical decision support, documentation

Finance (JPMorgan Chase):

  • 80 trading desks
  • Compliance monitoring systems
  • Regulatory validation complete

Legal (Kirkland & Ellis):

  • 35 global offices
  • GDPR compliance enabled EU expansion
  • Contract analysis, legal research

Should you trust Claude with confidential information?

Here’s the honest answer:

Consumer Claude (Free/Pro/Max): No

Reasons not to use consumer accounts for confidential information:

  • Legal exposure: There is no Data Processing Agreement, which means you have no contractual privacy guarantee.
  • Compliance risk: Using consumer accounts violates most corporate data policies.
  • Training risk: Even if you opt out, your data is retained for 30 days.
  • Access risk: Flagged content may be reviewed by Anthropic employees.
  • Shadow IT: Using personal accounts creates organizational governance gaps.

Enterprise Claude with proper setup: Conditional yes

Enterprise Claude is safe for confidential information when you have implemented the following safeguards:

  • You have a formal Data Processing Agreement in place.
  • You have enabled Zero Data Retention mode (if your industry requires it).
  • You have completed a documented vendor risk assessment.
  • You have trained employees on data classification rules.
  • You have implemented access controls and monitoring.
  • You conduct regular compliance audits.

However, even with Enterprise, Claude should still not be used for:

  • Classified government information requiring specific clearance levels
  • Trade secrets with existential business risk (information that could destroy your company if leaked)
  • Information under court protective orders or legal holds
  • Anything requiring air-gapped systems or offline-only processing

The real question is relative risk

Claude’s privacy is better than most alternatives. According to comparative analyses:

Claude is safer than:

  • Google Gemini (deep ecosystem integration)
  • ChatGPT Free (trains by default)
  • Most free AI tools

Claude is similar to:

  • ChatGPT Enterprise (with proper setup)
  • Microsoft Copilot for Enterprise

Claude is less safe than:

  • Apple Intelligence (on-device processing)
  • Self-hosted open-source models
  • Air-gapped local systems

The trust calculation

Ask yourself:

  1. What’s the worst-case scenario if this data leaks?

  2. Do I have a legal obligation to protect this data?

  3. Am I using the right account type for this use case?

  4. Have I configured privacy settings correctly?

  5. Does my organization have a policy on AI tool usage?

If you answered “catastrophic” to number 1 or “yes” to number 2, don’t use consumer Claude. Consider Enterprise with ZDR, or don’t use cloud AI at all.

Conclusion: Final thoughts on Claude privacy

Claude’s privacy story changed in September 2025. Anthropic went from “we never train on your data” to “we’ll train on your data if you let us.” That’s a significant retreat from its privacy-first positioning.

But context matters. Claude still offers:

  • Opt-in training (not opt-out like ChatGPT)
  • Shorter retention if you decline (30 days vs indefinite)
  • Clearer employee access restrictions
  • Strong enterprise protections with proper contracts.

For individual users, Claude remains one of the most private mainstream AI chatbots, especially if you opt out of training. Tom’s Guide called it “the clear winner” among ChatGPT, Gemini, and Perplexity.

For businesses, consumer Claude (Free/Pro/Max) is not appropriate for confidential data. Period. Use Enterprise with a DPA, or don’t use it at all.

For maximum privacy, the only truly private AI is one running locally on your own hardware. Everything else is a trade-off between capability and control.

Next steps

  1. Review your settings: Go to Settings > Privacy and confirm your training preference.

  2. Delete old conversations: Remove anything you wouldn’t want retained.

  3. If using for work: Check if you should be on Enterprise instead.

  4. Stay informed: Privacy policies change. Anthropic’s did in 2025, and it could change again.

The bottom line is that Claude’s privacy is good relative to alternatives, but it’s not perfect. Use it knowing what you’re trading. And for anything truly sensitive, remember: the safest data is the data you never share.

Claude privacy FAQs

Is Claude safe to use?

Claude is safer than many AI alternatives, but “safe” depends on your use case. For casual personal use with training opted out, Claude offers strong privacy protections: 30-day data retention, no training on your conversations, and restricted employee access. For business use, consumer accounts (Free/Pro/Max) are NOT safe for confidential information. You need Enterprise with a Data Processing Agreement. Claude’s privacy is stronger than ChatGPT for individual users but weaker than on-device solutions like Apple Intelligence. Think of it as “safe for what you’d share in a coffee shop, unsafe for what belongs in a vault.”

Can Anthropic employees read my conversations?

By default, no. Anthropic’s privacy policy explicitly states that employees cannot access your conversations unless: (1) you explicitly consent (e.g., sharing feedback for debugging), (2) your conversation is flagged for safety violations (abuse, illegal content), or (3) legal compliance requires it (court orders, law enforcement requests). This is more restrictive than ChatGPT’s policy, which simply says OpenAI “may review” conversations without specifying when. However, if your content gets flagged, it may be reviewed by human moderators and retained for up to 2 years.

What does Anthropic do with my conversation data?

It depends on your settings. If you opted OUT of training (Settings > Privacy > “Help improve Claude” toggle OFF), your conversations are retained for 30 days for technical/security purposes, then deleted. They are never used to train AI models. If you opted IN, your conversations are retained for up to 5 years and used to train future Claude models. For Enterprise accounts, data is never used for training regardless of settings; it’s governed by your contract. All accounts use data for providing responses, safety monitoring, improving infrastructure, and complying with legal requirements.

Is Claude more private than ChatGPT?

Claude is slightly more private than ChatGPT for individual users. Key differences are that ChatGPT trains on conversations by default (you must opt out), while Claude requires you to actively choose. ChatGPT stores conversations indefinitely unless deleted; Claude uses 30-day retention if you opt out, or 5 years if you opt in. Both offer similar privacy controls: temporary/incognito chat modes and memory toggles. ChatGPT’s controls are slightly more independent (you can keep chat history while blocking training), while Claude has clearer employee access restrictions. For enterprise accounts, they’re roughly equivalent: both offer strong protections with proper contracts. When it comes to a winner, it’s Claude for default privacy stance and employee access transparency, and ChatGPT for control flexibility.

Can I use Claude for confidential work?

Only if you’re using Claude Enterprise with proper safeguards. Consumer accounts (Free/Pro/Max) are NOT appropriate for confidential business information, even if you opt out of training. This is because: (1) No Data Processing Agreement means no legal protection, (2) 30-day retention creates exposure, (3) You’re likely violating corporate data policies, (4) Flagged content may be reviewed by humans. Claude Enterprise is safe for confidential work when you have a formal DPA, Zero Data Retention enabled (if needed), documented risk assessment, employee training, access controls, and regular audits. Even then, don’t use it for classified information, existential trade secrets, or anything requiring air-gapped systems.

Is Claude GDPR compliant?

Yes, for EU/EEA users. Anthropic uses EU-approved Standard Contractual Clauses (SCCs) for data transfers outside Europe. You have full GDPR rights: access your data, request deletion, data portability, object to processing, and rectify inaccurate information. Email privacy@anthropic.com to exercise these rights. However, the September 2025 opt-in interface has been criticized as a potential GDPR violation; critics called the large “Accept” button with pre-toggled “On” setting a manipulative dark pattern. As of March 2026, no regulatory action has been taken, but GDPR compliance remains contested. Enterprise customers get additional protections through Data Processing Addendums.

How do I delete my data from Claude?

For individual conversations: Open the chat > three-dot menu > “Delete conversation.” For all conversations: Settings > Privacy > “Delete conversation history” > Confirm. Account deletion: Contact Anthropic support to request full account deletion. Important caveats are that deleted chats are removed immediately from your interface but retained in Anthropic’s backend for 30 days before permanent deletion. Deleted conversations are never used for training. However, if data was already used for training before deletion, it cannot be removed from the model. For Enterprise accounts, deletion policies are governed by your contract and may differ.

Is Claude HIPAA compliant?

Consumer accounts (Free/Pro/Max) are NOT HIPAA compliant. Do not use them for protected health information (PHI). Claude Enterprise can be configured for HIPAA compliance with: (1) Business Associate Agreement (BAA) signed with Anthropic, (2) Zero Data Retention mode enabled, (3) Additional security controls for PHI protection, (4) Audit logging for compliance tracking. For example, Cleveland Clinic deployed Claude across 120 hospitals after securing a HIPAA BAA and SOC 2 Type II certification. Even with Enterprise, certain features like web search may be disabled under HIPAA configuration. Only use Claude for healthcare data if you have explicit HIPAA Enterprise setup.

Does Claude train on my conversations?

It depends. Before September 2025, Anthropic never used consumer conversations for training. After October 8, 2025, you must choose. If you opt OUT (toggle OFF), your conversations are never used for training. If you opt IN (toggle ON), your conversations may be used to train future Claude models and are retained for up to 5 years. Enterprise/API users: Data is never used for training, regardless of settings. Incognito mode: Conversations in incognito mode are never used for training, even if your global setting is “on.” Check your settings: Settings > Privacy > “Help improve Claude” toggle. Many users unknowingly opted in during the September 2025 policy change.

Where can I find Claude’s privacy policy?

Anthropic’s full privacy policy is at anthropic.com/legal/privacy. Key points:

  • Collects account info, conversation data, technical data, and third-party integration data.

  • Uses data for service operation, safety monitoring, training (if you opt in), and legal compliance.

  • Restricts employee access by default; access requires consent or a safety flag.

  • Retains consumer data for 30 days (opt-out) or up to 5 years (opt-in); enterprise data is never used for training, with custom retention via contract.

  • GDPR/CCPA compliant with standard contractual clauses.

  • Deleted conversations are not used for training. The policy changed significantly in September 2025 from “never train” to “opt-in training.”

Can my employer see my Claude conversations?

It depends on your account type. Consumer accounts (Free/Pro/Max): Your employer cannot see your conversations unless they have legal access to your device or network. These are personal accounts tied to your email. Enterprise/Team accounts: Yes, if you’re using a company-provided Claude account. Enterprise administrators have access to usage analytics, audit logs, and potentially full conversation history depending on configuration. Your IT department can see what you’re doing. Always assume work-provided tools are monitored. Best practice is to never use work Claude accounts for personal conversations and never use personal Claude accounts for work confidential information.

Is Claude Enterprise more private than Claude Pro?

Yes, Claude Enterprise is significantly more private than Claude Pro.

Claude Pro ($20/month): Claude Pro is a consumer account governed by the standard privacy policy. Your data may be used for training if you opt in, with conversations retained for up to 5 years. There is no Data Processing Agreement protecting your data. If you opt out of training, your data is retained for 30 days. Claude Pro does not provide HIPAA compliance or SOC 2 guarantees for your individual account.

Claude Enterprise (custom pricing): Claude Enterprise never uses your data for training, and this is guaranteed by contract. It includes a Data Processing Agreement that is GDPR-compliant. Zero Data Retention mode is available for maximum privacy. The service is SOC 2 Type II certified with full audit rights for your organization. It can be configured for HIPAA compliance with a Business Associate Agreement. You can set custom retention policies based on your needs. Enterprise includes admin controls and comprehensive audit logging.

The privacy gap between these tiers is massive. Claude Pro is designed for personal use, while Claude Enterprise is built for organizations with serious compliance requirements.