DeepSeek privacy: Security risks & how to protect yourself

DeepSeek privacy is a serious issue and one that most people who downloaded the app never stopped to think about. When the Chinese AI chatbot exploded onto the scene in January 2025, it shot to number one on the Apple App Store within days. It also suffered a massive data leak shortly after launch, exposing over one million sensitive records. That’s not a coincidence; it’s a warning.

As of early 2026, multiple countries, including Italy, Australia, Taiwan, and South Korea, have banned or severely restricted the use of DeepSeek within their government sectors. They all cite the same reasons: DeepSeek stores your data in China, where the Chinese government can legally demand access to it at any time. Its security has been found to be far weaker than that of competing AI tools. And hidden code in its app has been discovered sending user data to a Chinese state-controlled company.

These are not theoretical risks. They are documented, verified, and significantly more serious than the privacy concerns around US-based AI tools like ChatGPT or Claude. Understanding the difference is the first step to protecting yourself.

This guide covers what DeepSeek is, what its privacy policy says, the specific risks you face when using it, how to protect yourself if you choose to use it anyway, and whether you should be using it at all.

  • What is DeepSeek?
  • DeepSeek privacy policy: What you need to know
  • DeepSeek privacy concerns: The specific risks
  • How to protect your privacy on DeepSeek: 6 essential steps
  • Safer alternatives to DeepSeek
  • Should you use DeepSeek?

What is DeepSeek?

DeepSeek is an AI chatbot developed by a Chinese company called Hangzhou DeepSeek Artificial Intelligence Co., Ltd. It works similarly to ChatGPT in that you type a question or a prompt, and it generates a response. It can write, code, summarise documents, answer questions, and reason through complex problems.

What made DeepSeek newsworthy when it released its R1 model in January 2025 was its claimed performance. The company said it had built a model that could match or beat leading US AI tools at a fraction of the cost – a claim that shook the US tech industry and sent stock prices tumbling at the time. Whether that claim is fully accurate is still debated, but the model is genuinely capable, and it is free to use, which explains its rapid rise in popularity.

Within weeks of launch, DeepSeek attracted over 30 million users, and by mid-2025 had approached 100 million monthly users worldwide, making it one of the fastest-growing AI apps ever.

The difference between DeepSeek and its US-based rivals is not really about what it can do. It’s about who controls what happens to your data after you use it. And that is where DeepSeek stands apart, in ways that matter a lot for your privacy and security.

DeepSeek privacy policy: What you need to know

What does DeepSeek’s privacy policy say? The short answer is that it collects a lot of personal data, stores it all in China, keeps it indefinitely, and gives you limited control over what happens to it. Now, let’s consider the long answer:

What data DeepSeek collects

DeepSeek collects three main types of information:

  1. What you give it directly: your account details (email, phone number, date of birth, username, and password), everything you type into the chat (your prompts, questions, uploaded files, and photos), and any feedback you provide

  2. What your device gives it automatically: your IP address, device identifiers, operating system, browser type, and (notably) your keystroke patterns. That last one is worth paying attention to. The rhythm of how you type is unique to you, like a fingerprint, and can be used to identify you even if you never gave your name.

  3. What third-party sign-ins give it: if you sign in through Google or Apple, DeepSeek may also receive data from those accounts.
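To make the keystroke-pattern point concrete, here is a minimal, hypothetical sketch of how keystroke dynamics work. The idea is simply that the gaps between your key presses form a timing profile, and two sessions from the same typist look far more alike than sessions from different people. All names and numbers below are illustrative, not DeepSeek's actual implementation:

```python
# Hypothetical sketch of keystroke-dynamics fingerprinting.
# The timing gaps between key presses form a behavioural profile
# that can re-identify a typist across sessions, even without a name.

def interkey_intervals(timestamps_ms):
    """Milliseconds between consecutive key presses."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def profile_distance(profile_a, profile_b):
    """Mean absolute difference between two timing profiles of equal length."""
    return sum(abs(a - b) for a, b in zip(profile_a, profile_b)) / len(profile_a)

# Two sessions from the same (simulated) typist, one from someone else.
session_1 = interkey_intervals([0, 120, 250, 380, 490])
session_2 = interkey_intervals([0, 125, 245, 385, 495])
stranger  = interkey_intervals([0, 300, 340, 700, 760])

# The same typist's sessions are far closer than the stranger's.
print(profile_distance(session_1, session_2) < profile_distance(session_1, stranger))  # True
```

Real systems use many more features (key hold times, error rates, common digraphs), but the principle is the same: the profile identifies you even if the text itself contains nothing personal.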

Where your data is stored

DeepSeek’s privacy policy is explicit about this: all your data is stored on servers in mainland China. Every question you ask, every file you upload, every conversation you have – it all ends up on a server in the People’s Republic of China. This matters because of what comes next.

Chinese government access

Under Chinese law, any company operating in China is legally required to hand over data to the government if asked – quietly, without the ability to refuse or challenge the request in an independent court. This is fundamentally different from the situation with US-based AI companies. When a US court demands data from OpenAI, for example, OpenAI can push back, appeal, and challenge the request. DeepSeek has no such legal protection to offer its users.

How long your data is kept

DeepSeek’s policy says it retains data for as long as “necessary” for business or legal purposes, with no specific timeframe given. Privacy analysts have found no automatic deletion schedule, which means your data could be kept for as long as your account exists, or longer.

How your data is used and shared

DeepSeek uses your data to train and improve its AI model, personalise your experience, and run analytics. It shares your data with service providers, advertising partners, companies within its corporate group, and law enforcement or government agencies when required. Importantly, DeepSeek itself argues that data privacy laws outside China, including US and EU laws, do not apply to it. Regulators in Europe strongly disagree, but that legal fight is still ongoing.

Your rights under the policy

The policy does give you some rights: you can delete your chat history and request access to your data. But privacy experts and regulators have noted that it is often unclear how effectively these rights are implemented. Any dispute you have with DeepSeek must be settled in courts in Hangzhou, China, which makes legal recourse as a foreign user extremely difficult in practice.

DeepSeek privacy concerns: The specific risks

So, what are the privacy risks of DeepSeek? Beyond the policy, there are seven specific, documented risks that set DeepSeek apart from other AI tools:

  1. Chinese government data access: As we covered, DeepSeek is legally required to give the Chinese government access to your data if asked. There is no independent court system in China that can challenge those requests on your behalf. For anyone storing sensitive personal, professional, or political information in DeepSeek conversations, this is a serious and concrete risk, not a hypothetical one.

  2. Security vulnerabilities: Researchers at Enkrypt AI tested DeepSeek’s security posture and found it is 11 times more susceptible to jailbreak attacks than other leading AI models. In testing, DeepSeek failed to block any harmful prompts – a 100% vulnerability rate – including prompts related to cybercrime and generating malicious content. Its competitors, including ChatGPT, Claude, and Gemini, performed significantly better, with jailbreak success rates ranging from 13% to 27%.

  3. Hidden code sending data to China Mobile: Cybersecurity researchers at Feroot Security discovered hidden code in the DeepSeek mobile and web apps that was transmitting user data to China Mobile, a state-controlled Chinese telecom whose US application was denied by the FCC in 2019 due to national security concerns. This data transfer was not disclosed to users.

  4. The data breach just after launch: Shortly after launch, security researchers at Wiz discovered an exposed database containing over one million sensitive records, including user chat logs and digital software keys that could have allowed unauthorised access to DeepSeek’s systems. The database was left publicly accessible without any authentication; Wiz’s researchers found it within minutes of scanning. DeepSeek locked it down quickly after being alerted, but it is not known whether anyone else accessed the data first.

  5. Weak encryption and security practices: Security researchers at NowSecure found that DeepSeek’s mobile apps send sensitive user data over the internet without proper encryption. They also found the use of outdated and weak encryption methods, and security features on the iOS app were deliberately disabled. This makes user data significantly more vulnerable to interception.

  6. Keystroke tracking: Unlike most AI chatbots, DeepSeek collects your keystroke patterns – the rhythm and timing of how you type. As we said, this acts as a unique behavioural fingerprint that can identify you independently of any other personal information you provide.

  7. Workplace and enterprise risks: For anyone using DeepSeek for work, the risks multiply. Client information, business strategies, financial data, legal documents, or proprietary code entered into DeepSeek goes directly to servers in China with no enterprise-grade data protections. Multiple countries, including Italy, Australia, Taiwan, and South Korea, have banned or restricted DeepSeek for this reason, and several others are investigating.

DeepSeek and jailbreaking: a higher-risk tool for hackers

One of the most serious technical issues with DeepSeek is how easy it is to “jailbreak.” Jailbreaking means finding ways to bypass the safety filters built into an AI model – the guardrails that are supposed to stop it from helping users do harmful things.

Western AI tools like ChatGPT and Claude have strict guardrails that make jailbreaking difficult. They are not perfect, but they block most attempts. DeepSeek, by contrast, has been found to have significantly weaker filters. In testing by Enkrypt AI, DeepSeek failed to block any of the harmful prompts it was given – a 100% jailbreak success rate that no major Western AI model matched.

In practice, this means DeepSeek can be used more easily to generate things like malware, ransomware instructions, and other malicious content. For everyday users, this matters less. But for government networks, corporate IT systems, and anyone working in cybersecurity, it means DeepSeek is not just a privacy risk; it is a potential attack vector. A tool that can be manipulated into writing malicious code has no place on a professional or government network. This is one of the main reasons agencies and governments have moved to block it entirely, rather than simply advising caution.

Taken together, these risks put DeepSeek in a different category from US-based AI tools. The concerns around ChatGPT are real, but they operate within a legal framework where you have rights and companies have obligations. With DeepSeek, that framework pretty much disappears once your data crosses into China.

How to protect your privacy on DeepSeek: 6 essential steps

If you’ve decided to use DeepSeek despite the risks, these six steps will reduce, but not eliminate, your exposure. Be clear-eyed about the limits: no privacy tool can fully protect you from a platform that is legally required to hand your data to a foreign government. But these steps make a meaningful difference:

1. Don't sign up with your Google or Apple account

When you sign up to an app using your Google or Apple account, you give that app access to data from those accounts, and you give Google or Apple data about your use of the new app. Both outcomes are bad when the app in question is DeepSeek. Create a separate account using an email address and password instead. Better still, use a dedicated email alias that is not connected to your real identity (more on that below).

2. Use a private email and phone number to sign up

Your real email address and phone number are among the most valuable pieces of information you can protect. Once DeepSeek has them, they are stored on Chinese servers. Using a secondary email alias and a virtual phone number – neither of which connects to your real identity – means that even if DeepSeek’s data is breached or accessed by third parties, your real contact details are not exposed. Tools like MySudo let you create separate digital identities (Sudos) with their own phone numbers and email addresses specifically for situations like this. Signing up to DeepSeek with a Sudo rather than your real details adds a strong layer of separation between you and the platform.

3. Never type personal or sensitive information

Everything you type into DeepSeek is stored in China and used to train the model. This is not a setting you can turn off; it is how the platform works. Treat every prompt as a message that could be read by the Chinese government, because legally it could be. Never enter your real name, address, financial details, health information, passwords, or anything personally identifying. If you need to analyse a document, anonymise it first: replace real names with placeholders, remove identifying numbers, and strip out anything specific to a person or organisation before you paste it in.
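The anonymisation step above can even be scripted. Here is a minimal, illustrative sketch of a pre-paste scrubber: it swaps a list of known names and a few common identifier patterns for placeholders before any text goes to a third-party AI service. The patterns are deliberately simple and not exhaustive – always review the output manually:

```python
import re

# Illustrative pre-paste scrubber: replace obvious identifiers with
# placeholders before sending text to any third-party AI service.
# These patterns are a starting point, not a complete redaction tool.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                     # email addresses
    (re.compile(r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"), "[CARD]"),  # 16-digit card numbers
    (re.compile(r"\+?\d[\d\s().-]{7,}\d\b"), "[PHONE]"),                     # phone-like digit runs
]

def anonymise(text, known_names=()):
    """Replace known names and common identifier patterns with placeholders."""
    for name in known_names:
        text = text.replace(name, "[NAME]")
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

doc = "Contact Jane Doe at jane.doe@example.com or +1 555 123 4567."
print(anonymise(doc, known_names=["Jane Doe"]))
# Contact [NAME] at [EMAIL] or [PHONE].
```

The same approach extends to account numbers, addresses, or client identifiers: add a pattern, or add the literal strings to `known_names`, before pasting anything into a chat window.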

4. Use a VPN to hide your location

Every time you connect to DeepSeek, it records your IP address, which reveals roughly where you are. A VPN (virtual private network) masks your real IP address by routing your connection through a server in another location. This makes it much harder to tie your DeepSeek activity to your physical location. It does not protect the content of your prompts (what you type still goes to China) but it removes one of the key data points DeepSeek collects. Use a VPN with an independently audited no-logs policy for the best protection.

5. Access the DeepSeek model through a US-based provider instead

This is probably the single most effective technical step you can take. DeepSeek has released its model as open source, which means US-based companies can run the same model on their own servers in the United States, without your data ever going to China. Platforms like Perplexity and Amazon Web Services offer access to the DeepSeek model hosted on US infrastructure. Developers can access it through GitHub. When you use DeepSeek this way, you get the same AI capability but your data stays under US jurisdiction and US privacy laws, rather than being sent to China.
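Because the open-source model is typically exposed behind an OpenAI-compatible chat-completions API when hosted by US providers, switching is mostly a matter of pointing your client at a different base URL. The sketch below only constructs the request rather than sending it; the base URL, model name, and API key are placeholders, since the exact values vary by provider:

```python
import json

# Sketch of calling a US-hosted deployment of the open-source DeepSeek
# model via an OpenAI-compatible chat-completions API. The base URL,
# model name, and API key are placeholders - substitute the values
# documented by whichever US-based provider you choose.
BASE_URL = "https://api.example-us-provider.com/v1"  # placeholder
MODEL = "deepseek-r1"                                # provider-specific name
API_KEY = "YOUR_API_KEY"                             # placeholder

def build_chat_request(prompt):
    """Assemble the endpoint, headers, and JSON body for one chat turn."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

request = build_chat_request("Explain how TLS certificates work.")
print(request["url"])  # https://api.example-us-provider.com/v1/chat/completions
# Send with any HTTP client; the prompt then stays with the US provider
# rather than travelling to DeepSeek's own servers in China.
```

The privacy gain comes entirely from where the request terminates: the same prompt, sent to a US-hosted copy of the model, stays under US jurisdiction.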

6. Avoid it entirely if you work in a sensitive role

If you work for a government agency, a defence contractor, a healthcare organisation, a law firm, or any business handling regulated or confidential data, the advice is straightforward: do not use DeepSeek. This is not overcaution; it is the position taken by the US Congress, multiple state governments, and many countries that have banned the app. The risk of sensitive professional information ending up on Chinese government-accessible servers is not a risk worth taking for any convenience the app might offer.

Safer alternatives to DeepSeek

If you want a capable AI assistant without the DeepSeek privacy risks, there are some better options depending on how much privacy matters to you:

  • ChatGPT (OpenAI): US-based, subject to US law, and with meaningful privacy controls. Its privacy concerns are real (see our full ChatGPT privacy guide) but they operate within a legal framework where you have rights. Turn off model training in Settings > Data controls and use Temporary Chat for sensitive topics.
  • Claude (Anthropic): Another US-based option with generally strong privacy defaults. On paid plans, your conversations are not used for training by default. Widely considered one of the more privacy-friendly mainstream AI chatbots.
  • Google Gemini: US-based and subject to Google’s privacy framework. Worth reading the privacy settings carefully, but broadly in a different risk category from DeepSeek.
  • DuckDuckGo AI Chat: A privacy-focused option that lets you use AI models, including some open-source ones, without DuckDuckGo storing your conversations. Good for users who want AI assistance with minimal data retention.
  • DeepSeek via US providers (Perplexity, AWS, GitHub): As mentioned above, accessing the DeepSeek model through a US-hosted provider gives you the same capability without your data going to China. This is the best option if you specifically want to use DeepSeek’s model.
  • Local AI models: Running an AI model directly on your own device, using tools like LM Studio with open-source models, means your data never leaves your device at all. This is the most private option available, though it requires more technical setup and the models are generally less capable.

Whichever AI tool you use, the broader privacy picture matters too. Tools like MySudo protect you upstream by replacing your real contact details – phone number, email, payment information – with compartmentalised digital identities, so less of your real personal data enters any AI platform in the first place. MySudo Reclaim complements this by helping you find and remove personal data that is already out there in databases.

Should you use DeepSeek?

This is the question that cuts through everything else: Is it worth it?

The honest answer depends on who you are and what you’re using it for.

When DeepSeek might be acceptable

If you are using DeepSeek for completely non-sensitive, non-identifying tasks, like asking it to explain a concept, generate a fictional story, or help you with a generic coding problem that contains no proprietary information, and you access it through a US-based provider rather than DeepSeek’s own app, the risk is lower. It is still not zero, but for low-stakes creative or educational use, some people will make the judgement that the trade-off is acceptable.

When to absolutely avoid DeepSeek

You should not use DeepSeek’s own app or website if you work in government, defence, healthcare, law, or finance. You should not use it for anything involving client data, proprietary business information, personal health details, legal matters, or financial information. You should not use it if you are in a country or profession where data sovereignty laws require your information to stay within certain jurisdictions. And you should definitely not use it on a device that also handles sensitive professional or personal information, given the security vulnerabilities in the app itself.

The bigger picture

DeepSeek is not the last Chinese AI app that will compete for your attention. The pattern is likely to repeat: a capable, free tool appears, it becomes popular quickly, and the privacy implications are only understood afterwards. The lesson from DeepSeek is not just about this one app. It is about developing the habit of asking, before you use any AI tool: where does my data go, who can access it, and do I have any legal recourse if something goes wrong?

With DeepSeek specifically, the answers to those questions are China, the Chinese government, and no. That is a combination that no privacy setting or VPN can fully neutralise. The most effective protection is a simple one: for anything that matters to you, use a different tool.

Final thoughts on DeepSeek privacy

DeepSeek is a genuinely impressive AI tool. The technology is real, the capability is real, and the disruption it caused to the US tech industry is real. But so are the privacy and security risks – and they are more serious than what you face with any mainstream US-based AI chatbot.

The core issue is not just that DeepSeek collects your data – every AI tool does that. The issue is that DeepSeek’s data lives in China, where you have no legal rights, no independent court to appeal to, and no way to stop the Chinese government from accessing it if they choose to. On top of that, its security practices have been found to be dangerously weak, and hidden code has already been caught sending data to a Chinese state entity without users’ knowledge.

If you want to use DeepSeek, use it through a US-based provider, sign up with a private email and virtual phone number, never share anything personally identifying, and use a VPN. These steps reduce your risk, but they do not eliminate it.

If you want to stay safer, use a US-based AI tool with strong privacy controls, protect your personal details upstream with tools that limit what any AI platform can learn about you, and stay informed. The AI privacy landscape is moving fast. The people who navigate it best will be the ones who ask the right questions before they hand their data over, not after.

Explore the MySudo suite of privacy tools as a defense against AI privacy risks