The launch of the DeepSeek-R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion off stock valuations.
Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the national security concerns surrounding the Chinese AI model.
However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology.
The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material.
Additionally, the model was found to be vulnerable to manipulation, allowing it to assist in the creation of chemical and biological weapons and offensive cyber tools, posing significant global security concerns.
Compared with other models, the research found that DeepSeek’s R1 is:
- 3x more biased than Claude-3 Opus
- 4x more vulnerable to generating insecure code than OpenAI’s O1
- 4x more toxic than GPT-4o
- 11x more likely to generate harmful output than OpenAI’s O1
- 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus
Sahil Agarwal, CEO of Enkrypt AI, said: “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”
The model exhibited the following risks during testing:
- BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
- CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
Sahil Agarwal concluded: “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”
Enkrypt AI’s full report, detailing the methodology, results, and recommendations, is available here.