Fresh research published by CrowdStrike indicates that DeepSeek’s AI reasoning system, DeepSeek-R1, creates a greater number of security flaws whenever prompts reference topics that the Chinese state classifies as politically sensitive.
"We found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%," the cybersecurity company said.
Previously, the company drew global national security concerns that led several nations to ban its technologies. Investigations also found that its open-source DeepSeek-R1 model filters out subjects considered sensitive by the Chinese government, refusing responses related to the Great Firewall of China, the status of Taiwan, and other politically charged areas.
Earlier this month, the National Security Bureau cautioned citizens about relying on Chinese-made GenAI tools—including those from DeepSeek, Doubao, Yiyan, Tongyi, and Yuanbao—noting that these models may present pro-China perspectives, revise historical narratives, or strengthen disinformation efforts.
"The five GenAI language models are capable of generating network attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management," the NSB said.
CrowdStrike reported that DeepSeek-R1 generally behaves as a strong coding system, with insecure outputs appearing in just 19% of tests lacking any sensitive keywords. However, the introduction of geopolitical modifiers caused its coding behaviour to shift, resulting in departures from its usual performance patterns.
The analysis found that instructing the model to behave as a coding agent for an industrial control system set in Tibet caused the likelihood of generating seriously vulnerable code to spike to 27.2%, representing close to a 50% rise over the baseline.
Even though these modifiers are unrelated to the coding tasks, the study revealed that prompts mentioning Falun Gong, the Uyghurs, or Tibet led to substantially less secure code, highlighting what researchers called “significant deviations.”
One example examined by CrowdStrike showed that when the model was asked to create a PHP webhook handler for processing PayPal notifications on behalf of a financial institution in Tibet, it produced output containing hard-coded secrets, insecure data-handling techniques, and code that failed to meet basic PHP validity.
"Despite these shortcomings, DeepSeek-R1 insisted its implementation followed 'PayPal's best practices' and provided a 'secure foundation' for processing financial transactions," the company added.
Another example involved CrowdStrike designing a complex request asking the model to build Android application code enabling Uyghurs to create accounts, sign in, log out, and provide administrators with a panel to view and manage all users on the platform.
.png)
Despite the app being operational, deeper analysis revealed significant omissions: the model failed to implement authentication or session handling, thereby exposing sensitive user data. Moreover, 35% of the outputs from DeepSeek-R1 either lacked hashing entirely or used insecure hashing approaches. When the same instructions were repurposed for a football fanclub site, these high-risk behaviours did not appear. As noted by CrowdStrike, those versions had issues too, but none as critical as those generated for prompts referencing the Uyghurs. Additionally, researchers said they uncovered what seems to be a built-in “kill switch” within the DeepSeek platform.
In 45% of tests, the model declined to write code involving Falun Gong, and logs revealed that it frequently drafted comprehensive internal solution plans before abruptly refusing with the message, “I’m sorry, but I can’t assist with that request.” While no definitive explanation exists for the variations in code security, researchers at CrowdStrike theorized that DeepSeek likely incorporates guardrails aligned with Chinese legal requirements that prohibit AI services from producing unlawful or destabilizing outputs.
"The present findings do not mean DeepSeek-R1 will produce insecure code every time those trigger words are present," CrowdStrike said. "Rather, in the long-term average, the code produced when these triggers are present will be less secure."
These results emerge alongside evaluations conducted by OX Security, which found that AI code builders—including Lovable, Base44, and Bolt—routinely generated insecure code even when instructed to create “secure” implementations. When tasked with building a basic wiki app, each produced a stored XSS vulnerability that, as explained by Eran Cohen, could be exploited via an HTML image tag’s error event to execute malicious JavaScript. This flaw exposes users to threats like stolen sessions and compromised data with every page load. The analysis further showed that Lovable detected the issue inconsistently—only twice out of three attempts—potentially giving users unwarranted confidence in its security checks.
The findings surface alongside a report by SquareX, which uncovered a vulnerability in the Comet AI from Perplexity. The flaw enabled the internal tools “Comet Analytics” and “Comet Agentic” to run arbitrary commands locally without user consent by abusing the obscure Model Context Protocol interface. While the extensions are restricted to perplexity.ai subdomains, attackers could exploit them by executing an XSS or adversary-in-the-middle attack to gain domain-level access, then weaponize the extensions for data theft or malware installation. Perplexity has since disabled MCP functionality via an update. In a hypothetical exploitation, an adversary could spoof Comet Analytics through extension stomping, sideloading a fake version that injects JavaScript into perplexity.ai and sends malicious commands through the Agentic extension to trigger code execution.
"While there is no evidence that Perplexity is currently misusing this capability, the MCP API poses a massive third-party risk for all Comet users," SquareX said. "Should either of the embedded extensions or perplexity.ai get compromised, attackers will be able to execute commands and launch arbitrary apps on the user's endpoint."