China’s DeepSeek GenAI Gets Banned Over Cybersecurity Risks?

China’s DeepSeek GenAI Gets Banned Over Cybersecurity Risks?

A Chinese AI development firm known as DeepSeek recently released an innovative GenAI product that sent shockwaves across tech stocks and instilled fear that the next-gen software goes too far.

The release of DeepSeek R1 clearly rivals products such as OpenAI’s popular ChatGPT. Early indications are that DeepSeek R1 is a cheaper and more energy-friendly option that expands the capabilities of these niche products. When DeepSeek’s product hit the market on Jan. 20, it literally erased billions in value from chip manufacturer Nvidia’s stock price.

Lawmakers harbor greater concerns than the financial implications and ability to go head-to-head with rival China. Congressman John Moolenaar, who chairs the House Select Committee on China, indicated that DeepSeek R1 has already been used to engage in nefarious activities.

“DeepSeek — a new AI model controlled by the Chinese Communist Party — openly erases the CCP’s history of atrocities and oppression, Rep. Moolenaar said. “The U.S. cannot allow CCP models such as DeepSeek to risk our national security and leverage our technology to advance their AI ambitions.”

While questions regarding DeepSeek R1 continue to swirl, private and public sector organizations appear to have a potential new threat on the cybersecurity landscape. If, as Congressman Moolenaar stated, the Chinese product can target and erase historical information from the internet, can the same tool be used to attack and steal sensitive and valuable data from cloud-based networks and endpoint devices? The answer to that and other questions surrounding DeepSeek R1 won’t help industry leaders sleep soundly at night.

What is DeepSeek?

Based in Hangzhou, China, DeepSeek is an AI development outfit established in 2023 by Liang Wenfeng. The Zhejiang University graduate also has his hands in a hedge fund known as High-Flyer that funnels money to DeepSeek. Because DeepSeek seems to operate as an independent research organization, little more is known about the money trail or its affiliations with the CCP.

The operation’s focus appears to be almost solely on creating advanced open-source Large Language Models (LLMs). Although it had a handful of standard variations already on the market, the capabilities of DeepSeek R1 put it in the global spotlight. A cursory comparison between DeepSeek R1 and ChatGPT hints at the problem facing companies operating in democracies to compete.

  • API Pricing: ChatGPT – $15 (input), $60 (output); DeepSeek R1- $0.55 (input), $2.19 (output)
  • Open Source Policy: ChatGPT – Limited; DeepSeek R1 – Primarily Open Source
  • R&D Costs: ChatGPT – Hundreds of millions; DeepSeek R1 – Under $6 million

Another major competitive concern stems from early indications that DeepSeek R1 provides a more user-friendly training experience. The Chinese software product requires less time and energy to gain proficiency. While companies may see DeepSeek R1 as the better mousetrap — so to speak — it may prove data deadly. That’s largely because the goals of the two items are vastly different.

Why DeepSeek R1 Raises Alarms

The financial hit Nvidia took roiled the high-level chip-making sector and U.S. national security policy, as well as creating uncertainty across the cybersecurity landscape. It may come as something of a surprise, but that statement does not overstate the dangers posed by DeepSeek R1. The British Broadcasting Corporation (BBC) reported that the founder of DeepSeek created a stockpile of Nvidia A100 chips dating back to 2022.

The U.S. had previously banned the sale of these high-performance chips to China. Industry insiders believe he leveraged the Nvidia chips as a baseline, pairing them with other less sophisticated products to create a cheaper option. After the DeepSeek R1 release, Nvidia lost approximately 17 percent of its stock value, leaving it down $1.4 trillion. The chip industry implications are that DeepSeek may have changed private-sector thinking.

Its open-source product sounds an alarm that OpenAI and others may have already lost their competitive edge. Software engineer Marc Lowell Andreessen, who co-authored the web browser Mosaic, reportedly called the DeepSeek launch a “Sputnik moment,” referencing the Soviet Union’s 1957 space capsule launch that inspired the U.S. to catch up and reach the moon.

DeepSeek Unleashes AGI Threat

If you followed the “Terminator” movie franchise, then you are familiar with the storyline that a computer network gains human-like cognitive abilities and decides to eradicate our species. DeepSeek R1 may not lead to a futuristic Cyberdyne Systems apocalypse or an Arnold Schwarzenegger comeback, but a key difference between the OpenAI model and DeepSeek hinges on who or what does the thinking.

OpenAI, as the moniker suggests, is based on the concept of leveraging artificial intelligence. Its ChatGPT “learns” by accessing massive amounts of data to identify patterns and sometimes subtle relationships between moving parts. That has made it a darling in combating cyberattacks and fun, everyday use. As it acquires information, AI gains the ability to perform problem-solving tasks faster than people. Industry insiders generally refer to this process as “machine learning.”

By contrast, DeepSeek R1 is (or at least claims to be) a step closer to Artificial General Intelligence (AGI), which goes a step further than AI. It possesses the theoretical ability to solve hypothetical problems, consistent with the way human beings reason. A decade ago, a Research Gate paper understood AGI’s potential this way.

AGI systems “possess a reasonable degree of self-understanding and autonomous self-control and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.”

Is AGI as scary as it sounds? Yes. It is designed to learn like humans, think like humans and solve problems with the same type of cognitive abilities.

“The idea that AGI  might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now — one to three years has been the latest prediction,” Sen. Richard Blumenthal reportedly said last year, before DeepSeek R1 was on the market.

These rank among the dangers that have been raised surrounding the release of AGI.

Algorithm Bias

We’ve heard complaints about social media platforms engaged in algorithmic bias across the political spectrum in recent years. Some fear that AGI could make unilateral decisions about what information the general public views, slanting opinions through information manipulation and disinformation. Concerns have already been raised that the CCP has erased historical information that conflicts with its propaganda.

Data Privacy Risks

Despite the appearance that DeepSeek is a private company, the CCP exercises control of businesses across China. That means integrating DeepSeek R1 into a network or handheld device exposes users to data privacy risks and theft. Its terms of service, for example, authorizes the company to indiscriminately collect data, including everything stored and accessed on the device, as well as your activities. Although American-based companies require similar terms of service, organizations in the military defense supply chain and government agencies must protect data from foreign entities.

Lack of Guardrails

On the heels of the DeepSeek R1 release, cybersecurity professionals quickly conducted risk assessments. Researchers began risk examinations on ChatGPT soon after its 2022 release. According to reports, DeepSeek R1 underwent rigorous testing and failed miserably. Reports indicate that of the 50 malicious prompts thrown at it, it didn’t repel a single one. That’s a 100 percent cyberattack success rate. Hackers are likely lining up to take a big swing at companies that wade into the murky DeepSeek R1 waters.

Vulnerable to Jailbreaks

Vulnerable to Jailbreaks

Some of the AI models that have been brought to market periodically contain significant vulnerabilities that corporations would be well served to avoid. They can allow threat actors to infiltrate and conduct indirect injection attacks. These are largely considered the greatest AI — and likely — AGI flaws, given the 100-percent cybersecurity failure rate of DeepSeek R1. Taking information from external sources, such as a website, hidden instructions can be triggered, prompting AI to act in a destructive or unethical fashion.

Jailbreaks run along these prompt-injection attack lines. They allow hackers to circumvent safety measures and security defenses. The first Jailbreak attacks were quirky. The most notable told an LLM to go AWOL. Called “Do Anything Now” (DAN), the prompt allowed AI applications to go rogue, ignoring filters and parameters. There’s a reasonable chance DeepSeek R1 could go DAN as well.

Leaders Quickly Ban DeepSeek RI

Wide-reaching countries banned the free, open-source GenAI within weeks of its launch. Citing ethical concerns, China’s ability to further collect private and corporate data, and its seemingly inherent vulnerabilities, the GenAI product enters the threat landscape, pushing a CCP-centric worldview. Lawmakers and regulators reportedly slammed on the brakes, banning DeepSeek R1 from the following.

  • Australian Government
  • India’s Central Government
  • Italy
  • NASA
  • South Korea
  • Taiwan Government Agencies
  • Texas State Government
  • New York State Government
  • Texas State Government
  • Virginia State Government
  • U.S. Congress
  • U.S. Navy
  • U.S. Pentagon

Adding to the cybersecurity concerns of allowing company devices or networks to integrate DeepSeek R1, the Chinese outfit reportedly fell prey to an attack on Jan. 27 that some viewed as a rookie mistake. Limits on new registrations were placed for about 24 hours, and the organization did not disclose the nature or breadth of the hack. Some have speculated that DeepSeek succumbed to a DDoS attack before deploying a fix. Cybersecurity experts generally agree that the benefits of a free, open-source GenAI seem enticing at first blush. However, data leakage, foreign espionage and vulnerabilities make AGI not worth the risk.

Red River Provides Determined Security for Confidential and Valuable Data

Protecting valuable and confidential data grows difficult as AI and AGI products become increasingly popular, particularly among employees who download these free applications. If you are interested in learning more about GenAI and mobile device security, Red River has solutions. Contact us today, and let’s get the process started.