
U.S. AI Policy and the DeepSeek Problem
March 18, 2025 | Feature Article
On January 20, 2025, the Chinese AI chatbot DeepSeek launched on the Apple and Android app stores, offering capabilities rivaling those of U.S. AI giants at a fraction of the development cost. Within days, the S&P 500 slipped by over 1.2%, and Nvidia lost over $590 billion in market value. Within a week, downloads of DeepSeek surpassed those of ChatGPT. Despite DeepSeek’s open-source contributions to AI development, the app’s control by the People’s Republic of China and its lack of safeguards against malicious modification pose significant risks to consumers and national security. To address these concerns, U.S. AI policy must regulate the use of foreign AI products and promote domestic innovation.
On one hand, DeepSeek’s open-source nature promotes collaboration, reduces costs, accelerates innovation, and democratizes AI. Already, companies use DeepSeek’s open model to drastically cut costs with no loss in performance. These benefits come as no surprise: older open-source AI models delivered similar gains. For example, Meta’s Llama 2 provided a foundation for localized math education platforms, the Zoom AI Companion, and the provision of medical information in low-resource settings. As companies experiment with DeepSeek, similar, if not superior, results are likely.
On the other hand, concerns about exploitation, censorship, and data privacy underlie prohibitions on DeepSeek in several countries, and the Trump administration is reportedly considering a nationwide ban. The same open-source nature that invites innovation also allows for misuse. While the developers of comparable models, such as OpenAI’s ChatGPT, invest heavily in robust safeguards, DeepSeek appears relatively defenseless. A recent Cisco study employing algorithmic jailbreaking techniques found that, out of fifty attempts, DeepSeek failed to block a single harmful prompt, while ChatGPT blocked 86%. Open-source models like DeepSeek are also distributed under the MIT License, which shields their authors from liability for those same jailbreaking modifications and their exploitative results.
Restrictions on the MIT License could mitigate DeepSeek’s security flaws by exposing authors to liability and thereby incentivizing stronger protections in the source code. However, such restrictions would deter open-source sharing and face an uphill battle against First Amendment protection. More practically, Congress could require AI developers in the U.S. to comply with safety regulations promulgated by the U.S. AI Safety Institute.
Apart from security issues, DeepSeek is also subject to Chinese censorship. For instance, in 2023, the Cyberspace Administration of China released the Interim Measures for the Administration of Generative Artificial Intelligence Services, prohibiting “subversion of state power, . . . damaging the national image, and false and harmful information.” Consequently, when asked about the Tiananmen Square Massacre or Taiwanese independence, DeepSeek responds with, “Not sure how to approach this type of question yet.”
Additionally, DeepSeek’s privacy policy states that personal information is “stored in secure servers located in the People’s Republic of China.” Chinese control of U.S. data carries attendant risks of espionage, security breaches, and election interference. The National Security Commission on Artificial Intelligence warns that AI deepens the threat of cyber warfare and disinformation campaigns used to “infiltrate our society, steal our data, and interfere in our democracy.” Local deployment (running DeepSeek offline) reduces the risk of data leakage, but the ease of internet access and a lack of consumer education will likely keep local deployment to a minimum.
Unlike security flaws, censorship and data leakage cannot be fully addressed without an outright ban on DeepSeek, a short-term and reactive solution. A more effective and proactive approach is to promote superior U.S. AI products, bypassing the fear of foreign censorship and data collection by keeping AI consumption domestic. However, the Biden administration curbed AI development through several executive orders. In 2023, the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mandated internal oversight and regular testing of AI in federal agencies. Likewise, Biden’s Framework to Advance AI Governance and Risk Management in National Security requires designated department heads to identify and prohibit the use of “high-risk AI.” Limitations on the use of AI pose a unique risk of stifling innovation because AI models rely on a positive feedback loop of consumer interaction to “train.”
In response, Trump’s Executive Order on Initial Rescissions of Harmful Executive Orders and Actions repealed the Biden AI protections, and his Executive Order on Removing Barriers to American Leadership in Artificial Intelligence ordered the creation of an action plan to “sustain and enhance America’s global AI dominance.” Viewed as a whole, these actions show that the Trump administration is less keen on AI regulation and favors promoting innovation, especially considering Trump’s campaign promise to support a new AI infrastructure joint venture, Stargate.
Yet the breakneck deployment of AI has led organizations like the ACLU to sound the alarm over the lack of AI safeguards. For example, Trump’s Executive Order Reforming the Federal Hiring Process and Restoring Merit to Government Service directs federal agencies to “integrate modern technology” into hiring. The ACLU warns that, without the safeguards provided by the Biden administration, AI will lead to discriminatory harms.
A third approach to maintaining U.S. dominance in commercial AI involves trade restrictions that cut off foreign competitors’ access to integral technology. Currently, Nvidia’s superior parallel-processing capabilities grant it a virtual monopoly on GPUs for AI. In 2022, the U.S. banned the export of Nvidia’s A100 and H100 chips to China, but Nvidia circumvented these restrictions by creating lower-performance GPUs like the H800, which DeepSeek uses. Moving forward, the U.S. could broaden the scope of these restrictions to encompass weaker GPUs, accounting for more efficient AI models like DeepSeek. Unfortunately, this solution promotes protectionism at the cost of global AI growth.
Suggested Citation: Nicholas Bonk-Harrison, U.S. AI Policy and the DeepSeek Problem, Cornell J.L. & Pub. Pol’y, The Issue Spotter, (Mar. 18, 2025), https://jlpp.org/us-ai-policy-and-the-deepseek-problem.

Nicholas Bonk-Harrison is a 2026 J.D. candidate at Cornell Law School with interests in contract and tax law. He graduated from the University of South Florida in 2023 with degrees in history and economics.