Lawmakers Introduce No Adversarial AI Act to Block AI from Adversary Nations
On June 25, 2025, a bipartisan group of lawmakers introduced the No Adversarial AI Act, which aims to bar U.S. federal agencies from using AI systems developed in "adversarial" countries—namely China, Russia, Iran, and North Korea.
What the Bill Would Do
- Create a public list of banned AI models from specified adversarial countries, maintained by the Federal Acquisition Security Council.
- Prohibit executive agencies from acquiring or using those AI systems—unless a waiver is granted by Congress or the Office of Management and Budget (OMB) for purposes like research or counterterrorism.
- Require updates to the banned list at least every 180 days to capture new threats.
- Allow delisting of a system if it’s proven not to be controlled by a foreign adversary.
Why It Matters
- National security: Lawmakers argue that AI from adversarial regimes—such as DeepSeek, which they have linked to Chinese intelligence—could put U.S. systems and data at risk.
- Tech “Cold War”: Experts warn this is part of a broader competition, viewing AI built by democracies as safer and more aligned with U.S. values.
- Strengthening U.S. supply chains: Senators and tech leaders advocate for continued export controls, especially around advanced chips, to slow adversary advances.
News Gist
The No Adversarial AI Act marks a strong, bipartisan step to keep AI developed by authoritarian regimes out of U.S. government systems.
With built-in exceptions for key research and oversight processes, it aims to strike a balance: securing government technology without stifling innovation.