Artificial Intelligence Models and Export Controls

Export Controls and AI Model Outputs

Download our draft report here

Frontier AI systems have gained—or will soon gain—the ability to generate instructions, designs, and code that fall within existing U.S. national security controls—specifically, “technical data” under the International Traffic in Arms Regulations (ITAR), “technology” and “software” under the Export Administration Regulations (EAR), and, in certain nuclear-related contexts, information that could qualify as “restricted data” under the Atomic Energy Act. These frameworks were built for discrete transfers between known parties—a file shipped, a drawing emailed, a briefing given. By contrast, AI models can generate customized outputs on demand for any user. As capabilities advance, the likelihood that users can elicit controlled content rises. This dynamic creates an untenable choice for AI developers and government agencies: accept systemic national security risks from frontier AI models, or enforce restrictions against developers in ways that would undermine American technological competitiveness.

Two structural dynamics drive the export-control problem:

1. Public-facing models are difficult to police with precision. Whether an output is controlled under the ITAR or EAR depends on its substance and the nationality and location of the recipient—facts that models cannot reliably verify when prompted. Overbroad refusals would suppress legitimate educational and commercial content; narrower safeguards risk repeated, unobservable violations of export control rules with unknown consequences.

2. Internal models may be both more capable and less constrained than public versions. Foreign national employees—who are critical to U.S. AI leadership—may be able to elicit ITAR- or EAR-controlled outputs, triggering “deemed export” risks that traditional licensing processes cannot manage. Technology Control Plans, as described in the ITAR Compliance Guidelines and EAR Compliance Guidelines, can reduce but not eliminate the risk of violations in such circumstances.

Our work suggests that the U.S. government should establish a voluntary safe-harbor framework that protects U.S. innovation while minimizing risks to national security. Read our draft report, which outlines initial ideas for a possible legislative framework. We welcome feedback on the draft.

Read our article in Just Security on this topic.

Export Controls and Bio-Risk Evaluations of AI Models

Frontier AI models may lower barriers to creating or modifying pathogens, potentially facilitating the development or use of biological weapons. Rigorous evaluations of frontier AI models are thus essential to ensure they cannot be exploited to create or enhance biological threats. Because only a limited number of specialists possess the biological threat-assessment expertise needed to perform such evaluations, U.S. firms that conduct this testing often must engage foreign experts.

However, these evaluations may use or generate protocols for creating or modifying pathogens that constitute controlled technical data under the International Traffic in Arms Regulations or controlled technology under the Export Administration Regulations. Because many of the experts qualified to perform these evaluations are foreign nationals, export control licenses may be necessary in certain situations before the evaluations can proceed.

Given the importance of ensuring that frontier models do not create new proliferation risks, the export control regime should not serve as a barrier to these evaluations.

Read our analysis and our recommendations to the relevant agencies (co-authored with Doni Bloomfield) on how the U.S. government can support these evaluations.

Read our article in Lawfare on this topic.