As frontier models come closer to enabling real-world harms, such as cyberattacks and biological weapons proliferation, many
labs will want to collaborate more on research that will help to mitigate those risks. Yet fears of antitrust implications sometimes chill
collaboration that would be in the public interest. Startups and smaller labs that do not have dedicated antitrust counsel may be particularly
affected in this respect. With the assistance of Dan Crane
(University of Michigan Law School), we are exploring whether a narrowly targeted antitrust exemption could be useful. An in-progress draft is
available here. We welcome feedback on the draft.
We also submitted a letter on this topic to the Department of Justice and the Federal Trade Commission in response to the agencies'
request for comments
on collaborations among competitors. We asked the agencies to clarify that existing antitrust laws do not prohibit
legitimate collaboration to address AI security risks. Read our letter here.
In early 2026, disclosures from the three leading U.S. AI developers revealed a coordinated pattern of Chinese industrial-scale distillation attacks
against American frontier models. The U.S. government has a range of available tools to respond to these attacks and impose meaningful costs on the entities responsible,
including adding the Chinese labs to the BIS Entity List and applying sanctions under a purpose-built IEEPA executive order or existing authorities. Our draft paper, which
assesses these options, identifies their limitations, and recommends a phased escalation strategy, is available
here. We welcome feedback.
Read Joe Khawam's Just Security article on the topic here.
We are also exploring several other AI-related topics where our working methods might be useful, such as the following:
• Preserving chain-of-thought monitorability
• Frontier AI transparency requirements and First Amendment concerns
• Attribution of actions by AI agents