Our initial projects focus on developing legislation related to artificial intelligence. The speed of AI development and diffusion makes adapting our legislative frameworks particularly challenging. Because of these challenges, some AI-related legislative proposals fail to incorporate a sufficiently wide range of views. The Law Reform Institute believes that, by having all relevant stakeholders engage with each other directly in the context of developing specific text for a bill, we can greatly improve the odds of identifying promising legislative fixes for real problems in a timely manner. We are also identifying possible areas for future work on non-AI topics.
As frontier models come closer to enabling real-world harms, such as cyberattacks and biological weapons proliferation, many labs will want to collaborate more on research that helps mitigate those risks. Yet fears of antitrust implications sometimes chill collaboration that would be in the public interest. Startups and smaller labs that lack dedicated antitrust counsel may be particularly affected. With the assistance of Dan Crane (University of Michigan Law School), we are exploring whether a narrowly targeted antitrust exemption could be useful. An in-progress draft is available here. We welcome feedback on the draft.
We are also exploring several other AI-related topics where our working methods might be useful, such as the following:
• Preserving chain-of-thought monitorability
• Frontier AI transparency requirements and First Amendment concerns
• Attribution of actions by AI agents