I’ve often heard people say we can’t regulate AI yet because we don’t understand its risks well enough.
I understand the sentiment here, but I disagree. I think it’s precisely because we don’t understand AI risks well enough that we need to regulate AI.
I certainly agree that policymakers need to understand AI better in order to prepare and protect society in a time of huge technological change. There are lots of things that prevent us from understanding AI and its risks well, chief among them the fact that anticipating future technological developments and their impact on society is just fundamentally really, really hard. But another major factor is the large information asymmetry between those inside AI companies and those outside. The people best placed to understand and anticipate AI’s risks are those who have access to frontier models in development, with the ability to test and tinker with models before the rest of the world sees them.
As long as there's such an enormous information asymmetry between those inside and outside the AI industry, it's very hard for anyone in the latter group to make good decisions about AI risks and governance.
This information asymmetry isn’t going to rectify itself, in part because information asymmetries also create power asymmetries. Right now, for example, the UK AI Security Institute (AISI) is doing some great work evaluating and testing AI models for risks, but depends on the goodwill of AI companies to get the access they need to do this work. This both leaves crucial work to understand AI in a vulnerable position and creates worrying incentives: AISI, and other parts of government, are under some pressure to keep the AI industry “on side”.
I believe the only way to really fix this - to ensure policymakers have access to the information they need to understand AI, and therefore govern it effectively - is via regulation. Mandating certain levels and types of transparency from AI companies in the public interest is, I think, the only way to ensure that policymakers have the information they need without it always being on companies’ terms. We need this kind of regulation precisely because we don't understand the risks well enough yet.
To be clear, there are a few different types of regulation we are likely to need for AI, some of which do need a better understanding of risks. If we want to put specific requirements in place for how companies should mitigate specific categories of risk, then yes, we may well need a better understanding of those risks than we have right now. My claim is that we need some type of AI regulation - particularly regulation that ensures better information exchange and transparency between AI companies and government - now, and that this will be crucial to getting to a place where other, more risk-specific, regulation is possible.
I’m aware this doesn’t say much yet about what exactly this kind of transparency should look like in practice or how to make it happen. I'll hopefully have more to say about this soon, but I'm envisioning something like legislative requirements for companies to notify a government body when they are training particularly powerful systems, and at the very least to provide information about the safety tests and evaluations conducted prior to release (ideally these tests would be carried out by third parties, and I hope we'll get to a place where it's possible to mandate that before too long as well).
There's a lot we don't know about frontier AI systems and their risks. This lack of understanding does put some limits on our ability to govern advanced AI, but it doesn't make regulation impossible. Far from it: a high priority for AI regulation today should be ensuring that a wider range of actors, especially governments, have access to the information they need to make good decisions about AI governance in the coming years.