How AI policy has changed in a year
And some personal reflections on what this means for priorities
It feels to me that the state of AI policy has changed a lot in the last year. By “the state of AI policy” I mean something like: the kinds of ideas that seem promising for governing AI well; what seems politically tractable; what kinds of questions seem important to answer, etc.
Perhaps this past year isn’t special: AI policy is always moving fast. Perhaps the changes of the last year feel particularly acute for me, because I took a bunch of time off to have a baby. (Maybe this makes me biased, or maybe it means that I’m particularly able to see what’s changed clearly). Either way, it feels useful to me to reflect a bit on what’s changed, and where that leaves us now.
Before I went on maternity leave in May 2024, it felt like we were seeing some good progress in AI safety policy. The UK government, in particular, had been starting to take AI safety really seriously, establishing the UK AI Safety Institute and seriously considering an AI bill to regulate the most powerful AI companies. International consensus was also building, slowly but surely, on the need to take risks from AI seriously. Governments across the world, including both the US and China, signed a declaration at the Bletchley AI Safety Summit acknowledging the “potential for serious, even catastrophic, harm” from advanced AI models and the need to work internationally to address these risks.
I’ve been easing back into work since the start of this year, and things feel pretty different. Both the UK and US had a change of government while I was off, and in particular the new US administration seems to have a very different position on AI regulation. I’m no expert in US policy or politics, but you only need to look at JD Vance’s remarks from the Paris AI “Action” Summit to see that the mood on AI risks and safety has shifted: “I’m not here to talk about AI safety... we believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off”. (I’m not sure I disagree with that last remark, exactly, so much as I likely disagree with what counts as “excessive” regulation). This deregulatory stance from the US, among other things, has understandably led the UK to rethink its own approach to regulation, bringing a lot of last year’s progress on AI safety policy to a standstill.
AI progress also continues to leap forward. We’ve seen huge improvements in capabilities across the board, and shifts in the paradigm of frontier AI development - rather than relying solely on massive amounts of pre-training compute, many ‘reasoning’ models have benefitted hugely from more compute and time at the inference stage. Regardless of whether you think we’ll have ‘AGI’ in the next 2-3 years, it’s clear that we’re likely to see very capable systems and the potential for much more automation, and that the technical paradigm for ‘frontier AI’ may continue to change.
This all raises a lot of big questions for AI safety policy. How do we stop “safety” being seen as a negative word in certain policy circles, and ensure risks from rapid AI development continue to be taken seriously? What kinds of policies do we need if highly capable - and possibly dangerous - systems might be released in the next couple of years, and is regulation really a viable path in that world? Conversely, how do we make sure we’re not too short-sighted in all of this, and don’t rush through poorly thought-through policy because the pace of AI progress looks scary?
On a personal front, this has all felt pretty disorientating: I find myself navigating all the normal personal challenges of balancing new parenting and work, while also feeling like a lot of the assumptions underlying my contribution to AI policy have shifted. There’s never a good time to go on maternity leave in this space - but the past year has felt like as challenging a time as any.
Where does all of this leave us? A few reflections:
It seems like we’re in greater need of excellent “comms” work on AI risks than we were a year ago: more people in positions of power seem to be sceptical that advances in AI pose serious risks, and this makes it much harder to pass effective policy. Among other things, I think we need more detailed stories and discussions of ways things could go wrong that people can really get their heads around.
We need more policy proposals for mitigating AI risks that appeal to the politics of those in power today. No matter how convinced governments are that AI safety should be a policy priority, they will always have to balance this with other priorities - particularly innovation and boosting national economies. While I think many AI safety policy proposals won’t hinder innovation anywhere near as much as people fear, we may have to do even better than this. Are there better ways to enhance AI safety and boost innovation at the same time? One area that seems promising for this is work on “defensive acceleration” i.e. investing in technologies which could help reduce specific risks from AI, such as vaccine development tech to reduce risks from AI-developed pathogens.
Strategies for mitigating AI risks that rely mostly on “prevention” - trying to ensure that companies don’t develop or release dangerous AI capabilities in the first place - seem increasingly insufficient. I still think this work is important, but we also need to think much more about what we do if potentially dangerous capabilities are released in the next couple of years: how can we notice early signs of harm (e.g. through better incident detection and reporting), and how can we prepare governments and other parts of society to respond and prevent harms from compounding to extreme levels?
I think we really, really need to start putting some laws in place that demand more transparency from AI companies and give governments the power to demand certain types of information and oversight. Much more than this will ultimately be needed, but if we don’t address this information asymmetry, I have very little hope that governments will be able to assess the situation well enough to govern AI effectively. I think this kind of transparency should probably be the main focus of any frontier AI regulation implemented today (rather than trying to pin down specific safety obligations for developers).
I’d be curious to hear what others working in the AI policy space think: how would you describe what’s changed in the last year, and what do you think is most needed today?
'I think we really, really need to start putting some laws in place that demand more transparency from AI companies and give governments the power to demand certain types of information and oversight. Much more than this will ultimately be needed, but if we don’t address this information asymmetry, I have very little hope that governments will be able to assess the situation well enough to govern AI effectively. I think this kind of transparency should probably be the main focus of any frontier AI regulation implemented today (rather than trying to pin down specific safety obligations for developers).'
How do you think we achieve this? A frustrating dynamic here is that you don't get transparency for free: you need incentives that get these labs to buy into transparency requirements, even if they become law.
Also curious: what kind of information do you think transparency should get us?