'I think we really, really, need to start putting some laws in place that demand more transparency from AI companies and give governments the power to demand certain types of information and oversight. Much more than this will ultimately be needed, but if we don’t address this information asymmetry, I have very little hope that governments will be able to assess the situation well enough to govern AI effectively. I think this kind of transparency should probably be the main focus of any frontier AI regulation implemented today (and less trying to pin down specific safety obligations for developers).'
How do you think we achieve this? A frustrating dynamic here is that you don't get transparency for free; you need incentives that make these labs buy into transparency requirements, even if those requirements become law.
Also curious what kind of information you think transparency should get us.
You might also enjoy this piece: https://writing.antonleicht.me/p/a-moving-target