Thoughts from the International Conference on Large Scale AI Risk in Leuven
At the end of May 2025, I (Manon Kempermann) attended the International Conference on Large Scale AI Risk, organised by the Institute of Philosophy at KU Leuven in Belgium. The three days were packed with talks presenting recent work, paired with keynotes from leading figures in the field such as Yoshua Bengio and Iason Gabriel.
While the conference was hosted by a philosophy institute and most presenters had some background in philosophy, I particularly enjoyed the broad interdisciplinarity of the attendees, which led to many interesting discussions after talks and during coffee breaks. It reminded me how important it is, especially when it comes to safer AI, that we communicate across disciplines and take on many different perspectives. In very technical arguments about AI safety, I often find that the societal and human aspects quickly go missing, giving a misleading picture of safety. At the same time, philosophical solutions that completely ignore technical feasibility are similarly unhelpful from a practical point of view.
Iason Gabriel from Google DeepMind in particular seemed to have a very integrated perspective when he shared some of his recent work in his keynote. Especially his work on “Characterizing AI Agents for Alignment and Governance” I found inspiring for how it consistently connects the technical, policy, and social sides of AI. Not only did I get a lot of value out of these diverse talks and the input from philosophical and social science perspectives on AI, but I also made many new connections and had the chance to discuss my ideas with others in the field.