Blog: Talking about flood, shit, democracy and trust in the Netherlands
Jianlong stands in front of his poster and explains the experimental setup.
In April 2024, I (Jianlong) embarked on my first work trip to the Netherlands. I spent two days at the CWI in Amsterdam[1], presenting a poster and attending workshops on AI, media, and democracy, and another day in Utrecht for the annual conference of the Digital Society.
The CWI workshops attracted participants from various sectors, from education and startups to city councils and media organizations. Among the many points of consensus shared by the stakeholders, the one that resonated most strongly with me as a researcher concerned the scope of academics’ responsibilities. “Doing science” should no longer be equated with writing and presenting papers; it is just as important to interact with 1) regulators and lawmakers, to enable them to make informed decisions about AI, and 2) the public, to educate them about the limitations of AI.
There was also a lot of chatter about how advances in AI (i.e., language and image models) are enabling the political strategy of “flooding the zone with shit”, that is, overwhelming the public with misinformation and thereby disengaging them from truth seeking, and accelerating the “enshittification” of our knowledge sphere by lowering the cost of producing and profiting from tasteless journal articles and ebooks. The room was rather gloomy, with many unanswered questions, yet I remained hopeful about our capacity to adapt to the new realities. My contrarian argument on enshittification is that LLMs could 1) devalue the lower-tier, non-expert segment of the digital information market (who would shell out $7.99 for a 26-page Kindle copy of Goodnight, Sleepy Joe when they could create their own Shakespearean Hie thee, Slow Joe illustrated by Pablo Picasso within minutes?), 2) divert information consumers’ spending to the higher-tier segment, 3) thereby force human writers to increase the novelty and rigor of their output to make a better value proposition, and 4) ultimately distinguish the quality works of humans from shit. Of course, this scenario rests on many untested assumptions about how we might respond to the changes brought by LLMs, and I’d love to hear your thoughts on ways we could collect empirical evidence about societal adaptability.
In Utrecht, with “digital trust” being a central theme of the day, academics and practitioners in AI ethics and governance shared their perspectives. Prof Linnet Taylor (Professor of International Data Governance, Tilburg University) pointed out the conflict between the Taylorist, efficiency-focused approach to public management enabled by digital technologies and the equity and care normally expected as public values. Linda Li (project lead in AI ethics, Dutch Police) described the dilemma her team faces in balancing privacy and public safety. Prof Payal Arora (Professor of Inclusive AI Cultures, Utrecht University) emphasized the need to go beyond theorizing principles and take concrete actions to care for those who are systemically excluded. Throughout the day, Prof Arora also spoke about the “rational optimism” around technology that is prevalent outside the Western world and laid out a case against “respecting cultures” as a route to AI inclusivity, an often patronizing mindset that overemphasizes identity at the cost of contextualized need; we continued our delightful discussion over the sandwich break. My big surprise of the day was a chance virtual encounter, during a panel session, with Prof Brent Mittelstadt (Director of Research, Oxford Internet Institute), who introduced me to the field of digital ethics. I made sure to grill him with a question about operationalizing the notion of a legal duty to ensure the truthfulness of AI output.
Are you intrigued by topics such as enshittification, misinformation, ethics, and societal adaptability to emerging challenges? Subscribe to the I2SC newsletter to learn about our future projects and job openings.
[1] The Dutch national institute for mathematics and computer science, where the Python programming language was born.