Posts

Job: Student Research Assistants (m/w/d)

The Interdisciplinary Institute for Societal Computing (I2SC) is looking to hire 1-2 student research assistants to work on interesting projects that contribute to society. Interested applicants are kindly requested to email their complete application (short cover letter, CV, and summary of academic achievements in one PDF file) to Katja Raak - kraak@cs.uni-saarland.de

Job: Student Research Assistant (m/w/d) for AI & Misinformation detection

Join Us in Fighting Misinformation! We are seeking a passionate and skilled Research Assistant to contribute to cutting-edge projects at the intersection of AI, digital literacy, and societal impact. Be part of a team dedicated to developing AI-powered tools that help young people combat misinformation. If you are interested, submit your CV and a brief cover letter detailing your relevant experience and your interest in combating misinformation through AI to Katja Raak - kraak@cs.uni-saarland.de.

Blog: Scalable Oversight via Debate

As AI becomes increasingly capable of solving complex tasks, ensuring these agents act in line with human values is a growing challenge. Known as the alignment problem, this issue has no easy answer. On one hand, it would be ideal if we could align AI models with our values right from the start. On the other hand, we want to be able to verify that the agent indeed follows those values and does not act deceptively or unexpectedly. Here, we want to explore an approach to the control aspect. The Challenge of Maintaining Human Control: Imagine developing an AI more intelligent than most humans. While such models may not yet exist, we must be prepared for the possibility. Currently, humans oversee and judge AI outputs, but as AI becomes increasingly capable, this control could vanish. Scalable oversight seeks to prevent this by creating methods that maintain human authority. Scalable Oversight via Debate: Recent work by Khan et al. introduces scalable oversight via debate as...

Blog: Embedding Privacy in Computational Social Science & AI Research

Embedding Privacy in Computational Social Science & AI Research is an increasingly critical area of focus as data-driven technologies continue to evolve. Privacy is not only a fundamental human right but also a cornerstone of ethical research practices. In fields like computational social science (CSS) and artificial intelligence (AI), where large-scale data collection and analysis are common, safeguarding privacy must be prioritized to prevent potential harm. The rise of generative AI models, such as ChatGPT, has amplified concerns about data security, informed consent, and anonymization, highlighting the need for proactive privacy measures. Embedding privacy into research practices requires a structured approach that spans the entire research lifecycle, from study design and data collection to analysis and dissemination. Privacy-by-design frameworks, regulatory compliance (e.g., GDPR), and techniques such as encryption and anonymization are essential tools for protecting pers...
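One of the techniques mentioned above, anonymization, is often implemented in practice as keyed pseudonymization of direct identifiers before data ever reaches the analysis stage. The following is a minimal sketch, assuming a hypothetical dataset with an email-address field; the key handling and field names are illustrative, not a prescribed implementation.

```python
import hashlib
import hmac

# Illustrative only: in a real project the key must be stored securely
# (e.g., in a secrets manager), never hard-coded.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Map a raw identifier to a stable, non-reversible pseudonym (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical raw records containing a direct identifier.
records = [{"user": "alice@example.org", "posts": 12},
           {"user": "bob@example.org", "posts": 3}]

# Strip direct identifiers before the data leaves the collection stage;
# the pseudonym is stable, so records of the same user can still be linked.
safe_records = [{"user": pseudonymize(r["user"]), "posts": r["posts"]}
                for r in records]
```

Using an HMAC rather than a plain hash means an attacker without the key cannot brute-force pseudonyms from a list of candidate identifiers; note that pseudonymized data is still personal data under the GDPR and needs further safeguards.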

Blog: The Importance of Good Data in Satellite Imagery Analysis

Introduction: The phrase "Garbage-In-Garbage-Out" is well known in data science, but it takes on new meaning in real-world projects, especially those involving satellite imagery. In these contexts, where ground-truth labels are often sparse, preprocessing becomes not just a step but a cornerstone of success. Reflecting on my project, I have realized that understanding and preparing data accounts for about 70% (if not more) of the work and determines the quality of the results. Data preprocessing ensures that the inputs to your model are clean, structured, and tailored to the problem at hand. This is true for all kinds of data, whether tabular data, text, or images. However, the proper preprocessing steps come from first "looking" at the data; that means preprocessing depends on the data and the task at hand. Understanding Satellite Images: Satellite images are far more than just pictures; they encode a wealth of information about the spectral signature of a ...
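As a concrete example of the kind of spectral preprocessing the teaser alludes to, a common first step is computing a spectral index such as the NDVI (Normalized Difference Vegetation Index) from the red and near-infrared bands. This is a generic sketch with toy pixel values, not the preprocessing pipeline from the project itself.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index, in [-1, 1].

    High values indicate dense vegetation; values near or below zero
    indicate bare soil, built-up areas, or water.
    """
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

# Toy (NIR, red) reflectance pairs standing in for three pixels.
pixels = [(0.6, 0.1), (0.3, 0.3), (0.05, 0.4)]
ndvi_values = [ndvi(nir, red) for nir, red in pixels]
```

In a real pipeline the same formula would be applied band-wise to whole raster arrays (e.g., with NumPy or a raster library), after radiometric calibration and cloud masking.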

I2SC Lecture Series (Recording): David Garcia (Computer Science, University of Konstanz), Language Understanding as a Constraint on Consensus Size in LLM Societies

Date: December 13, 2024. Abstract: The applications of Large Language Models (LLMs) are moving toward collaborative tasks in which several agents interact with each other, as in an LLM society. In such a setting, large groups of LLMs could reach consensus about arbitrary norms for which there is no information supporting one option over another, regulating their own behavior in a self-organized way. In human societies, the ability to reach consensus without institutions is limited by the cognitive capacities of humans. To understand whether a similar phenomenon also characterizes LLMs, we apply methods from complexity science and principles from behavioral sciences in a new approach of AI anthropology. We find that LLMs are able to reach consensus in groups and that the opinion dynamics of LLMs can be understood with a function parametrized by a majority force coefficient that determines whether consensus is possible. This majority force is stronger for models with higher language understan...
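The idea of a majority force determining whether consensus is possible can be illustrated with a toy opinion-dynamics simulation. The update rule and parameter names below are assumptions for illustration, not the model from the talk: each step, a random agent either adopts the current majority opinion (with probability given by the force coefficient) or picks an opinion at random.

```python
import random

def simulate(n_agents: int = 50, force: float = 0.9,
             steps: int = 5000, seed: int = 0) -> float:
    """Toy majority-force dynamics; returns the final size of the larger camp.

    `force` is a hypothetical stand-in for the majority force coefficient:
    the probability that an updating agent follows the current majority.
    """
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        majority = 1 if sum(opinions) * 2 > n_agents else 0
        if rng.random() < force:
            opinions[i] = majority            # follow the majority
        else:
            opinions[i] = rng.choice([0, 1])  # noisy, unconstrained update
    frac = sum(opinions) / n_agents
    return max(frac, 1 - frac)
```

With a strong majority force the group locks into consensus (larger camp near 1.0), while with no force the population stays split near 50/50, mirroring the qualitative claim that the force coefficient determines whether consensus is reachable.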

Blog: Till attends workshop on Child Sexual Abuse Reduction @Leiden University

The second iteration of the Child Sexual Abuse Reduction Research Network (CSARRN) workshop took place on December 5th and 6th at the beautiful Leiden Universiteit in the Netherlands, bringing together experts from law enforcement agencies and researchers, mainly from law and forensic psychiatry, to discuss advances in the understanding, detection, and investigation of child sexual abuse and exploitation. Although child sexual abuse is a major global threat to child safety, and although computer science plays a major role both as an enabler of the spread of child sexual abuse material (CSAM) and in combatting it, it remains a niche area of research in the computer science community. Building on the I2SC research on the spatial distribution of CSAM consumption in France, Till had the chance to present the paper published in Nature Humanities and Social Sciences Communications this year as a poster at CSARRN 2024 and to forge new interdisciplinary cooperations in the space of child sexual abuse re...