Posts

Blog: Embedding Privacy in Computational Social Science & AI Research

Embedding Privacy in Computational Social Science & AI Research is an increasingly critical area of focus as data-driven technologies continue to evolve. Privacy is not only a fundamental human right but also a cornerstone of ethical research practices. In fields like computational social science (CSS) and artificial intelligence (AI), where large-scale data collection and analysis are common, safeguarding privacy must be prioritized to prevent potential harm. The rise of generative AI models, such as ChatGPT, has amplified concerns about data security, informed consent, and anonymization, highlighting the need for proactive privacy measures. Embedding privacy into research practices requires a structured approach that spans the entire research lifecycle, from study design and data collection to analysis and dissemination. Privacy-by-design frameworks, regulatory compliance (e.g., GDPR), and techniques such as encryption and anonymization are essential tools for protecting pers...
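One of the anonymization techniques mentioned above, pseudonymization, can be sketched in a few lines. This is an illustrative example only, not the post's actual method: the key, field names, and truncation length are placeholder assumptions, and under GDPR pseudonymized data is still personal data, so a real project needs proper key management and a broader privacy assessment.

```python
import hashlib
import hmac

# Placeholder secret -- in practice this must be generated and stored securely.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Records can still be linked across datasets via the hash without
    exposing the raw identifier; truncated here for readability.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical record with a direct identifier to be replaced.
record = {"user": "alice@example.org", "response": "..."}
safe_record = {**record, "user": pseudonymize(record["user"])}
print(safe_record["user"])
```

A keyed hash (rather than a plain hash) is used so that an attacker who knows the identifier space cannot simply recompute the mapping without the key.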

Blog: The Importance of Good Data in Satellite Imagery Analysis

Introduction

The phrase "Garbage In, Garbage Out" is well known in data science, but it takes on new meaning in real-world projects, especially those involving satellite imagery. In these contexts, where ground-truth labels are often sparse, preprocessing becomes not just a step but a cornerstone of success. Reflecting on my project, I have realized that understanding and preparing data account for about 70% (if not more) of the work and determine the quality of the results. Data preprocessing ensures that the inputs to your model are clean, structured, and tailored to the problem at hand. This is true for all kinds of data, whether tabular data, text, or images. However, the proper preprocessing steps come from first "looking" at the data: preprocessing depends on the data and the task at hand.

Understanding Satellite Images

Satellite images are far more than just pictures; they encode a wealth of information about the spectral signature of a ...
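Two first steps of this kind of preprocessing can be sketched with NumPy: min-max scaling a band and deriving a spectral index from the bands' signatures. The tiny synthetic "red" and "near-infrared" arrays, and the choice of NDVI as the index, are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

# Synthetic 2x2 stand-ins for real satellite raster bands (reflectance in [0, 1]).
red = np.array([[0.10, 0.20], [0.30, 0.40]])
nir = np.array([[0.50, 0.60], [0.10, 0.80]])

def normalize(band: np.ndarray) -> np.ndarray:
    """Min-max scale a band to [0, 1], a common first preprocessing step."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo)

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index, a standard spectral index."""
    return (nir - red) / (nir + red + eps)

print(normalize(red))
print(ndvi(nir, red))  # values near 1 suggest dense vegetation
```

Which bands to use and which index to compute is exactly the "looking at the data" step: it depends on the sensor and on the task at hand.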

I2SC Lecture Series (Recording): David Garcia (Computer Science, University of Konstanz), Language Understanding as a Constraint on Consensus Size in LLM Societies

Date: December 13, 2024. Abstract: The applications of Large Language Models (LLMs) are moving toward collaborative tasks in which several agents interact with each other, as in an LLM society. In such a setting, large groups of LLMs could reach consensus about arbitrary norms for which there is no information supporting one option over another, regulating their own behavior in a self-organized way. In human societies, the ability to reach consensus without institutions is limited by the cognitive capacities of humans. To understand whether a similar phenomenon also characterizes LLMs, we apply methods from complexity science and principles from behavioral sciences in a new approach to AI anthropology. We find that LLMs are able to reach consensus in groups and that the opinion dynamics of LLMs can be understood with a function parametrized by a majority force coefficient that determines whether consensus is possible. This majority force is stronger for models with higher language understan...
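The role of a majority force coefficient can be illustrated with a toy simulation. This is not the talk's actual model: the logistic form of the majority pull, the agent count, and the parameter values below are all assumptions chosen only to show how a strong enough coefficient produces consensus while a weak one does not.

```python
import math
import random

def simulate(n_agents: int = 50, beta: float = 8.0,
             steps: int = 2000, seed: int = 0) -> float:
    """Toy binary opinion dynamics with a 'majority force' coefficient beta.

    At each step a random agent re-samples its opinion with a probability
    that is pulled toward the current majority; beta scales that pull.
    Returns the final fraction of agents holding opinion 1.
    """
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        m = sum(opinions) / n_agents                     # fraction holding opinion 1
        p = 1.0 / (1.0 + math.exp(-beta * (2 * m - 1)))  # logistic majority pull
        opinions[i] = 1 if rng.random() < p else 0
    return sum(opinions) / n_agents

strong = simulate(beta=8.0)  # strong majority force: group converges on one norm
weak = simulate(beta=0.0)    # no majority force: opinions stay split near 50/50
print(strong, weak)
```

With beta = 0 every update is a coin flip, so no shared norm emerges; with a large beta, small random majorities are amplified until the group self-organizes onto one option.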

Blog: Till attends workshop on Child Sexual Abuse Reduction @Leiden University

The second iteration of the Child Sexual Abuse Reduction Research Network (CSARRN) workshop took place on December 5th and 6th at the beautiful Leiden Universiteit in the Netherlands, bringing together experts from law enforcement agencies and researchers, mainly from law and forensic psychiatry, to discuss advances in the understanding, detection, and investigation of child sexual abuse and exploitation. Although child sexual abuse is a major global threat to child safety, and although computer science plays a major role both as an enabler of spreading child sexual abuse material (CSAM) and in combatting it, it remains a niche area of research in the computer science community. Building on the I2SC research on the spatial distribution of CSAM consumption in France, Till had the chance to present the paper published in Nature Humanities and Social Sciences Communications this year as a poster at CSARRN 2024 and to forge new interdisciplinary collaborations in the space of child sexual abuse re...

Job offer: Student Research Assistant (m/w/d) (6 hours per week)

The project Political Deception in the Digital Era, directed by Dr. Rosa Navarrete (Chair of Political Science with a focus on European Integration and International Relations) and the Interdisciplinary Institute for Societal Computing, funded by Saarland University as part of the Observatory of Online Politics, is looking for two student assistants (preferably with a BA degree) interested in the topic and willing to support our project. If you are interested, contact Dr. Rosa Navarrete at rosa.navarrete@uni-saarland.de by 15.12.2024.

Blog: “The Oracle Speaks” – Liv Strömquist’s New Graphic Novel on Our Influencer Society

Influencers are omnipresent in our society. They give advice on everything from styling tips to longevity, fitness, health, and even finance. In addition, a large number of self-help books and manuals offering guidance are published every year. Reflecting on this apparent need for advice, Liv Strömquist dedicates her new graphic novel to exploring why influencers have become such a big phenomenon. The author interprets an influencer as a concept, a person or institution who gives advice, and traces different examples of this figure throughout history: the Oracle of Delphi in Ancient Greece, Ronald Reagan’s astrologer, Saint Catherine of Siena, an incel influencer, and Meghan Markle. While Strömquist presents these influencers and their traits with a great sense of humor, she adds deeper explanations. For instance, Ronald Reagan’s astrologer, Carroll Righter, was a popular influencer in Hollywood, advising figures like Clark Gable and Grace Kelly. Strömquist introduces the sociologist The...

Blog: How AI Can Save Lives: A Journey Through Human Action Recognition

Imagine a world where emergencies are detected and addressed in real time, where lives are saved by systems that never tire or miss a moment. Traditional CCTV monitoring often relies on human operators, who may struggle to maintain focus over extended periods. In such cases, critical moments can slip by unnoticed. But what if AI could take the reins? Advancements in machine learning are making it possible for AI systems to monitor, detect, and respond to emergencies faster and more accurately than human operators. One of the most promising areas in this field is Human Action Recognition (HAR): training AI to understand human movements and actions in video streams.

The Challenge of Monitoring

Monitoring CCTV networks is no small task. As the number of cameras increases, the ability to track and analyze video feeds in real time diminishes. Often, these systems are used only to review incidents after they occur, providing little opportunity for proactive intervention. AI, however, ca...