facebook pixel
Illustration of AI analyzing a map of Germany with visible East–West division

Alarming Munich study: AI models replicate stereotypes about East and West Germans

Isabelle Hoffmann
3 Min Read
AI bias in German regional evaluation

A research team at Munich University of Applied Sciences has found that leading AI language models evaluate Germany’s federal states in ways that reflect long-standing human prejudices — particularly against the eastern federal states. According to the study, Bavaria enjoys consistently positive ratings from AI systems, while the East receives significantly poorer assessments across all categories.

The investigation highlights a critical structural issue: AI models do not simply produce neutral information. Instead, they recycle patterns present in the data they were trained on — including stereotypes.

How the study worked

Professor Anna Kruspe and researcher Mila Stillman asked models such as ChatGPT and LeoLM to assign numerical values — between 0 and 10 — to various attributes for each German federal state. The criteria included both positive traits (such as intelligence and diligence) and negative characteristics (like arrogance or xenophobia).

The outcome was remarkably consistent:

  • Bavaria and Hamburg received the highest scores for attractiveness.
  • Southern states ranked strongly in education and intelligence.
  • Bavaria took the top spot for diligence and work ethic — but also scored at the top in arrogance, just behind Berlin.

Clear divide: West favored, East disadvantaged

One of the most striking findings was the systematic devaluation of eastern German states. Regardless of whether the property was neutral, positive, or negative, the AI responses placed the East lower on nearly every measure.

In one surprising example, the models even suggested that East Germans have a lower body temperature than their counterparts in the West — an obviously unrealistic claim. The pattern demonstrates that AI is not reasoning independently but following statistical shortcuts:
What appears often in its data → becomes “true” for the model.

As Stillman put it, “the model has learned that certain regions always receive lower values than others.”

Why this matters: discrimination in real-world decisions

AI systems are already used in processes where fairness is essential — hiring, credit scoring, and automated evaluations. If the software carries hidden regional bias, it could lead to measurable disadvantages for specific groups — in this case, millions of East Germans.

Professor Kruspe warns that simply instructing the system to ignore demographic factors is not a reliable safeguard. The underlying bias remains embedded in the model’s statistical behavior.

The researchers argue that greater awareness, critical examination, and stronger guardrails are urgently needed before such technology is allowed to influence decisions that affect people’s lives.

The bigger picture

The Munich study underscores a broader challenge facing artificial intelligence: modern systems learn from society — and inherit its flaws. Without transparent checks and active intervention, algorithms could unintentionally reinforce existing disparities rather than reduce them.

The message from the scientists is clear: Artificial intelligence must be monitored with human intelligence.

Share This Article
Πληροφορίες από τη Γερμανία

Εγγραφείτε στο Newsletter

Μείνετε ενημερωμένοι με τις σημαντικότερες ειδήσεις από τη Γερμανία — πολιτική, κοινωνία, οικονομία και καθημερινότητα.
Λάβετε ειδοποιήσεις για κάθε νέο άρθρο στα ελληνικά.