How AI Might Be Narrowing Our Worldview—and What Regulators Can Do About It
7th August 2025 · Arianna Steigman

A new study warns that advanced generative AI systems such as ChatGPT often produce generic, mainstream content, subtly limiting the range of perspectives and voices users encounter. This is not just a technical issue: it poses real risks to cultural diversity, collective memory, and the quality of democratic debate. Traditional AI governance, which focuses on principles such as transparency and data security, does not fully address these growing concerns.

A Call for “Multiplicity” in AI Principles

Prof. Michal Shur-Ofry of the Hebrew University of Jerusalem, also a Visiting Faculty Fellow at NYU, argues for a new regulatory principle: multiplicity. She explains that as AI tools become woven into daily life—from answering questions to assisting with studies—they frequently default to the most popular answers. For instance, when asked about notable 19th-century figures, ChatGPT highlighted predictable, Anglo-centric names such as Lincoln, Darwin, and Queen Victoria. Similarly, its “best TV series” lists routinely favoured mainstream Anglo-American shows while overlooking non-English options.

This tendency stems from how large language models (LLMs) are built: trained on vast, primarily English-language, digital datasets and drawing their responses from the most statistically frequent information. As a result, less common knowledge—especially about minority cultures—often disappears. Over time, because AI-generated outputs feed back into future models, this narrowing effect compounds.
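To make the mechanism concrete, here is a minimal, self-contained sketch (not from the study) of how sampling temperature interacts with frequency. The candidate names and scores are purely illustrative stand-ins for how often each answer might appear in a mostly English-language training corpus; at low temperature, sampling collapses onto the most frequent, mainstream answers.

```python
# Toy sketch (illustrative, not from the study): how low-temperature sampling
# collapses a language model's output onto the statistically most frequent answer.
import math
import random

def sample(logits, temperature):
    """Draw one index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical answer candidates; the scores mimic how often each name
# might appear in (mostly English-language) training data.
answers = ["Lincoln", "Darwin", "Queen Victoria", "Tagore", "Pardo Bazán"]
logits  = [5.0, 4.5, 4.0, 1.5, 1.0]

for t in (0.1, 1.0, 2.0):
    picks = [answers[sample(logits, t)] for _ in range(1000)]
    distinct = sorted(set(picks), key=picks.count, reverse=True)
    print(f"temperature={t}: {len(distinct)} distinct answers -> {distinct}")
```

At temperature 0.1 nearly every sample is the single top name; at 2.0 the less frequent candidates start to surface, which is the intuition behind exposing this setting to users.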

Prof. Shur-Ofry warns that such uniformity erodes cultural richness, reduces social tolerance, and undermines healthy democratic discourse.

“If everyone is receiving the same mainstream answers from AI, it narrows our world of thinkable thoughts,” she notes.

Solutions for a More Diverse AI Future

Shur-Ofry recommends incorporating multiplicity into both the design and governance of AI:

  • Designing for Diversity: Developers should build features that present or highlight alternative responses, not just one “best” answer. For example, allowing users to adjust the AI’s “temperature” (a sampling setting that controls how varied the output is) or notifying users that other plausible answers exist (see the sketch after this list).

  • Building an Ecosystem of AI Tools: Promote access to a range of different AI systems, making it easier to seek alternative viewpoints or a “second opinion.”

  • Enhancing AI Literacy: Equip users with the knowledge to ask better questions, compare answers, and think critically, so they view AI as a tool—not a single source of truth.
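One possible shape for the first recommendation is sketched below: rather than returning a single default answer, a product could request several completions at a higher temperature and surface all of them. This sketch assumes the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not taken from the study.

```python
# Hypothetical sketch of "designing for diversity": ask the model for several
# alternative answers at a higher temperature instead of one default response.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": "Name a notable 19th-century figure."}],
    temperature=1.2,  # higher temperature -> more varied completions
    n=5,              # request several alternatives, not one "best" answer
)

# Surface every candidate to the user rather than only the top one.
for i, choice in enumerate(response.choices, start=1):
    print(f"Option {i}: {choice.message.content}")
```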

In collaboration with Dr. Yonatan Belinkov and Adir Rahamim of the Technion, and Bar Horowitz-Amsalem of the Hebrew University, Shur-Ofry is working on practical ways to embed these ideas and boost diversity in LLM outputs.

“If we want AI to truly serve society,” she concludes, “we have to make room for nuance, complexity, and diversity. Multiplicity is about safeguarding the full breadth of human experience in an AI-driven world.”

Read the full research paper, “Multiplicity as an AI Governance Principle”, in the Indiana Law Journal: https://www.repository.law.indiana.edu/ilj/vol100/iss4/6/