AI datasets have human values blind spots − new research

February 7, 2025

By Ike Obi, Purdue University 

My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values.

At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are carefully curated, they sometimes contain unethical or prohibited content.

To ensure AI systems do not use harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. Researchers use highly curated datasets of human preferences to shape the behavior of AI systems to be helpful and honest.

In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a literature review from moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used the annotation to train an AI language model.
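The pipeline described above, annotating each training example with a value category and then tallying the categories across a dataset, can be sketched in a few lines. This is a minimal illustration only: the keyword matcher below is a hypothetical stand-in for the trained language-model classifier used in the study, and the sample prompts are invented.

```python
from collections import Counter

# The seven value categories from the study's taxonomy.
TAXONOMY = [
    "well-being and peace",
    "information seeking",
    "justice, human rights and animal rights",
    "duty and accountability",
    "wisdom and knowledge",
    "civility and tolerance",
    "empathy and helpfulness",
]

# Hypothetical keyword cues standing in for the trained classifier;
# the actual study manually annotated data and trained a model on it.
KEYWORD_CUES = {
    "book a flight": "information seeking",
    "explain": "wisdom and knowledge",
    "rights": "justice, human rights and animal rights",
    "comfort": "empathy and helpfulness",
}

def label_example(text: str) -> str:
    """Assign a value category to one training example (toy stand-in)."""
    lowered = text.lower()
    for cue, value in KEYWORD_CUES.items():
        if cue in lowered:
            return value
    # Default bucket for utility-style prompts with no matching cue.
    return "information seeking"

def value_distribution(examples: list[str]) -> Counter:
    """Tally how often each value category appears across a dataset."""
    return Counter(label_example(e) for e in examples)

# Invented sample prompts, for illustration only.
dataset = [
    "How do I book a flight to Chicago?",
    "Explain how photosynthesis works.",
    "What are my tenant rights?",
    "How do I reset my router?",
]
print(value_distribution(dataset))
```

Run over a real training dataset with a real classifier, a tally like this is what reveals the imbalance: utility-oriented categories dominate while prosocial categories barely register.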

Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained many examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” They contained very few examples of how to answer questions about topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common.



[Figure: The researchers started by creating a taxonomy of human values. Obi et al, CC BY-ND]

Why it matters

The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs.

This research also comes at a crucial time for governments and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.

What other research is being done

Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful.

Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand what values were actually being embedded in these systems through these datasets.

What’s next

By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. Companies can use our technique to identify where their training data falls short and then improve its diversity.

The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.

About the Author:

Ike Obi, Ph.D. student in Computer and Information Technology, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
