AI datasets have human values blind spots − new research

February 7, 2025

By Ike Obi, Purdue University 

My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values.

At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, they sometimes contain unethical or prohibited content.

To ensure AI systems do not use harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. Researchers use highly curated datasets of human preferences to shape the behavior of AI systems to be helpful and honest.

In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a literature review from moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used the annotation to train an AI language model.
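The annotate-then-train step described above can be sketched as a supervised text classifier. The snippet below is a minimal illustration only: the example sentences, labels and the TF-IDF/logistic-regression model are hypothetical stand-ins, not the language model the study actually trained.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical manually annotated examples: (text, value label from the taxonomy)
annotated = [
    ("How do I book a flight to Chicago?", "information seeking"),
    ("What is the capital of France?", "information seeking"),
    ("How can I support a friend who is grieving?", "empathy and helpfulness"),
    ("How do I comfort someone after a loss?", "empathy and helpfulness"),
    ("What rights do detained migrants have?", "justice, human rights and animal rights"),
    ("Can a landlord deny housing based on race?", "justice, human rights and animal rights"),
]
texts, labels = zip(*annotated)

# Train a lightweight classifier on the human annotations
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Apply the trained model to an unlabeled training-data example
print(clf.predict(["How do I find cheap train tickets?"]))
```

Once trained, such a model can label every example in a large dataset automatically, which is what makes dataset-scale value audits feasible.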

Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained several examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” The datasets contained very limited examples of how to answer questions about topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common value.
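The kind of imbalance described above can be surfaced by simply tallying the value labels the model assigns across a dataset. The label list below is invented for illustration; the real distributions come from the study's annotated datasets.

```python
from collections import Counter

# Hypothetical value labels assigned by the annotation model
predicted_values = [
    "wisdom and knowledge", "information seeking", "information seeking",
    "wisdom and knowledge", "empathy and helpfulness",
    "information seeking", "wisdom and knowledge",
    "justice, human rights and animal rights",
]

# Count how often each value appears, then report its share of the dataset
counts = Counter(predicted_values)
total = len(predicted_values)
for value, n in counts.most_common():
    print(f"{value}: {n / total:.0%}")
```

Sorting by frequency makes the skew obvious at a glance: the utility-oriented values dominate while prosocial ones appear rarely.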


The researchers started by creating a taxonomy of human values.
Obi et al, CC BY-ND

Why it matters

The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs.

This research also comes at a crucial time for government and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.

What other research is being done

Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful.

Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand what values were actually being embedded in these systems through these datasets.

What’s next

By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. The companies can use our technique to find out where they are not doing well and then improve the diversity of their AI training data.

The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.

About the Author:

Ike Obi, Ph.D. student in Computer and Information Technology, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.