Nov. 22, 2024, 6:47 p.m.

OpenAI Invests in Ethical AI Research with Duke University

Brief news summary

OpenAI and Duke University have embarked on a collaborative, three-year project to develop algorithms capable of predicting human moral judgments, with the ultimate aim of creating a "moral AI." Supported by a $1 million grant, the project is spearheaded by ethics professor Walter Sinnott-Armstrong and researcher Jana Borg. The team's vision is to develop a "moral GPS" that could assist fields such as medicine, law, and business in navigating complex ethical issues. They draw inspiration from previous projects, like algorithms for kidney donation, to guide their efforts. A core challenge in this endeavor is teaching AI to understand the subtleties of human ethical reasoning, which varies significantly across cultures. Existing AI models often rely on Western-centric datasets, leading to biases and reduced applicability in a global context. This limitation was evident in the Allen Institute for AI's Ask Delphi project, which faced difficulties in making consistent ethical decisions due to nuanced language differences. The nature of morality is complex and lacks a universal benchmark, as illustrated by differing ethical theories like Kantianism and utilitarianism. Consequently, creating AI that can accommodate these subjective viewpoints is particularly challenging. These complexities highlight the uncertainties and obstacles in developing AI systems capable of accurately predicting human moral judgments.

OpenAI is investing in academic research to develop algorithms that can predict human moral judgments. In an IRS filing, OpenAI Inc., the nonprofit arm of OpenAI, revealed it had awarded a grant to researchers at Duke University for a project titled "Research AI Morality." According to an OpenAI spokesperson, the grant is part of a broader $1 million, three-year initiative aimed at "making moral AI." Details about the research are scarce; the grant is set to conclude in 2025. Walter Sinnott-Armstrong, the principal investigator and a Duke professor specializing in practical ethics, told TechCrunch by email that he could not discuss the project. Sinnott-Armstrong and co-investigator Jana Borg have previously explored AI's potential as a "moral GPS" to aid human decision-making. Their past work includes a "morally-aligned" algorithm for kidney donation decisions and studies of scenarios in which people prefer that AI make moral decisions. According to the press release, the OpenAI-funded research aims to design algorithms capable of predicting human moral judgments in situations involving conflicts among morally relevant factors in medicine, law, and business. Capturing the complexity of morality, however, remains a significant challenge for today's technology. In 2021, the Allen Institute for AI released Ask Delphi, a tool intended to provide ethical recommendations. While it handled straightforward dilemmas well, minor changes in wording could lead Delphi to condone almost anything, even smothering infants. The issue stems from how modern AI systems operate: machine learning models are essentially statistical tools.

By analyzing a vast number of examples from the web, they learn patterns to make predictions, such as associating the phrase "to whom" with "it may concern." These models have no understanding of ethical concepts or of the reasoning and emotions involved in moral decisions. Consequently, AI tends to reflect the values of Western, educated, industrialized societies, since its training data is dominated by such viewpoints. Values held by people who do not contribute to the online content used for training are underrepresented in AI outputs. AI systems also absorb a range of biases beyond Western perspectives: Delphi, for instance, deemed being straight more "morally acceptable" than being gay. The task facing OpenAI and its researchers is further complicated by morality's inherent subjectivity. Philosophers have debated ethical theories for centuries without arriving at a universal framework. Claude tends toward Kantianism, emphasizing absolute moral rules, while ChatGPT leans slightly utilitarian, favoring the greatest good for the greatest number. Which approach is superior is itself a subjective question. An algorithm that predicts human moral judgments must account for all of these factors, and achieving that goal is exceptionally challenging, assuming it's even feasible.
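The statistical pattern-matching described above can be sketched with a toy bigram model. This is a hypothetical, minimal illustration (modern language models are vastly larger neural networks), but the core idea is the same: the model predicts a continuation purely from how often words co-occur in its training text, with no grasp of what the words mean.

```python
from collections import Counter, defaultdict

# Toy training corpus: the model only ever sees surface patterns, never meaning.
corpus = (
    "to whom it may concern . "
    "to whom it may concern . "
    "to be or not to be . "
).split()

# Count bigram frequencies: how often word b follows word a.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training data."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("whom"))  # 'it': learned purely from co-occurrence counts
```

Because the prediction rests entirely on observed frequency, anything absent from (or rare in) the training text is invisible to the model, which is exactly why values underrepresented online end up underrepresented in AI outputs.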

