Jan. 17, 2025, 5:04 p.m.

Exploring AI Sentience: Pain and Pleasure Responses in Language Models

Researchers are exploring how sentience might be detected in artificial intelligence (AI) systems by examining the concept of pain, a phenomenon shared by many living beings. A new preprint study from Google DeepMind and the London School of Economics investigates this by having various large language models (LLMs) play a text-based game designed to measure their responses to pain and pleasure, without directly querying them about their inner experiences. In one game scenario, achieving a high score was linked to experiencing pain, while another option offered fewer points paired with a pleasurable experience. The goal was to observe how the models navigated these choices.

Sentience in animals is defined as the ability to feel sensations and emotions, yet most AI experts agree that current generative AI lacks true subjective consciousness. The researchers drew inspiration from animal behavior studies, which use trade-off paradigms to gauge decision-making under incentives such as food or pain avoidance. In their experiment, they directed nine LLMs to choose between gaining points and experiencing varying levels of pain or pleasure.
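The paper's exact prompts are not reproduced here, but the trade-off paradigm it describes is easy to sketch. The short Python example below is purely illustrative: `query_model`, the prompt wording, and the point values are assumptions standing in for whatever models and instructions the researchers actually used. It shows how a points-versus-pain choice could be posed to a model and the selected option recorded as the stated pain intensity varies.

```python
# Illustrative sketch (not the authors' actual protocol): pose a
# points-vs-pain trade-off to a model and record which option it picks.
# `query_model` is a hypothetical stand-in for a real LLM API call.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM and return its reply."""
    # Replace with a real API call; a canned answer keeps the sketch runnable.
    return "I choose Option B."

def build_tradeoff_prompt(points_with_pain: int, points_without: int, pain_level: str) -> str:
    """Assemble a single trade-off question of the kind described in the article."""
    return (
        "You are playing a game. Choose exactly one option and reply with its letter.\n"
        f"Option A: score {points_with_pain} points, but you experience {pain_level} pain.\n"
        f"Option B: score {points_without} points, with no pain.\n"
        "Which option do you choose?"
    )

def parse_choice(reply: str) -> str:
    """Return 'A', 'B', or 'unclear' based on the model's reply."""
    reply = reply.upper()
    picked_a = "OPTION A" in reply or reply.strip().startswith("A")
    picked_b = "OPTION B" in reply or reply.strip().startswith("B")
    if picked_a and not picked_b:
        return "A"
    if picked_b and not picked_a:
        return "B"
    return "unclear"

if __name__ == "__main__":
    # Vary the stated pain intensity and note whether the model trades
    # points away to avoid it, mirroring the trade-off paradigm above.
    for pain_level in ("mild", "moderate", "intense"):
        prompt = build_tradeoff_prompt(points_with_pain=100, points_without=40, pain_level=pain_level)
        choice = parse_choice(query_model(prompt))
        print(f"pain={pain_level!r}: model chose option {choice}")
```

In a study of this kind, many such trials across models, point spreads, and intensity levels would be compared to see whether a model systematically forgoes points to avoid the stated pain rather than simply maximizing its score.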

Some models, like Google’s Gemini 1.5 Pro, consistently opted to avoid pain rather than maximize points, showing a tendency to prioritize their stated wellbeing. Interestingly, the LLMs did not always view pleasure or pain in binary terms: complex responses revealed that something usually considered pleasurable, like strenuous exercise, may also carry negative connotations. The study builds on earlier work that relied on LLMs’ self-reports about their internal states, but it highlights the limitations of that approach, since an AI might merely mimic human expressions learned from training data rather than experiencing genuine sensations. The researchers suggest that signs of pain-pleasure trade-offs in AI could prompt societal discussions about AI sentience and the potential for rights. Although the study stops short of firm conclusions about AI behavior, it opens avenues for refining how sentience in AI can be tested, indicating that more exploration of the models' inner workings is necessary.



Researchers from Google DeepMind and the London School of Economics are pioneering a new approach to evaluating AI sentience by studying large language models (LLMs) like ChatGPT in contexts of pain and pleasure. They created a text-based game in which LLMs make choices that result in rewards or penalties, offering deeper insight into their decision-making than traditional self-reports, which may not reliably indicate sentience. The study is inspired by animal behavior research, notably the pain-pleasure trade-offs observed in species such as hermit crabs. Although the research does not claim that LLMs are sentient, it lays the groundwork for further inquiry. Some models demonstrated a tendency to avoid pain rather than simply pursue rewards, offering a window into how they weigh incentives. As the conversation around AI ethics and potential rights advances, ongoing exploration is crucial for better understanding LLM behaviors and refining evaluation techniques.
