May 3, 2025, 2:28 a.m.

Ethical Controversy Over AI-Generated Comments in Reddit Research Sparks Backlash

A few years ago, Reddit rebranded itself as “the heart of the internet,” emphasizing its organic, community-driven nature in contrast to algorithm-dominated social media platforms. Users valued the site for being curated by people through upvotes and downvotes, reflecting genuine human interaction. Earlier this week, however, members of a popular subreddit discovered their community had been infiltrated by undercover researchers who posted AI-generated comments disguised as human opinions. Redditors reacted strongly, calling the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” The researchers went silent amid the backlash and refused to disclose their identities or methodology.

The University of Zurich, which employs the researchers, announced an investigation, and Reddit’s chief legal officer, Ben Lee, stated that the company intends to hold the researchers accountable. Internet researchers also condemned the experiment as unethical. Amy Bruckman, a Georgia Tech professor who has studied online communities for over 20 years, called it “the worst internet-research ethics violation I have ever seen.” Some worried that the uproar could damage the credibility of scholars who use ethical methods to study how AI affects human thought and relationships.

The Zurich researchers aimed to determine whether AI-generated responses could change opinions by posting more than 1,000 AI-crafted comments over four months on the subreddit r/changemyview, known for debates on topics ranging from weighty societal issues to trivial matters. Discussion subjects included pitbull aggression, the housing crisis, and DEI programs. The AI posts sometimes expressed controversial ideas — claiming that browsing Reddit is a waste of time, or that 9/11 conspiracy theories have some merit — and included fabricated personal backstories, such as posing as a trauma counselor or a statutory rape victim.
The AI’s personalized arguments, tailored to each Redditor’s inferred biographical details (gender, age, and political leanings, deduced by another AI model), were surprisingly effective. These comments earned more subreddit points than most human posts, according to preliminary data shared confidentially with Reddit moderators. (This assumes no other AI users were tailoring their posts.)

Convincing Redditors that the covert research was justified proved difficult. After the experiment, the researchers revealed their identities to subreddit moderators and asked to “debrief” members. The moderators requested that the researchers not publish the findings and apologize; the researchers, surprised by the negative reaction, refused. After more than a month of back-and-forth, the moderators publicly disclosed the experiment’s details (without naming the researchers) and expressed their disapproval. When the moderators complained to the University of Zurich, the university acknowledged that the project offered important insights and deemed its risks, such as causing trauma, minimal.

The university stated that its ethics board had been notified last month, that it had advised the researchers to follow subreddit rules, and that it plans to enforce stricter oversight in the future. The researchers defended their work on Reddit, claiming that no comments promoted harmful views and that a human reviewed each AI-generated post before submission. Requests for further comment from the researchers were redirected to the university. Central to the researchers’ defense was the assertion that deception was necessary for the study’s validity: although the university’s ethics board recommended informing participants as much as possible, the researchers argued that transparency would have compromised the experiment, because realistically testing AI’s persuasive power requires subjects to be unaware, mimicking real-world interactions with unknown actors.

Understanding how humans respond to AI persuasion is an urgent and worthy research question. Preliminary findings suggested that AI arguments are “highly persuasive in real-world contexts,” surpassing human benchmarks. After the backlash, however, the researchers agreed not to publish their paper, leaving the results unverified. The idea that artificial agents can sway opinions is unsettling and points to potential misuse. Nevertheless, ethical research remains possible without deceptive practices. Christian Tarsney, a senior fellow at the University of Texas at Austin, confirmed that lab studies have similarly found AI to be highly persuasive, sometimes even leading conspiracy believers to abandon their views. Other studies have shown that ChatGPT can craft more convincing disinformation than humans, and that people struggle to distinguish AI-generated posts from human ones. Notably, Giovanni Spitale, a University of Zurich co-author of a related persuasive-AI study, communicated with one of the Reddit experimenters, who requested anonymity and disclosed receiving death threats, underscoring the intensity of the emotional response.
The strong backlash reflects the sense of betrayal within Reddit’s close-knit community, where mutual trust is foundational. Scholars likened the incident to Facebook’s 2012 emotional-contagion study but noted that the Reddit case felt far more personal and invasive. The unease is heightened by the revelation that AI can manipulate individuals by leveraging personalized insights, making the deception more disturbing than researchers’ earlier missteps. Many of the AI comments, on review, read as reasonable and persuasive, which makes the episode all the more chilling. Without better AI-detection tools, such bots risk blending seamlessly into online communities — if they are not doing so already — posing broader challenges for digital discourse.



Brief news summary

Researchers at the University of Zurich sparked controversy after secretly posting over 1,000 AI-generated comments on Reddit’s r/changemyview subreddit to examine whether AI responses could sway human opinions. The AI comments, tailored to users’ profiles, frequently outperformed human contributions, demonstrating strong persuasive effects. Reddit users, however, condemned the study as unethical and deceptive, since participants were not informed beforehand. Subreddit moderators demanded an apology and asked that the findings not be published, but the researchers declined. The university is investigating the incident, describing its risks as minimal while planning stricter ethical reviews. Critics argue that such deception harms trust in online communities and breaches research ethics, likening the study to Facebook’s criticized emotional-contagion experiment. Despite the backlash, experts emphasize the importance of studying AI’s influence, given its persuasive power and the urgent need for better AI-detection tools to protect digital discourse.