Rise of AI-Generated Fake Videos on Social Media Erodes Trust in Real News
Brief news summary
Following recent US strikes on Iran, AI-generated fake videos depicting false events, such as a US aircraft carrier being destroyed or the Burj Khalifa on fire, have surged on social media platforms such as X. Once a reliable source for real-time news, X has seen its credibility decline sharply since Elon Musk's takeover, owing to weakened moderation, engagement-driven algorithms, and an emphasis on its own flawed AI tool, Grok. These factors have enabled the rapid spread of misleading political AI content. The videos serve varied purposes: some celebrate military actions, while others aim to confuse and undermine public understanding. Disturbingly, even Grok has misidentified AI fabrications as genuine. Experts warn that such deepfakes severely damage trust in authentic footage, making it increasingly difficult for audiences to distinguish truth from falsehood. In this environment, staying accurately informed demands greater vigilance, as our ability to rely on visual information faces unprecedented challenges.

A US aircraft carrier destroyed by Iranian missiles. American bombs leveling a nuclear power plant. The Burj Khalifa consumed by flames. None of these events actually occurred, yet that hasn't stopped people from sharing fake videos online. Since Trump's weekend strikes on Iran, AI-generated videos showing entirely fabricated yet realistic scenarios have been spreading rapidly across X and other social media platforms.

For years, X (formerly Twitter) served as a vital tool for real-time information during breaking news. However, that role seems to be fading. Since Elon Musk took over, the platform's reliability as a news source has steadily declined. Moderation has been severely weakened, the algorithm prioritizes engagement over accuracy, and resources have been diverted to the company's own problematic AI tool, Grok. We've witnessed politically charged AI content on the platform before, such as a fabricated video of Jake Paul at Iranian protests.
But the recent strikes in Venezuela and now Iran have unleashed a torrent of misleading AI-generated videos. The reasons behind this content vary.
Some creators appear to celebrate, using AI to glorify Trump and Netanyahu's military actions. Others seek to sow doubt about the conflict, eroding American public trust and polluting the information space to the point where truth becomes unclear. Alarmingly, Grok, X's native AI tool, has been incorrectly labeling AI-generated videos as genuine. (An X spokesperson did not immediately respond to requests for comment but pointed to recent posts by the company's safety team.)

Last month, Arianna Coghill of Mother Jones spoke with AI content expert Jeremy Carrasco about this very issue. Carrasco finds the fake content troubling but emphasizes that the bigger damage is how this flood of AI content affects our trust in real videos. When fake footage becomes convincing and widespread, people begin to doubt everything, including authentic footage of actual events.

This is the challenging reality we now face. Being informed is more crucial than ever, but in these times it requires extra caution about what you accept as truth, even if it appears to be captured before your very eyes.