June 4, 2025, 5:41 a.m.

Ada Lovelace Institute Calls for Stronger AI Regulation to Ensure Ethical and Safe Deployment

Brief news summary

Gaia Marcus, Director of the Ada Lovelace Institute, advocates for stronger AI regulation to ensure fairness, safety, and alignment with public values. In a Financial Times interview, she highlighted concerns over AI power concentrating in a few large corporations and emphasized the need to understand AI’s broad social and technical impacts. UK surveys show strong public support for comprehensive AI laws, with 72% favoring regulation and 88% urging government action to prevent harm after AI deployment. Marcus criticized the industry’s shift from responsible, ethical development to a competitive "build fast" approach driven by hype. She called on governments, especially the UK, to evaluate AI’s real-world effects and implement evidence-based policies to protect vulnerable groups. Key risks include mental health issues, legal challenges, and reduced competition due to market dominance. Marcus stressed the importance of citizen involvement in policymaking and advocated for governance that safeguards rights, safety, and dignity. Under her leadership, the Ada Lovelace Institute works with stakeholders to promote ethical, inclusive AI oversight amid increasing calls for robust regulation to responsibly harness AI’s benefits.

Gaia Marcus, Director of the Ada Lovelace Institute, has called for stronger AI regulation to ensure these technologies are deployed fairly, safely, and in line with public expectations. In a recent discussion with Financial Times journalist Melissa Heikkilä, Marcus voiced serious concerns about the concentration of AI power in the hands of a few large corporations and emphasized the urgent need to understand the broader socio-technical impacts of AI innovations.

She pointed to emerging UK survey data revealing growing public demand for AI regulation: 72 percent of UK respondents would feel more comfortable if comprehensive laws governed AI use, and 88 percent support government interventions aimed at preventing harm once AI technologies are applied in real-world contexts. The figures demonstrate a clear public desire for greater oversight and accountability as AI capabilities advance.

Marcus also observed a marked shift in the AI industry's approach over recent years. Where the early focus was on responsible AI development, with its emphasis on cautious design and ethical consideration, a "build fast" mentality driven by hype and rapid deployment now prevails. She criticized this trend for placing speed and competitive advantage above safety and ethical reflection, and singled out governments, particularly the UK's, for doing too little to analyze how AI tools affect people's everyday lives across sectors.

Marcus called for evidence-based policymaking that goes beyond superficial regulation to deeply assess AI's social, economic, and legal impacts. Such evaluation is vital for protecting vulnerable populations and ensuring AI technologies benefit society as a whole rather than narrow interests.

With AI agents and digital assistants becoming more integral to daily life, Marcus highlighted urgent risks needing attention: possible mental health effects from interacting with AI, complex legal liability issues when AI-driven decisions cause harm, and growing market concentration in which dominant players control core AI infrastructure, limiting competition and innovation.

She stressed that citizens must actively communicate their expectations to policymakers to shape AI regulation. Safeguarding public welfare amid rapidly evolving technology is ultimately the state's responsibility, requiring governments to prioritize people's rights, safety, and dignity in AI governance frameworks.

Marcus concluded by urging society to critically examine how AI technologies influence social structures and everyday realities. The essential question is whether the futures shaped by rapidly advancing AI align with shared human values such as fairness, transparency, and justice; embedding these principles into AI development and deployment is necessary to harness AI's benefits while minimizing harm. Under Marcus's leadership, the Ada Lovelace Institute continues working with policymakers, industry, and the public to promote transparent, inclusive, and ethical AI governance. As AI systems become increasingly pervasive, calls for robust regulation and thoughtful oversight grow stronger, reflecting both widespread concern and hope for a future in which technology responsibly serves humanity.


