Gaia Marcus, Director of the Ada Lovelace Institute, has called for stronger AI regulation to ensure these technologies are deployed fairly, safely, and in line with public expectations. In a recent discussion with Financial Times journalist Melissa Heikkilä, Marcus voiced serious concerns about the concentration of AI power in the hands of a few large corporations and emphasized the urgent need to understand the broader socio-technical impacts of AI innovations.

She pointed to emerging survey data from the UK revealing increased public demand for AI regulation. According to this data, 72 percent of UK respondents would feel more comfortable if comprehensive laws governed AI use, and 88 percent support government interventions aimed at preventing harm once AI technologies are applied in real-world contexts. These figures demonstrate a clear public desire for greater oversight and accountability amid advancing AI capabilities.

Marcus also observed a noticeable shift in the AI industry's approach over recent years. Initially, the focus was on responsible AI development, stressing cautious design and ethical considerations; now a "build fast" mentality prevails, driven by hype and rapid deployment. She criticized this trend for placing speed and competitive advantage above safety and ethical reflection. Governments, particularly in the UK, have been singled out for inadequate action in analyzing how AI tools affect people's everyday lives across sectors.
Marcus called for evidence-based policymaking that goes beyond superficial regulation to deeply assess AI's social, economic, and legal impacts. Such evaluation is vital for protecting vulnerable populations and ensuring AI technologies benefit society as a whole rather than narrow interests.

With AI agents and digital assistants becoming more integral to daily life, Marcus highlighted urgent risks needing attention: possible mental health effects from interacting with AI, complex legal liability issues when AI-driven decisions cause harm, and growing market concentration in which dominant players control core AI infrastructure, limiting competition and innovation. She stressed that citizens must actively communicate their expectations to policymakers to shape AI regulation. Safeguarding public welfare amid rapidly evolving technology is ultimately the state's responsibility, requiring governments to prioritize people's rights, safety, and dignity in AI governance frameworks.

Marcus concluded by urging society to critically examine how AI technologies influence social structures and everyday realities. The essential question is whether the futures shaped by rapidly advancing AI align with shared human values such as fairness, transparency, and justice. Embedding these principles into AI development and deployment is necessary to harness AI's benefits while minimizing harm.

Under Marcus's leadership, the Ada Lovelace Institute continues working with policymakers, industry, and the public to promote transparent, inclusive, and ethical AI governance. As AI systems become increasingly pervasive, calls for robust regulation and thoughtful oversight grow stronger, reflecting widespread concern and hope for a future in which technology responsibly serves humanity.
Ada Lovelace Institute Calls for Stronger AI Regulation to Ensure Ethical and Safe Deployment
A version of this story appeared in CNN Business’ Nightcap newsletter.