Feb. 5, 2025, 9:04 a.m.

Alphabet Lifts Ban on AI for Weapons and Surveillance: A Controversial Shift

Brief news summary

Alphabet, Google's parent company, has revised its policies to allow the use of artificial intelligence (AI) in military and surveillance contexts, a shift that has heightened concerns about the impact of these technologies. The company argues that the private sector and democratic governments should collaborate to strengthen national security in the AI domain. Experts, however, are increasingly worried about the consequences of AI in warfare, particularly the rise of autonomous weapons. In a recent blog post, executives James Manyika and Demis Hassabis called for a reassessment of the company's 2018 AI principles to keep pace with rapid technological advances. They stressed the importance of democratic leadership in shaping AI development to protect values such as freedom and human rights, and promoted collaboration with like-minded organizations so that AI fosters safety and economic prosperity. Nevertheless, debate continues over the governance and ethical implications of AI in military applications, especially the risks of autonomous systems making life-and-death decisions.

Google's parent company has abandoned a longstanding principle by lifting its ban on the use of artificial intelligence (AI) to develop weapons and surveillance technologies. Alphabet has revised its AI usage guidelines, removing a section that previously prohibited applications deemed "likely to cause harm." In a blog post, Google defended the change, arguing that businesses and democratic governments should collaborate on AI initiatives that "support national security."

Experts suggest that AI could be used extensively on the battlefield, though its application raises significant concerns, particularly around autonomous weapon systems. In the post, Google emphasized that democracies ought to lead AI development, guided by "core values" such as freedom, equality, and respect for human rights. "We believe that companies, governments, and organizations that share these values should collaborate to develop AI that safeguards individuals, fosters global prosperity, and reinforces national security," the post stated.

The post, authored by senior vice president James Manyika and Demis Hassabis, head of Google DeepMind, acknowledged that the original AI principles released in 2018 needed updating in light of technological advances. However, debate continues among AI experts and practitioners about how this powerful new technology should be regulated, how far commercial interests should shape its course, and how best to mitigate risks to humanity. The use of AI in military operations and surveillance tools remains especially controversial.

The greatest concerns center on the possibility of AI-controlled weapons capable of autonomous lethal action, with campaigners arguing that regulation is urgently needed. The Doomsday Clock, a symbol of how close humanity stands to destruction, cited this concern in its latest assessment of global dangers. "AI systems incorporated in military targeting have been deployed in Ukraine and the Middle East, with several countries advancing efforts to integrate AI into their armed forces," it noted. "Such initiatives raise critical questions about the degree to which machines will be permitted to make military decisions, including those that could result in large-scale fatalities," it continued. Catherine Connolly of the organization Stop Killer Robots echoed these concerns: "The significant financial investments being made into autonomous weapons and AI targeting systems are deeply troubling," she told The Guardian.


