 
This article was originally featured in The Algorithm, our weekly AI newsletter. To receive stories like this directly in your inbox, sign up here.

After a refreshing week spent picking blueberries in a forest, I'm back and ready to dive into the messy ethical implications of AI in warfare. Arthur Holland Michel expertly examines the complex ethical questions raised by the military's increasing reliance on artificial intelligence tools. There are countless ways AI could fail catastrophically or be misused in conflict scenarios, and so far there are no clear rules. Holland Michel's piece illustrates how alarmingly little accountability there is when things go awry.

Last year, I wrote about how the war in Ukraine spurred a boom in defense AI startups. The current hype cycle has only intensified that trend, as companies, and now the military itself, race to incorporate generative AI into their products and services. The US Department of Defense recently announced it is setting up a Generative AI Task Force to integrate AI tools such as large language models across the department. The department sees enormous potential to improve intelligence, operational planning, and administrative processes.

But Holland Michel's article highlights why the first two use cases might be a bad idea. Generative AI tools such as language models are unreliable and often fabricate information. They also have grave security vulnerabilities, privacy problems, and deeply ingrained biases. Deploying these technologies in high-stakes settings could lead to deadly accidents in which it is unclear who should be held responsible.

While everyone acknowledges that humans should have the final say, the technology's unpredictable behavior complicates decision-making, especially in fast-moving conflict situations. Some experts worry that the people lowest in the hierarchy will pay the highest price when things go wrong. As Holland Michel writes: "In the event of an accident—regardless of whether the human was wrong, the computer was wrong, or they were wrong together—the person who made the 'decision' will absorb the blame and protect everyone else along the chain of command from the full impact of accountability."

Meanwhile, the companies supplying the AI technology may well escape consequences when failures occur in warfare. They are helped by loopholes: the US rules governing AI in warfare are merely recommendations, not legally binding, which makes it extremely difficult to hold anyone accountable. Even the EU's forthcoming AI Act, a sweeping regulation for high-risk AI systems, exempts military applications, even though these are arguably among the highest-risk use cases of all.

While the search for innovative applications of generative AI continues, I personally look forward to the day when it becomes mundane. As public interest in the technology wanes, companies may find that these tools are better suited to low-risk, routine applications than to solving humanity's most critical problems. Applying AI to productivity software such as Excel, email, and word processing may not be the most glamorous idea, but compared with warfare the stakes are low, and it could quietly make the tedious parts of our jobs faster and more effective. Boring AI is less likely to break and, most important, won't lead to loss of life.
Ideally, we will eventually forget that we are interacting with AI at all. (Remember when machine translation was a cutting-edge concept in AI? Now most people don't even think about the role it plays in powering Google Translate.) That is why I have more confidence that organizations like the Department of Defense will successfully integrate generative AI into their administrative and business processes. Boring AI lacks the moral complexity and unpredictability of its flashier counterparts. It may not be magical, but it works.
AI can't reliably decode human emotions, so why are regulators zeroing in on the technology? Despite all the buzz around ChatGPT, artificial general intelligence, and fears of job displacement, regulators in the EU and the US have been escalating their warnings against AI used for emotion recognition: attempts to identify a person's emotions or mental state by applying AI analysis to video, facial images, or audio recordings. So why is this a top concern? Western regulators are particularly wary of China's use of the technology and its potential for social control. There is also evidence that emotion recognition systems simply do not work reliably. Tate Ryan-Mosley dug into the thorny questions around the technology in last week's edition of The Technocrat, our newsletter on tech policy.

Meta is preparing to release free code-generating software. The open-source program, called Code Llama, builds on the LLaMA 2 language model's ability to generate programming code, and it poses a significant challenge to the proprietary code-generating programs offered by competitors such as OpenAI, Microsoft, and Google. Its launch is imminent, according to The Information.

OpenAI is testing GPT-4 for content moderation. Using the language model to moderate online content could help relieve the mental toll that the work takes on human moderators. OpenAI reports promising initial results, though the technology does not yet outperform highly trained humans. Many questions remain open, such as whether the tool can adapt to different cultures and pick up on context and nuance.

Google is developing an AI assistant that offers life advice. These generative AI tools could act as life coaches, offering suggestions, planning instructions, and tutoring tips.

Two prominent tech figures have left their jobs to build AI systems inspired by bees. Sakana, a newly founded AI research lab, draws inspiration from the animal kingdom. Its founders, both renowned industry researchers and former Googlers, plan to create multiple smaller AI models that work together; the idea is that a "swarm" of programs could be as powerful as a single large AI model.

Eric Schmidt, the former CEO of Google, argues that science is on the cusp of becoming much more thrilling, a transformation that will have a profound impact on all of us.

A proposed Modern Turing Test would assess the real-world capabilities of AI rather than superficial appearances, which makes it more revealing. And what better measure of capability is there than the ability to generate profits?

MIT researchers have developed a tool called PhotoGuard that subtly alters images in ways imperceptible to humans while preventing AI systems from tampering with them.