Hollywood once portrayed computers launching nuclear missiles, but the true game-changer in combat is self-guided aircraft. Arms races escalate rapidly during wartime. Prior to Russia's invasion of Ukraine two years ago, there was intense debate over the ethics of using land mines and cluster munitions, and many states signed agreements against their use. In the pursuit of victory, however, governments often discard their reservations and embrace once-controversial technologies. This is evident in the ongoing war between Russia and Ukraine, where both sides are employing millions of unmanned aerial vehicles (UAVs) that rely heavily on artificial intelligence (AI) to guide surveillance and attacks. These drones range from simple civilian kits to advanced attack weapons such as the Iranian-built Shaheds, which Russia has deployed extensively against Ukraine.

As nations field ever-larger numbers of drones, human operators find it difficult to monitor them all effectively. The notion of granting computer algorithms control over lethal weaponry raises significant concerns: programming machines to decide when to fire, and at whom, could have devastating consequences for noncombatants. This dilemma should spur intense moral debate. In reality, the urgency of war often bypasses such debate. Both Ukraine and Russia are desperate to leverage AI for an advantage, and have set aside whatever reservations they may have had about the military use of artificial intelligence. Similar calculations are likely to unfold in future conflicts, including any potential conflict involving the United States and China. Before the Russian invasion, the Pentagon took pains to emphasize that humans would remain involved in the decision to deploy deadly weapons.
However, the growing role of AI drones on the Russian and Ukrainian fronts, coupled with advances in the accuracy and efficacy of these systems, suggests that military strategists worldwide will become accustomed to what was previously considered unthinkable.

Decades before AI found its way onto the battlefield, anxiety arose over its potential use in warfare. The 1983 film WarGames depicted humanity narrowly averting nuclear destruction orchestrated by AI. In the movie, the U.S. military, concerned about human hesitation in launching nuclear weapons, entrusted control of the strategic nuclear arsenal to a supercomputer named WOPR (War Operation Plan Response). The film's protagonist, a teenage computer hacker, unintentionally tricked the system into believing the U.S. was under attack, and only human intervention stopped the AI from initiating a retaliatory strike capable of annihilating all life on Earth.

The debate surrounding AI-controlled weapons followed a similar trajectory over the ensuing four decades. In February 2022, coinciding with Russia's full-scale invasion, the Bulletin of the Atomic Scientists published an article pondering the implications of granting AI control over nuclear weapons, raising concerns about catastrophic mistakes resulting from flawed programming or problematic data analysis. The true impact of AI, however, lies in enabling numerous small, conventionally armed systems, each independently guided by its own programming and operating without human direction. A prime example is the Russian "kamikaze" Lancet-3 drone, which poses a significant threat to Ukrainians due to its small size, maneuverability, and stealth. Despite costing approximately $35,000, the Lancet-3 can inflict substantial damage on battle tanks worth millions of dollars. The Wall Street Journal has highlighted Russia's incorporation of AI technology to enable the autonomous operation of Lancets.
This AI relies on Western technologies, circumventing sanctions with the assistance of outsiders. The drone's target-detection technology allows it to identify and attack Ukrainian weapon systems, such as the distinctive German-made Leopard battle tank. In essence, every Lancet drone carries its own onboard version of WOPR.

Ukraine, too, is fiercely competing in the AI race. Lieutenant General Ivan Gavrylyuk, the Ukrainian deputy defense minister, recently described efforts to incorporate AI systems into French-built Caesar self-propelled artillery pieces. The AI would expedite target identification and help select the optimal ammunition. Such time savings could make a decisive difference if Ukrainian artillery operators detect a Russian battery before being spotted themselves. AI-driven optimization can also significantly reduce ammunition consumption: Gavrylyuk estimated that AI could cut ammunition usage by 30 percent, a substantial benefit for a country facing depleted ammunition supplies due to congressional inaction in the United States.

The AI-powered weaponry used by Ukraine and Russia offers merely a glimpse of future battlefields. China and the United States, the world's leading military powers, are undoubtedly studying and learning from the ongoing conflict. The United States has openly discussed its ambitious Replicator project, a highly autonomous AI-driven initiative designed to counterbalance China's advantage in mass. The project envisions large numbers of autonomous vehicles and aerial drones accompanying American soldiers, assuming roles previously filled by humans. These AI-driven forces, potentially solar-powered for extended endurance, could conduct scouting missions, defend troops, deliver supplies, and, as Deputy Defense Secretary Kathleen Hicks has more discreetly acknowledged, attack enemy targets.
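The kind of onboard autonomy described above can be illustrated in purely conceptual terms with a toy target-selection loop. To be clear, nothing below reflects any actual Lancet or Caesar software; the class names, confidence scores, priority weights, and threshold are all invented for illustration, standing in for the output of an onboard computer-vision model.

```python
# Toy sketch of autonomous target prioritization.
# All class names, scores, and thresholds are hypothetical.

CONFIDENCE_THRESHOLD = 0.8  # ignore detections the model is unsure about

# Hypothetical value weights for recognized target classes
TARGET_PRIORITY = {
    "main_battle_tank": 3,
    "self_propelled_gun": 2,
    "supply_truck": 1,
}

def select_target(detections):
    """Pick the highest-priority detection above the confidence threshold.

    `detections` is a list of (class_name, confidence) tuples, standing in
    for the output of an onboard vision model. Returns the chosen class
    name, or None if nothing qualifies.
    """
    candidates = [
        (TARGET_PRIORITY[cls], conf, cls)
        for cls, conf in detections
        if cls in TARGET_PRIORITY and conf >= CONFIDENCE_THRESHOLD
    ]
    if not candidates:
        return None  # nothing worth engaging; keep scanning
    # Prefer higher-value classes; break ties by model confidence.
    candidates.sort(reverse=True)
    return candidates[0][2]

# Example: a confidently detected tank outranks an even more
# confidently detected truck.
print(select_target([("supply_truck", 0.95), ("main_battle_tank", 0.9)]))
# → main_battle_tank
```

The unsettling point the sketch makes concrete is how little code separates "detect" from "decide": once a priority table and a threshold exist, no human judgment remains in the loop.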
Hicks has expressed an astonishingly ambitious timeline, hoping that Replicator will be deployed in some capacity within two years. Projects like Replicator inevitably raise questions about the diminishing role humans will play in future combat. If both the United States and China amass thousands, or even millions, of AI-driven units capable of attacking, defending, scouting, and supplying, what role should human decision-making occupy in this new form of warfare? How will battles fought by swarms of drones affect human casualties? Such ethical quandaries abound, but they often take a backseat when conflict erupts, superseded by the relentless pursuit of military superiority.

Over time, the relentless progress of AI could significantly change how the most powerful armed forces equip themselves and deploy personnel. Human-piloted fixed-wing aircraft, for instance, may face an uncertain future as combat drones are remotely controlled by operators or operate autonomously. Manned aircraft have inherent limitations, including flight durations constrained by human endurance, the need for space to accommodate humans onboard, and complex life-support systems. In 2021, for example, a British company secured an $8.7 million contract merely for the explosive charges in pilot-ejector seats, excluding the cost of developing, installing, and maintaining the entire seat system, which likely runs into nine figures. Highly effective AI-guided drones, by contrast, are a relative bargain at $35,000. The fictional WOPR nearly triggered a nuclear war; real-life AI systems are becoming ever more affordable and effective. AI warfare is undoubtedly here to stay.