Nations Gather at UN to Tackle AI Weapons as Global Regulation Struggles to Keep Up
May 13, 2025

GENEVA / NEW YORK — Representatives from countries around the world are convening at the United Nations this week to reignite discussions around the regulation of AI-powered autonomous weapons, amid growing concern that the development of such lethal technology is outpacing international oversight.

The talks come at a pivotal moment. From Ukraine to Gaza, AI-assisted and fully autonomous weapons systems are increasingly deployed on the battlefield. As global military spending continues to rise, so does investment in next-generation war technologies, raising alarms among human rights groups and arms control advocates.

Despite the urgency, progress in establishing clear and binding international laws has been painfully slow. Since 2014, signatories to the Convention on Certain Conventional Weapons (CCW) have met in Geneva to consider regulating or even banning fully autonomous weapon systems that can operate without human intervention. Yet, more than a decade later, there is still no legally binding global framework.

UN Secretary-General António Guterres has called for concrete international rules to be in place by 2026. However, many nations remain divided over how — or even whether — such systems should be governed.

“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” said Alexander Kmentt, Director of Arms Control at Austria’s Foreign Ministry.

UN Hosts First-Ever Autonomous Weapons Meeting

Monday’s meeting at the UN General Assembly in New York marks the first official gathering solely dedicated to the issue of autonomous weapons. Although not legally binding, the discussions are seen as a major step toward raising diplomatic pressure on countries hesitant to accept international limitations.

While traditional arms control forums like the CCW have made limited progress, this week’s broader consultations aim to go further — tackling not only technical and military issues, but also the ethical, legal, and humanitarian implications of AI weapon systems. The agenda also includes the use of these weapons by non-state actors, which presents unique challenges.

Human rights organizations and campaign groups are urging nations to push forward toward a legally binding international treaty that prohibits or tightly regulates fully autonomous weapons.

“The technology is moving so fast. The idea that you wouldn’t want to rule out the delegation of life-or-death decisions to a machine seems extraordinary,” said Patrick Wilcken, Military, Security and Policing Researcher at Amnesty International.

Major Powers Hesitant on Binding Commitments

A key obstacle remains the position of several major military powers. While 164 nations supported a 2023 UN General Assembly resolution urging action on autonomous weapons, powerful states like the United States, Russia, China, and India continue to resist calls for a binding framework.

A spokesperson for the US Department of Defense stated, “We have not been convinced that existing law is insufficient,” suggesting that, in some cases, autonomous weapons may reduce civilian casualties compared to traditional weaponry.

India, Russia, and China did not respond to media inquiries, but their past positions indicate a preference for national-level guidelines or reliance on existing international humanitarian laws.

The Global Spread of AI Weaponry

In the absence of regulation, the use of autonomous weapons is spreading. Research from the Future of Life Institute indicates that nearly 200 types of AI-powered systems have been used in conflict zones across Ukraine, the Middle East, and Africa.

Russia, for instance, has deployed over 3,000 ‘Veter’ kamikaze drones in Ukraine — capable of autonomously locating and attacking targets. Ukraine, for its part, is employing semi-autonomous drones but declined to comment on their capabilities.

Israel has also come under scrutiny for deploying AI-based systems to select targets in Gaza. Israel’s diplomatic mission in Geneva maintains that all military technologies used are in compliance with international law.

Still, organizations such as Human Rights Watch argue that the legal and moral questions surrounding accountability remain unresolved. A recent HRW report warned that a failure to regulate these systems could trigger an AI-driven arms race and serious threats to human rights.

“We do not generally trust industries to self-regulate. There is no reason why defense or tech companies should be more worthy of trust,” said Laura Nolan of Stop Killer Robots, a coalition advocating for a global ban.

What’s Next?

This week’s UN discussions serve as a critical test of whether the international community can build consensus on one of the most pressing security and ethical issues of the modern era. The outcomes could shape the direction of future negotiations, including the next round of CCW talks scheduled for September.

For advocates, the message is clear: without urgent and unified action, the world risks crossing a dangerous threshold — one where machines may soon decide who lives and who dies on the battlefield.
