Targeted killing

Gagner la guerre du futur : considérations juridiques et éthiques sur l'intelligence artificielle [Winning the war of the future: legal and ethical considerations on artificial intelligence]

Background on Lethal Autonomous Weapons Systems in the CCW

Information on and documents of the 2020 meetings of the GGE on LAWS will be available in due course on UNODA Meetings Place. In 2013, the CCW Meeting of States Parties decided that the Chairperson would convene an informal Meeting of Experts in 2014 to discuss questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS).

The first informal Meeting of Experts was held in 2014, chaired by Ambassador Simon-Michel of France. Two further informal Meetings of Experts were held in 2015 and 2016, chaired by Ambassador Michael Biontino of Germany. The final reports of the CCW informal Meetings of Experts on LAWS are available at the following links.

Systèmes d'armes létales autonomes, quelle est l'action de la France ? [Lethal autonomous weapons systems: what is France doing?] - Ministère de l'Europe et des Affaires étrangères

Aware of these issues, France has clearly stated that it will not develop LAWS, and has worked to ensure that the international community takes up the question.

On France's initiative, discussions on the subject have been under way at the United Nations since 2013, within the framework of the Convention on Certain Conventional Weapons (CCW). In that forum, France and its partners are promoting the adoption of a set of principles to govern the development and use of autonomous weapons systems. What is at stake with LAWS? Artificial intelligence technologies have a variety of military applications of real operational interest to the armed forces, for example in reconnaissance or decision support. Generally speaking, military applications of artificial intelligence are not intended to replace human command.

Artificial Intelligence Moving to Battlefield as Ethics Weighed

The Pentagon, taking the next big step of deploying artificial intelligence to aid troops and help select battlefield targets, must settle lingering ethical concerns about using the technology for waging war.

Microsoft chief Brad Smith says rise of killer robots is 'unstoppable'

The Explosive-Carrying Drones in Venezuela Won’t Be the Last

Bill would allow Connecticut police to put weapons on drones

HARTFORD, Conn. (AP) — Connecticut lawmakers are considering whether the state should become the first in the country to allow police to use drones outfitted with deadly weapons, a proposal immediately met with concern by civil rights and civil liberties advocates. The bill would ban the use of weaponized drones, but exempt police.

Autonomous Military Vehicles the Backbone of Next-Gen U.S. Might

Drone-ing out the Peace: Legality in International Law - iPleaders

Iraq Is Preparing an Armed Robot to Fight ISIS

Drones, the Mullah, and legal uncertainty: the law governing State defensive action

Are Robot Warriors Finally Coming to the Battlefield?

Despite the success of armed flying drones, their counterparts on the ground have never made it over the starting line.

The Pentagon's recent history is littered with failed efforts to field armed robots.

Joint Letter to President Obama on US Drone Strikes and Targeted Killings

The Honorable Barack Obama
President of the United States
White House
1600 Pennsylvania Ave., N.W.

Washington, D.C. 20500

Re: Shared Concerns Regarding U.S. Drone Strikes and Targeted Killings

Dear President Obama,

The undersigned human rights and civil rights groups write to convey a statement of shared concerns regarding U.S. targeted killing policy. They urge the United States to take essential steps to ensure meaningful transparency and legal compliance with regard to U.S. targeted killing policies and practices, particularly those outside the internationally recognized armed conflict in Afghanistan.

Sincerely,

American Civil Liberties Union; Amnesty International; Center for Human Rights & Global Justice, NYU School of Law; Center for Civilians in Conflict; Center for Constitutional Rights; Global Justice Clinic, NYU School of Law; Human Rights First; Human Rights Institute, Columbia Law School; Human Rights Watch; Open Society Foundations

Drones Kill More Civilians Than Pilots Do

If you’re a dedicated Wilsonian, the past quarter-century must have been pretty discouraging. Convinced liberal democracy was the only viable political formula for a globalizing world, the last three U.S. administrations embraced Wilsonian ideals and made democracy promotion a key element of U.S. foreign policy. For Bill Clinton, it was the “National Security Strategy of Engagement and Enlargement.” For George W.

Targeted Killing

Importing the War on Terror: Glenn Greenwald & Activist Trevor T.

The Drone Papers

We Can Now Build Autonomous Killing Machines. And That's a Very, Very Bad Idea

The Inevitabilities of Killer Robots

In October 2012, nine civil society organizations met in New York and agreed to work together to create the Campaign to Stop Killer Robots.

We should not dismiss the dangers of 'killer robots' so quickly

In an open letter I helped publish on July 28 – which has now been signed by more than 2,700 artificial intelligence (AI) and robotics researchers from around the world – we stated that “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”.

A few days later, the philosopher Jai Galliott challenged the idea of a ban, recommending that we welcome offensive autonomous weapons – often called “killer robots” – rather than prohibit them. I was pleased to read Jai’s recommendation, even though he calls the open letter I helped instigate “misguided” and “reckless”, and even though I disagree with him profoundly. This is a complex and multi-faceted problem, and his arguments are worth considering in detail because they bring several important issues into focus.

Four points

Robot Soldiers Are Coming!