Search: robots

Chris mentioned earlier the NPR interview with Brookings Institution scholar PW Singer on his new book, Wired for War. I am naturally reading the book as we speak, but for those wanting a useful, clear, short take from Singer himself, check out the Winter 2009 issue of the Wilson Quarterly, and Singer’s cover article, “Robots at War: The New Battlefield.” (The whole article appears to be available at the link. Hooray! I’ll be commenting on the article in an invited set of letters that the WQ will publish in the...

...machine-applicable language. Is there anything different about it being a machine? Or is the problem of autonomous battlefield robots, as a matter of law and ethics, simply one of translation – of rendering how the ideal soldier would behave into terms a machine can apply? These are some of the questions I want to take up later this week about genuinely autonomous battlefield robots. Meanwhile, if you would like some further reading, some of the most fascinating work in the area of ethics and law applied to autonomous battlefield robots is being done by...

Over the years a few of us have written posts on issues concerning battlefield robots. (See, for example: 1, 2, 3, 4, 5.) Sometimes we had links to remarkable videos of quadruped robots stomping through forests. Those robots and videos were made by Boston Dynamics, a company that grew out of an MIT research group. Besides designing quadruped robots, Boston Dynamics gained further renown when, in 2013, it was acquired by Google as part of that company’s broad push into robotics. Just last month, one of Boston Dynamics’ new videos went viral;...

troops, and not especially concerned about collateral damage – concerns that might well have prevented NATO armies from deploying the robots at that generation of the technology. I suspect, in other words, that robots with autonomous firing capability will come to the battlefield sooner rather than later – but not necessarily in a form acceptable to the US and NATO on legal and ethical grounds, and not deployed by the US and NATO, either. Here, by the way, is a useful link sheet from a class on Robots and Society at Georgia Tech....

...them. The same drivers – technology, cost, safety, efficiency, and so on – that push for fire surveillance in the Sierra Nevada will be exactly the ones that drive the military to use the technology. One can call it an arms race, I suppose, but only if one imagines that it is all about military use; otherwise it is a misleading way of thinking about the technology. A better way to think about this is to go back to what makes robots robots. In general, there are three conceptual pieces: A...

...the US simply has the possibility of dropping bombs on the offending robots and, poof, the problem goes away. No special technological response needed. But I have a feeling it’s not that simple. There may be collateral damage issues in simply dropping a bigger bomb, or any kind of bomb. But the far more important worry is that autonomously firing weapons robots with impure ethical circuits might very well not be big, vehicle-sized battlefield robots – easily identified and destroyed on the battlefield. Rather, they might be whole swarms...

In my last post about battlefield robots, I quickly breezed through the prior ethical and legal questions the technology would have to work through before reaching the fundamental issue of autonomous battlefield robots – autonomy in decisionmaking in the use of weapons on the battlefield. Leaving aside the question of exactly how that can be achieved as a matter of actual programming (although, in fact, the ‘how’ is a primary question for me – too often, law professors and philosophers wave their hands at the practicalities, whereas the actual issues of translation...

El roam: Thanks for the post; just two reservations, with your permission. First, you insist on the gap between robots and human beings in exercising complex discretion (tactical and strategic), and indeed so!! Yet, if so, the very basic or fundamental definition of robots, as given in the post: "Autonomous weapons systems, or ‘killer robots’ as they are referred to by others, are sophisticated weapons systems that, once they have been activated, can select and attack targets without further human...

...the public’s concern about futuristic robots feeding on the human population, but that is not our mission,” stated Harry Schoell, Cyclone’s CEO. “We are focused on demonstrating that our engines can create usable, green power from plentiful, renewable plant matter. The commercial applications alone for this earth-friendly energy solution are enormous.” (emphasis in the original) I’m going out on a limb — a non-human one, of course — and claiming that this is the first time Article 15 has ever been mentioned in a press release about robots. Bonus IHL...

been hypothesized. I had previously referred to such robots using William Gibson’s term “slamhounds.” According to Danger Room, DARPA is now working to develop quadruped hunter robots, but is going for a different animal metaphor, calling the project “Cheetah.” Boston Dynamics, the developer of BigDog/AlphaDog, is running the Cheetah project. Adam Rawnsley of Danger Room writes: As the name implies, Cheetah is designed to be a four-legged robot with a flexible spine and articulated head (and potentially a tail) that runs faster than the fastest human. In addition to...

...be, in the words of a recent article from the Institute of Electrical and Electronics Engineers, “capable of selecting and engaging targets without human intervention.” However, while the Alrobot would not be autonomous, Defense One notes that it would also not be the first remotely-controlled battlefield weapon deployed in Iraq: Back in 2007, the U.S. Army deployed three armed ground robots called the Special Weapons Observation Reconnaissance Detection System, or SWORDS, from weapons maker Foster-Miller (now owned by Qinetiq). SWORDS basically consisted of a Foster-Miller TALON robot armed with...

isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.” The Directive seeks to “‘minimize the probability and consequences of failures’ in autonomous or semi-autonomous armed robots ‘that could lead to unintended engagements’, starting at the design stage.” Its solution – unlike HRW’s call for what its report terms an “absolute ban” – is based upon constant reviews of the military system (unintended effects on weapons systems might occur because of changes to non-weapons...