Search: battlefield robots

...development and use of AI where its potential benefits can outweigh its risks. No technology, including weapons and weapons systems, is infallible. The approach to accountability for unwanted outcomes should be no different than with any other means or method of warfare. Hyperbolic calls to ban “Killer Robots” specifically, or militarized AI generally, have gained little to no traction among States. This should come as no surprise given the potential AI offers to exponentially increase the speed, efficiency, and accuracy of operations and reduce the inherent and infamous fog of...

...military advantage through weapons research. Instead of green technology, the competitionists invent killer robots. Instead of cyber security, their researchers find new ways to destroy through computer applications. China builds coal-fired power plants to pay for a navy bigger than the U.S.’s. The contrast with the non-polar world could not be greater. The pandemic is leading to de-coupling from hegemonic rivalry. Figures like Putin, Trump, or Xi are not found among those committed to solidarity. And the solidarists are everywhere. Their names reach headlines without stoking cults of personality, including,...

...to society and/or to innovation. (Uh, you know, issues having to do with African cyberpunk, DNA hacking and stuff like that. And don’t even let Ken Anderson (1, 2, 3, etc.) or me (1, 2, etc.) get started on robots…) So I was happy to see that the current issue of Scientific American looks at “The Future of Science: 50, 100, and 150 Years from Now.” Heady stuff. Ubiquitous computing, biotech, colonizing Mars, possibly even my long-awaited flying cars. But reading this with the cool eye of a lawyer (as...

...stronger in recent times is provided by artificial intelligence and lethal autonomous weapons systems. Many argue that, by removing human emotions such as anger or fear, killer robots would be more ready to engage in hostilities without the risk of making mistakes. The absence of any emotional experience would allow, according to this position, for an unbiased participation in hostilities. But this, of course, is far from clear, since the absence of combatant decision-making ends up de-humanising the conduct of hostilities and reducing war to a cold and inexpressive algorithm....

...policeman bides his time, and then, as she draws closer to him, he whispers to his dead wife—murdered by the occupiers—that he’ll see her soon. His thumb presses the detonator, and the ceremony is ripped apart, along with a sense of security and optimism for the occupying power. If this sounds like Iraq, it should. But it’s the season premiere of Battlestar: Galactica, the Sci Fi Channel’s acclaimed remake of the kitschy Star Trek also-ran. In its previous two seasons, Battlestar has hinted at war-on-terrorism overtones. The evil Cylon robots...

...White House site, which I haven’t included). It is more substantive than one might have anticipated – it discusses private space flight initiatives, the International Space Station and – naturally! – robots. Update: Response from the Air Force General Counsel’s Twitter feed (and I recommend both the Twitter feed (@AirForceGC) and the blog): Air Force GC @AirForceGC: “Still smarting from Death Star decision, but must admit weapons review would have been a bear.” Referring to US legal requirements for a review of the legality of all weapons systems, meeting the...

...…[the show] reduced humanity to its essentials’. This is evident in the show’s premise. BSG focuses on the 50,000 human survivors of a surprise genocide launched by intelligent robots, known as Cylons. These remaining humans are protected by a military ship, the eponymous Battlestar Galactica. The Moore/Eick reboot began with a mini-series which aired in 2003 and depicted the Cylon attack and humanity’s initial responses to it. Season One of BSG focuses on the flight of the civilian fleet, protected by the Galactica, from the pursuing Cylons (who are intent on...

...the robo-warriors, FP brings you the Top Ten Stories You Missed in 2007. Here’s the list itself; go to Foreign Policy for the full-text explanations:
1. The Cyber Wars Begin
2. U.S.-Mexico Border Fence Gets Cut in Half
3. Dear Osama: We’re Breaking Up
4. Waiting on the Iraqi Navy
5. The Cubans Are Coming
6. The American Heartland Grows Crops—With Human Proteins
7. Thai Junta Gives Itself A Raise
8. Dengue Fever Runs High
9. American Jews Turn Away From Israel
10. Armed Robots Take the Field in Iraq...

...(159) In sum, Boothby provides a highly interesting conceptual framework within which autonomous weapons systems should be regulated. His argument pushes contemporary debates forward by finding a plausible middle ground between those advocating a strict ban on ‘killer robots’, and those (few) who seem to have unlimited confidence in this type of technology. Nevertheless, I believe there is still more work to do before concluding that a human on the loop will make autonomous weapons systems safe enough to be used both in armed conflict and in...

...vague in their proposals for global governance and regulation. It risks undermining efforts to set binding legal norms on the development, testing, and use of AI in the military domain. What is Responsible AI in the Military? Many portrayals of (anthropomorphic) AI in science fiction feature humanoid robots and androids which are depicted as beings with their own conscience. This gives the wrong impression that AI applications can be legal subjects, and thus are legally, morally, and ethically responsible for their actions or thoughts. But as pointed out...

...is the link if you’re interested. Meanwhile, over at Lawfare, Human Rights Watch’s Tom Malinowski, Benjamin Wittes, Matthew Waxman, and I have been debating the recent HRW report calling for a ban on “Killer Robots.” Tom’s latest response – mostly a serious discussion, well worth reading, though I’m afraid it doesn’t finally manage to persuade me – has a video at the end that I will always, always fondly treasure. It’s great. (It’s in Hindi, and though I didn’t know Tom knew Hindi, I’m going to trust his subtitles.)...

...judges are akin to autonomous robots who mechanistically and abstractly apply inbred, dry legal principles to meticulously pruned fact patterns. To the contrary, good judging is an intensely human and dynamic experience. American Justice Oliver Wendell Holmes spoke of this eloquently last century and Judge Richard Posner has done so in this one. And, on a macro level, good judging requires growth of the entire judicial collective conscience. Being aware of what is going on in the wider world is certainly an integral part of that. And it is, I...