John Pike on “Stone-Cold Robot Killers on the Battlefield” in the Washington Post

John Pike, of the GlobalSecurity.org website, has a provocative op-ed in today’s Washington Post (January 4, 2009, B3) arguing that the evolution of battlefield robots might make robots the soldiers that do the killing on future battlefields:

Within a decade, the Army will field armed robots with intellects that possess, as H.G. Wells put it, “minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic.”

Let us dwell on “unsympathetic.” These killers will be utterly without remorse or pity when confronting the enemy. That’s something new. In 1947, military historian S.L.A. Marshall published “Men Against Fire,” which documented the fundamental difference between real soldiers and movie soldiers: Most real soldiers will not shoot at the enemy. Most won’t even discharge their weapons, and most of the rest do no more than spray bullets in the enemy’s general direction. These findings remain controversial, but the hundreds of thousands of bullets expended in Iraq for every enemy combatant killed suggests that it’s not too far off the mark.

Only a few troops, perhaps 1 percent, will actually direct aimed fire at the enemy with the intent to kill. These troops are treasured, and set apart, and called snipers.

Armed robots will all be snipers. Stone-cold killers, every one of them. They will aim with inhuman precision and fire without human hesitation. They will not need bonuses to enlist or housing for their families or expensive training ranges or retirement payments. Commanders will order them onto battlefields that would mean certain death for humans, knowing that the worst to come is a trip to the shop for repairs. The writing of condolence letters would become a lost art.

For a lot of reasons, I don’t think this is where the evolution of battlefield robots will go; you can check out some of my earlier posts on this blog. To start with, the “within a decade” prediction, at least as regards the robot snipers that Pike imagines, is simply not plausible. It will be longer than that before we reach the point, if we ever do, of genuinely autonomous sniper robots replacing humans as the trigger finger. In any case, so far as I can discern, if we ever get to autonomously weapons-firing robots, it will be after a period – one that might never end – of battlefield robots that replace human soldiers in the myriad non-weapons-firing roles, such as the delivery of supplies and ammunition, for the express purpose of reducing human exposure precisely to the people who do the shooting.

I’m not opposed in absolute principle to robots on the battlefield that might eventually make autonomous firing decisions.  It is a question of what the technology of the future is able to do and not do.  Just as we are trending toward medical diagnostic technology that might eventually make better and faster decisions about treatment options than human doctors, it is not impossible that we might eventually come up with technology that, exactly because it is not prey to human emotions, biases, stresses, etc., might do a better job in deciding to hit a target than a human would, at least on some suitable average run of cases.  It is easy to imagine a world of tomorrow in which it would be considered mad, malpractice, and simply culturally unacceptable that a doctor would not be directed in diagnostics by a far superior machine; it is easy to imagine a world of tomorrow in which people would look back and think it mad and bad that we ever trusted humans rather than emotionless, stressless robots to make firing decisions on the battlefield.  

Or maybe not. I don’t know what technology will be able to do; I’m not ruling the possibility out in advance, or ruling it out of bounds on some a priori principle that only a human can pull the trigger. It seems to me a long way before one reaches the actual threshold of such judgments, however. In the meantime, I think Pike is wrong in suggesting that the value of battlefield robots will be – or is understood by US military planners today to be – about stone-cold robot killers. It is, rather, today all about finding ways to replace everyone but the human shooters.

Is it so very hard to imagine a future, and a future technology, in which it was a war crime for the human, rather than the robot, to decide to fire the weapon?

Charles Gittings

What’s hard to imagine is how anyone could delude themselves that such monstrosities would be a good idea. It’s pure insanity.

Charles Gittings

As for Star Trek episodes, try this one:

“The Arsenal of Freedom”

TNG season 1, episode 21, April 11, 1988.

http://en.wikipedia.org/wiki/The_Arsenal_of_Freedom

bill todler

Has Mr. Pike ever actually fired a weapon? “Millions of rounds” suggests not. Firing at something and hitting it can be difficult enough on a range; try it when that “something” is firing back, there is dust and smoke, and your hands are trembling.
As for robot warriors… let’s hope Microsoft doesn’t write their operating system.

Shannon Love

“Men Against Fire” has been largely refuted. Marshall’s methodology was at best flawed and may have been actively fabricated. Most soldiers do indeed fight.

More to the point, we already have autonomous robots in the form of guided missiles. The real breakthrough will come when we let weapon systems choose their own targets. I don’t see that happening anytime soon, given the obvious difficulties. The moral responsibility for a robot’s actions attaches to the individuals who programmed the robot and chose its targets.

Cecil Turner

Interesting. Concur it’s not terribly close, nor do I think it’s a good idea (though for what I suspect are different reasons). Haldeman had a good treatment of the concept of war robots in Forever Peace, though the purpose of those “soldierboys” was to protect the operators, who retained control over firing decisions. The obvious drawback (touched on lightly) was that as the operators were not vulnerable, the carnage inflicted by the robots had less moral impact, and thus wasn’t as decisive.
The one place where autonomous engagement decisions might be feasible rather quickly is in the UCAV application, where ROE is a bit more cut and dried. I still don’t see a good reason to take the operator out of the loop, however, and suspect the implementation will lag far behind the physical capability.
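
To make the “operator in the loop” design concrete, here is a minimal sketch in Python of the arrangement Cecil Turner describes: the machine may nominate targets, but weapon release waits on explicit human consent. The function and names are hypothetical, not a description of any real UCAV interface.

```python
# A minimal sketch of "operator in the loop": the machine may nominate
# targets, but the firing decision stays with a human. Hypothetical names;
# not a description of any real system.
from typing import Callable

def engage(target_id: str, operator_confirms: Callable[[str], bool]) -> str:
    """Release a weapon only if the human operator explicitly confirms."""
    if operator_confirms(target_id):
        return f"weapon released on {target_id}"
    return f"engagement on {target_id} aborted: no operator consent"

# A trivial console confirmation hook standing in for the operator.
def console_operator(target_id: str) -> bool:
    return input(f"Release weapon on {target_id}? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    print(engage("track-042", console_operator))
```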

Eric Blair

Mr. Pike’s use of Marshall demonstrates his ignorance. As long ago as the 1980s, Marshall’s conclusions were suspect: http://warchronicle.com/us/combat_historians_wwii/marshallfire.htm

And, being in the US Army at the time, I remember this getting chewed over by officers and NCOs then, as one of the focuses of infantry training at the time was making sure that infantrymen knew how to acquire targets and shoot at and hit them with their weapons. From the videos I’ve seen of combat in Iraq and Afghanistan, it looks like soldiers are using their weapons just fine.

The whole thing fails, and not just for the reasons you assert, although those are good too.

cathyf

“Within a decade…”? Have these people noticed what the current state of the art is for video game AI opponents? And that’s in a virtual setup (no engineering challenges of physically navigating over terrain, etc.) where the laws of physics don’t necessarily apply.

E D Maner

Immunity corrupts…

Soon enough, we will see the first case of an operator of a remotely controlled war machine, engaged in ground combat “up close and personal”, who acquires the sort of psychopathic sociopath mindset one sees in some computer gamers, and brings it back with him into the real world.

Joe

I think Pike is looking for something to be alarmist about. We currently do not have robots deciding when to fire weapons, and I do not see them being given that job anytime soon.

And when they do come, they will not be snipers; they will be sentries. Their orders will be to stand a watch and shoot only at humans who aren’t authorized (all animals will likely be protected by EPA-mandated regulations). Because what robots really do well is wait, without sleeping.

Irving

Autonomous firing decisions have been around for a long time, and the legal/war crime ramifications have been evolving with them as well. People may be thinking “Terminator,” but land and naval mines have been making such autonomous decisions for over a century. Meet a defined decision trigger, and away it goes… whether that’s a simple trip wire or a complex magnetic/sonar pattern match, the decision to engage takes place… and with CAPTOR, it’ll even chase you down.
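
Irving’s point is easy to make concrete: a mine’s “decision” is nothing more than a fixed predicate over sensor inputs, written in advance by a human. A minimal sketch follows, with all sensor names and thresholds invented purely for illustration (this is not any real weapon’s logic):

```python
# A mine-style engagement trigger: the "autonomous decision" is a fixed
# predicate over sensor readings, set in advance by its designers.
# All sensor names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class SensorReading:
    magnetic_signature: float  # deviation from ambient field, normalized 0-1
    acoustic_level: float      # broadband noise level, normalized 0-1
    pressure_delta: float      # pressure change, normalized 0-1

def should_engage(r: SensorReading) -> bool:
    """True when the reading matches the programmed target profile.

    Meet the defined trigger, and away it goes; there is no judgment here,
    only the pattern its designers chose.
    """
    matches_profile = r.magnetic_signature > 0.8 and r.acoustic_level > 0.6
    confirmed = r.pressure_delta > 0.3
    return matches_profile and confirmed

# A contact matching the profile trips the trigger; anything else is ignored.
print(should_engage(SensorReading(0.9, 0.7, 0.5)))   # True
print(should_engage(SensorReading(0.2, 0.9, 0.9)))   # False
```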

Alex

Is it so hard to believe that if these robots were uber-precise snipers with amazing abilities, they would instead be programmed to shoot the hands off the enemy rather than mortally wounding them? Why do we assume that we will program these robots to treat killing as the ideal activity? If we had robots that could run into a gun battle and deal death, why not instead have them run into a gun battle with the primary intent of disarming the opponent?
In war, making the enemy surrender is always superior to defeating them in battle.

cthulhu

With all the argument over Rules Of Engagement during our last few military actions, I would imagine that there would be an increasing use of mechanized fighters to follow mechanical rules.

Tom Billings

In writing his article for impact, Dr. Pike has once again substituted a single focus for the complex levels of decision-making on a battlefield. It is poetic to say a battlefield is “kill or be killed”, or that snipers are all “stone-cold killers”, and that poesy sells an opinion. It says *nothing* about standardized Rules of Engagement (ROE), which would still be set by humans. In fact, already, the majority of combat decisions to be made are not in pulling a trigger against another human, but against or around some other obstacle to the mission objective. Humans on the other side are only *one* of those obstacles, albeit a crucial one. Getting *those* decisions made, by an agency capable of reading an instantly updated set of data points, will affect the battlefield far more than whatever ultimately pulls the trigger, at least under any probable rules of engagement. Already, we see industrial society’s culture imposing ROE on its militaries that would have been considered intolerable only 50 years ago. The latest contrast between industrial and agrarian cultures in this, around Gaza, is only one of many over the last 20 years. The IDF is expending huge amounts of…

Grimmy

“These findings remain controversial”

Then it’s not a “finding”, is it?

And when a controversial claim is the basis of your point, then you’ve got a really weak argument indeed.

“Most real soldiers will not shoot at the enemy. Most won’t even discharge their weapons, and most of the rest do no more than spray bullets in the enemy’s general direction” has got to be the stupidest thing I’ve read in a while.

Charles Gittings

Well, the giveaway is the last three paragraphs of Pike’s op-ed — he’s got visions of invincible US uber-menschen establishing a global hegemony to bring on the millennium dancing in his head.

People have been having such visions about things like machine guns, submarines, poison gas, dreadnoughts, airplanes, ballistic missiles, jet engines, atom bombs, and smart weapons — etc. — for a very long time. This guy is just one more nut in a large and very stale fruitcake.

In reality, anything that we devise can be countered and defeated short of a Dr. Strangelove doomsday scenario. The most dangerous weapon in existence is the human mind — everything else is just an accessory.

Cecil Turner

Ooooh, good point by Irving. (And, of course, there are some much-ballyhooed recent advances in automated defensive networks, like the one they were test-bedding at the border.) Looks like the next big step is offensive decision-making.
And SLAM can’t get any love here, but I’ll defend him a bit. On the first point, his famous ratio may not stand up to rigorous statistical analysis, but challenges like this one show his method was in fact fairly objective (and his claim, “It’s the best we can do,” is somewhat persuasive). My very limited personal experience suggests a surprising percentage of men faced with the usual fleeting opportunities to fire weapons tend not to do so (some take excessive cover, some cower, some fumble, some just miss the chance). I would never use that ratio myself; there are so many factors (length of engagement, weapons and terrain, suppressing fire, experience) that a single datapoint is close to meaningless, and there are technological and cultural changes one would expect to drive it somewhat higher. But it probably qualifies at least as what you’d call an “expert opinion.”

Mike Pierce

I found this line interesting because it seems that Pike has taken the worst nightmare scenarios of Hollywood and woven them into a yarn about FCS.

“These killers will be utterly without remorse or pity when confronting the enemy.”

That is very similar to the line uttered by Michael Biehn to Linda Hamilton when he tries to convince her that the Terminator will not stop “until you are dead”.

In fact, one can see strains of Gort and ED-209 here as well. Pike’s vision is, ostensibly, of the remorseless battlefield robot killer of movie fame, even as he draws a comparison between “real soldiers and movie soldiers”. Humans are no match for their accuracy or pitiless brutality. On the battlefield of the future, they reign supreme and give no quarter to hapless human enemies who get in their way.

One comes away from this piece thinking that Pike blurred the lines between real robots and movie robots as well.

In the end, is Pike making a statement for or against FCS? Is he suggesting that the building of battlefield robots needs to be halted in the present before it becomes the future, as in “Terminator 2: Judgment Day”?

Fox2!

Did Marshall’s pool include only combat veterans, or did it include all soldiers? Cooks and bakers and airplane maintainers generally don’t get the opportunity to engage the enemy.

Graham

I believe that Pike is being overly optimistic (some will dispute this word) on two points:

1. Timeframe. Outside of specialized uses, there is no way that we will see independent general-purpose robotic soldiers within a decade.

2. Such robots will offer certain advantages, but they are far from the panacea that Pike suggests. The robots will indeed significantly increase the US’s lead in conventional high-kinetic warfare, but, as is the case today, its enemies will avoid such warfare and engage in tactics where American firepower will be less relevant than its willingness to use it.

I have to disagree with some of the commenters as well; I’ll limit myself to the first one, Charles Gittings:
“What’s hard to imagine is how anyone could delude themselves that such monstrosities would be a good idea. It’s pure insanity.”

Despite Pike’s pulpish presentation, do try to rid yourself of the apocalyptic SF stereotypes. There are such things as on-off switches.

As to “pure insanity,” do you consider it more sane to send human soldiers to their deaths? Should pure sanity dictate our acceptance of Rwanda, Darfur, etc?

Charles Gittings

Graham, I started out programming mainframes in 1973. I know all about on-off switches, and even know how to RTFM. Did you want to instruct me on how to tie my shoes or blow my nose too? I also know a thing or two about rogue programmers, hackers, and corrupt / clueless corporate management. Gee whiz, an invincible army of super-robots — that would have made the last seven years under the Bush gang just wonderful, huh? The reality is that we’d have been better off with no army at all as long as idiotic criminals were running things: if they ever had to fight a real war, they’d LOSE, and badly. They raped Iraq for precisely the reason that it was a defenseless pushover, and then they completely fucked it up anyway, just because they were a bunch of clueless fools who didn’t have the faintest idea what they were doing other than getting off on murdering people. “The robots will indeed significantly increase the US’s lead in conventional high-kinetic warfare,” you say… Think that will last as long as our lead in manufacturing computers, airplanes, automobiles, or steel? There was never a horse couldn’t be rode, and never a…

trackback

[…] over at Opinio Juris, Kenneth Anderson asks: “Is it so very hard to imagine a future, and a future technology, in which it was a war […]

Fen

“Did you want to instruct me on how to tie my shoes or blow my nose too?”

Only when you veer from something you appear to know about [mainframes] into something you don’t [Iraq]. I do appreciate your ignorant BDS rant, though. Nice disqualifier.

Charles Gittings

Fen,

Shove it: the derangement is all yours.

M. Gross

Err… well, I arrived on the scene after civility had departed, but let me comment nonetheless.

I imagine, in a world of autonomous killing machines, war crimes will be limited to unlawful usage and/or mistakes in programming said machines. After all, unlike a human witness, a machine has a flawless memory and logs of all it has done, allowing one to completely reverse-engineer how it came to pull the trigger, after the fact if need be (see the sketch after this comment).

Machines won’t stop genocide, of course, because only sentient beings have the motivation to commit genocide.  Robots would be only one more tool to that end.

Robots would, however, reduce killings in the heat of passion on the battlefield.
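
M. Gross’s audit-trail point above amounts to an append-only decision log. Below is a minimal sketch of such a log with hash chaining, so that any later edit to an entry is detectable. The design and field names are invented for illustration; no fielded system is being described.

```python
# Sketch of an append-only engagement log with hash chaining, so that any
# later edit to a recorded entry is detectable. Invented design, for
# illustration only.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self) -> None:
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, event: dict) -> None:
        """Append an event, chaining it to the hash of the previous entry."""
        entry = {"ts": time.time(), "event": event, "prev": self._prev_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {"ts": e["ts"], "event": e["event"], "prev": e["prev"]}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"sensor": "optical", "classification": "armed", "decision": "hold"})
log.record({"sensor": "optical", "classification": "armed", "decision": "engage"})
print(log.verify())  # True; editing any recorded entry would make this False
```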

Charles Gittings

Oh, right, they’ll be able to do perfect reconstructions from the logs just like the White House is doing with all those emails that seem to have disappeared. Do you have any idea how much actual data you’re talking about for just a single robot?

I shudder to think. You might want to have a look at the report on the Columbia disaster NASA just released. No shortage of data there; the problem is making any sense of it. Only took them six years.

Graham

“Did you want to instruct me on how to tie my shoes or blow my nose too?”

No, but you obviously need help in learning how to construct coherent arguments.

Responding to two points I was able to glean from the spittle:

If you think that the current US government is composed of “idiotic criminals,” and is less to be trusted with such weapons than other powers, I would be glad to instruct you in how to partake in the pleasures of global travel.

And if your other argument is that technology doesn’t stand still (and that these weapons will thus not be the last word or insurmountable), I can but congratulate you on your awesome insight. Must come from that mainframe work.

Moving back to Pike, I find myself in agreement with Mike Pierce. It almost seems as if Pike is pulling a reverse “I come to bury Caesar.”

Another issue, which can be legitimately raised without hyperventilating, regards what safeguards (technical or political) can be put in place to remove the temptation of using these weapons against the domestic populace.

Charles Gittings

Graham,

You’ll have a lot less spittle to contend with when you stop drooling, and I’d be very surprised if you knew even half as much about world history as I do.

I’m not in doubt about Bush, Cheney, and their gang: I’ve spent the last seven years investigating their crimes. I’m not in doubt about the demented idiots, hypocrites, and fascists who support them either.

And your ignorance and/or dishonesty doesn’t change the facts.

M. Gross

“Oh, right, they’ll be able to do perfect reconstructions from the logs just like the White House is doing with all those emails that seem to have disappeared. Do you have any idea how much actual data you’re talking about for just a single robot?”

I’d much rather have too much data than too little. At least the robot has no motivation to lie, nor has its memory been clouded by the heat of the moment and the passage of time. Should the crime only come to light or come to trial years later, the evidence trail will be as fresh as the moment it occurred.

Charles Gittings

PS: I didn’t say anything about trusting the criminals of the Bush gang less than similar criminals of other nations. I stated an objective fact: that we would have been better off with no army at all — it would have limited their ability to do damage and waste money.

Charles Gittings

Well, sure, but the problem isn’t accumulating data, it’s analyzing it. I don’t know how much you know about current Army doctrine (and I don’t keep up with it that closely these days), but they gather plenty of data right now. Robots don’t have motivations; they embody the motivations of their designers and users.

Do you suppose such robots couldn’t be misused?

A soldier has a duty not to obey an unlawful order. It’s not something a soldier will do lightly, but it does happen now and then. A robot will just do what it’s instructed to do regardless.

Now you’ll say that there will be safeguards, etc., but there will also be software bugs, mechanical breakdowns, and human errors by users.

Would you trust your child to a robot car?

This stuff is just nutty. It’s mostly a scam for feeding welfare money to defense contractors, like so much of the military budget these days. We don’t need robot soldiers; what we need are political and military leaders who are more intelligent than a coin-toss.

False premises lead to false conclusions… GIGO.

Charles Gittings

Ken,

So thinking about the question I asked M. Gross…

“Would you trust your child to a robot car?”

Some other questions occurred to me:

Would you trust a computer to…

Grade a written law exam?
Render a judicial opinion on a matter of constitutional law?
Decide the verdict in a criminal case?

And I’d be very interested to hear some answers.

Graham

Charles:

You know, you’re almost amusing.

“I’m not in doubt about Bush, Cheney, and their gang: I’ve spent the last seven years investigating their crimes.”

Well, I rest my case with that statement of yours.

“I didn’t say anything about trusting the criminals of the Bush gang less than similar criminals of other nations. I stated an objective fact: that we would have been better off with no army at all — it would have limited their ability to do damage and waste money.”

In other words, you consider your government so criminal that you would leave your country defenceless to the criminal governments of other nations. Is that supposed to be a rebuttal of my characterization of your position, or a confirmation thereof?

“Would you trust your child to a robot car?”

Today, in real-world situations? Of course not. In several years? Very possibly so, and more so than to many human drivers. My first reaction to this suggestion is certainly not a histrionic “Insanity! Pure Insanity!”  If you take a break from your lengthy criminal investigations, you may find that technology has progressed since your mainframe days.

Charles Gittings

Graham, No — I consider the Bush gang so incompetent that it’s stupid to let them engage in warfare (or much of anything else), for the same reasons it’s stupid to let a five-year-old play with loaded firearms or drive a car. They haven’t defended us from anything; they’ve squandered enormous resources doing things that on balance have done nothing but make us weaker and more vulnerable. And one of the worst things they’ve done is systematically subvert our laws for the sake of committing crimes that we once executed Nazis for committing — indeed, crimes that our ancestors once executed Charles I for committing.

What you don’t get is that with demented gangsters like Bush and Cheney running things, we ARE defenseless for all practical purposes, because they’re incapable of making rational decisions. I’ve literally been investigating them for war crimes for seven years now, and it’s a literal fact that what motivated me to do that — starting on 2001.11.13, mind you — was that back in 1987 I saw 9/11 coming, and I’ve been working on the implications of a ‘war on terror’ ever since. Having people like Bush and Cheney in office was one of my…

Graham

Since you’re actually presenting your case slightly more constructively, I’ll actually try to engage you constructively. Re the robot car analogy, as I mentioned in my very first comment:

1. Not only are such weapons not currently feasible, it will be several good years until they are.

2. Even when they start being feasible, great thought will have to be given to enemy countermeasures, as both of us hinted at; e.g., enemy combatants embedded among civilians.

I assume both of us are in agreement up to here. Where we presumably disagree:

1. Just how long “several good years” might be. I believe that the original article is overly optimistic. From the impression you’ve given, I believe you’re overly pessimistic.

2. How useful non-perfect versions of these weapons would be.

3. The utility of currently funding research in these weapons.

And maybe:

4. How concerned we should be about the US government getting such weapons.

We can have good-faith disagreements about these issues, but you seem quite quick to attribute bad faith. Besides which, your first comment (“delude … monstrosities … pure insanity”) does not in itself reflect much deliberation and is not exactly conducive to reasonable discussion.

Re Bush…

Charles Gittings

Ah, well, after the last eight years it would be very difficult for me to suppose that anything about reality was especially obvious, Graham, not that I was confused about it eight years ago. The clearest thing about the world today is the general state of oblivion and denial.

Not an American, huh? Neither are Mr. Blair or Mr. Howard, both of them co-conspirators in the crimes of Mr. Bush. I really don’t mean to judge you, but when you toss rocks, all I can do is estimate the trajectory. As for the rest, I can only judge by what is said. Optimism and pessimism don’t even register: things are what they are.

I see plenty of applications for remote-control robots right now. I see absolutely no need for autonomous combat robots under any circumstance, and would strongly favor banning them entirely. There’s a lot of complexity behind that view, but the basics are pretty simple: warfare is obsolete, and we need to stop before we fight one war too many. Another aspect that’s fairly simple is that it would be a lot easier to stage a military coup with robots than with humans who have families, loyalties, and consciences…

Graham

Charles:

1. Re your criminal conspiracies: all I’ll say is that once a theory calls for a conspiracy to be too vast, I tend to bail.

2. Re the “military coup,” you’ll notice I’ve made a similar point, if slightly less excitedly. I welcome a discussion as to what safeguards (if any) might be possible.

3. “Optimism and pessimism don’t even register: things are what they are.” Things are indeed what they are, but what things might be like in the future requires some extrapolation, not to say speculation. As I said, we seem to have different opinions regarding the rate of technological progress in the future.

4. “Warfare is obsolete.” Hey, I’d love for that to be true. But until we reach that happy state, I want my side (which I generally consider to be that of the US) to have the best weapons. Of course, if you consider your government to be as untrustworthy as others, you might disagree. (Exercise for the reader: but not necessarily so.)

5. Re “remote control robots,” excellent point! (One which could have more profitably been made some 30-odd comments ago.) Remote-control (RC) robots (depending on your definition) are already in use…

Charles Gittings

Graham,

1. Facts are not theories — you bail on them at your peril. And I’m sorry, I have quite literally spent the last seven years of my life investigating the facts of the Bush administration’s CRIMES — I’m not in doubt about what I’ve said, and your ignorance, willful or otherwise, is your problem.

2. About the same as any computer system, and there’s no such thing as perfect security.

3. I’m just not in the habit of making assumptions unless I have to. Like I said, I started programming on mainframes in 1973. Do you seriously imagine that I’m not well aware of how rapidly our technology is advancing? I haven’t seen anything that would lead me to believe the overall rate is slowing down. I just understand that autonomous combat robots are 1) not practical just yet, and 2) an absolutely idiotic idea.

4. Well, it is true — I didn’t claim it was widely understood to be true, though it has in fact been understood since classical times. And you should rejoice in that case, because your side does have the best weapons and is in absolutely no danger whatever of not having the best weapons for…

Graham

Don’t worry, people, this will all be over soon. Charles:

1. It must be quite frustrating for you how so many of us sheeple have been somehow blinded to those truths you have uncovered.

2. The safeguards I’m talking about necessarily need to be at an entirely different level than computer security. Completely non-trivial. Pity we didn’t have a discussion on that.

3. “I just understand that autonomous combat robots are 1) not practical just yet, and 2) an absolutely idiotic idea.” It is now blindingly obvious that we agree on the first point (as should anyone remotely informed). As to your second point, are you claiming that such weapons ought not to be developed (not relevant to this particular point) or that they will never be technologically practical (in which case I respectfully submit you’re stuck in 1973)?

4. “Warfare is obsolete … I didn’t claim it was widely understood to be true, though it has in fact been understood since classical times.” The generally accepted definition of obsolete is “no longer in use.” I seem to recall a few wars taking place since classical times. “And you should rejoice in that case, because your side does have the…

Charles Gittings

Graham,

1) Like I said, your ignorance, willful or otherwise, is your problem. I don’t use words like “sheeple” and don’t even think like that. To me, your posturing BS is in the same category as someone denying the Holocaust or Armenian genocide, and it’s not scoring you any points. FACTS are FACTS.

2) “The safeguards I’m talking about necessarily need to be at an entirely different level than computer security.” Well, they’re not, because security extends to every aspect of development, maintenance, and use. It’s your figment, not mine.

3) Oh, I don’t see any reason to suppose they aren’t technically feasible — that’s just a matter of time and money. Are you deaf? I keep asking what you think you need these things for — I see them as an uncomplicated waste of time and money. Was the Maginot Line technically feasible?? D’oh — and find a faint clue already.

4) Your petty quibbling is tiresome: obsolete, adjective, “1 a: no longer in use ***OR*** no longer useful” http://www.merriam-webster.com/dictionary/obsolete

4. “Since you seem to also be an expert on classical times, I assume you’re familiar with the expression ‘resting on one’s laurels.’” As a matter of fact, that’s how…

Graham

1. Better and better. Anyone who doesn’t think that the US is run by “idiotic criminals” is akin to Holocaust deniers. You’ve really mastered that Dale Carnegie course.

2. If you mean to include the Constitution as a subset of computer security, I’ll grant your point.

3. “I don’t see any reason to suppose they aren’t technically feasible — that’s just a matter of time and money. Are you deaf?” I must have been deafened by your robot car example (ia). Am I to understand that even if robot cars are technically feasible (i.e., they work), you still wouldn’t trust them evil robots? (Upon further thought, you have a point. They might be taken over by Halliburton to harm you for uncovering the truth about this criminal regime.)

4. Heh. I’m the one quibbling. Fine, war is “no longer useful.” Next time someone starts a war or a genocide, we’ll simply tell them that war is not useful. Boy, will they have egg on their faces.

5. “Any system or device has costs and benefits.” Except for the weapons we are considering, which you have been claiming have no benefits whatsoever.

6. (And your second #4; your mainframes must have…