As killer robots are nominally no different from land mines, I could see support for banning them. My understanding, however, is that land mines (and other area-denial weapons) are still allowed if there is a way to definitively disable them at the end of hostilities. If my understanding is correct, and robot weapon designers can successfully counter with "but we can turn them off after we're done," then this effort won't go far.
At a conference over the weekend, in one of the couch discussions, someone suggested a 'nearness' limit: something like, you can't use deadly force unless you are within a 10-mile radius of that use. The goal being to outlaw more developed countries flying drones over less developed countries and picking off their citizens.
Can we please not pretend to miss the obvious truth? Here are the choices we've already made:
What is more important? "Respecting" religion or human rights [1]? Choice made: "respecting" religion (read: letting Muslims execute ex-Muslims, gays and ...).
What is more important? Peace with China or human rights (see Tibet, Xinjiang and ...)? Choice made: peace/trade with China.
What is more important? Communism(/socialism) or human rights? (Because in the case of Cuba and Venezuela they're opposing forces.) Choice made: communism is more important.
What is more important? Human rights or trade with Indonesia (a Muslim state that executes gays and has plans to execute ex-Muslims)? Choice made: trade with Indonesia.
Human rights are a dead letter. Why? Because we've chosen against them. The same goes for this principle.
I think I'm going to save this one. I think there is a T-shirt in there somewhere: something like "Get dressed or Cure Cancer?" on the front and "Damn, guess I'll have to Cure Cancer tomorrow." on the back :-)
Requiring a human to be in the loop in all circumstances is impractical. Communications can be disrupted. Autonomy is also a software issue: it's easy for a country to say they have humans in the loop; then, in a real war, it would be trivial to change that.
And "robots" are no different from any other weapon. Bullets and missiles can be aimed, but they don't discriminate, and they can end up (and often do) hitting civilians and unintended targets. Land mines don't discriminate at all. And what difference does it make if you cover an area with land mines or put an autonomous turret there to watch it instead? I'd argue the turret is better, since it has at least some ability to distinguish enemies from civilians and wildlife, and can be removed much more easily after the conflict is over.
> Requiring a human to be in the loop in all circumstances is impractical
At least theoretically, someone made the decision to kill and could be held accountable. In the case of a killer robot with AI that decided to kill mistakenly, who is responsible?
The person that ordered the drone to do what it did is accountable for it. Just like you are accountable for where bullets go when you fire a gun or who a landmine that you place kills.
Couldn't they just ban killer humans? That would prevent most war deaths. Shouldn't we part with this barbaric notion that killing someone is OK as long as the killer is in the army?
Totally: if such a ban were in place, that would certainly have prevented Al Qaeda from bombing and killing. It would also have prevented Afghan tribal conflicts and the Sunni-Shia battles. It certainly would have stopped Chechnya and the Serbo-Croatian war, and probably Kosovo as well.
Let's do that. Let's ban bad people. Let's make it international law that all disagreements must be settled via pillow fights with squadrons of 12-year-old girls. Of course the Muslim extremists would naturally lose, since they'd never let a 12-year-old girl out of the house
in the first place. So they'd have to send boys. And since proper war-fighting pillows are in short supply in places like Yemen, they'd have to use more readily available materials like diesel fuel and fertilizer, you know, just to be fair, given their logistical disadvantages in lacking both available 12-year-old girls and properly spec'd combat pillows.
So now the law-abiding rest of the world sends in the 12-year-old-girl combat pillow division (polyfill only, since feathers have been banned as cruel to birds), and they promptly get set on fire and raped by an enemy that obviously does not give a flying fuck about a group of cognac-sipping first-world diplomats who decide to ban killing when waging war.
Banning land mines didn't help much either. People will use whatever they please to kill each other. Biological and chemical weapons are not used not because they are banned but because they suck as military tools anyway, and nukes aren't really economical, all things considered, against anything but other nukes. Countries that treat their armies seriously (far fewer of those than 50 or 100 years ago) are sufficiently stockpiled with all the banned weaponry.
Personally I'm all for killer robots. I don't trust the humanity of an invading army any more than I trust autonomous war-machine software. At least a terminator won't kill my kids or rape my wife because it's bored.
So, here's the nasty undercurrent to all this, right?
Drone warfare (whether by land, sea, or air) is about using disposable machines to kill and injure human beings. Engineering dictates that we'll eventually optimize away the part of the control system that is slowest and most prone to failure: the people.
The discussion of "How can we keep people running the robots" is uninteresting, because the entire deck is stacked to guarantee that it will be rendered moot.
~
The real discussion--I posit--is somewhat darker and more chilling:
In order to field a drone army, you need capital. You need factories to build the devices, you need command and control infrastructure to deploy them, and you need bright minds to develop them. Drone warfare is difficult to conduct in any meaningful fashion as a third-world nation, or more importantly as a populace in rebellion.
To put it bluntly, the use of these engines of war is limited only to the rich kids, and there is no chance for appeal or mercy when you are identified as a target.
Think about that for a second.
The wealthy murderer who decides to unleash these does so without any skin in the game, without any chance of dealing with repercussions back home for lost sons and daughters, without any care whatsoever except for a line-item expense. Stubborn rebel holdout? Spin up more terminators the same way we spin up dynos to deal with spikes in load.
The teenage kid holding the rusted AK their parent just dropped, looking at the robot which just made them an orphan? No chance in hell that they'll be spared because they are obviously not a threat--they are a human wielding an automatic rifle, p = .975, execute.
~
This whole thing needs to become verboten, forbidden, the same way we nominally treat chemical and biological weapons.
If we support our .gov and .mil in the use of these weapons, we'll be doing everyone a disservice, and come the day we decide to rescind the support which backs those bastards, we'll find that they no longer need our support for they already have the drones and the capital to make their whims felt.
The flipside of this argument is that wars happen anyway. We might as well use the machines to save lives. (I don't necessarily believe this; I don't know what I believe, but this is the other side of the debate).
The prolonged wars in Iraq and Afghanistan happened despite the repercussions for lost sons and daughters. We're losing people right now, and hundreds of thousands of Iraqis lost their lives because we didn't have enough manpower (or weren't willing to take the risk) to enforce order. If we could spin up machines the way we spin up dynos, these problems would go away. No American soldiers would die. We'd spin up production and deployment to enforce near total order. We could reduce Iraqi deaths by orders of magnitude. The machines don't necessarily have to act like terminators. They could act like military police too.
This might be a scary future, no doubt. But it might be a bright one too. If we trust our values and trust that our elected representatives won't go on mindless killing sprees, the machines might be a blessing in disguise. (Also note that by the time we get machines with this capability, our government, social structure, entire economic model, and appreciation for civil liberties will have evolved quite far from where they are now too.)
> We might as well use the machines to save lives... No American soldiers would die. We'd spin up production and deployment to enforce near total order.
The mistakes made in Iraq which led to rebellion are complex, but they didn't have to do with manpower alone. Disbanding the army and most of the administration were the two largest mistakes IMO - those had nothing to do with US manpower, and they meant a huge supply of Iraqis with nothing to lose, military training, and access to huge arms dumps. They could equally well fight and subvert machines as they could people.
It's interesting, the society you propose for Iraq as a bright new future: total order = a totalitarian dictatorship imposed by force by a proconsul directing machines. Would you accept that for your own country? Would you rebel against it if a foreign army invaded on a pretext and occupied the country using robot warriors? We've already half-glimpsed that future with the use of drones in Afghanistan, Pakistan, and Yemen - and it isn't pretty, resulting in many more civilian deaths.
I agree machine soldiers are almost inevitable, but it will not be a good thing - it'll be a further step in abstracting away warfare so that those ordering killings a world away risk nothing but money (easily reproduced) and domestic reputation (easily managed).
I personally think it is a GOOD thing that we lose sons and brothers in wars. It makes everyone think twice before engaging in war. It also gives you an incentive to stop (and gives your citizens an incentive to pressure you to stop).
If these machines are allowed to be used, war would be a button-push away. And that is never a good thing.
Sure, ideally war should be ONLY between machines so no one on either side dies, but as the parent pointed out, this tech is for the rich kids. The poorer countries would suffer immensely in such bloodbaths.
It is true that this could be _one_ of the flip-side arguments in favor of using machines. However, as we probably know, it's a pretty weak one. I don't think any human beings should have somebody else's idea of a "brighter future" shoved down their throats, regardless of whether they agree with it. This is especially true when the people deciding this future are halfway around the globe and don't actually have to live it.
Remember "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety."
Your entire argument seems to be built around paranoia and hysteria and things you've seen in sci-fi movies.
We already have drones that kill and injure human beings. They're called missiles and torpedoes, and the third world has plenty of them too.
Also, there's this assumption that keeps getting made that humans will always make better decisions than machines. Personally, I think you're seriously overestimating the ability of scared, tired 18-year-olds to make quick, rational decisions under extreme pressure. Every single war ever is littered with examples of human beings making mistakes that end up with the deaths of allies or civilians; it's not even remotely rare.
And you know what, machines will make mistakes too, they'll kill their own side or civilians occasionally. But after they do that, their software can be improved and refined, over and over and over, with no limit. They will constantly and forever be improved.
A human soldier will not. There's only so much training you can give to someone in nine months before they're in combat.
And there's also the fact that machines are a helluva lot less likely to think it's funny to piss on a Koran or to go rape some of the locals.
> Your entire argument seems to be built around paranoia and hysteria and things you've seen in sci-fi movies.
I'm a mechanical engineer by training and a software engineer by profession--the amount of work to create a robot whose sole purpose is to open fire upon things matching the heat signature and rough shape of a person is well within the reach of even a small university research lab, much less a defense contractor.
There is no paranoia or hysteria here; the tools and methods have already been developed.
> They will constantly and forever be improved.
The nature of improvement will be technical ("Did it engage the designated target fast enough? Can we make it easier to manufacture? Can the precision of the weapon it carries be increased?"); there is no reason for the rules of engagement or anything else to improve, because that is an arbitrary decision--a decision placed, as I've suggested, in the hands of the rich and powerful, who are least likely to be empathetic with the intended targets.
> There's only so much training you can give to someone in nine months before they're in combat.
Quite so, and a nice part of that is that it is hard to beat basic empathy and humanity out of a person--the machine, by contrast, never had it to begin with.
The soldiers, though they may make mistakes, are more likely to be able to think critically and show compassion and mercy.
> And there's also the fact that machines are a helluva lot less likely to think it's funny to piss on a Koran or to go rape some of the locals.
No, they'll just kill them instead, as per orders.
EDIT:
> We already have drones that kill and injure human beings. They're called missiles and torpedoes, and the third world has plenty of them too.
Those are munitions, not drones. Most importantly, they are deployed by human beings and are controlled by same.
For certain definitions of improved. Military victory will always be considered more important than civilian survival, so I don't think you can expect machines to improve the moral calculus of war (which always tends toward total war).
Interesting point. The machine soldiers would solve the problem of rampant soldiers raping locals.
But if you look at drones: haven't they been around long enough? Their algorithms still seem to be faulty, considering all the civilian casualties they have caused.
Modern warfare is already vastly stacked against the poorer side. This has arguably always been the case but it is especially true in the present with billion dollar warships and million dollar jets, missiles, satellites, etc. Not to mention nuclear missiles.
Robots don't change anything. Your evil rich people can just as easily fight the rebels with human flown planes or human guided drones and paid mercenaries.
And please, enough with the "imagine a robot shooting a kid, we can't ever allow that to happen" hypothetical. Soldiers kill innocent people all the time, just by accident. Missiles miss, land mines get abandoned and kill innocent kids; warfare has never been tame. A robot would arguably be better: it might rarely if ever miss, it could have better vision than a human, with sufficient AI it could possibly be a better judge than even a human, and it never gets emotional, angry, tired, or insane.
This is the second time in this thread that you've brought up the shortcomings of current drones as evidence against the use of future robots. And it's the second time you've conflated human-operated drones, possessed of little intelligence or ability to learn, with hypothetical fully autonomous robots.
Would these robots behave better than humans or human-controlled drones? That remains to be seen. But the emphasized shortcomings of current aerial drones are about as relevant to future aerial robots as RC toys are to self-driving cars. (Much less future ground-based robots, which in uncertain situations might be able to choose a third option besides "shoot to kill" or "fly away.")
Those kids were killed by people radio-controlling bombs, not by autonomous drones.
On the other hand, how many civilians, including kids, have been killed by troops on the ground for motivations such as rape, boredom or robbery? Robots wouldn't have this motivation for useless/counterproductive violence, but some human soldiers unavoidably do.
The population could always make occupation costly by exploiting the weaknesses of the state-of-the-art armies of the time, hence partisans. Today the population can make it costly by exploiting today's armies' weaknesses, hence IEDs. Future populations can make it costly by exploiting future armies' weaknesses, which will probably involve a lot of hiding, striking from where robots can't go, and possibly cracking systems.
Speaking of which, has anyone here played Deus Ex? Rewiring big drones surrounded by large amounts of troops to reverse their friend/enemy detection was the most fun I had in months.
Not that this is funny in real life, but the game is great at exploring how fighting that includes supersuits and drones could work.
How exactly does a drone replace boots on the ground? Bipedal robots are, at the moment, fabulously expensive corporate toys; I doubt Asimo is going to replace a squaddie anytime soon.
Well, technically a message to village elders "We're watching you through airborne drones - if you'll give support or food to guerillas, we'll bomb your kids" would be somehow analogous to boots on the ground enforcing the same compliance.
There are numerous examples of simple tracked vehicles armed with weapons. It does not take much imagination to see how those combined with larger drones (tanks, humvees), artillery, or aircraft could be used.
Um, real warfighting isn't like anime, where cute chicks in tight-fitting catsuits fight with giant robots.
Tracked vehicles have the Dalek problem: how do you go inside a building or cave to engage the enemy? Watch Black Hawk Down. How do you replace a SEAL team or a Ranger brick with robots?
And no military tracked vehicle is simple. One of the main reasons MBTs still have 4-man crews is that they need the fleshy meatbags to keep the bugger running.
Here's really the root of the problem. Thousands of years ago, real warfighting consisted of men in centuries and phalanxes, piking each other to death--it wasn't sporting to, for example, ambush a legion as it marched through a forest and cut them down ("Quintili Vare, legiones redde!").
A century and a half ago, "real warfighting" involved massed troops lining up across from one another and firing in volleys while ignoring civilians--it wasn't sporting to, for example, raze towns and crops and destroy railroad tracks in order to demoralize the enemy.
Two decades ago, "real warfighting" involved massing mechanized infantry, achieving air superiority, and engaging regular forces until their ruler surrendered.
The problem is that this notion of "real warfighting" is constantly changing. Once upon a time, real warfighting didn't involve violating neutral national borders and airspace and dropping munitions on civilians without having clear proof of the target or its strategic value. That is clearly no longer the case, as we've seen with the drone strikes that have been carried out.
We can't ignore the probability that one day, "warfighting" will mean a text sent over IRC to an automated artillery battery to target a village and fire for effect, because the little killbots saw their one target disappear into a place with stairs.
Everything about warfare has always been stacked for the rich. Honestly, how could wealth disparity exist if it didn't gain the rich the power to defend their wealth against those who would want to take it?
Drone warfare, low-level warfare, and terrorism would be the exceptions. Right now the software for autonomous drone warfare is too difficult to write (hard to believe: hooking up an OpenCV face recognizer to a robot that can aim a rifle is not hard).
That will change. Not soon, it'll take maybe a decade. But it will change.
Oh, and ... dumb question ... who enforces any rules that are decided upon during war? Without that component we might as well legislate the ethics of solar flares ... Right now there are tons of countries in blatant violation of things like human rights and guilty of war crimes, and no one's getting prosecuted. Not just problem countries like Iran or North Korea: there's plenty in Africa, Saudi Arabia would be another obvious example, even South America. Only in very specific cases does anything seem to happen ... it seems to me more of a PR campaign than an actual attempt at enforcement.
Not that having zero practical use or effectiveness has ever stopped anyone from legislating, but why not legislate something equally fictional, but more fun? Let's legislate traffic lights for faster-than-light travel! Or let's demand something be done about fictional abuse of Indonesian babies! Won't somebody think of the children? Let's legislate that no one can award the "worst dressed sentient being in the universe" prize to the same character more than 6 times in either prose or poetry. Also, it seems prudent to preventively outlaw Vogon poetry. These seem like more productive endeavours.
I agree that "making laws" isn't necessarily a solution, but it lays the foundation for future debates. This topic would no doubt involve a ton of debate, and it would be VERY useful if it materialized into international law. "Why," you ask?
Well, because when someone violates it, there would be no question of whether it is right or wrong. We wouldn't have to go through the whole above debate all over again just to reach a consensus that it was evil. You could just cite the law and (hopefully, in an ideal world) prosecute.
PS: They still might be able to circumvent a ban on drone strikes by saying that, by the definition of war agreed upon in the first Geneva Convention, it isn't war, so this rule doesn't apply. I hope they think of this when making the law.
In line with the mention of the Geneva Convention in my other post, I just remembered something. I'm not sure how legally correct this is, but someone brought it up in a Model UN: government officials who sanction drone strikes can NOT be tried for war crimes because they aren't technically soldiers. If this is true, wtf? The rules seriously need major revision in light of modern tech.
UN govt officials who sanction drone strikes cannot be tried for war crimes because they're diplomats of a sovereign organisation. You have to understand what kinds of people these are (in general). They're part of powerful political families: spoiled rich kids with entitlement issues that would make all US congresscritters volunteer for public apologies. They don't agree on anything, least of all human rights (note that the UN human rights council is dominated by governments that don't respect human rights, which is why the only government to get dinged on human rights is the one government that actually attempts to respect human rights in the region). They would go to war if they saw any of their members threatened with so much as a citation. So no, they cannot be tried for war crimes.
They will, however, attack others every now and then.
So international law, and human rights, effectively don't exist. So of course since this is just used as a political tool, it can be cheated with when convenient. That's why they went so ridiculously hard after Serbia and let (for example) North Sudan/Egypt/Saudi Arabia/... continue their crimes without a peep.
Even when they do have money, it only works when enforcement works. So in order to get a conviction you need :
1) the political interests need to be aligned
2) some military needs to kidnap the sacrificial lamb* and this can't lead to an incident
3) the lamb can't have any good political structure left to support him
That's all international law is. Look at the money behind it and you will understand how it works; attempt to analyse it legally and it will remain a mystery. Look at the conflict between Russia and Saudi Arabia and you will see why they intervened 10 years after the Yugoslav wars, and why they only convicted Serbians. And take note that in this "genocide" of civilians, at first only young, armed males were killed. The whole thing was precipitated by an attack by said "victims," which was obviously part of an attempt at territorial conquest by said victims, and more than a few of the perpetrators died. Another army tried to intervene and couldn't, because it suddenly found itself under attack by the "victims" and the commander decided to withdraw (it's not that they couldn't intervene; they didn't, after finding themselves under attack by the victims). Both America and Europe took the side of the "victims." Understand why that last part happened, and who was on whose side, and you'll see where the convictions come from, and just who and what the ICJ is working for.
Note that I'm not making the case that all governments are corrupt (beyond a certain degree). Just that a toothless, powerless, jurisdiction-less "court" 100% dependent upon foreign interests cannot have a shred of integrity in practice.
But of course, don't let reality stop you from feeling entitled; in the US, and some parts of Europe, you can actually more or less enjoy these rights (keep in mind that in the EU, due to WWII, you don't have full freedom of political opinion: if you do something that's technically a human right but flies in the face of this policy, don't expect human rights arguments to save you from conviction. In Europe, freedom of ideology does not apply to Nazism and a few forms of communism/socialism). If you actually wish to test human rights policies, though ... I'd advise you NOT to leave the US.
* they're generally guilty, although proof tends to be lacking in the extreme, and people convicted are generally not the only guilty parties.
Even if people are manning the drones, you could have one person handling thousands of drones, and just being offered a long series of yes/no prompts ("Shoot here?"). As long as there is any way for a country to decide to deploy drones they will find a way to comply with the letter of whatever restrictions you put in place.
The drones are perhaps a good opportunity to look at the broader picture. Maybe we need to recognize that one country sending weapons and troops into another country on their own initiative is a fundamental violation of human rights and basic morality. The only way troops or weapons should be deployed is to stop greater harm from happening, and the only entity that can judge the greater harm is one that looks beyond national borders.
For me the simplest way to "clean up" war is to make all war illegal, and only allow troop and weapon deployments outside of a country's own national borders under the UN banner for peacekeeping purposes. This sidesteps the rich kids vs poor kids issue because there is no us vs them, there is only the global community and how it polices itself. Not that the UN is necessarily ready for such a thing, but even in its inadequate capacity it would be a whole lot better than the horrible inhumane mess we are in right now.
Yeah. Watch a pro playing StarCraft. Keyboard and mouse, selecting and directing, almost faster than the eye can follow. This group of terminators, attack here. That group, charge the bunker over there.
Modern army control systems are already a lot like StarCraft, even down to the progress bars that get shorter and change color as tank crew members get killed.
Tell me how well that ban on chemical and biological weapons is working out. Let's be honest about it: the art has to be advanced even in countries where engaging in such combat might be considered abhorrent, just to ensure that when the other guy does use it, it can be defended against.
Automation is coming whether we like it or not. If we end up with a world where large armies are no longer fielded and only robots exist on the battlefield, perhaps we can move to weapons that incapacitate instead of kill. We can only get there if we continue to push development. Regardless, strikes will become more surgical, which could reduce casualties among non-combatants. Figure it this way: if they become good enough, maybe the leaders will be the ones looking over THEIR shoulders.
> Drone warfare (whether by land, sea, or air) is about using disposable machines to kill and injure human beings
The disposable machines are built by human beings belonging to a nation (or entity) who are interested in self-preservation and protecting the nation. Of course profit doesn't hurt.
Think about this. Lately, the rich nations have been facing a shortage of men willing or able to risk death/injury. The populace lacks the stomach.
And the 'poorer' nations (ones with little to lose) have no qualms about bloodshed as long as they get what they want.
I see drones in warfare as inevitable. How autonomous will they be? I don't think anyone knows for sure at this point...
I find it insane that a human with an AK-47 still poses any threat in a warzone. I look forward to the time when a human with small arms has absolutely no value on the battlefield. War should be only for the richest kids.
Banning Killer Robots is a great idea. But I'm fairly certain they are built by Killer Humans. These are just a natural extension of a society that values warfare and hegemony over social justice and peace. Note the differences in robotics applications between countries relative to the number of wars they've recently engaged in.
Eventually, you can expend your supply of Killer Humans if you get too bloodthirsty. You can wage war to the point where the populace is no longer willing to support you--even the elaborate decoupling of war from daily life the US has accomplished was still not enough to prevent the slow tide of opinion from shifting.
Killer Machines, however, can be replaced as they are expended, and the plans setting them in motion proceed without issue. The populace never stops it--as long as there is money to buy material to make the things, they can be used.
Thus, the use of drones in warfare is not self-limiting the same way that the use of humans is.
Money and material become sort of scarce in times of war.
Also, if machines become advanced autonomous weapons, there is no point in targeting humans, since the only things that can harm your machines are enemy machines.
Everybody becomes civilian and killing anybody becomes a war crime. Yay.
Well, I suppose one side could publicly advertise their machines are programmed to first kill the enemy machines, and then the enemy humans, just to tactically raise the stakes a little bit when negotiating surrender conditions.
Crazed dictators exist, so robots make them more dangerous, not less. And robot builders are likely to get targeted by everybody. So no bloodless machine fights.
We probably should ban them at some point. However, personally I'd like to see an arms race for the next couple of decades. Massive military spending could fund the R&D needed to get us to commercial uses. Jet fighters and breaking the sound barrier were driven by the military, for example.
Having said that, I must say that I'm actually quite uncomfortable with machines deciding when to pull the trigger. Robocop is etched in my mind. Hope the remake keeps that scene.
Ah, yes, ban everything that you don't like. That would get rid of it for sure.
Chemical and biological weapons are banned? Yes.
Do we have them? Yes.
Will we use them to survive? FUCK YES.
So stop these meaningless "Geneva talks". "Killer robots" will be built and will be used.
Don't ban weapons. Ban wars. I don't see why 1st world countries would need them anyway.
This kind of cynicism is a strange thing. It sounds like a hardened realist talking, but in reality it's fairly naive.
The use of chemical and biological weapons has been greatly curtailed by the rules of war. So has "aggressive" war (war because we want your territory). So have various civilian-targeting tactics. They haven't been eliminated, but there is a lot less of them than there would be otherwise. It's not perfect: enforcement is erratic and case-by-case, and powerful countries are more exempt than weak ones.
Nuclear proliferation has been effectively slowed by the NPT. Again, not perfectly but better than nothing.
This is such a fascinating topic because the field of robotics outside of industrial applications is still so nascent. I'm not here to weigh in, just to drop off some useful tidbits for other people interested in this ethical gray area as well.
This theme of "terminators" and "killer robots" has been really prevalent in the field lately because of the DARPA Robotics Challenge[1], the latest in DARPA's Grand Challenges, which I'm currently participating in on one of the Track B teams. Many people see the break away from bomb-squad bots and factory-floor robotic arms into humanoids as a really scary thing, and the DRC seems to amplify that (if a robot can hold a sawzall, it can hold a rifle).
Just last month in Atlanta, the IEEE Humanoids conference took place. Dr. Ronald C. Arkin[2] gave a talk exactly on this topic from an ethics perspective, titled "How to NOT build a Terminator", and it was exceptional. It was a plenary talk and not a paper, so I can't really find any record of it to share. Unfortunate.
Tangentially, the lab that I work in (the robotics group at the Florida Institute for Human and Machine Cognition[3][4]) has employed and continues to employ a principle that we call "co-active design"[5] where we actively work to keep a human in the loop at all times; we're definitely not looking to build a "killer robot". It's an interesting design problem that overlaps a lot with UI and UX, popular topics here on HN.
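To make the "human in the loop at all times" idea concrete, here is a minimal sketch of what such a gate might look like in software. All names here are hypothetical illustrations, not IHMC's actual design: the autonomy layer can only *propose* actions, and nothing executes until an operator decision is recorded for it.

```python
# Hypothetical human-in-the-loop gate: the autonomy layer proposes,
# a human decides, and every decision is logged for audit.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Proposal:
    action: str     # what the robot wants to do
    rationale: str  # why the autonomy layer proposed it

class HumanInTheLoopGate:
    def __init__(self, approve: Callable[[Proposal], bool]):
        self._approve = approve           # stands in for a real operator UI prompt
        self.log: List[Tuple[str, bool]] = []  # audit trail of every decision

    def execute(self, proposal: Proposal,
                effector: Callable[[], str]) -> Optional[str]:
        decision = self._approve(proposal)
        self.log.append((proposal.action, decision))
        # The effector runs only on explicit approval; otherwise nothing happens.
        return effector() if decision else None

# Usage: this stub "operator" only ever approves navigation actions.
gate = HumanInTheLoopGate(approve=lambda p: p.action.startswith("navigate"))
ok = gate.execute(Proposal("navigate:waypoint-3", "path clear"), lambda: "moving")
blocked = gate.execute(Proposal("fire", "target acquired"), lambda: "fired")
# ok == "moving"; blocked is None, and both decisions are in gate.log
```

The design point is that the dangerous capability (the effector) is structurally unreachable without a logged human decision, rather than merely discouraged by policy.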
And lastly, a shameless plug for the field itself; a lot of people don't realize just how software-oriented robotics research (especially humanoids, where the fun problems are) is. A lot of people are stuck on it being a hardware endeavor. While it's true that a chunk of robotics falls in the mechanical engineering domain, there's plenty of room for hackers from tons of different disciplines to get involved. There are interesting people solving interesting problems, from the Open Source Robotics Foundation (the Willow Garage spin-off) to private groups like Boston Dynamics, yet so many people still see the field as a black box that only opens up for the hardware inclined. I could see a talented group with the right hacker mindset doing some really interesting stuff in robotics with the right impetus and execution.
But one thing needs to be clarified here: the fear over "killer robots" is that the capacity to make war could be placed largely on autopilot. The danger isn't so much a Terminator-style robotic uprising, which is so farfetched in today's world as to be sensationalistic (if not laughable). Rather, the danger -- a very real one, at that -- lies in an accidental war as the result of malfunctions or data-interpretation errors.
We have more to fear from dumb AI than from smart AI.
As it turns out, the US and the Soviet Union came perilously close to nuclear war on multiple occasions as the result of computer errors. (For an interesting look into the subject, I recommend Eric Schlosser's "Command and Control," a recently released history of nuclear weapons policy over the last 50 years.)
Banning autonomous or semi-autonomous tactical systems ("Terminators") seems misguided and impractical. These things will get built, and some of them are being built already. The jury is still out on the ethics of building them (Do they save lives by taking humans off the front lines? Or do they take lives by turning war into a sort of game?). But they're being built, and that genie's not going back in the bottle.
But banning fully autonomous strategic systems ("Skynet") seems more fruitful and worthwhile. Again, this is not because we expect "Skynet" to become self-aware and intentionally initiate a nuclear holocaust. It's because "Skynet" might misinterpret something and accidentally initiate a nuclear holocaust.
(I don't mean to dismiss the very real role of human error in this same domain. But fail-safes, in the form of humans having the final decision-making authority over warfare at a strategic level, make a lot of sense.)
I'm a layman here, and I'm way out of my depth in discussing the intricacies of war making AI. But the way I see it, we need to avoid a sort of dangerous middle ground here. Either we keep AI dumb and intentionally non-autonomous, or we make AI a heck of a lot smarter (therefore reducing the risk of fatal errors).
In a weird way, building Skynet might be less risky than getting halfway to Skynet. A relatively dumb AI with full autonomy is a frightening thing. An extremely smart AI with the same degree of autonomy is a little unnerving, perhaps, but it's significantly less dangerous in practical terms.
Agreed. The really interesting point of the "How NOT to build a Terminator" talk was not just that we have to avoid building evil machines, but that we have the potential to build machines with better ethics than human soldiers. Avoid atrocities committed by angry soldiers, avoid accidental wars started by twitchy trigger fingers, avoid intentional wars started by people who weren't authorized to start those wars, that kind of thing. The devil is always in the details, but a ban can only ensure that the people building these weapons are the ones that won't put in the proper effort to make them good.
A strategic AI system commanding human troops is reasonable; the deployment of tactical AI (more honestly dumb targeting and fire-control weapons systems) is not.
I have heard that the Gazebo simulator is used extensively for the DARPA challenge. Have you had much experience with it? Does it save you any time, and how practical/applicable is the simulation compared to the real world?
I had the opportunity to build my own robot in it, the furthest I was able to go was getting 2D navigation working based on some libraries in ROS using a laser scanner. Unfortunately I can't upload that video since my computer was too slow, but I made another one.
We were required to use ROS/Gazebo during the first phase of Track B, but we mostly lean on a simulation environment that was developed by one of our PI's years ago and is maintained/improved in house.
The robotics field has been in the "about to explode" phase for maybe something like 20 years... At any rate, when you get that hacker group together, count me in!
I think this campaign suffers from something of a branding problem in that "killer robots" theoretically includes human-controlled robots.
They need to be extremely up-front that what they're opposed to is autonomous battlefield robots. From the title/link, I assumed they opposed all battlefield robots (like the ridiculous anti-drone crowd), which is an absurd and extreme position.
Their actual position, of opposing autonomous robots, is a lot more sensible and should be their sole focus.
I don't see how it's absurd to be opposed to technology that makes violence more convenient for wealthy aggressors. One could argue that reducing the human costs of war would only bring more war.
That theory certainly played out after World War I. After seeing that ridiculously high cost of life, war was prevented for generations.. It wasn't until 200 years later that World War II happened.
Sorry, rounding error... I meant 20 years later, and World War II was far worse than WWI.
As far as 'wealthy aggressors', who do you mean? Certainly not the ultra-wealthy Serbs and Croats. You couldn't possibly mean the wealthy al-Qaeda groups or the fabulously wealthy Taliban groups. Or Hezbollah... The Ba'ath party was wealthy, so maybe you mean them?
Certainly you must be referring to those aggressors right?
Consider how much more information is now available about human costs, though. If your brother/father/son/friend went to war in the early 20th century and died, you probably got a photo and a posthumous medal. Now, Collateral Murder goes viral and gets millions of views.
I suppose an autonomous robot with good AI that requires an operator to log in over SSH and punch Y before every shot would qualify as non-autonomous.
Of course, monitoring each shot is very tedious, hence:

    UNITS="wastelayer merciless thunderdeath"
    UNIT_SIZE=4096
    for UNIT in $UNITS; do
        for I in $(seq "$UNIT_SIZE"); do
            # auto-confirm every shot for this unit
            yes | ssh "operator@${UNIT}${I}" epclient
        done
    done