Wednesday, May 7, 2014

The emerging ethical considerations of robotic programming


Perhaps you've heard of the apocryphal ancient Chinese curse that goes: "May you live in interesting times." The idea is that, given the choice between interesting and boring, an ancient Chinese wise man would have picked boring in a millisecond and spent the rest of his days living happily ever after, watching trees grow. Here in the 21st century, we don't have that choice. We do live in interesting times. When the CEO of Amazon claims that in a few years he'll have robotic drones delivering packages to your home, you have to think...how interesting!


Wired magazine has an article strongly reminiscent of Isaac Asimov's fictional ideas of what robotics might be like in the future, and of the ethical conflicts that might develop when a binary machine is expected to participate in a poly-chromatic plenary world with infinite possibilities:
The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much-higher statistical odds that the biker without a helmet would die, and surely killing someone is one of the worst things auto manufacturers desperately want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible for not wearing a helmet, which is illegal in most U.S. states.
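
To make the quoted logic concrete, here is a minimal sketch in Python of the crash-optimization rule Lin describes. The Target structure and the survival numbers are my own inventions for illustration; no real autonomous-vehicle code is being quoted here.

from dataclasses import dataclass

@dataclass
class Target:
    """A hypothetical obstacle the car could swerve into."""
    description: str
    survival_probability: float  # illustrative estimate, 0.0 to 1.0

def choose_crash_target(targets):
    """Pure crash-optimization: swerve into whatever is most likely
    to survive the collision. This is the rule Lin criticizes; it
    systematically selects the helmeted rider."""
    return max(targets, key=lambda t: t.survival_probability)

# Invented numbers, purely for illustration.
riders = [
    Target("motorcyclist with helmet", survival_probability=0.85),
    Target("motorcyclist without helmet", survival_probability=0.40),
]
print(choose_crash_target(riders).description)  # -> motorcyclist with helmet

Note that a one-line max() is all it takes to encode the injustice Lin objects to, which is rather his point.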

Please suspend your disbelief for a minute and accept that, in the future, autonomous vehicles (i.e., robotically controlled vehicles) will be capable of fully interpreting traffic, up to and including who is and who isn't wearing a motorcycle helmet. Then accept the possibility of that one-in-a-million moment in which only two paths are possible and each path is blocked by a motorcyclist, and, in an ever more unlikely scenario, one has a helmet while the other does not. Finally, imagine that this one-in-a-billion scenario has been predicted by the robotic car programmers, and that they have decided in advance that the guy with the helmet is getting creamed. Were you able to suspend your disbelief sufficiently to accept all of the foregoing? Yeah, me neither. I think the technical term for this false dichotomy is the "constipated bear dilemma," popularly expressed thusly: what if a wild bear crapped in your bathtub?

It's my opinion that Wired magazine's Patrick Lin probably picked the wrong futuristic autonomous industry for his ethical thought experiments. While autonomous vehicles may one day face such an unlikely binary dilemma, it's far more likely that a dilemma like this will actually arise on some future battlefield, as a battle-droid faces a circumstance that its programmers may or may not have anticipated.

Drones and other remotely controlled vehicles are common today. It's not too big a stretch of the imagination to think it likely, or at least remotely possible, that autonomous war machinery (battle-droids) will one day roam future battlefields identifying fellow soldiers, both human and droid, ignoring non-combatant civilians, and eliminating enemies. So now, Mr. Lin, imagine an enemy combatant holding a non-combatant civilian hostage and using that civilian as a human shield. What can we believe the droid programmers will already have decided, thousands of miles away in distance and years away in time? One could really jump the shark at this point and pile on further possibilities: defects in materials and workmanship, breakage and wear-and-tear in droid sensory devices, computer viruses introduced by the enemy, inadvertent programming bugs, and so on.
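
For what it's worth, here is a hedged sketch, again in Python, of the sort of engagement logic such a battle-droid might run, with the human-shield case called out explicitly. The classifications, the confidence threshold, and the hold-fire default are all my own inventions, not any real weapons system's doctrine.

from enum import Enum

class Classification(Enum):
    FRIENDLY = "friendly"    # fellow soldier, human or droid
    CIVILIAN = "civilian"    # non-combatant: never a valid target
    COMBATANT = "combatant"  # enemy: nominally a valid target

def engagement_decision(target, confidence, shielded_by_civilian):
    """Decide whether to fire. The human-shield branch is the
    interesting one: the programmers must commit, years in advance
    and thousands of miles away, to what the droid does when an
    enemy hides behind a hostage."""
    if confidence < 0.99:
        # Worn sensors, battle damage, or enemy spoofing could all
        # degrade confidence; the only safe default is not to shoot.
        return "hold fire"
    if target in (Classification.FRIENDLY, Classification.CIVILIAN):
        return "hold fire"
    if shielded_by_civilian:
        # The pre-decided answer to the hostage scenario lives on
        # this line, whichever way the programmers decided it.
        return "hold fire"
    return "engage"

print(engagement_decision(Classification.COMBATANT, 0.995,
                          shielded_by_civilian=True))  # -> hold fire

Whatever string that last branch returns, somebody had to type it long before the hostage existed, and that is exactly the problem.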

The more I consider the idea of autonomous machinery pondering the choice to kill or not to kill, the less I like it.
