Thursday, April 29, 2010

Robots' Rules of Disorder

Robot Rules
Robots have the potential to act in the real world. Attorneys and legal scholars are now puzzling over how harmful actions of robots will be assigned liability, and particularly how robotic maneuvers will fit into traditional legal concepts of responsibility and agency.

One possible avenue would be to view the robot as an agent of its owner, who would be presumed liable for the robot’s actions, but Ryan Calo, a fellow at the Stanford Law School’s Center for Internet and Society, says it’s not so simple.

“Let’s say you rent a robot from the hospital to take care of Granny, and the neighborhood kids hack into the robot and it menaces Granny and she falls down the stairs. Who’s liable?” he asks. Possibilities include the hospital (which released it), the manufacturer (it’s easy to hack into), the neighborhood kids, or the consumer who failed to do something easy like update the software.

Damages are another puzzler. “Society tolerates Microsoft Word eating your thesis, but it won’t tolerate a robot running into somebody,” Calo says. “If you look at cases where computers have caused physical injury, then you could recover—for example, if the computer gave a cancer patient too much radiation.”

It was all so simple for Isaac Asimov, with his Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Of course, Asimov was a scientist before he was a science fiction writer.

Now the lawyers are involved. So the new Asimov Fourth Law will be Shakespearean: "The first thing we do, let's kill all the lawyers."

