“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
— Isaac Asimov’s First Law of Robotics
CUSTOMER: “More Saurian brandy, you clinking, clanking, clattering collection of caligenous junk!”
BARTENDER: “N-n-negative, Doc-Doc-Doctor Smith [whirr, click]! You ha-ha-have reached your m-m-maximum alcoholic tolerance and must [fzzz] abstain from further ingestion of intoxicating b-b-beverages [sputter].”
Stupid robot bartenders. They think they’re so artificially intelligent.
Of course, you really have to blame ol’ Isaac Asimov if your robot bartender cuts you off. He’s the one who famously came up with the Three Laws of Robotics in his 1942 short story “Runaround.”
They were quickly adopted in all of his stories featuring robots and then generally became accepted throughout science fiction. However, the Three Laws raise certain ethical questions in the real world as artificial intelligence inches closer and closer to being a reality.
This whole first-do-no-harm concept … just how far does it go?
Could a robot bartender keep providing a human being with drinks even though the dumb meat bag might get behind the wheel of a car (or hovercraft) and quite possibly hurt himself or others?
That’s the question raised in the short film “A Robot Walks Into A Bar,” directed by Alex River.
The Verge reports that the movie deals with a robot who grapples with the unintended pain he causes to the people around him while being unable to stop the violence that human beings do to one another.
(See the “inaction” clause of the First Law.)
The film addresses some issues human beings themselves may soon have to grapple with. For instance, are Asimov’s laws workable when the emotional stakes are high?
The full film is available at futurestates.tv. The site also includes a collection of meditations on technology and the future.