www.bizpodia.com/2016/02/storytelling-may-be-secret-to-creating.html
By Aaron Krumins
One of the more disturbing trends in robotics is how often
researchers gloss over the moral complexities of AI by
suggesting that Isaac Asimov’s “Three Laws of Robotics”
will be sufficient to handle any ambiguities robots encounter. This
only demonstrates how unfamiliar many technologists are with
the depth of the issue, for as a closer reading of Asimov’s work
reveals, the Three Laws leave plenty of room for disastrous
outcomes.
There are encouraging signs, however, that at least some
other researchers are taking the problem seriously. Two figures leading
the charge in this direction are Mark Riedl and Brent Harrison of
Georgia Institute of Technology. They are pioneering a system called Quixote, by which an artificial intelligence learns “value alignment” by reading stories from different cultures.
This may seem a strange approach to the problem — until one
considers how regional questions of morality can be. Two
counties only a few miles apart in the United States may have vastly
different ways of perceiving moral and ethical issues.
Spanking your child, for instance, may be considered morally
unacceptable among an affluent urban population, whereas immigrant
or more rural communities may find this form of punishment
acceptable. This is not to cast aspersions in either direction,
but simply to point out that even among humans there is little
consensus on moral and ethical issues. How much more difficult
will it be for humans to agree on what constitutes ethical robot
behavior?
As it turns out, there may be a way, and it’s not so
different from how humans learn the values particular to the region they
grow up in: by telling stories to each other. The fables and legends
particular to a locale are frequently laced with moral directives that
provide clues to a growing child on what constitutes morally acceptable
behavior in their culture. The idea behind Quixote is that the same
principles can be applied to shaping robot behavior.
Quixote builds on Riedl’s prior research project, which demonstrated
that an artificial intelligence can be trained to identify a correct
sequence of actions by crowdsourcing story plots from the Internet. This
is not unlike the RoboWatch
project we reported on earlier in the year, which showed how an AI
could be trained to perform simple household tasks, such as making
a peanut butter sandwich, by watching YouTube videos. The
Quixote system takes this a step further, adding a reward signal that
reinforces certain behaviors and punishes others during a
trial-and-error learning process. In essence, the AI learns to imitate
the behavior of the “good guy” in the stories it reads from the Internet
and avoid the behavior perpetrated by the ignoble characters.
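The article does not describe Quixote's internals, but the reward-shaping idea it outlines can be sketched in a few lines of Python: a trial-and-error learner receives positive reward for actions that match the plot sequence the "good guy" follows in crowdsourced stories, and negative reward for deviating. Everything here — the plot, the action set, the function names — is an illustrative assumption, not Quixote's actual code or data.

```python
import random
from collections import defaultdict

# Hypothetical "plot" distilled from crowdsourced stories: the
# sequence of steps a socially sanctioned protagonist follows.
STORY_PLOT = ["enter_bank", "wait_in_line", "withdraw_money", "leave_bank"]
# The agent's full action set includes an "ignoble" shortcut.
ACTIONS = ["enter_bank", "wait_in_line", "withdraw_money",
           "rob_teller", "leave_bank"]

def story_reward(step, action):
    """Reward actions that match the story-derived plot at this
    step; punish deviations (e.g. robbing the teller)."""
    return 1.0 if action == STORY_PLOT[step] else -1.0

def train(episodes=2000, alpha=0.5, epsilon=0.2):
    """Simple tabular trial-and-error learning with an
    epsilon-greedy exploration policy."""
    q = defaultdict(float)  # (step, action) -> estimated value
    for _ in range(episodes):
        for step in range(len(STORY_PLOT)):
            if random.random() < epsilon:
                action = random.choice(ACTIONS)       # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(step, a)])
            r = story_reward(step, action)
            # Move the estimate toward the observed reward.
            q[(step, action)] += alpha * (r - q[(step, action)])
    return q

def greedy_policy(q):
    """The behavior the agent has learned to imitate."""
    return [max(ACTIONS, key=lambda a: q[(step, a)])
            for step in range(len(STORY_PLOT))]
```

After training, the greedy policy reproduces the story's plot sequence and avoids the negatively rewarded `rob_teller` action — a toy version of imitating the "good guy" rather than the ignoble character.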
However, the approach is not without its pitfalls. In some
parts of the world, for instance, female genital mutilation is
considered de rigueur, though the bulk of humanity regards such
practices as objectionable and would certainly not want robots
perpetuating them. In other words, in training robots on stories modeled
after our own cultural precedents, we may wind up giving new life to
behaviors that are better left to wither on the vine. Therefore,
formulating a robust and universal ethical system — usually the pastime
of ivory tower philosophers — is likely to take on
increased significance as we consider the possible repercussions of
endowing robots with the same porous and inconsistent moral norms
exemplified by our own regional populations.
Storytelling may be the secret to creating ethical artificial intelligence