Saturday 23 April 2011

Six ways to build robots that do humans no harm - FeiFan Lu

With the relentless march of technological progress, robots and other automated systems are getting ever smarter. At the same time, they are being given greater responsibilities: driving cars, helping with childcare, carrying weapons, and perhaps soon even pulling the trigger.
But should they be trusted to take on such tasks, and how can we be sure that they never take a decision that could cause unintended harm?
The latest contribution to the growing debate over the challenges posed by increasingly powerful and independent robots is the book Moral Machines: Teaching Robots Right from Wrong.
Authors Wendell Wallach, an ethicist at Yale University, and Colin Allen, a historian and philosopher of cognitive science at Indiana University, argue that we need to work out how to make robots into responsible and moral machines. It is just a matter of time, they say, until a computer or robot takes a decision that causes a human disaster.
So are there things we can do to minimise the risks? Wallach and Allen take a look at six strategies that could reduce the danger from our own high-tech creations.
  1. Keep them in low-risk situations
  2. Do not give them weapons
  3. Give them rules like Asimov's 'Three Laws of Robotics' (see the sketch after this list)
  4. Program robots with principles
  5. Educate robots like children
  6. Make machines master emotion
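To make strategies 3 and 4 a little more concrete, here is a minimal sketch of what priority-ordered rules in the spirit of Asimov's laws might look like in code. Everything in it — the Action type, the flag names, and the violations ordering — is a hypothetical illustration, not anything proposed by Wallach and Allen:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action with crude harm flags."""
    name: str
    harms_human: bool     # would this action injure a person?
    disobeys_order: bool  # does it contradict a human instruction?
    endangers_self: bool  # would the robot damage itself?

def violations(action: Action) -> tuple[bool, bool, bool]:
    """Rule violations ordered from highest to lowest priority.

    Tuples compare lexicographically and False sorts before True, so
    avoiding harm to a human always outweighs obedience, which in turn
    outweighs self-preservation -- the Asimov-style ordering.
    """
    return (action.harms_human, action.disobeys_order, action.endangers_self)

candidates = [
    Action("swerve into wall", harms_human=False,
           disobeys_order=True, endangers_self=True),
    Action("brake and hold course", harms_human=True,
           disobeys_order=False, endangers_self=False),
]

# Choose the action whose violations are least severe under the ordering.
best = min(candidates, key=violations)
print(best.name)  # -> "swerve into wall": self-damage beats harming a person
```

The lexicographic comparison means a lower-priority rule can only ever break ties between actions that do equally well on every higher-priority rule — which is also where the trouble starts, since real situations rarely reduce to clean boolean flags.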
