I have just read a very interesting article from the BBC called “Ready for the robot revolution?”
The article is a great starting point for discussion in class, but two aspects in particular struck me as worth discussing.
The first involves Asimov's Three Laws of Robotics – a standard reference, and one I make use of in class.
Isaac Asimov outlined the 'Three Laws of Robotics' in his fiction featuring human-like robots. The rules were designed to protect people from harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The Laws are idealistic and interesting, and have been the cause of many heated discussions in class.
However, the article also presents a more recent ethical code:
“The UK’s Engineering and Physical Sciences Research Council, together with the Arts and Humanities Research Council, has drafted a set of ethical principles for robot design – which can be summarised as follows:
1. Robots should not be designed solely or primarily to kill or harm humans.
2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
3. Robots should be designed in ways that assure their safety and security.
4. Robots are artefacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
5. It should always be possible to find out who is legally responsible for a robot.”
The second aspect of interest for me is the potential conflicts we see. Robots have long been used in manufacturing. While initially this caused much concern, who now hears about it? Are we going to see the same concerns expressed about domestic and commercial robots, and will these too fade?