In This Article
Robot ethics is a branch of the ethics of technology that covers both AI ethics and robotic ethics. It is the part of the field concerned with the moral behavior of humans as they design, build, and use artificially intelligent systems. It should not be confused with the behavior of the robots themselves; a related field studies artificial moral agents (AMAs) and how to teach them moral behavior.
Narrowing down to robotic ethics: it asks whether robots pose a threat to humans and whether some uses of robots, such as killer robots in war, are problematic. Another concern is how robots should be designed so that they behave ethically, which is called machine ethics.
What is Robotic Ethics?
It is a sub-field of information technology with direct links to legal and socio-economic concerns. Researchers are now trying to tackle the ethical questions raised by robotic technology and to deploy it in society in a way that is harmless to the human race.
These issues are as old as robots themselves. Robotic ethics requires a team of experts from several disciplines who know the laws and regulations surrounding technological achievements in robotics and AI.
It draws on the ethical perspectives of robotics, computer science, artificial intelligence, theology, biology, physiology, philosophy, sociology, psychology, cognitive science, neuroscience, law, and industrial design.
The main question about whether ethics can be taught arises because we are not used to the idea of machines making ethical decisions. Let's look at a picture of what that might mean.
Teach Robots Ethics
Before commenting on this, imagine that your car arrives at your home exactly on schedule to take you to work. You sit in the back seat and take an electronic reading device from your briefcase to scan your papers.
There has never been even a minor problem on this journey before. But today something unusual and terrible happens: two children are playing on the road in front of you. There is no time to brake, and if the car swerves to the left it may hit an oncoming bike.
Neither outcome is good, but which is the least bad?
It sounds terrible to contemplate, but it can happen to a driverless car running autonomously down the road. Scenes like this will become common within the next decade or so, and you need to prepare yourself for that.
Many of the issues in self-driving will be not just technical, mechanical, or even electronic, but moral.
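One crude way to frame such a dilemma in software is as harm minimization over the available actions. The sketch below is my own illustration: the action names and harm scores are invented placeholders, and real autonomous-vehicle planners are far more complex and do not reduce ethics to a single number.

```python
# Hypothetical sketch: choosing the "least bad" action by expected harm.
# All names and numbers here are invented for illustration only.

def least_bad_action(actions):
    """Return the action with the lowest expected-harm score."""
    return min(actions, key=lambda a: a["expected_harm"])

actions = [
    {"name": "brake_straight", "expected_harm": 0.9},  # likely hits the children
    {"name": "swerve_left",    "expected_harm": 0.6},  # may hit the oncoming bike
]

print(least_bad_action(actions)["name"])  # prints: swerve_left
```

The hard ethical question, of course, is who assigns those harm scores and on what grounds; the code only mechanizes a choice once the values are given.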
Ask the question above of your colleagues and put their answers in the comment section, so that we can see how they would handle the situation.
Now expand the horizon from a small autonomous car to autonomous weapons. This is not science fiction: weapons already operate without being completely controlled by humans. Missiles exist that can change course in response to an enemy counter-attack.
Researchers believe there should be a program that prohibits autonomous weapons from deliberately killing civilians, even though teaching robots to distinguish enemy from friend is a hideously difficult task. An alternative approach to this problem is commonly known as "machine learning".
According to the philosopher Susan Anderson,
“The best way to teach robots ethics is first to program them with certain principles, such as ‘avoid suffering’ or ‘promote happiness’, and then have the machine learn from particular scenarios how to apply these principles to new situations.”
For instance, a robot needs to learn to distinguish between a surgeon holding a knife to save an injured person and an enemy combatant holding a knife to kill an opponent.
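The knife example can be caricatured in a few lines of code. This is my own toy sketch, not Anderson's actual system: scenarios are hand-made feature dictionaries, and a new case is labeled by its most similar labeled example, which is the "learn from particular scenarios" idea in miniature.

```python
# Toy "principles plus learning from scenarios" sketch (illustration only).
# Each training example pairs scenario features with a harmful-intent label.

TRAINING = [
    ({"holds_knife": 1, "is_medic": 1, "target_injured": 1}, False),  # surgeon
    ({"holds_knife": 1, "is_medic": 0, "target_injured": 0}, True),   # combatant
]

def similarity(a, b):
    """Count how many features two scenarios share."""
    return sum(1 for key in a if a.get(key) == b.get(key))

def judge(scenario):
    """Label a new scenario by its most similar training example."""
    _, label = max(TRAINING, key=lambda pair: similarity(pair[0], scenario))
    return label

# A surgeon with a knife matches the benign example most closely.
print(judge({"holds_knife": 1, "is_medic": 1, "target_injured": 1}))  # prints: False
```

Real systems would use far richer features and statistical models, but the structure is the same: generalize from judged cases to new ones.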
Problems in Ethical Learning
Robots and AI are built using machine learning, which is based on algorithms. The root cause of many ethical problems lies in machine learning itself, which throws up problems of its own.
The foremost problem is learning the wrong lessons. For instance, machines that learn language by mimicking human text have shown biased results: male and female names acquire different associations.
The machine may come to believe that a Joanna or a Fiona is less suited to be a scientist than a John or a Fred. We need to be alert to these biases and try to combat them.
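This kind of bias is often measured by comparing cosine similarities between word vectors. The sketch below uses invented two-dimensional vectors purely to show the mechanism; real embeddings are learned from large corpora and have hundreds of dimensions.

```python
import math

# Invented toy vectors illustrating embedding bias: a biased corpus has
# pulled "scientist" toward the male name's region of the space.
VECS = {
    "john":      [0.9, 0.1],
    "joanna":    [0.1, 0.9],
    "scientist": [0.8, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# The model associates "scientist" more strongly with "john" than "joanna".
print(cosine(VECS["john"], VECS["scientist"]) > cosine(VECS["joanna"], VECS["scientist"]))  # prints: True
```

Auditing these similarity gaps across many name/occupation pairs is one standard way such biases are detected before a system is deployed.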
Another fundamental challenge is that machines evolve through a learning process, so we may be unable to predict how they will perform in the future. We may not even be able to understand how a machine reaches its decisions, which is an unsettling possibility, especially when the robot is making biased decisions.
A partial solution may be to insist that the code remain auditable for a long period, so it can be examined if things do go wrong. Handing everything over to robots without such oversight is a silly and unsatisfactory decision.
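Auditability in practice means recording every automated decision together with the inputs that produced it. Here is a minimal sketch of such a decision log; the function names and fields are my own invention for illustration.

```python
import time

# Illustrative audit trail: each automated decision is stored with its
# inputs and a timestamp so it can be reviewed after an incident.
AUDIT_LOG = []

def decide_and_log(inputs, decision):
    """Record a decision alongside the inputs that produced it."""
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "inputs": inputs,
        "decision": decision,
    })
    return decision

decide_and_log({"obstacle": "children", "speed_kmh": 40}, "swerve_left")
print(AUDIT_LOG[0]["decision"])  # prints: swerve_left
```

A real system would write such records to tamper-evident storage rather than an in-memory list, but the principle is the same: no decision without a reviewable trace.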
Now, rethink the question of the driverless car in a crisis. The most likely answer is that we need to program cars so that people survive if such a situation occurs. Ultimately, we can hope that our machines are programmed ethically.
Whether you like it or not, there will come a time when more and more decisions currently taken by humans are delegated to robots.