When I arrived home yesterday, I found a copy of 1843 magazine on the table. It is The Economist's more relaxed sister publication. Although it mostly covers topics such as style, food and drink, it also covers technology. Don't judge a book by its cover? Well, in this case it was the cover that got my attention this week.
How do you teach an AI right from wrong? What is right and wrong in the first place? And what if the scenario changes? Morality poses very difficult questions even for us. So how can we possibly pass our values on to machines, of all things?
Just as in our own lives, it is impossible to come up with an answer for every possible scenario before it arrives. Laboriously pre-programming a response to every situation an ethical machine may encounter is, as the article suggests, no solution at all.
So what is the best way to do it? Many research institutions tackle this question, among them GoodAI, The Future of Life Institute, The Responsible Robotics Group and The Global Initiative on Ethical Autonomous Systems. I won't summarize each approach here, but I suggest you take a look. Even among them there is no single answer: there can be more than one way to get it right, and many ways to get it wrong.
Consider all the possible applications ethical robots may be put to. The article discusses warfare, giving rescuing soldiers and taking out targets as examples of morally challenging situations.
It even suggests giving the AIs a robot equivalent of 'guilt'. This would, of course, be a programmed response rather than a truly felt emotion: robots cannot feel, but we can. The decisions robots make in the future will affect us to an extent we probably cannot comprehend today.
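One way to picture "programmed guilt" is as a running penalty on past harmful choices. Here is a minimal, purely illustrative Python toy of that idea (the article does not describe any implementation; the class, actions and numbers below are all my own invented placeholders): an action that caused harm accumulates a guilt weight that makes it less attractive the next time around.

```python
# Toy sketch of "programmed guilt" (illustrative only, not from the article):
# each action carries a guilt penalty that grows when the action causes harm,
# so harmful choices become less attractive over time.

class GuiltyAgent:
    def __init__(self, actions):
        # guilt starts at zero for every available action
        self.guilt = {a: 0.0 for a in actions}

    def choose(self, expected_reward):
        # pick the action with the best reward after subtracting guilt
        return max(expected_reward,
                   key=lambda a: expected_reward[a] - self.guilt[a])

    def feedback(self, action, harm):
        # harm caused by an action increases its guilt penalty
        self.guilt[action] += harm


agent = GuiltyAgent(["rescue", "strike"])
rewards = {"rescue": 1.0, "strike": 1.5}

first = agent.choose(rewards)    # "strike" wins on raw reward alone
agent.feedback(first, harm=2.0)  # the strike turns out to cause harm
second = agent.choose(rewards)   # accumulated guilt now tips it to "rescue"
```

The point of the toy is only that guilt here is arithmetic, not feeling: the agent's behaviour changes, but nothing is felt, which is exactly the distinction the article draws.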
Thinking back to the article's example of warfare: there are internationally agreed laws that lay out rules of engagement, but what if they are broken not by a group or an individual, but by an autonomous AI that faces an ethical decision and chooses wrongly? Who is responsible? The programmers? The supervisors? The government? The nation?
Technology and programming are always vetted before being put on the market. It is time to vet a robot's capability to make ethical decisions too. Do you think we are ready to make this decision, and what other challenges do you see arising from turning ethical decisions over to AI? Let's discuss: email me via the contact page.