A common misunderstanding of artificial intelligence is that it has its own source of intelligence. In reality, AI is software driven by a set of logical rules laid out in code.
Programmed by us and fed data that it analyzes via machine learning, AI draws deductions from these inputs. It processes information according to a prescribed set of rules, produces an output, and then takes actions based on those deductions.
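To make that concrete, here is a minimal sketch, with entirely hypothetical rules, inputs, and names rather than any real system, of what "a prescribed set of rules applied to data inputs" looks like in code:

```python
# A toy decision routine: fixed rules written by a programmer, applied to
# whatever data the system is given. Every "deduction" traces back to us.

def decide_braking(sensor_reading: dict) -> str:
    """Map a (hypothetical) sensor reading to an action using hand-written rules."""
    distance = sensor_reading["obstacle_distance_m"]
    if distance < 5:
        return "emergency_brake"
    if distance < 20:
        return "slow_down"
    return "maintain_speed"

print(decide_braking({"obstacle_distance_m": 3}))   # -> emergency_brake
print(decide_braking({"obstacle_distance_m": 50}))  # -> maintain_speed
```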
So, at the end of the day, we are responsible for the actions carried out by AI. Its code, after all, is a reflection of the way we think and reason.
But what happens when AI encounters a moral dilemma?
Let’s take a look at the age-old trolley dilemma, described by Philippa Foot in 1967:
“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:
1. Do nothing, and the trolley kills five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?”
When ethical dilemmas like this arise, what are our obligations when writing the code that has to deal with them? And who should make these decisions: the programmers, the companies that build the AI, the government, or our vote?
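To see how uncomfortable those questions get, here is a deliberately crude sketch, assuming one specific moral stance (minimize the body count) that a hypothetical programmer has chosen, of what hard-coding an answer to the trolley dilemma might look like:

```python
def pull_lever(deaths_if_nothing: int, deaths_if_diverted: int) -> bool:
    """Return True if the lever should be pulled.
    Encodes one specific moral stance: fewer deaths is always better."""
    return deaths_if_diverted < deaths_if_nothing

# Foot's setup: five people on the main track, one on the side track.
print(pull_lever(deaths_if_nothing=5, deaths_if_diverted=1))  # -> True
```

Written out like this the rule looks trivial, yet the programmer has silently answered the philosophical question, with pure utilitarian arithmetic, before the first line ever runs.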
The way we program these decisions offers a snapshot of our own value system, and as AI consumes more data, its actions will increasingly reflect those values.
You could say that this evolution of AI is similar to that of the character Data on Star Trek: The Next Generation. In early episodes, Data is very android-like, but he evolves over the course of the series by learning from the analysis of his experience. That’s what we do too as we go through life: we have built-in learning algorithms and are strongly shaped by our experience. So far, that hasn’t been replicated in code, and it will remain so until AI assumes a reasoning capacity beyond what is initially programmed into it.
As we face greater automation in society, what moral framework should guide us? Processing large amounts of data for machine learning means the most popular morals end up laying down the law, but is that always a good thing? Think of Microsoft’s experiment with the AI chatbot Tay, which, after only one day live on Twitter, began to make some, well, unfortunate remarks, ranging from racist and sexist to anti-Semitic.
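Tay’s actual architecture was never published in detail, so the following is only an illustrative sketch of the general failure mode: if a system simply echoes the most frequent pattern in whatever it is fed, then the data’s values become its values.

```python
from collections import Counter

def most_popular_reply(training_replies: list[str]) -> str:
    """Return whichever reply appears most often in the data the bot was fed."""
    return Counter(training_replies).most_common(1)[0][0]

# Hypothetical feeds: a polite majority...
print(most_popular_reply(["have a nice day"] * 6 + ["something awful"] * 2))
# ...versus a conversation that has been flooded with abuse.
print(most_popular_reply(["have a nice day"] * 2 + ["something awful"] * 6))
```

The code has no notion of a good or bad remark; it just mirrors the distribution of its inputs.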
All in all, it isn’t necessary for an AI to understand us. AI is a program, but an AI trained on human emotion could still provide some interesting insight into us. Can we solve the trolley dilemma once and for all?
Check out this short story, a modernized version of the trolley dilemma. Consider the variables in this situation: first, the pedestrians who have decided to illicitly cross the freeway; second, the speeding vehicles; and third, the speeding truck that does not have up-to-date AI software. At the end of the story, it is the driver of one of the speeding cars who is hit and killed by the speeding truck. Although the truck driver was at fault for the accident, the AI software deduced that it was better to risk the life of its own vehicle’s operator than to run over the pedestrians. A number of factors contribute to a morally complex situation here.

While we use our “gut feeling” to determine what is right in this situation, can we get to the bottom of our morals well enough to encode them accurately in software? MIT has hopped right to this task with the Moral Machine, a website where you can judge a variety of driverless-vehicle scenarios that involve ethical decision making.
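No real autonomous-driving stack publishes its logic in a form this simple, so treat the following as a sketch of the kind of trade-off the story describes, assuming a bare “fewest expected casualties” score and made-up numbers:

```python
def choose_maneuver(options: dict[str, int]) -> str:
    """Pick the maneuver with the lowest (hypothetical) expected-casualty estimate."""
    return min(options, key=options.get)

# The story's situation, roughly: staying the course runs over several
# pedestrians, swerving risks the car's own operator instead.
print(choose_maneuver({
    "stay_in_lane": 3,  # hit the pedestrians crossing the freeway
    "swerve": 1,        # risk the vehicle's operator
}))  # -> swerve
```

Tools like the Moral Machine exist precisely because people disagree about which numbers, and which weights, belong in a table like this.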
This brings us to yet another dilemma. Studies have shown that how we behave online often diverges from how we behave in person. Have you watched the movie Nerve? When everyone judging through the app decides that one person should harm another, it is not quite the same as when, all of a sudden, everyone is in one place chanting for someone to take another’s life. I won’t say what happens, but it’s an interesting story that explores the social implications of technology, and our behavior online versus IRL (in real life).
The issue with all of the above is that we can’t agree. Many similar dilemmas in society are settled by the legal system. As our technology, and specifically action-oriented AI, becomes more sophisticated, should the legal system rule on cases that determine how code handles ethical dilemmas like this one? Even Elon Musk agrees that we need a regulatory body to oversee the development of AI.
We often criticize technology for its potential negative effect on us, but we must remember that we are the ones who create it. What does our technology say about humanity, and what is your take on the trolley dilemma?
By @scifiannemarie