Self-driving vehicles have to decide whether passengers or pedestrians are more important.

The growth of the technological revolution is unprecedented; our smartphones virtually assist us with basic wants like navigation, … We could almost say technology is a prosthetic: without it, our lives feel incomplete. Machines have already taken over the role of humans in many industries, often working more efficiently than we do. No wonder self-driving vehicles are now a thing. But there's a catch: how should a self-driving car react in an accident? This question is one of the many controversial issues that the rise of technology poses to us.


Related media: The Ethical Dilemma Of Self-Driving Cars


The Inevitable Fate

They say self-driving vehicles are safer than vehicles with human drivers. That may well be true, but it remains controversial. As vehicles controlled by Artificial Intelligence (AI) inch closer to our highways and streets, we'll probably hear more and more about them getting into road crashes; even the most experienced driver can't avoid every accident. So what if a self-driving vehicle finds itself in a situation where a deadly accident is inevitable? How should it make a decision that minimizes the damage? This inevitable scenario is one of the challenges that engineers face.

We guess you've heard of the trolley problem. It's one of the most hotly debated thought experiments in moral philosophy, and it addresses exactly this kind of issue. Here's how it goes:

Let’s say you’re a train engineer, and an out-of-control engine is barreling down the tracks. There are five people tied to the tracks right in its path, certain to be killed if the train hits them. The good news is that you’re standing by the lever that will send the train to a different track. The bad news? There’s one person tied to that track. Do you pull the lever? If you do, will you be guilty of murder?

There are tons of variations on the problem — variations where there are more people on the other track, variations where you know some or all of the victims, and even variations where the person you’d choose to sacrifice ends up being the villain who caused the whole mess in the first place.

This moral quandary was first made popular by the philosopher Philippa Foot in 1967. She probably conceived it as a thought experiment to shine a light on how we think about ethics, not as a dilemma anybody was likely to face in real life. But now, the invention of self-driving vehicles is turning this hypothetical scenario into a reality waiting to happen, and engineers face a challenging issue very much like the trolley problem.

Self-driving vehicles are very likely to find themselves in unavoidable situations just like the trolley problem, and they will have to come to a decision where, whichever option they pick, people may die. Now, instead of the trolley, imagine it's a self-driving vehicle that's about to hit five people in a crosswalk. It doesn't have time to brake, but it does have time to swerve into a barricade, killing its sole passenger. What should it do?
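One common way to frame such a choice is to compare the expected harm of each available maneuver and pick the smaller one. Here is a minimal, purely illustrative sketch of that idea; the Maneuver class, the probabilities, and the casualty counts are made-up assumptions, and this is not how any real vehicle or manufacturer actually decides:

```python
# Illustrative only: compare the expected number of casualties for each
# maneuver and pick the smaller one. All names and numbers are assumptions.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    p_collision: float  # estimated probability the maneuver ends in a collision
    casualties: int     # number of people harmed if that collision happens


def expected_harm(m: Maneuver) -> float:
    """Expected number of casualties for a maneuver."""
    return m.p_collision * m.casualties


def choose(maneuvers: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest expected harm."""
    return min(maneuvers, key=expected_harm)


# The crosswalk scenario above: stay and hit five people, or swerve and kill the passenger.
options = [
    Maneuver("stay in lane", p_collision=1.0, casualties=5),
    Maneuver("swerve into barricade", p_collision=1.0, casualties=1),
]

best = choose(options)
print(f"{best.name}: expected harm {expected_harm(best):.1f}")
```

Minimizing a single harm count is, of course, only one possible rule; the whole debate is about whether such a rule is acceptable at all, and who gets to weight its terms.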



The AI Practical Survey

In a study published by the Society for Risk Analysis, researchers tried to answer this question by posing a series of hypothetical scenarios to participants recruited through an online survey. Participants had to choose whether a vehicle should stay in its lane or swerve, in a situation where staying would endanger the life of a pedestrian on the street and swerving would threaten a bystander on the sidewalk. The scenarios varied in how certain a collision with either victim would be.

The pedestrian, threatened if the vehicle stayed in its lane, faced a 20 percent, 50 percent, or 80 percent chance of collision, whereas the bystander, threatened if the vehicle swerved, faced either a 50 percent chance or an unknown chance.

When the bystander had a 50 percent chance of being hit and the pedestrian's chance was only 20 percent, almost nobody said the vehicle should swerve, which makes sense: swerving would more than double the risk. When both the pedestrian and the bystander had a 50-50 chance of being hit, about 13 percent of participants said the vehicle should swerve to avoid the pedestrian. And when there was an 80 percent chance that the pedestrian would be hit, more than 60 percent of participants said the vehicle should swerve.
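These shifts line up with a simple risk comparison. The short sketch below (again purely illustrative, and not the study's own analysis) just compares the two collision probabilities for each of the three known-odds scenarios, since exactly one person is at risk on either path:

```python
# Illustrative only: compare stay-vs-swerve collision risk for the three
# survey scenarios with known odds. One person is at risk on each path,
# so the comparison reduces to the two probabilities themselves.
scenarios = [
    (0.2, 0.5),  # pedestrian risk if staying, bystander risk if swerving
    (0.5, 0.5),
    (0.8, 0.5),
]

for stay_risk, swerve_risk in scenarios:
    choice = "swerve" if swerve_risk < stay_risk else "stay"
    print(f"stay {stay_risk:.0%} vs swerve {swerve_risk:.0%} -> {choice}")
```

A pure risk minimizer would swerve only in the last case, which is roughly where the participants' answers tipped as well.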



Choosing Against The Odds

When participants had no idea of the chances of hitting the bystander by swerving, however, the picture was different. As you'd expect, the higher the odds of hitting the pedestrian, the more participants chose to swerve. In this case, the increase was basically a straight line, from about 15 percent saying "swerve" when the pedestrian faced a 20 percent chance of being hit to about 45 percent when the pedestrian's odds were 80 percent.

But there's a huge catch here: at the same 80 percent risk to the pedestrian, that 45 percent is well below the more than 60 percent who chose to swerve when the bystander's odds were known. This suggests that participants generally preferred the vehicle to stay in its lane when the risk to the bystander was unknown, even if the odds of the pedestrian being hit were quite high. Things got really interesting when the researchers posed the same scenario under the assumption that a human was driving, not a computer.

In that case, with an 80 percent chance of hitting the pedestrian and unknown odds of hitting the bystander, about 60 percent of participants said they would have swerved, despite not knowing the chances of hitting a bystander on the sidewalk.



Gambling With The Inevitable Fate

Of course, the Society for Risk Analysis isn't the only institution confronting this question. The Massachusetts Institute of Technology (MIT) has an online platform for probing such issues known as the Moral Machine. It gives anyone the chance to make a multitude of hard decisions, each guaranteed to leave at least one person dead. With this test, we're in full-on trolley problem territory, and your answers aren't just about the odds of hurting someone: they're about who you choose to hurt, how, and why.

To get the full experience, take the test for yourself. Here are a few of the truly brutal options you might face:

Let's say you're cruising in one of these self-driving vehicles with your spouse and your two children, and another family like yours happens to be crossing the street. You can choose for the vehicle to crash itself, killing you and your spouse, injuring your daughter, and leaving your son with an unknown fate; or you can choose for the vehicle to go through the intersection, killing the other set of parents and leaving both of their children with an unknown fate.

Or what if you have to choose between killing two adult passengers in the vehicle and killing three jaywalking pedestrians: one criminal, one innocent woman, and one baby? Ouch! Hold on, there's one more: is it better for your vehicle to run over five jaywalking babies or one law-abiding dog? Um, wow.

There’s really no way to top that. Just take the test and start feeling horrible about your decisions right now.


Read more facts like this one in your inbox. Sign up for our daily email here.

The Factionary is ever ready to provide you with more interesting content for your reading pleasure. If you’re amazed by our work, you can support us on Patreon by a donation fee of your choice. Thank you!

Written by: Nana Kwadwo, Thu, Feb 14, 2019.
