
The importance of ethics when dealing with autonomous vehicles

Autonomous vehicles (AVs), or self-driving cars, will not only have an impact on the world of motoring in coming years, but will also influence a myriad of industries going forward.

In fact, the global market is predicted to hit the $42 billion mark in 2025, and from there see a compound annual growth rate of 21 percent up until 2030.

As such, a lot of eyes are on the technology, including those of SAP.

“Not only will this disruption be immense, giving birth to new types of businesses, services and business models, but their introduction will trigger similarly immense ethical implications,” notes Rudeon Snell, Leonardo lead at SAP Africa.

Another important consideration when it comes to autonomous vehicles is traffic, and in particular accidents caused by cars. With the World Health Organisation (WHO) stating that 1.35 million people die worldwide each year as a result of road traffic accidents, any system that can reduce that number significantly will be welcomed.

The great debate

But, as with most solutions that hinge on artificial intelligence and machine learning, there is a heated debate around the ethics involved with self-driving cars.

“The benefits of autonomous vehicles are certainly clear – time saved, increased productivity, improved safety, continuous service availability – but the challenges remain and if not addressed, could potentially wreck the promise this technology holds,” says Snell.

“One of the key challenges for autonomous vehicles is centred around how they value human life. Who decides who lives and who dies in split-second decision-making? Who decides the value of one human life over another? And, more importantly, how is that decision calculated?” he ponders.

While many categorise being on the road as mundane, it still forces drivers to make life-and-death decisions on a daily basis, something that self-driving cars, and the technology powering them, will have to contend with as well.

“If an autonomous vehicle makes a mistake, it could directly lead to loss of life. In these scenarios, the question of who decides who lives and how that decision gets made, becomes very important,” stresses Snell.

Moral conundrum

Using the well-trodden example of either hitting a person stuck in the middle of the road or swerving and potentially hitting a group on the sidewalk, Snell asks who is responsible in such an event when a self-driving car is involved.

“Experienced human drivers have been programmed for years to deal with split-second decisions like these and they still don’t always get it right,” he adds.

As such, there’s a certain moral conundrum that self-driving car makers and owners need to tackle. If, for example, you take a utilitarian approach, as Snell describes, the greater good often wins out in the equation. But again, who makes that determination, especially when decisions need to be made in a split second?
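
To make that question concrete, here is a minimal, purely illustrative sketch in Python of what a naive utilitarian decision rule could look like. Every name, figure and outcome below is hypothetical rather than anything SAP or Snell has described; the point is simply that such a rule reduces the choice to an expected-harm calculation, so whoever sets the estimates effectively decides the outcome.

```python
# Hypothetical illustration of a naive utilitarian decision rule.
# None of these names, figures or outcomes come from SAP or Snell; they only
# show how "minimise expected harm" turns an ethical choice into arithmetic.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str            # the manoeuvre being considered
    people_at_risk: int         # how many people this manoeuvre endangers
    probability_of_harm: float  # estimated chance that harm actually occurs


def expected_harm(outcome: Outcome) -> float:
    """Strictly utilitarian score: the expected number of people harmed."""
    return outcome.people_at_risk * outcome.probability_of_harm


def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    """Pick whichever option minimises expected harm; nothing else counts."""
    return min(options, key=expected_harm)


if __name__ == "__main__":
    options = [
        Outcome("brake and stay in lane", people_at_risk=1, probability_of_harm=0.9),
        Outcome("swerve towards the sidewalk", people_at_risk=5, probability_of_harm=0.2),
    ]
    decision = choose_manoeuvre(options)
    # With these made-up numbers the car stays in lane (0.9 expected harm vs 1.0),
    # but nudge either estimate and the decision flips, which is exactly the
    # "who decides, and how is it calculated?" problem Snell raises.
    print(f"Chosen manoeuvre: {decision.description}")
```

Note that nothing in this calculation distinguishes a passenger from a pedestrian, or one person’s rights from another’s, which is precisely the tension Snell points to next.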

“How do utilitarian calculations that violate individual rights get reconciled? Why should my life be less important than five strangers I don’t know?” the SAP exec asks.

Snell suspects that it will come down to the consumer, and what they’re willing to adopt in these situations.

“Who decides what ethical guidelines AI in autonomous vehicles will follow? The harsh truth is that ultimately, we as consumers do,” he says.

“We do so by voting with our money. If 90% of autonomous vehicle sales are for units that prioritise our lives potentially at the expense of others, and only 10% of autonomous vehicles are sold following utilitarian principles, guess where the focus will be for future autonomous vehicles? Yes, there might be mitigating alternatives,” Snell points out.

Where the buck stops 

With so many moral implications for the systems we put in place for autonomous vehicles, can a resolution ever be found to ensure this potential-packed technology delivers on its promise?

Snell says such decisions fall to the technology providers.

“A key takeout is that policy makers do not govern advanced technology in the commercial world, ultimately technology providers do. Some consequences can be anticipated and are linked to the promises made on behalf of the technology, while some consequences unfortunately remain unforeseen,” explains Snell.

“My hope is that we strive not to be surprised by unintended consequences which could derail the promise that breakthrough technology offers, just because we didn’t take a moment to anticipate how we could deal with them,” he concludes.

With a widespread rollout of autonomous vehicles, and the services to support them, just around the corner, there has never been a more important time for technology providers and vendors to actively consider the decision-making behind their self-driving cars.

Like Snell, we hope they do so with painstaking thought.

[Image – CC 0 Pixabay]
