A driverless bus carries five adults to work when a child suddenly jumps into the road. Who should the vehicle protect? Should it brake hard and harm the passengers, or keep going and hurt the pedestrian? Who is more important?
The virtual assistant Alexa once put a child in a dangerous situation by instructing her to touch a coin to a live plug socket, which could have ended in an electric shock. Luckily the mother intervened and no harm was done, but robots have been involved in fatal accidents before.
Different ages, ethnicities, disabilities and needs. There is a lot to consider when creating algorithms for a smart machine.
A robot bar should “serve” a drink to anyone, regardless of gender. An investment bot should offer investment opportunities everywhere, not just in wealthier cities. These are two of the examples Estonians are working on, and both were listed among the world’s 100 best by the International Research Centre on Artificial Intelligence, operating under the auspices of UNESCO. Altogether, four Estonian artificial intelligence systems were selected.
Ethics played a role in the selection criteria, and it increasingly does wherever smart machines are concerned, because it is about who matters and who decides what is important. Our well-being and safety increasingly depend on how the robots around us are programmed.
That puts a lot of pressure on technology companies.
Robots are safer than humans
The Estonian company Auve Tech builds autonomous vehicles. For its team, ethical questions are constantly on the table.
At Auve Tech, ethics is primarily a question of trust. “Autonomous vehicles have to be, and are, safer than human drivers,” said Taavi Rõivas, a former Prime Minister of Estonia and now Chairman of Auve Tech.
It’s clearly not enough if only Auve Tech’s team agrees on that. The passengers have to be on board too, literally and figuratively speaking.
“If we are able to convincingly demonstrate how we make sure robots are more predictable than people, driverless transportation will be accepted. We can then start applying machine learning in a wider range of fields,” Rõivas emphasized.
That’s why, in his view, Estonia’s e-government is a success story. A lot of effort was put into explaining how the system works. Once people understood it, they were more open to using the online services. Rõivas aims to do the same in the field of autonomous vehicles.
“Humans make mistakes, but robots are programmed to follow all the traffic regulations down to every detail,” Rõivas pointed out.
He has a point. All kinds of human drivers, including minors, are allowed behind the wheel. People presume they understand each other and can predict each other’s actions, even though the statistics tell a different story: globally, someone dies in a traffic accident roughly every 24 seconds, and road crashes are among the ten most common causes of death for people aged 5 to 55. Yet people don’t object to human drivers.
To build awareness, Auve Tech shows its passengers how the driverless bus works. Of the three screens inside the bus, at least one shows how the vehicle “views” the world. The operator, who often rides along, explains how the lidars, cameras and the operating system function, illustrating how the bus can see and grasp more than any human driver ever could.
Transparency and openness have been part of Auve Tech’s policy from the very beginning, when it grew out of a cooperation between Tallinn University of Technology and the private company Silberauto. Auve Tech continues to work with experts and scientists around the world.
Their buses drive on the streets of Tallinn and many other European cities at up to 25 km/h, and a human operator can take over if a problem arises, so serious damage is avoided. As the field develops, the buses will drive faster, Rõivas believes. Spreading knowledge and bouncing ideas around are crucial first steps in his view: “We don’t want to impose anything. Society needs to be ready for it.”
Building on users’ feedback
If you do prefer a human driver, calling a cab with the mobility company Bolt is simple. A push of a button in the app and, like magic, a car turns up. But even here there is no escape from robots: it’s not a human but a machine that decides which car the passenger ends up in. All we can do is hope the algorithms pick the best option for our ride.
At Bolt, many processes are automated by teams of engineers, data scientists and product managers. In some markets with high crime rates, for example, machines scan incoming orders and flag those that could be more dangerous based on, among other factors, the time of day and the district. If the risk is higher, the user’s identity is verified before the ride is confirmed, to protect the Bolt driver.
Even though every location has its own “human” team to weigh the local context and threat levels, they all rely on Bolt’s data analytics and machine learning teams to improve the processes. Operating in over 400 cities, people alone could never handle that much data!
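Bolt has not published how its risk flagging works, but the kind of check described above can be sketched as a simple rule-based score. Everything in this sketch, the signals, the weights and the threshold, is a hypothetical illustration, not Bolt’s actual logic:

```python
# Hypothetical sketch of a rule-based order risk check.
# The signals, weights and threshold are illustrative only;
# they are not Bolt's real scoring logic.

def order_risk_score(hour: int, district_risk: float) -> float:
    """Combine simple signals into a risk score between 0 and 1."""
    # Treat late-night hours as a risk signal.
    night = 1.0 if hour >= 22 or hour < 5 else 0.0
    # Weighted sum of the signals, capped at 1.0.
    return min(1.0, 0.5 * district_risk + 0.3 * night)

def requires_identity_check(hour: int, district_risk: float,
                            threshold: float = 0.6) -> bool:
    """Ask the user to verify their identity when the score is high."""
    return order_risk_score(hour, district_risk) >= threshold

# A midnight order from a high-risk district triggers verification:
print(requires_identity_check(hour=0, district_risk=0.9))   # True
print(requires_identity_check(hour=14, district_risk=0.2))  # False
```

In a real system the score would come from a trained model rather than hand-picked weights, but the shape of the decision, a score compared against a threshold that triggers extra verification, is the same.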
So how do they make sure everyone’s needs are being met? They ask people.
When it comes to safety, Bolt’s automated processes rely on users’ feedback. The algorithms make their calculations based on reviews and customer support tickets, and when something appears out of the ordinary, or there is a suspicion of a possible mistake or threat, a human steps in to verify, Bolt’s data science chief Siim Maivel explained.
For clients who need extra care and attention, such as children, the elderly or people with disabilities, there is a way to indicate this in the app.
“If everything’s in order, the identification process can be highly automated,” Maivel continued. “Only if the machine has doubts or doesn’t recognise someone, for instance, is the decision-making turned over to a person.”
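The handoff Maivel describes, automating the confident cases while routing doubtful ones to a person, is a common human-in-the-loop pattern. A minimal sketch, in which the function name and confidence threshold are hypothetical rather than Bolt’s:

```python
# Hypothetical human-in-the-loop routing: automate confident
# identifications, escalate doubtful ones to a human reviewer.
# The threshold and names are illustrative, not Bolt's.

from typing import Optional

def route_identification(match_confidence: Optional[float],
                         auto_threshold: float = 0.95) -> str:
    """Decide whether the machine or a human handles the case.

    match_confidence is None when the user was not recognised at all.
    """
    if match_confidence is None:
        return "human_review"   # machine doesn't recognise the user
    if match_confidence >= auto_threshold:
        return "auto_approved"  # machine is confident enough
    return "human_review"       # machine has doubts

print(route_identification(0.99))  # auto_approved
print(route_identification(0.7))   # human_review
print(route_identification(None))  # human_review
```

The design choice is that the machine never makes a final call in the uncertain zone; lowering the threshold automates more cases at the cost of more machine-only decisions.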
In Maivel’s view, machine learning expands the capabilities of humans to process more cases efficiently and spot patterns invisible to the naked eye.
Power in the hands of IT specialists
To avoid accidents of any sort, the Estonian ethics professor Margit Sutrop encourages artificial intelligence companies to play out various scenarios already in the early stages of development.
“When it comes to artificial intelligence, the questions of safety and privacy are not theoretical anymore,” said Sutrop, founder of the University of Tartu’s Centre for Ethics and an ethics expert for the European Commission. “They are very real, with very tangible outcomes.”
Smart machines surround us on the streets, in our homes, around our wrists and in our pockets.
How come we still don’t fully trust them?
“Many AI solutions are becoming black boxes,” Sutrop said. “There is rarely a chance to trace back how the machine came to a decision. It is becoming increasingly complicated.” Companies keep their data tightly to themselves and are usually not open to explaining or sharing it, fearing it would make them less competitive.
Hence, in Sutrop’s view, the importance of writing the principles down and of making sure that people with as many different life experiences and backgrounds as possible are involved in creating the algorithms.
“If we don’t think about all this now, the solutions created may not be acceptable in the future. It’s a matter of safety!” Sutrop said.
She advises tech startups to gather social scientists, lawyers and ethics experts around the same table and discuss all possible future scenarios.
To lay the groundwork for a common understanding, the European Commission has compiled a set of ethics guidelines for trustworthy artificial intelligence. As Rõivas pointed out, transparency and trust are the cornerstones of ethical AI. Developers should build diversity, non-discrimination and fairness into their technological solutions, which is just as important as sustainability and social impact.
It may seem a lot to ask of someone simply creating an app or a machine, but as AI pushes into every field, people need to agree on what matters to them.