When algorithms discriminate: Robotics, AI and ethics

We live in an age of rapid technological advances where artificial intelligence (AI) is a reality, not a science fiction fantasy.

Every day we rely on algorithms to communicate, do our banking online, book a holiday – even introduce us to potential partners.

Driverless cars and robots may be the headline makers, but AI is being used for everything from diagnosing illnesses to helping police predict crime hot spots.

As machines become more advanced, how does society keep pace when deciding the ethics and regulations governing technology?

Al Jazeera talks to Stephen Roberts, Professor of Machine Learning at the University of Oxford in the United Kingdom, about the role machine learning plays in our lives today – and in the future.

“We’re certainly nowhere near that particular point where there are going to be swarms of armies of robots that are taking over the world,” he says. 

“We have to remember that automation and autonomy are something that is very deeply embedded within our world already. Whether it’s from algorithms that are trading on global financial markets, to smart algorithms that are scanning our emails … predictive text on our mobile phones is another such intelligent algorithm. So I think these kind of things we’re very familiar with but we’re not afraid of.”

One question raised by the expanding applications of machine learning is who is to blame when an algorithm fails.

“It becomes almost bizarre philosophical commentary but as a society we need to address these questions head-on … What happens if a robot surgeon gets something wrong? Who is to blame? The hospital? The designers of the robot? The people who created the algorithm? … I think as a society this takes us into very new territory,” Professor Roberts says.

And there are more nuanced risks to our increased reliance on machine learning – for instance, that algorithms can perpetuate and amplify biases that are already present in our society.

“I think much of the bias goes back to the data which the algorithms often are fed … If we take the word scientist … most of the pictures we see will be of dead white men. The world is much bigger than white men and yet an algorithm doesn’t necessarily have the sensitivity to understand that it is looking at a very biased collection of pictures,” Professor Roberts explains.

“We obviously are going to have to work very hard in order to try and produce unbiased algorithms which take these sensitivities into account.”

Source: Al Jazeera
