News

Algorithmic injustice, some robots just don’t like you

A podcast of the event is available on the Discovery SoundCloud channel.

By now, most of us have heard something about the term “artificial intelligence” (AI) and how it is being used to tackle numerous arduous tasks in our world today. You may also be of the opinion that this tool is impartial, doesn’t take sides and would never leave you to be picked last for the football team. Sadly, this is not the case. Computer programs are now making significant decisions about human lives, and there is growing evidence that many of these algorithms suffer from racial and gender bias.

“These predictive algorithms have been shown to discriminate against women or minorities in hiring and loan granting decisions.”

Speaking at his Algorithmic Injustice and Artificial Intelligence in Peace and War talk, hosted by the UCD Discovery Institute, Prof. Noel Sharkey, who has spent over 40 years in the field of robotics and AI, said: “These predictive algorithms have been shown to discriminate against women or minorities in hiring and loan granting decisions. Computer programs like this are even entering public institutions for activities like predictive policing. They can decide whether you are likely to commit a crime or if you have the face of a dangerous criminal.”

This may seem far removed from everyday life and unlikely to affect you, but when you learn that a number of large multinationals with global bases in Ireland are also using similarly biased facial recognition software when hiring new staff, the picture becomes clearer. “Several large corporations”, Prof. Sharkey continued, “are now using AI to track job applicants’ facial expressions when asked interview questions. These facial maps are then compared to the company’s most successful employees, who currently tend to be white males in senior management positions. So when companies do their first cuts to whittle down the numbers, a disproportionate number of white males are getting through as they tend to match other male expressions.”

This is a more common problem than many people realise, but there are other, more dangerous implications as AI is used more and more in areas such as the military. China, the US, Russia, Israel and the UK are investing billions of dollars in the pursuit of the perfect fighting machines. There are now autonomous planes, submarines, ships, tanks and soldiers ready to take to the field of battle, “delegated with the decision about who lives and who dies in armed conflict. This is why we must work to stop using AI for the selection of targets and for the application of violent force. We must always put humans first”, said Sharkey, who currently chairs ICRAC, the International Committee for Robot Arms Control.

Director of the UCD Discovery Institute Professor Patricia Maguire echoed Prof. Sharkey’s words and added: “At previous events at UCD Discovery we have highlighted how AI can provide a series of tools and approaches that have the potential to help organisations become more effective, i.e. doing more for less. What Noel has opened our eyes to is the dangers of the misuse of AI and the current limitations of the technology. We need to have a balanced and critically evaluated system in order to assess the merits and ethics of using artificial intelligence no matter the application.”

What is absolutely clear is the power and potential of AI as a technology. It can be used for good and bad in equal measure. We cannot afford to wait until the day that AI is telling us that it was just following orders. AI and the systems that use it are just machines, and as such they will do only what we program them to do within the limits of their technology. They do not have a personality, but bias can be built into them. This is why we must be vigilant and actively critical of AI use and its role in society.

***

The Algorithmic Injustice and Artificial Intelligence in Peace and War event was organised by the UCD Institute for Discovery.

UCD Discovery supports emerging interdisciplinary research in UCD and helps to build and strengthen interdisciplinary networks and research themes across all disciplines in the University. Our programmes are interdisciplinary and open to all UCD faculty.

Noel Sharkey PhD, DSc, FIET, FBCS, CITP, FRIN, FRSA is Professor of AI and Robotics and Professor of Public Engagement at the University of Sheffield. He holds a Doctorate in Experimental Psychology and a Doctorate of Science. He is a chartered electrical engineer, a chartered information technology professional and is a member of both the Experimental Psychology Society and Equity (the actors’ union). He has published well over two hundred academic articles and books, as well as writing for national newspapers and magazines. In addition to editing several journal special issues on modern robotics, Noel has been Editor-in-Chief of the journal Connection Science for 22 years and an editor of both Robotics and Autonomous Systems and Artificial Intelligence Review. His research interests include biologically inspired robotics, cognitive processes, the history of automata/robots (from ancient to modern), human-robot interaction and communication, representations of language and emotion, and neural computing/machine learning. But his current research passion is the ethics of robot applications.

Noel appears regularly on TV (around 300 appearances) and is interviewed regularly on radio and in magazines and newspapers. He was chief judge for every series of Robot Wars throughout the world, as well as “techspert” for four series of TechnoGames and co-presenter of Bright Sparks.

After many years of detailed research within artificial intelligence and robotics, Noel’s core research interest is now in the ethical application of robotics and AI in areas such as the military, child care, elder care, policing, surveillance, medicine/surgery, education and criminal/terrorist activity. He serves as an advisor to the National Health Service think tank Health2020, is a member of the Nuffield Foundation working group on the ethics of emerging biotechnologies, is a director for the European branch of the Centre for the Policy of Emerging Technologies and is a co-founder of the International Committee for Robot Arms Control. Currently Noel holds a Leverhulme Research Fellowship for the ethical and technical appraisal of robots on the battlefield.