“I have major concerns about gardaí using Facial Recognition Technology.”

Wednesday, 19 July, 2023

Labhaoise Ní Fhaoláin is a PhD researcher in artificial intelligence, law and governance at ML-Labs in University College Dublin. She is a non-practising lawyer, a Law Society nominee to the Council of Europe committee on artificial intelligence and a member of the Law Society’s Technology Committee. 

Listen to the podcast

Labhaoise Ní Fhaoláin has “major concerns” about Justice Minister Simon Harris’s plans to introduce facial recognition technology (FRT) for gardaí. 

Earlier this month, Minister Harris said that he intended to bring an amendment to the Garda Síochána (Recording Devices) Bill at committee stage to provide for the technology’s use in limited circumstances. 

Those in favour of FRT argue that it will help to locate and apprehend criminals faster. 

“That's not sufficient justification,” Ní Fhaoláin counters. “Even if we could stand over the technology and say that it was 100% accurate, it still has implications for people's right to assembly, their right to privacy, their right to protest. I would have major concerns about the use of this technology by an Garda Síochána. Indeed, it has been rejected by an awful lot of other countries,” she adds, pointing also to large US cities like Boston and San Francisco that have banned its use by police.

“I am not saying that we should always take our lead from other countries; we should be the leaders at times. But this is definitely a problematic area.”

FRT has been criticised as unreliable and even racist, with its accuracy rate declining for people of colour. 

Of course, it is not the world’s only divisive technology and Ní Fhaoláin says “all areas” of artificial intelligence “definitely” need more regulation and governance.

“We're not talking about one specific thing because there isn't really a silver bullet for the regulation of artificial intelligence. It's an ecosystem.”

Her work at UCD, in the School of Computer Science under the guidance of Dr Vivek Nallur (and co-supervised by Prof Colin Scott in the Sutherland School of Law), explores devising a “reflexive governance framework” to tackle the complex challenges of regulating AI. 

As the name suggests, this reflexive approach is “a type of governance which can adapt and change” and which includes hard legislation, softer AI policy strategies, standards certifications, ethical guidelines and a feedback loop from all stakeholders. 

The field is a continuous work-in-progress.

“Particularly with a technology like AI, because it changes. We can't take it at a point in time and say, ‘Now we're done, draw a line under it, we have regulated this area.’ We are never going to be finished.”

Luckily, Ní Fhaoláin finds her niche “fascinating” and she points out that there are already laws that cover many situations involving AI. 

Take, for instance, the case of British privacy expert Alexander Hanff, who is suing LinkedIn in Ireland, alleging that the social media giant’s artificial intelligence system has defamed him by putting ‘unwanted or harmful content’ warnings on his messages to others using the platform.

“This situation, as I see it, is not necessarily about AI: this is a defamation question. The question is whether sending a warning out to his followers that the content may be inappropriate in some way is defamatory. Does it lower the impression of him in the reasonable minds of others in society? Whether a person or an automated system places the warning on it, the question is still whether or not the statement is defamatory.”

Meanwhile, equality laws already exist to protect citizens from any prejudice that could exist within, for instance, hiring algorithms. 

“If you have a HR system for recruitment, and it has been trained on data about nurses and successful candidates, the majority of whom are women, if you allow that system to use gender as a defining feature as to whether or not this candidate is likely to be successful, then that's problematic. Because that is taking information into account which shouldn't be taken into account. So from a legal perspective, in that case, we have the Equality Acts.”

According to the Acts, people cannot be discriminated against on the grounds of, for example, their gender, sexual orientation or race. But with machine learning and artificial intelligence systems, existing societal prejudices can sometimes slip through the net.  

“In the States, you had hiring algorithms and candidates from particular universities weren't, as often, successful. It just happened that these universities, in some cases, had a large cohort of students of colour. While they weren't necessarily discriminating per se against a particular group, the effect was that people of colour were being discriminated against. So ML/AI systems can compound a discrimination very easily.”

“Using addresses for mortgage applications and making decisions on that basis,” she says, offering another example. “That is what has the chilling effect.”

Explainable AI, where the system explains why you didn’t get the job or the mortgage, is “the way forward”, taking some of the mystery out of the decision-making process. 

“You need to bring people with you in order to ensure that the technology is used to everybody's best advantage,” says Ní Fhaoláin, who advocates for algorithms to go through a certification process to get a CE mark before deployment.

The area of AI and the law throws up interesting questions. Autonomous cars are not commercially available yet, but if one crashes into you, who is liable: its passenger or the AI powering it?

“The AI system is not responsible; the person operating the car is responsible. We are going to have a mix of driverless cars and regular cars on the road at the same time. So you can't say that you're going to effectively prefer the AI system over the individual. From a criminal law perspective, trying to make a technology criminally liable would be problematic.”

She flips the common concern that regulating artificial intelligence might stifle innovation. 

“Regulation can also encourage innovation because the market likes certainty. Investors won't want to plough millions into something only to find that they can't bring it to market.”