Dr. Gideon Christian, an assistant professor at the University of Alberta Faculty of Law, has issued a warning about authorities implementing racially biased artificial intelligence systems.

The warning came in the form of a press release published by the institution, in which Christian reminds the public that while technology may appear unbiased, there are very real instances where it is not. Christian recently received a $50,000 grant from the Office of the Privacy Commissioner's Contributions Program for a research project called Mitigating Race, Gender, and Privacy Impacts of AI Facial Recognition Technology. The initiative aims to study the impact that AI-powered technologies, such as facial recognition, have on racial issues.
Christian has previously argued that AI-powered facial recognition technology is harmful to people of color, and that while it appears neutral, it has the capacity to replicate human biases. He notes that the technology identifies white male faces with roughly 99% accuracy, while its accuracy for the faces of Black women drops to just 35%.
The assistant law professor goes on to say that this discrepancy in accuracy can lead the technology to falsely match an individual's face with that of someone who has committed a crime, which could result in police arriving at that person's door and arresting them for something they did not do.
"Facial recognition technology can wrongly match your face with that of some other person who might have committed a crime. All you see is the police knocking on the door, arresting you for a crime you never committed," Christian said.
"We know this technology is being used by various police departments in Canada. We can attribute the absence of similar cases to what you have in the US based on the fact that this technology is secretly used by Canadian police. So, records may not exist, or if they do, they may not be publicized," he said.
"What we have seen in Canada are cases (of) Black women, immigrants who have successfully made refugee claims, having their refugee status stripped on the basis that facial recognition technology matched their face to some other person. Hence, the government argues, they made claims using false identities. Mind you, these are Black women - the same demographic group where this technology has its worst error rate."