Jason Edward Lewis wants you to know that there are fundamental biases built into artificial intelligence, as there are in any human-designed system.
We might like to think of technologies like machine learning as purely rational and scientific—free of the assumptions that plague social systems.
But according to Lewis—an expert in assessing AI from Indigenous perspectives—nothing could be further from the truth.
“AI is being developed in this epistemic echo chamber,” he said. “It really is a very specific way of trying to understand what humans are, what the mind is, what intelligence is.
“The bias that's in these systems is not just a bug. It's a feature of white supremacy, a feature that grows out of a whole bunch of interlocked and layered systems.”
The Concordia University professor of design and computation arts will discuss what artificial intelligence means to Indigenous communities, and how they can help shape its development, in a virtual lecture at the University of Alberta Oct. 29.
The talk is sponsored by the Kule Institute for Advanced Study and two of the university’s signature areas: Situated Knowledges, Indigenous Peoples and Place (SKIPP) and AI4Society.
“Artificial intelligence comes with a lot of epistemological baggage about what we think intelligence is in the first place, and what artificiality might be,” said Geoffrey Rockwell, director of the Kule Institute and professor of philosophy and humanities computing in the Faculty of Arts.
“We're trying to bring in different voices—not the usual voices that are heard when people talk about artificial intelligence.”
For Lewis, overcoming bias in artificial intelligence begins with rigorous and honest critique, and a recognition of “the long history of western technology being used to the disadvantage of Indigenous people, Black people, poor people, and so on.”
What western philosophical and scientific traditions lack, he said, is a “relational” understanding of humanity’s place in the world.
“Many Indigenous cultures, certainly ones I'm familiar with, still place a high premium on relationality,” he said.
“Ever since the Enlightenment and scientific revolution, we've systematically pared away all kinds of real engagement with the non-human as a peer relationship.”
That tunnel vision excludes many communities when it comes to designing a technology with human characteristics.
“AI is a non-human entity that we're birthing into the world,” said Lewis. “But we give it agency—the systems are making decisions about who gets loans, who gets to stay in jail longer, who gets recognized and who gets tracked.
“We're turning over wholesale big chunks of decision-making that used to be made by humans.
"The big problem with machine learning based on huge volumes of data is that it's all about statistical abstraction. Everything that lies on either tail (of a statistical mean) gets erased, and just by virtue of our numbers as Indigenous people, we're in the tails, so we get systems that don't work well for us.”
One prominent example, said Lewis, is the Human Genome Project: “The fact is that 95 per cent of the people whose genomes were sampled to build this map are of European descent,” largely excluding those of Indigenous descent.
Many of Lewis’ latest findings are drawn from a collaborative position paper he helped compile after a 2019 meeting of Indigenous people in Hawaii that looked at how artificial intelligence relates to the Indigenous experience.
Responding to his U of A lecture this month will be Maggie Spivey-Faulkner, an assistant professor recently recruited to the Department of Anthropology.
“A lot of Jason's work is about relationality between our species and others in the world,” said Spivey-Faulkner.
“If you're only using one fundamental philosophical stance to try to develop new science, it's going to be inherently flawed. The more we can remove that western lens and substitute it with others, the better we'll be at doing whatever it is we're trying to do.”
The next phase of Lewis’ research will bring together Indigenous technologists, knowledge holders, language keepers and those who build AI systems to see whether there are ways to develop protocols that benefit Indigenous communities.
“I’m of the view that you’re never going to completely get rid of the bias, because the fundamental approach will keep reproducing it,” he said.
“But these systems are live—they're here today. And we have to address them today, so the work to mitigate that is really important.”