Berkeley Law J.S.D. candidate Mahwish Moazzam wasn’t particularly interested when the first ad for an AI-generated headshot app appeared on her feed. But when ad after ad kept coming, she finally gave in to her curiosity and uploaded a casual selfie. At first, she was pleasantly surprised by how professional the result looked. The app even put her in a blazer.

But one thing was missing from the headshot: her hijab. Over the next few months, the same thing happened again and again on different apps, until she had a total of 25 headshots without her hijab.

“Nobody was talking about it,” Moazzam said. “But when we look into these (apps), these tools are not merely making mistakes, they are reshaping how people are represented in digital spaces. And when that reshaping repeatedly (erases) visible religious markers of identity, the issue is no longer technical. … It is discrimination, it is exclusion and it is a question of human dignity.”

Electrical engineering and computer sciences professor Emma Pierson said bias in AI systems that disproportionately harms people of color is nothing new.

In her research on AI discrimination in healthcare, Pierson found that the datasets algorithms are trained on tend to be drawn predominantly from people of European descent. As a result, when these systems diagnose or predict risk for particular diseases, they tend to produce incorrect results for minorities.

“If you’ve got an algorithm that’s messed up, that can be messing up tens of millions of people’s lives,” Pierson said.

To produce more equitable outcomes, Moazzam said, both AI training data and the teams building these systems need to reflect the diversity of the people who use them.

At the same time, Moazzam acknowledged that accountability is hard to pinpoint because flaws in these complex systems are not created by any one person.

“For me, there’s the chain of accountability, starting from training data to getting data in the market,” Moazzam said. “So (these AI companies) cannot say, ‘No, (the AI) was free, and it was autonomous. So we are not responsible.’ But no, you are responsible.”

According to Moazzam, there are already several laws that focus on protecting consumers from algorithmic discrimination. A significant example is California’s AB 316, which prevents developers from asserting that “the artificial intelligence autonomously caused the harm to the plaintiff.”

There is not yet a comprehensive federal law addressing AI bias and discrimination.

Pierson said she has hope for these algorithms as she works on changes to make AI more impartial.

“I would say the more optimistic flip side of this means that you can correct this on a large scale, right?” Pierson said. “It’s very difficult to sort of, you know, go in and rewire millions of human doctors. It’s easier to fix AI’s big decisions and skills if you know what’s going wrong.”