⚠️ AI Bias: A Problem to Solve or a Feature to Control?

The fight for fair AI systems starts with understanding who’s in charge.

Algorithmic Bias: Unintentional Flaw or Prejudiced by Design?

At the start of this year, our GenAI Works team predicted that 2025 would be the year artificial intelligence moved from experimentation to impact. Less than three months into the year, that forecast holds true. The growth in available AI agents and agentic AI releases is a pretty good illustration of that.

But AI has a dark side. As automation creeps into everything—from language learning apps to mortgage lending decisions—the threat of algorithmic bias spreads.

Even Ilya Sutskever, the co-founder and former chief AI scientist behind OpenAI, has expressed his concerns about the moral compass of autonomous AI and how it can perpetuate biases.

What is Algorithmic Bias?

Algorithmic bias refers to systematic errors in AI models that create discriminatory outcomes. For example, the University of Chicago reported that AI has made housing discrimination worse for minorities in the United States.

I have always considered these biases accidental: flaws in the system due to imperfect data. However, a reader gave me food for thought after we published my article about AI CEOs. He said:

AI is already deciding who gets hired, who gets a loan, and who the system ignores, but let us stop pretending it is neutral. Algorithmic bias is not just a flaw; it is a force multiplier for discrimination.

These systems are not broken. They are working exactly as designed by people with power, for people with power.

— Mark Cook

His hypothesis implies that algorithmic biases aren’t just a technical glitch or a reflection of social injustices.

Even worse? They could be intentional.

Where Do These Biases Come From?

Algorithmic bias often stems from historical prejudices that make their way into datasets. 

Even well-intentioned designers can accidentally create biased systems if they're not careful. A 2023 study showed that data collection methods often favor mainstream sources, which can lead to the underrepresentation of marginalized groups.

The human element also plays a role. 

Programmers decide which data to include, how to categorize it, and which variables to prioritize—all of which can introduce bias. Additionally, the lack of diversity in tech can cause unconscious biases to flourish unchallenged.
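To make the point about variable selection concrete, here is a minimal sketch of "proxy bias" using entirely hypothetical data: even after a well-intentioned designer drops the protected attribute, a correlated feature like zip code still carries the same signal.

```python
# Hypothetical illustration of proxy bias: a "neutral" feature (zip code)
# correlated with a protected attribute still carries that signal even
# after the protected column is dropped. All data is made up.

# Each record: (zip_code, protected_group, historical_outcome)
records = [
    ("10001", "A", 1), ("10001", "A", 1), ("10001", "A", 1),
    ("10002", "B", 0), ("10002", "B", 0), ("10002", "B", 1),
]

# Drop the protected attribute, as a careful designer might.
training_rows = [(zip_code, outcome) for zip_code, _, outcome in records]

# Zip code alone still separates the groups almost perfectly here, so a
# model trained on it can reproduce the historical disparity anyway.
approval_by_zip = {}
for zip_code, outcome in training_rows:
    approval_by_zip.setdefault(zip_code, []).append(outcome)

for zip_code, outcomes in approval_by_zip.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"zip {zip_code}: approval rate {rate:.0%}")
```

In this toy dataset, one zip code approves 100% of applicants and the other roughly 33%, which is exactly the disparity the removed column encoded.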

Algorithmic Bias in the Real World

So, are academics overreacting? Or is there a genuine concern that algorithmic bias is a growing problem? Let’s consider some real-world examples across several industries:

  • Healthcare: IBM found that computer-aided diagnosis systems tended to return results with lower accuracy for Black patients because they were underrepresented in the data.

  • Employment: A 2023 study found that recruitment algorithms often reproduce discriminatory practices based on gender, race, and personality traits. IBM confirms this by pointing to an AI model Amazon stopped using due to its biased preference for men’s resumes.

  • Financial Services: Forbes reports that AI led to 40-80% higher mortgage denial rates for US racial minorities compared to white Americans with the same financial profile.

Accidental Flaw or Designed Discrimination?

There’s no denying the implications of these AI flaws. But are they really evidence of bias by design?

As Ilya Sutskever himself points out, once engineers have fed the machine tons of complex data, the model becomes so complicated that fresh research is needed just to understand it.

Sadly, few companies are investing in this type of research. As Sutskever initially feared in his interview with The Guardian: AI companies are in an arms race to build the world’s first AGI. And ethics takes a backseat to those ambitions.

In the book “Weapons of Math Destruction”, Cathy O’Neil also argues that algorithms can amplify existing social inequalities while pretending to be neutral. She warns that the perceived objectivity of mathematical models can make it harder to identify and challenge their biases.

💡 Key Takeaway? Technical limitations and unconscious biases certainly contribute. But these factors operate within broader social systems characterized by historical inequities and power differentials.

What Can We Do About It?

The good news is that while AI is developing quickly, we haven’t yet run out of time. We still have the opportunity to mitigate biases at every level of artificial intelligence: from conception to development and implementation.

This sounds like quite the to-do list, but there are two main points where we can concentrate our efforts:

  1. Technical Fixes:

    1. Develop unbiased datasets.

    2. Improve algorithmic transparency.

    3. Carefully select target variables.

  2. Structural Changes: 

    1. Increase diversity within the tech workforce.

    2. Implement internal corporate ethical governance frameworks.

    3. Establish external oversight.
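One practical way to act on the technical fixes above is to audit a model’s outcomes for disparate impact. The sketch below uses the “four-fifths rule” heuristic from US employment law and purely hypothetical loan decisions; the 0.8 threshold and the data are illustrative assumptions, not a complete fairness audit.

```python
# Minimal fairness-audit sketch: compare selection rates across two groups
# using the disparate impact ratio. Hypothetical data, for illustration only.

def selection_rate(decisions):
    """Fraction of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as evidence of adverse impact."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical loan-approval decisions (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Potential adverse impact: investigate the model and its data.")
```

A check like this is cheap to run at every release, which is what makes it a realistic first step toward the transparency and oversight goals listed above.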

The Ethical Imperative

Algorithmic bias is more than just a technical problem. It's a fundamental issue of social justice. The choices embedded in algorithms are value judgments with significant social consequences.

Creating truly fair algorithmic systems demands not only technical expertise but also a critical examination of:

👉🏼 Whose interests do these systems serve?

👉🏼 Whose perspectives do they reflect?

Only then can we ensure that the promise of algorithmic efficiency doesn't come at the cost of algorithmic justice.

Final Thoughts

In 2023, Ilya Sutskever shared his vision of what AI might look like if it grew in power without proper alignment with the social good. He told The Guardian:

It's not that it's going to actively hate humans and want to harm them, but it is going to be too powerful—and I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and have a lot of affection for them.

But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us.

About the Author

Tessina Grant Moloney is an AI ethics researcher investigating the socio-economic impact of AI and automation on marginalized groups. Previously, she helped train Google’s LLMs, like Magi and Gemini. Now she works as our Content Product Manager at GenAI Works. Follow her on LinkedIn!

Want to learn more about the socio-economic impacts of artificial intelligence? Subscribe to her AI Ethics newsletter at GenAI Works.

🚀 Boost your business with us—advertise where 10M+ AI leaders engage

🌟 Sign up for the first AI Hub in the world.
