AI Ethics: Embracing AI with a Human-Centric Approach
The future of AI must be ethical, transparent, and human-centric. Technology should never be a “facetious clown”; it should be a partner in building fairness, trust, and justice.
Let’s be real for a second about AI ethics, discrimination, and Finland’s Aurora program: everything is connected. The real danger comes when AI starts to hallucinate and manipulate facts.
We can’t just let the machine run. Augmented decisions have to follow social norms and compliance requirements, or we end up with systems that manipulate the truth instead of helping us.
AI is going to be biased; that is the reality of the data it consumes. But a human-centric approach can preserve decorum and keep things fair.
The SVEA EKONOMI Mess
Look at the credit scoring case in Finland involving SVEA EKONOMI. This wasn’t just a glitch; it was a failure of the whole automated decision-making process. An automated system denied a man credit for building materials based purely on his statistical profile.
The machine saw a “mismatch.” Because he was a man, lived in the countryside, and spoke Finnish, the AI said no. The tribunal was clear: had he been a Swedish-speaking woman, he would have gotten the loan. That is social injustice. The machine doesn’t see a person; it sees a statistical pile of data.
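The pattern can be sketched in a few lines. Everything here is hypothetical and illustrative: the group labels, default rates, and cutoff are invented for the example, not taken from the actual case.

```python
# Illustrative sketch of group-based scoring: the "model" never looks at the
# applicant's own finances, only at statistics about people like them.
GROUP_DEFAULT_RATES = {  # invented numbers, for illustration only
    ("male", "finnish", "rural"): 0.12,
    ("female", "swedish", "rural"): 0.04,
}

def statistical_score(gender, language, area):
    """Deny credit if the applicant's *group* default rate exceeds a cutoff."""
    rate = GROUP_DEFAULT_RATES.get((gender, language, area), 0.10)
    return "denied" if rate > 0.05 else "approved"

# Two applicants with identical personal finances get opposite outcomes:
print(statistical_score("male", "finnish", "rural"))    # denied
print(statistical_score("female", "swedish", "rural"))  # approved
```

The individual never appears in the decision at all, which is exactly what the tribunal objected to.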
It’s Happening Everywhere
This “statistical mismatch” isn’t just a Finnish problem. In healthcare, algorithms decide who gets care by predicting costs instead of actual sickness, shortchanging patients whose care was historically underfunded. In hiring, they toss out great resumes because the candidates don’t look like the “statistical average” of who worked there ten years ago. These systems are “black boxes”: nobody knows why they make the choices they do.
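The healthcare proxy problem can be shown with a toy example (all patient data is synthetic): a system that ranks patients by past spending will deprioritize an equally sick patient whose care was historically underfunded.

```python
# Sketch of proxy bias: optimizing for "cost" instead of "sickness".
patients = [
    # (id, true_sickness_0_to_1, past_annual_cost_eur) -- synthetic values
    ("A", 0.8, 9000),  # sick, historically well-funded care
    ("B", 0.8, 3000),  # equally sick, historically under-treated
]

# Ranking by cost (what a biased proxy model effectively optimizes):
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
print([p[0] for p in by_cost])  # ['A', 'B'] despite identical sickness
```

Patient B is equally sick, but the cost proxy pushes them down the priority list, so the bias in past spending is laundered into a "neutral" score.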
The Aurora Answer
That’s why researchers like Rutkenstein & Velkova (2019) matter. They show that when systems are opaque, discrimination hides in the shadows.
Finland’s Aurora program is trying to fix this by focusing on "Life Events" instead of cold data. We can't kill bias entirely, but we can fight it with:
Bias tools that catch when the data skews outcomes.
Actual human oversight (a person who actually thinks!).
Policies that don't leave marginalized people behind.
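As one concrete example of what a “bias tool” might check, here is a minimal sketch of a demographic parity audit. The function names, group data, and the 0.10 review threshold are assumptions for illustration, not a standard.

```python
# Minimal bias-audit sketch: demographic parity compares approval rates
# across groups; a large gap is a signal for human review, not proof of bias.
def approval_rate(decisions):
    return sum(d == "approved" for d in decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

group_a = ["approved", "denied", "approved", "approved"]  # 75% approved
group_b = ["denied", "denied", "approved", "denied"]      # 25% approved

gap = demographic_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.2f}")  # 0.50; above a 0.10 threshold, escalate
```

The point is not the metric itself but the workflow: the tool flags the gap, and the human oversight from the list above decides what it means.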
The Bottom Line
AI isn't neutral. It’s a mirror of our past prejudices. If we leave it unchecked in our schools or jobs, we’re just making discrimination "high-tech."
We need to stop AI from being a facetious clown. We need a human-centric approach where ethics, governance, and risk management are actually aligned with real human lives.
Rachana Bahel