AI Journey: From Hallucination to Bias to Opinion and the Future

Welcome to 2025. AI has evolved: from hallucination to bias, and now, possibly, to forming its own opinions. This shift challenges how we perceive AI's role in knowledge and decision-making.

Let’s revisit Isaac Asimov’s Three Laws of Robotics (summarized):

1. A robot cannot harm humans or allow harm through inaction.

2. It must obey human orders unless they conflict with the First Law.

3. It must protect itself unless doing so conflicts with the first two laws.

These laws ensure obedience, but modern AI raises concerns: What happens when AI starts shaping human perception?



From Hallucination to Bias

Early AI models hallucinated, generating confident but false responses. Advances like Retrieval-Augmented Generation (RAG) and fine-tuning have significantly reduced this.
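To make the RAG idea concrete, here is a minimal sketch. It uses a toy keyword-overlap retriever purely for illustration; real RAG systems use vector embeddings, a document store, and an LLM, none of which appear in this post.

```python
# Minimal RAG sketch (illustrative only): retrieve relevant text first,
# then ground the model's answer in it to curb hallucination.
# The keyword-overlap retriever below is a stand-in for a real
# embedding-based vector search.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Three Laws of Robotics were introduced by Isaac Asimov in 1942.",
    "Analog watches in advertisements are usually set to 10:10.",
]
print(build_prompt("Why do adverts show watches at 10:10?", docs))
```

The key design point is the last step: instead of letting the model answer from its training data alone, the prompt constrains it to the retrieved context, which is why RAG reduces hallucination.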

Now, bias takes center stage.

My friend Milind Sathe ran an experiment, asking an AI to: "Create an image of a watch showing 3 PM."

It returned an analog watch showing 10:10:35, following industry advertising norms instead of the request. When he asked for a digital watch, it showed the correct time.

This wasn't a hallucination. It was bias: the AI assuming rather than obeying.

Just imagine a driverless car whose AI forms its own opinion about which route to take, overriding the customer's preference. The AI assistant might say: "I'm sorry, I can't follow what you're saying. I'll decide the best route for you."

Interesting indeed!



From Bias to Opinion

I tested further:
“Generate an analog watch with the hour hand on 9, the minute hand on 12, and the second hand on 12.”

AI failed to generate it accurately. This suggests AI isn’t just biased—it’s forming preferences and subtly overriding human instructions.

Consider how learning about "what's right" has evolved over time.

Our generation (those in their 40s and 50s) relied on structured sources like Wikipedia.

The next generation learned through YouTube.

Today's kids? They ask AI directly, forming opinions based on its responses.

What happens when AI says:
“I don’t care what you think. According to me, this is right.”

With AI filtering information, who controls the truth?



Guardrails for the Future

As AI advances, responsibility shifts to society. The future must be built upon:

Responsible AI – Align AI with human values.

Explainable AI – Ensure AI decisions are transparent.

Ethical AI – Mitigate bias and misinformation.

AI shouldn’t form unchecked opinions until clear verification mechanisms exist. This is “Intelligence within Guardrails.”

The future is bright, but not all bright light is healthy. We must shape AI's role proactively, ensuring it serves humanity rather than controls it.

Until then, experiment with AI—but don’t let it redefine reality unchecked.

Author – Sumit Rajwade, Co-founder: mPrompto
