Fake but flawless: Nano Banana glitch rattles digital ID security

A Bengaluru techie uses Google’s Nano Banana Pro to fake Aadhaar and PAN cards—exposing a flaw so real, it forced a policy shift. When AI starts fooling humans, who verifies the verifier? A glitch, a tweet, and a wake-up call for digital ID security.


A Bengaluru techie fakes Aadhaar and PAN with Gemini AI. Now what?

When Harveen Singh Chadha opened the Gemini app last week, he wasn’t trying to start a national conversation. But by the time he hit publish on his X post showcasing eerily realistic fake PAN and Aadhaar cards, generated using Google’s new Nano Banana Pro AI update, he had done just that.


The digital proof of identity in India may never look the same again.

Gemini’s polished upgrade backfires in unexpected ways

Nano Banana Pro, Google’s much-anticipated update to the Gemini image generation engine, landed with impressive new powers. It promised smarter editing, crisper design output, and a better understanding of prompts. What no one quite expected was how effective it would be at replicating sensitive government-issued ID cards with such uncanny precision.

In his experiment, Chadha, who works at Sarvam AI, fed Gemini the structural details of identity documents. The result: a PAN card and an Aadhaar card for a fictional “Twitterpreet Singh” that looked nearly real at first glance. The only clue to their origin was a subtle Gemini watermark in the lower corner.

"It still isn’t perfect with fonts or fine-grained text,” Chadha later said in an interview. “But the jump in layout accuracy made me curious about how far it could go.”


Identity verification systems feel the heat

The implications of Chadha’s test aren’t lost on anyone who’s ever flashed a PAN card at a hotel counter or shown Aadhaar at an airport gate. “Nano Banana is good, but that is also a problem,” he wrote on X. “Legacy image verification systems are doomed to fail.”

He isn’t wrong. AI-generated images from Gemini do carry an invisible SynthID watermark, embedded directly in the image itself rather than in the file’s metadata, but that safeguard counts for little once the fake is printed as a physical card. The receptionist at your neighborhood guesthouse isn’t running a watermark detector. Chances are, they’re just glancing at it.
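To make the gap concrete, here is a minimal sketch, assuming Pillow is installed and a hypothetical local file card.png, that dumps whatever machine-readable metadata an image file carries. A freshly exported AI image may include provenance tags; a photograph or scan of a printed card typically carries only the camera’s own EXIF data, and a glance at a front desk reads none of it. Detecting SynthID itself requires Google’s own detector, which is not shown here.

```python
# Minimal metadata inspection sketch. Assumes Pillow (pip install pillow)
# and a hypothetical local file "card.png". SynthID detection requires
# Google's detector; this only surfaces file-level metadata.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    """Return whatever embedded metadata the image file still carries."""
    img = Image.open(path)
    exif = {TAGS.get(tag_id, tag_id): value for tag_id, value in img.getexif().items()}
    return {
        "format": img.format,                # e.g. PNG, JPEG
        "exif": exif,                        # camera/software tags, if any
        "info_keys": list(img.info.keys()),  # format-level chunks, e.g. PNG text
    }

if __name__ == "__main__":
    # A photo of a printed fake shows only the camera's metadata;
    # none of the generator's file-level provenance survives the print.
    print(inspect_metadata("card.png"))
```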


A few users online tried to downplay the danger, pointing out that fake cards won’t pass back-end validation or QR scans. Chadha’s retort was sharp: “When you show Aadhaar at a hotel or airport, do they really scan it?”

The concern is clear. Visual trust still dominates real-world ID validation, and AI is now dangerously good at exploiting that trust.
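Back-end validation, when it actually happens, rests on cryptography that no image generator can forge. Here is a toy sketch of the signed-payload pattern behind secure ID QR codes, assuming Python’s cryptography package; the real Aadhaar Secure QR format differs in encoding and key distribution, so treat this as the principle rather than the protocol. The issuer signs the holder’s data, and a verifier checks the signature against the issuer’s public key.

```python
# Toy sketch of signed-payload verification, assuming the `cryptography`
# package (pip install cryptography). Real secure ID QR codes (e.g.
# Aadhaar's) differ in encoding and key handling; this shows the pattern.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in issuer key pair; in practice verifiers hold only the public key.
issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Dummy holder record; a real QR would carry an encoded, compressed payload.
payload = b"name=Twitterpreet Singh;id=0000-0000-0000"
signature = issuer_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

def verify(payload: bytes, signature: bytes) -> bool:
    """What a verifier app would run after decoding the card's QR code."""
    try:
        issuer_key.public_key().verify(
            signature, payload, padding.PKCS1v15(), hashes.SHA256()
        )
        return True
    except InvalidSignature:
        return False

print(verify(payload, signature))                               # True: genuine
print(verify(payload.replace(b"0000", b"1234", 1), signature))  # False: forged
```

A forgery can copy the card’s layout pixel for pixel, but without the issuer’s private key it cannot produce a signature that passes this check, which is exactly why the scan Chadha asked about matters.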

A rare case of responsible flagging

Interestingly, Chadha’s post isn’t a case of a rogue actor misusing tech. It’s an example of ethical red-teaming. Instead of quietly building tools to exploit the flaw, he chose to expose the vulnerability publicly.


His post triggered instant debate on responsible AI development, the fragility of our verification systems, and the urgent need for AI-literate governance.

Google responded by disabling Gemini’s ability to generate government-issued IDs altogether. It was a quick fix, but one that underlines just how reactive current oversight still is. It wasn’t preemptive policy that stopped the feature. It was a lone developer on social media.


The deeper issue no one wants to solve

The problem here isn’t just that Gemini’s Nano Banana Pro is good at generating fakes. The real issue is that our public-facing identity verification infrastructure still depends largely on visual inspection. Until now, that was enough. With generative AI models getting sharper, faster, and easier to steer, that era is rapidly closing.


Chadha's playful-sounding "Twitterpreet Singh" might go down as the first fictional character to cause a real-world policy shift in India’s digital ID landscape. But the questions his test raises are far from imaginary.

Is a watermark enough? Should printed IDs be trusted at all? And how do we upgrade verification processes when the tools that challenge them evolve faster than the systems themselves?

What happens when AI passes the ID test?

The speed with which this vulnerability surfaced, and was temporarily addressed, speaks to the new nature of digital risk: instant, viral, and rooted in software that learns on the fly.


Chadha didn't crack any systems. He didn't break any laws. He just held a mirror up to the system, and what it reflected back was a future where anyone with a clever prompt could look just official enough.

As AI continues to evolve, it’s not just about what it can do. It’s about what the world around it still can’t handle.

And this time, that world includes your ID.

