Deepfakes Are Here: Can We Still Trust What We See?
Families of NRIs, elderly parents, and small business owners are especially at risk
A few days ago, social media was overtaken by the vintage saree trend, with images created using Google's Gemini "Nano Banana" AI image generator.
The pictures looked authentic enough to be mistaken for real photographs. While the fad is fading, it has raised questions about the risks of hyper-realistic media at scale. If such tools can fool millions casually, what happens when they are used for fraud?
The concern is timely. Tech companies from OpenAI and Google to Meta are in a race to release ever more lifelike AI systems. Alibaba's Wan 2.2 already produces video that comes strikingly close to reality.
The industry behind this technology is booming. According to Markets and Markets, the AI image generator market will grow from USD 8.7 billion in 2024 to USD 60.8 billion by 2030, at a CAGR of 38.2 per cent. Additionally, the AI video generator market, valued at USD 614.8 million in 2024, is expected to expand to USD 2.56 billion by 2032, according to Fortune Business Insights.
Given the current pace of model development in this sector, even these projections may prove conservative.
Meanwhile, the global deepfake AI market is projected to grow from USD 562.8 million in 2023 to USD 6.14 billion by 2030, at a CAGR of 41.5 per cent. These figures, too, could turn out to be higher.
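As a rough sanity check (not part of either report), the cited growth rates can largely be reproduced from the endpoint figures using the standard CAGR formula, assuming the growth windows run 2024-2030 and 2023-2030 respectively:

```python
# Consistency check on the cited market figures (assumed windows:
# 2024-2030 for the image-generator market, 2023-2030 for deepfake AI).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, returned as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

print(f"{cagr(8.7, 60.8, 6):.1f}%")     # ~38.3%, matching the cited 38.2%
print(f"{cagr(0.5628, 6.14, 7):.1f}%")  # ~40.7%, close to the cited 41.5%
```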
We spoke to industry players to decode the current situation.
The blurred line
For Rahul Tyagi, Co-founder of Safe Security, the question of when we lose the ability to separate real from fake has already been answered. "Most of us can't tell whether a video of a celebrity saying something is real or fake without checking multiple sources," he says. "The safeguard cannot just be expecting the human eye to detect fakes. Safeguards have to be cultural, technical, and regulatory."
He adds that society will have to adopt the same scepticism towards video as it already does towards WhatsApp forwards.
Somshubro Pal Choudhary agrees, calling the problem a "cat-and-mouse game." He points out that open text-to-video and voice models already blur the line in everyday contexts. His recommendations include device-level provenance, platform policies, and a "media-zero-trust" approach to sensitive content.
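To make the "device-level provenance" idea concrete, here is a minimal sketch of the core mechanism: the capture device signs a hash of the media, and anyone holding the device's public key can later verify the file has not been altered. This is an illustration only; the function names are hypothetical, and real provenance standards such as C2PA embed signed manifests with full certificate chains rather than bare signatures.

```python
# Minimal sketch of device-level provenance (assumes the `cryptography`
# package). The capture device signs a SHA-256 digest of the media bytes;
# a verifier with the device's public key confirms integrity at read time.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the private key would live in the device's secure enclave.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_at_capture(media_bytes: bytes) -> bytes:
    """Sign the digest of the media at the moment of capture."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)

def verify_provenance(media_bytes: bytes, signature: bytes) -> bool:
    """'Media zero trust': accept content only if its signature checks out."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

clip = b"...raw video bytes..."
sig = sign_at_capture(clip)
print(verify_provenance(clip, sig))                # True: authentic capture
print(verify_provenance(clip + b"edit", sig))      # False: tampered or synthetic
```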
According to Lasya Ippagunta, Machine Learning Engineering Manager at CloudSEK, everyday detection is already failing. "Consumer tools can generate 5–10 second high-quality clips where human detection is barely better than a coin flip," she says.
Scams get more personal
The scams that deepfakes enable are far from theoretical. "Earlier scams were about bad grammar emails asking for money. Today scams wear your father's voice and say, 'beta, I need help,'" warns Tyagi. Families of NRIs, elderly parents, and small business owners are especially at risk.
Choudhary highlights how fraud has moved into multimodal attacks, from deepfake video calls to instant voice cloning. Finance and procurement teams, crypto platforms, and call centres are among the most exposed. "Unfortunately, common people are being duped too, with scammers cloning voices of family members," he says.
Ippagunta notes how impersonation has become professionalised, with attackers faking video calls and even entire business meetings to authorise wire transfers. Retail investors, SMB finance teams, influencers, and civic groups are among the current targets.
Pankit Desai, Co-founder and CEO of Sequretek, warns of scams in India involving impersonation of authority figures. "We keep hearing of 'digital arrests'," he says. His team uncovered an ITR-related impersonation scam that combined social engineering, psychological pressure, and technical skills. "If a communication feels off, it most likely is. Double-check the source," Desai advises.
Can labels help?
Many platforms are testing watermarking and mandatory labels for AI-generated content, but experts are cautious about their effectiveness.
"Watermarking and labels are like seatbelts; they save lives, but only if people wear them," says Tyagi, adding that bad actors will always find ways to bypass them.
Choudhary agrees, "Labels are helpful but insufficient. Over-reliance creates false confidence. What improves outcomes is secure authenticity at capture and enforcement tied to risk."
But Desai is more sceptical. "Platforms flag AI-generated content, but bad actors can easily bypass guardrails. Until tools evolve, critical thinking and trusted networks remain our best defence," he says.
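The bypass problem the experts describe can be seen in miniature with a toy watermark. The sketch below embeds a label in the least-significant bits of stand-in "pixel" bytes, then shows that imperceptible noise erases it. Production watermarks such as Google's SynthID are far more robust than this; the point is only the arms-race dynamic, and every name here is illustrative.

```python
# Toy illustration of why naive invisible watermarks are easy to strip:
# embed a label in the lowest bit of each byte, then destroy it with
# imperceptible +/-1 noise of the kind any re-encode introduces.
import random

def embed(pixels: bytes, label: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in label for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def extract(pixels: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

image = bytes(random.randrange(256) for _ in range(1024))  # stand-in "image"
marked = embed(image, b"AI-GEN")
print(extract(marked, 6))   # b'AI-GEN': the label survives intact

# A scammer's "bypass": nudge every byte by one, visually imperceptible.
noisy = bytes(min(255, max(0, b + random.choice((-1, 1)))) for b in marked)
print(extract(noisy, 6))    # garbage: the watermark is gone
```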
The bigger picture
India, with one of the world's largest internet user bases, is a major target for deepfake scams. Cultural trust in voice and video communication makes the country particularly vulnerable. As tools get faster, cheaper, and more realistic, experts agree that the only real defence will be a combination of better technology, stronger policies, and greater public scepticism.
As Tyagi puts it, "Scams are no longer about hacking your password. They're about hacking your trust."