AI Throws a Curveball in the Fight Against Fake News, Misinformation

Identifying and defining an AI content creator could be a difficult task for lawmakers.

By Kul Bhushan

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.


India may introduce stringent rules and regulations to crack down on fake news generated using AI.

At the heart of the matter, a parliamentary panel has recommended exploring a licensing regime for AI content creators and making it compulsory to label AI-generated videos and content, reports The Hindu. The recommendations are aimed at curbing fake news.

The parliamentary standing committee on communications and information technology led by MP Nishikant Dubey has also asked the government to devise legal and technological measures to track individuals and organizations that are spreading such content, the report added.

The panel also noted that technology, especially AI, can be leveraged to tackle the menace of fake news. The current versions of AI are not fully capable of doing so, given that these models rely on information already available online. AI can, however, flag such content for human review. The panel further called for "close coordination between the Ministry of Information and Broadcasting, Ministry of Electronics and Information Technology (MeitY), and other ministries and departments."

According to the report, the committee has sent its proposals to Lok Sabha Speaker Om Birla, and they will be presented in Parliament during the next session.

Note that such recommendations do not necessarily translate into official guidelines.

Social media and fake news: An old saga

Internet companies and netizens are no strangers to fake news, hoaxes, and propaganda. Unfortunately, fake and misleading information has continued to go unchecked despite multiple initiatives and relatively better awareness among the masses. And it is certainly not a new problem.

One example is WhatsApp versus the Indian government.

Back in 2018, several reports emerged of multiple mob lynchings across the country, triggered by rumors and misinformation spread on WhatsApp. The government stepped up pressure on WhatsApp to address the issue. In one of its notices to WhatsApp, the government warned of stern action for failing to curb fake news, including treating the platform as an "abettor" of rumor propagation and pursuing legal consequences.

After drawing the government's ire, WhatsApp took a series of measures, including adding a "forwarded" label on messages so that users could easily identify that a message did not originate from the sender. It also took out full-page newspaper ads and ran other outdoor campaigns to raise awareness among the general public. Another key step was limiting the number of chats a message could be forwarded to at once to five, down from the previous limit of 20.

Since then, the platform and the government have clashed several times. Just last year, WhatsApp threatened to exit its biggest market if the authorities required it to make traceability compulsory. The company warned that doing so would mean breaking end-to-end encryption, an important privacy tool that ensures only the sender and the receiver can read a conversation.

Similarly, Twitter (now X) was at loggerheads with the government during the 2020 farm protests, when the government asked the firm to take down a number of posts and handles. Former Twitter CEO Jack Dorsey alleged that the Indian government had threatened to shut down the social media network in the country. Earlier this year, X moved the Karnataka High Court, challenging the manner in which the Central and State governments order content to be blocked on its platform.

AI throws a curveball

As mentioned above, social media firms have long struggled to tackle fake news, especially when a major or sensitive event is unfolding.

We saw a flurry of fake news and misleading information swamping timelines during the recent Operation Sindoor. Similarly, fake news and propaganda went unchecked during the Israel-Hamas conflict and on many other occasions.

The wider availability of generative AI, and of deepfakes in particular, has also become a major area of concern. A deepfake is essentially synthetic media created using AI and deep learning to deliver fake but realistic-looking images, videos, or audio recordings.

As the technology has progressed, deepfakes have become more realistic in both appearance and sound. There have been multiple incidents of deepfakes targeting Indian politicians, actors and other personalities. The problem is not the technology but its weaponisation, which, as we have seen, can cause a lot of damage to the social fabric.

Jaspreet Bindra, co-founder of AI&Beyond, explains that deepfakes are becoming increasingly sophisticated, making them challenging to detect. Deep learning algorithms are very good at analyzing facial expressions and body movements, making these fakes incredibly realistic. They can sometimes be detected through visual and auditory irregularities, and there are AI tools to identify them.

The lack of diverse and high-quality training data hinders the development of effective deepfake detection models. Many deepfake detection models suffer from false positives, which can lead to real content being mislabeled as deepfakes, he pointed out.

"It is a battle of AI against AI and will continue forever. Tech companies should work together to develop and implement effective deepfake detection to mitigate the scams. In the current scenario, blockchain-based solutions can be used as an effective solution to weed out deepfakes. They can be used to create tamper-proof records of digital content and ensure authenticity. Big tech firms and tech startups are focused on developing digital watermarks and classifiers to wipe out or manage the problem," he said.
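The tamper-proof-record idea Bindra describes can be illustrated with a hash chain, the basic building block of a blockchain ledger. The sketch below is a minimal, hypothetical illustration (the class and field names are invented for this example, not taken from any real product): each entry stores the hash of the previous entry, so silently altering any past record of a piece of content breaks verification of the whole chain.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ContentLedger:
    """Append-only ledger of content registrations.

    Each entry embeds the hash of the previous entry, so editing or
    removing any earlier record invalidates every later one.
    """

    def __init__(self):
        self.entries = []

    def register(self, content_id: str, content_bytes: bytes) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_id": content_id,
            "content_hash": hashlib.sha256(content_bytes).hexdigest(),
            "prev_hash": prev_hash,
            "timestamp": time.time(),
        }
        entry["entry_hash"] = record_hash(entry)  # hash computed before the field is added
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash; any tampering makes this return False."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev or record_hash(body) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```

A real deployment would distribute such a ledger across many parties so no single operator could rewrite it; the sketch only shows why tampering is detectable at all.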

Bindra noted that around 95 per cent of deepfakes are used to create a wide range of fake content, including fake videos, audio and images. In most cases, women are the victims of deepfakes, resulting in emotional distress.

He also equated deepfakes with 'online acid-attacks' meant to take revenge and dishonour a person.

"Combating deepfakes will require a multifaceted approach, which will go beyond technology. While technology can play a crucial role in detecting and mitigating deepfakes, it is not a substitute for concerted efforts from governments, tech companies and the public," he noted.

AI content creators plus a licensing regime?

Identifying and defining an AI content creator could be a difficult task for lawmakers. And bringing in a licensing regime would complicate things further, according to experts.

Bindra further explains that implementing a licensing system for AI content creators faces several challenges. To begin with, it is important to clearly define who falls under the category of an AI content creator, and whether the requirement applies to individual users or only to professionals and developers.

Regulators would also need to determine the scope of licenses, ensure compliance, enforce regulations, and identify digital watermarking technologies that label AI-generated content and prevent creators from removing those labels.
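One way to make a label hard to strip or forge is to bind it cryptographically to the content itself. The sketch below is a hypothetical illustration, not any regulator's actual scheme: a signing key (here a made-up `SECRET_KEY`, which in practice would be held by a licensing body) produces an HMAC tag over the label plus the content, so a label copied onto different content, or a label altered after the fact, fails verification. Real watermarking goes further by embedding the signal imperceptibly inside the pixels or audio rather than attaching it as metadata.

```python
import hmac
import hashlib

# Hypothetical signing key; in practice held by a licensing authority.
SECRET_KEY = b"registry-signing-key"

def label_content(content: bytes) -> dict:
    """Attach a provenance label whose signature binds it to this exact content."""
    tag = hmac.new(SECRET_KEY, b"AI-GENERATED|" + content, hashlib.sha256).hexdigest()
    return {"label": "AI-GENERATED", "signature": tag}

def check_label(content: bytes, label: dict) -> bool:
    """Verify that the label was issued for this content and not modified."""
    expected = hmac.new(
        SECRET_KEY,
        label["label"].encode() + b"|" + content,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, label["signature"])
```

The design choice here is that verification requires nothing beyond the content and its label: swapping the label text, or moving a valid label to edited content, changes the expected tag and the check fails.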

From an enforcement and responsibility point of view, it is essential to establish a regulatory body to oversee the licensing system, issue and revoke licenses and enforce compliance, and to implement monitoring systems that detect unlicensed AI-generated content and impose penalties for non-compliance, he said.

He also stressed the need for collaboration with industry stakeholders, including AI developers, content creators, and social media platforms, to ensure effective implementation.

When asked whether it's tantamount to pre-censorship, Bindra said that a licensing requirement for AI content creators could potentially raise concerns about free speech and expression.

"Some arguments for and against alignment with these principles include:

Licensing could be seen as a necessary regulation to ensure accountability and transparency in AI-generated content, particularly in cases where it may impact public safety, security or trust.

Licensing might help protect the rights of individuals and organizations affected by AI-generated content, such as those whose likenesses or intellectual property are used without permission.

A licensing requirement could be viewed as a form of pre-publication censorship, where content creators need approval before disseminating their work. This could stifle creativity and limit the free flow of information.

The requirement for a license might have a chilling effect on free speech, as individuals or organizations might self-censor or avoid creating content to avoid the licensing process or potential denial," he said.

Summing up

It is unlikely that anyone will object to efforts to curb fake news and misleading information. However, this is going to be very difficult given the sophistication of technologies, including generative AI, currently swamping almost everyone's timelines. What lawmakers decide to do will also be crucial. As mentioned above, there should be deeper collaboration between lawmakers and internet and tech companies, so that the end user does not have to do the heavy lifting of figuring out what is AI-created and what is not.
