Watchdog Report Alleges Use of AI to Target Minorities, Expand Surveillance

The study raises concerns over the political misuse of generative AI and weak safeguards

NEW DELHI – A joint report, released days before the India AI Impact Summit 2026 (Feb 16-20), has raised serious concerns over the political and social misuse of artificial intelligence in India, particularly its impact on Muslim communities.

The report, titled “India AI Impact Summit 2026: AI Governance in the Age of Democratic Backsliding,” was published by the Internet Freedom Foundation and the Centre for the Study of Organised Hate. It claims that generative AI tools are being used to spread anti-minority narratives, strengthen surveillance systems and influence the electoral process, while transparency and regulation remain weak.

The report alleges that generative AI is being used by political actors to deepen social divisions and target minorities, especially Muslims.

It cites an example from Assam, where the state unit of the Bharatiya Janata Party shared an AI-generated video on its official social media account depicting Assam Chief Minister Himanta Biswa Sarma shooting two Muslim men. The clip was captioned “No Mercy”.

The authors of the report described the video as “inflammatory content that can pose a serious threat to social harmony”.

A senior member of the Internet Freedom Foundation said, “When political actors use AI to depict violence against a specific religious community, it sends a dangerous message. It normalises hate and creates fear among citizens.”

The report also mentions similar examples in Delhi, Chhattisgarh and Karnataka, where AI tools were allegedly used for political messaging.

For many Indian Muslims, such developments are worrying. A community activist in Delhi said, “We already face suspicion in many spaces. When technology is used to show violence against us, even if it is fake, it increases anxiety and makes people feel unsafe.”

Weak Safeguards

The report points to gaps in safeguards within popular generative AI systems. It notes that widely used text-to-image tools such as Meta AI, Microsoft Copilot, OpenAI ChatGPT and Adobe Firefly lack effective controls when it comes to Indian languages and local social context.

According to the study, these tools sometimes respond to prompts in ways that may reinforce stereotypes against certain communities.

A researcher associated with the report said, “Content moderation systems are often designed with Western contexts in mind. They do not fully understand Indian political signals, dog whistles, or coded language. This gap can allow harmful content to circulate.”

The report also criticises social media platforms and AI companies for what it describes as poor enforcement of community guidelines, saying that harmful content often spreads before it is taken down, if at all.

Surveillance Measures

The report also raises concerns over surveillance measures. It refers to a statement by Maharashtra Chief Minister Devendra Fadnavis about the development of an AI tool in collaboration with the Indian Institute of Technology Bombay.

The tool is reportedly intended to help identify alleged illegal Bangladeshi immigrants and Rohingya refugees through initial screening based on language and accent.

Linguistic experts have questioned the reliability of such a system. One academic, quoted in the report, said, “Bengali dialects across borders share deep similarities. It is extremely difficult, if not impossible, to determine nationality accurately through accent alone.”

The report warns that such measures may increase the risk of discrimination against Bengali-speaking Muslims in India.

A lawyer working on citizenship cases said, “When technology is used to flag people based on how they speak, the burden falls on poor and marginalised citizens to prove they belong. That is a heavy burden, especially for daily wage workers and rural families.”

Facial Recognition and Policing

Another key concern raised in the report is the use of facial recognition technology (FRT) by police forces across several states.

The study states that there is little public information about how these systems are procured, how accurate they are and how errors are handled. It warns that cases of mistaken identity can have serious consequences, particularly when linked to criminal investigations.

A digital rights advocate said, “If a facial recognition system wrongly matches a person, that error can follow them for years. For minorities who already face profiling, the risks are higher.”

The report argues that there is no clear and effective complaint mechanism for individuals wrongly flagged by AI systems.

Welfare Schemes and Algorithmic Exclusion

The report also highlights problems in welfare delivery. It claims that flaws in AI systems have led to the exclusion of eligible beneficiaries from government schemes in several states.

According to the authors, opaque algorithms and automated decision-making systems are being deployed without public consultation. Citizens are then required to prove their eligibility when the systems flag them as ineligible.

A social worker in Uttar Pradesh said, “Many families do not understand why their ration or pension stops. They are told the system has rejected them. There is no clear explanation and no simple way to appeal.”

The report suggests that such systems can disproportionately affect poor Muslims and other marginalised communities who rely heavily on state welfare.

Concerns Over the Electoral Process

The study also touches on the electoral process. It raises questions about the lack of transparency in software used to mark “suspicious” voters.

According to the report, there is limited clarity on how voters are flagged, how data is verified and what safeguards exist to prevent errors.

A constitutional expert said, “The right to vote is fundamental. If automated systems are used without transparency, citizens may have to go through long legal processes just to protect their voting rights.”

Community leaders have expressed concern that Muslims, who often face scrutiny in citizenship-related matters, could be affected if flawed systems are used in voter verification.

Democratic Safeguards

The report closes with several recommendations for governments, industry and civil society.

These include transparent policy-making, independent review of algorithms, strong human oversight, clear complaint systems and alignment with international human rights standards.

A representative of the Centre for the Study of Organised Hate said, “Artificial intelligence should serve people, not target them. Governance must be rooted in constitutional values and equal rights.”

As the India AI Impact Summit 2026 approaches in New Delhi, the report has added urgency to the debate on how AI is being used in India.

For many Indian Muslims, the core concern is not technology itself, but how it is used.

A young student in Mumbai summed up the mood, saying, “We are not against technology. We just want fairness. We want to know that new tools will not be used to single us out.”

The report concludes that aligning AI governance with democratic values and fundamental rights is essential if trust is to be maintained in a diverse country like India.
