**The Hindu: Technology**
**IIT Madras Unveils Dataset to Combat AI Bias in Indian Context**
*Chennai, October 8, 2025* – The Indian Institute of Technology (IIT) Madras has launched a dataset aimed at identifying and mitigating bias risks in Large Language Models (LLMs) used in India. The initiative, unveiled at the Centre for Responsible AI (CeRAI) Conclave on AI Governance, addresses the shortcomings of global AI models in recognising and accounting for societal biases prevalent in the Indian subcontinent.
The dataset, named IndiCASA (IndiBias-based Contextually Aligned Stereotypes and Anti-stereotypes), contains 2,575 human-validated sentences spanning caste, gender, religion, disability, and socio-economic status. Researchers hope the resource will enable the development of more equitable and culturally sensitive AI applications.
“The fast-changing scenarios in AI governance and policy could well lead us away from conventional LLMs to more domain-specific LLMs and smaller models,” said V. Kamakoti, Director, IIT Madras, highlighting the need for adaptable AI solutions.
Alongside IndiCASA, CeRAI also introduced several tools designed to foster responsible AI development and deployment. These include:
* **PolicyBot:** An interactive chatbot allowing non-experts to easily navigate complex legal and policy documents related to AI.
* **AI Evaluation Tool:** A framework for consistently and transparently evaluating Conversational AI systems.
* **Co-Intelligence Network:** A global network established with Itihaasa Research and Digital, dedicated to leveraging collaborative intelligence for societal benefit.
CeRAI also released a report, “The Algorithmic-Human Manager: AI, Apps, and Workers in the Indian Gig Economy,” examining the impact of AI on the gig economy. The report highlights the benefits of increased efficiency while also addressing concerns about fairness, transparency, and worker rights. A discussion paper on an “AI Incident Reporting Framework for India” was also presented.
During the Conclave, Abhishek Singh, CEO of India AI Mission, emphasized the government’s focus on regulating AI only in instances where it poses potential harm. He cited deepfakes as an example requiring regulation and stated that sector regulators would address specific AI-related risks within their respective industries.
Srinivasan Parthasarathy, Professor at Ohio State University, stressed the potential of joint human-AI systems, suggesting they consistently outperform either humans or machines alone and can result in safer, more resilient, inclusive, auditable, adaptive, and trustworthy outcomes.
B. Ravindran, Professor and Head, Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT Madras, also spoke at the event.