New Study Exploring Gender Bias in Large Language Models in India
Digital Futures Lab is thrilled to share our latest study, ‘From Code to Consequence: Interrogating Gender Biases in LLMs within the Indian Context’!
It examines the multiple layers of gender bias in Indian-language LLMs used in social-sector applications. Supported by the Bill & Melinda Gates Foundation, this work offers a wide-ranging set of recommendations for LLM developers, governments, and philanthropies to enhance gender equity in LLM development and use.
Some of the key highlights of this work are:
It identifies potential sources of gender bias and inequity across the lifecycle of LLMs in India – from the problem-discovery stage through to final application rollout.
It applies a gender lens to the emerging space of indigenous LLMs – examining potential concerns in LLMs pre-trained for Indian languages.
It offers practical tools for organisations building LLM-based chatbots to conduct gender-focused user research.
It provides a range of system-level strategies and considerations to help governments and philanthropic organisations enable the development of gender-responsive LLMs at an institutional level.
We are thankful to our advisory board — Saurabh Karn, Kalika Bali, and Aditya Vashistha — for their direction and timely input, as well as to our reviewers — Soma Dhavala, Sara Chamberlain, and Maya Indira Ganesh — whose guidance and feedback have been instrumental in shaping our research outputs.
For more details, visit the project site!