Vol. 19, No. 10, October 31, 2025
DOI: 10.3837/tiis.2025.10.006
Abstract
With advances in Artificial Intelligence (AI) technology, events once seen only in science-fiction films now appear to be unfolding in real life. AI models are used extensively today for a wide range of tasks whose impact on human lives varies greatly in severity, from mundane activities such as online search to critical applications such as crime-rate prediction. One prominent use case is the application of generative AI models across different verticals. When trained on human-generated data, these models may carry, or even exaggerate, the social biases present in that data, and their outputs can adversely impact socially disadvantaged groups. In this paper, we conduct a comprehensive review of existing state-of-the-art techniques for measuring and mitigating social bias in AI modeling and identify the various underlying causes of such biases. We collected research articles through Google Scholar searches using keywords such as ‘social bias in AI text generation’, ‘fairness in language models’, and ‘bias in large language models’, filtered to articles published between 2016 and 2025. We then selected more than 80 relevant articles that either had five or more citations or were published in high-impact journals or in conferences ranked category ‘A’ or higher. We develop a taxonomy of bias-measuring techniques and categorize the existing methods into six major classes. We identify key challenges, including the trade-off between fairness and model utility, generalization across domains, and data acquisition and representation. This paper serves the research community by offering a high-resolution summary of the bottlenecks and actionable opportunities in the present state of the art toward fairer and more transparent generative AI systems.
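To give a concrete flavor of one simple family of bias-measuring techniques that surveys like this one cover — association-based measures over text corpora — the sketch below computes a toy co-occurrence log-ratio between demographic terms and attribute words. This is an illustrative example only, not a method from the paper; the function name, the smoothing choice, and the miniature corpus are all invented for the illustration.

```python
import math
from collections import Counter

def cooccurrence_bias(corpus, group_a, group_b, attributes, window=3):
    """Toy association-based bias score.

    For each attribute word, count how often it co-occurs (within a
    token window) with words from group A versus group B, and return
    the smoothed log-ratio. Positive values mean the attribute skews
    toward group A; negative values mean it skews toward group B.
    """
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    counts[(tok, tokens[j])] += 1
    scores = {}
    for attr in attributes:
        a = sum(counts[(attr, g)] for g in group_a) + 1  # add-one smoothing
        b = sum(counts[(attr, g)] for g in group_b) + 1
        scores[attr] = math.log(a / b)
    return scores

# Invented miniature corpus, purely for demonstration.
corpus = [
    "he is an engineer",
    "he works as an engineer",
    "she is a nurse",
    "she works as a nurse",
]
scores = cooccurrence_bias(corpus, group_a=["he"], group_b=["she"],
                           attributes=["engineer", "nurse"])
```

On this corpus, "engineer" scores positive (skewed toward "he") and "nurse" scores negative (skewed toward "she"). Real measurement techniques in the literature operate on embeddings or model output probabilities rather than raw counts, but the underlying idea of contrasting group-conditioned associations is the same.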
Cite this article
[IEEE Style]
P. Kamboj, S. Kumar, V. Goyal, "Mitigating Social Bias in Generative AI: A Comprehensive Review," KSII Transactions on Internet and Information Systems, vol. 19, no. 10, pp. 3372-3394, 2025. DOI: 10.3837/tiis.2025.10.006.
[ACM Style]
Pradeep Kamboj, Shailender Kumar, and Vikram Goyal. 2025. Mitigating Social Bias in Generative AI: A Comprehensive Review. KSII Transactions on Internet and Information Systems, 19, 10, (2025), 3372-3394. DOI: 10.3837/tiis.2025.10.006.
[BibTeX Style]
@article{tiis:103426, title={Mitigating Social Bias in Generative AI: A Comprehensive Review}, author={Pradeep Kamboj and Shailender Kumar and Vikram Goyal}, journal={KSII Transactions on Internet and Information Systems}, DOI={10.3837/tiis.2025.10.006}, volume={19}, number={10}, year={2025}, month={October}, pages={3372-3394}}