Dynamic Bias Mitigation for Multimodal AI in Recruitment: Ensuring Fairness and Equity in Hiring Practices

Authors

  • Kiran Kumar Reddy Yanamala, Central Michigan University

Abstract

This study proposes an adaptive bias mitigation framework for AI-driven recruitment systems, designed to detect and correct biases dynamically in real time and to ensure fairness across demographic groups. Leveraging the FairCVtest dataset, which includes diverse multimodal data such as resumes, social media profiles, and video interviews, the framework integrates real-time bias detection with adaptive algorithms that continuously adjust decision-making processes. The results demonstrate the framework's effectiveness in mitigating gender and ethnicity biases while maintaining accuracy in recruitment decisions. This approach addresses the limitations of traditional, static bias mitigation techniques by offering a dynamic and responsive solution tailored to the complexities of multimodal AI systems. The study contributes to the ongoing discourse on ethical AI in recruitment, emphasizing the need for transparent, fair, and inclusive hiring practices. Future work will focus on refining the adaptive mechanisms and exploring broader applications of this framework in various industry contexts.
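To make the abstract's core idea concrete, the sketch below shows one common way such a loop can work: monitor a standard fairness metric (the demographic parity difference, i.e., the gap in selection rates between groups) over recent decisions and nudge per-group decision thresholds whenever the gap exceeds a tolerance. All class names, parameters, and update rules here are illustrative assumptions for exposition; they are not the paper's actual algorithm or the FairCVtest pipeline.

```python
def demographic_parity_diff(decisions, groups):
    """Gap between the highest and lowest per-group selection rate (0 = parity).

    decisions: list of booleans (True = candidate advanced)
    groups:    parallel list of group labels
    """
    rates = {}
    for g in set(groups):
        selected = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]


class AdaptiveThreshold:
    """Hypothetical adaptive mitigator: per-group score thresholds that are
    adjusted whenever the observed disparity exceeds a tolerance."""

    def __init__(self, base=0.5, tol=0.05, step=0.01):
        self.thresholds = {}  # group label -> current decision threshold
        self.base, self.tol, self.step = base, tol, step

    def decide(self, score, group):
        # New groups start at the base threshold.
        return score >= self.thresholds.setdefault(group, self.base)

    def update(self, decisions, groups):
        # If disparity exceeds the tolerance, ease the threshold for the
        # under-selected group and tighten it for the over-selected one.
        rates = {
            g: sum(d for d, gr in zip(decisions, groups) if gr == g)
               / sum(1 for gr in groups if gr == g)
            for g in set(groups)
        }
        if max(rates.values()) - min(rates.values()) > self.tol:
            self.thresholds[min(rates, key=rates.get)] -= self.step
            self.thresholds[max(rates, key=rates.get)] += self.step
```

In use, `decide` would score each incoming candidate and `update` would run periodically over a sliding window of recent outcomes, so the correction tracks drift in the candidate pool rather than being fixed at training time.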


Published

2022-12-18

How to Cite

Yanamala, K. K. R. (2022). Dynamic bias mitigation for multimodal AI in recruitment: Ensuring fairness and equity in hiring practices. Journal of Artificial Intelligence and Machine Learning in Management, 6(2), 51–61. Retrieved from https://journals.sagescience.org/index.php/jamm/article/view/169