Case Study: Using Synthetic Datasets to Examine Bias in Machine Learning Algorithms for Resume Screening
The increasing use of artificial intelligence (AI) in recruitment, particularly through resume-screening algorithms, raises significant ethical concerns because of the potential for biased decision-making. This case study explores these issues through a synthetic dataset modeled on the Amazon hiring-tool controversy, in which biases in the training data led to discriminatory outcomes. Using artificial resumes reflecting a diverse applicant pool, students trained and interacted with a machine learning algorithm that, despite excluding explicit demographic information, exhibited bias against underrepresented groups. The exercise highlights the ethical implications of deploying AI in decision-making processes and equips students with problem-solving techniques for addressing such challenges. Initially introduced in a graduate-level ethics course, the case study offers a framework for teaching the intersection of technology and ethics, providing undergraduate and graduate engineering students with valuable lessons in recognizing and mitigating bias in AI systems.
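The core phenomenon the abstract describes, a model that discriminates even after explicit demographic fields are removed, can be illustrated with a small sketch. The generator, feature names, and parameters below are hypothetical (they are not the dataset or code from the case study): a proxy feature correlated with group membership (e.g., an activity keyword on a resume) lets a classifier trained on historically biased hiring labels reproduce the bias, even though the group column itself is never given to the model.

```python
import math
import random

random.seed(0)

def make_resume(group):
    """Hypothetical synthetic resume: same skill distribution for both
    groups, but a proxy keyword correlated with group membership and
    hiring labels shaped by historical bias against group B."""
    skill = random.gauss(0.0, 1.0)
    # Proxy feature (e.g., a gendered activity keyword), not an
    # explicit demographic field.
    proxy = 1 if random.random() < (0.8 if group == "B" else 0.1) else 0
    historical_bias = -1.0 if group == "B" else 0.0
    hired = 1 if skill + historical_bias + random.gauss(0.0, 0.5) > 0 else 0
    return {"skill": skill, "proxy": proxy, "group": group, "hired": hired}

data = [make_resume(random.choice("AB")) for _ in range(2000)]

# Train a tiny logistic regression on (skill, proxy) only -- the group
# attribute is deliberately excluded, mirroring "blind" screening.
w0 = w1 = b = 0.0
lr, n = 0.5, len(data)
for _ in range(200):
    g0 = g1 = gb = 0.0
    for r in data:
        p = 1.0 / (1.0 + math.exp(-(w0 * r["skill"] + w1 * r["proxy"] + b)))
        err = p - r["hired"]
        g0 += err * r["skill"]
        g1 += err * r["proxy"]
        gb += err
    w0 -= lr * g0 / n
    w1 -= lr * g1 / n
    b -= lr * gb / n

def screened_in(r):
    return w0 * r["skill"] + w1 * r["proxy"] + b > 0

# Selection rate per group: despite identical skill distributions,
# the learned negative weight on the proxy penalizes group B.
rates = {}
for g in "AB":
    grp = [r for r in data if r["group"] == g]
    rates[g] = sum(screened_in(r) for r in grp) / len(grp)
print(rates)
```

Printing the per-group selection rates shows group B screened in markedly less often than group A, the pattern students are asked to detect and then mitigate in the exercise.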