Algorithmic Fairness

Dana Pessach*, Erez Shmueli

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

20 Scopus citations

Abstract

An increasing number of decisions affecting the daily lives of human beings are being made by artificial intelligence (AI) and machine learning (ML) algorithms, in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, and the provision of loans. Since these algorithms now touch on so many aspects of our lives, it is crucial to develop ML systems that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness (negatively affecting members of one group more than others), even when there is no intention for it. This chapter presents an overview of the main concepts involved in identifying, measuring, and improving algorithmic fairness when using ML algorithms. The chapter begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures of fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process, and post-process mechanisms. A comprehensive comparison of the mechanisms follows, toward a better understanding of which mechanisms should be used in different scenarios. Finally, the chapter describes the fairness-related datasets most commonly used in this field.
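The group-fairness measures surveyed in the chapter are typically defined over a classifier's predictions and a protected attribute. As a minimal illustrative sketch (not taken from the chapter), the snippet below computes two widely used measures, statistical parity difference and equal opportunity difference, for a binary classifier and a binary protected group; the function names and example data are hypothetical.

```python
# Illustrative sketch: two common group-fairness measures for a binary
# classifier and a binary protected attribute (values 0 and 1).
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Hypothetical labels, predictions, and protected-group memberships.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))        # negative: group 1 favored
print(equal_opportunity_difference(y_true, y_pred, group)) # difference in TPRs
```

Under both criteria, a value near zero indicates that the two groups are treated similarly; pre-, in-, and post-process mechanisms aim to push such gaps toward zero while limiting the loss in accuracy.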

Original language: English
Title of host publication: Machine Learning for Data Science Handbook
Subtitle of host publication: Data Mining and Knowledge Discovery Handbook, Third Edition
Publisher: Springer International Publishing
Pages: 867-886
Number of pages: 20
ISBN (Electronic): 9783031246289
ISBN (Print): 9783031246272
DOIs
State: Published - 1 Jan 2023
