Reading Note: Noise-tolerant fair classification

This post is a reading note on "Noise-tolerant fair classification" by Lamy et al. The paper mainly focuses on the question of whether one can still learn fair classifiers when the sensitive features (e.g., race or gender) are only observed with noise. A quick answer is yes. The authors claim that if one measures fairness using the mean-difference score, and the sensitive features are subject to noise from the mutually contaminated learning model, then owing to a simple identity one only needs to rescale the desired fairness tolerance.

To understand this paper, we must first review the following two major topics:

1. Mutually contaminated learning
2. Fairness-aware learning

where mutually contaminated learning is a model for learning from samples with corrupted labels.

Mutually contaminated learning

In the framework of learning from mutually contaminated distributions (MC learning), instead of observing samples from the "true" (or "clean") joint distribution 𝐷, one