During the last year or so, I’ve been quite interested in the issue of fairness in machine learning. This area is especially personal for me, as it lies at the confluence of several of my interests:
- My lifelong work in probability theory, mathematical statistics and statistical methodology (in which I include ML).
- My lifelong activism aimed at achieving social justice.
- My extensive service as an expert witness in litigation involving discrimination (including a landmark age discrimination case, Reid v. Google).
(Further details in my bio.) I hope I will be able to make valued contributions.
The first of my two papers in the fair ML area is now on arXiv. The second should be ready in a couple of weeks.
The present paper, written with my former student Wenxi Zhang, is titled "A Novel Regularization Approach to Fair ML." It is applicable to linear models, random forests and k-NN, and could be adapted to other ML models.
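The paper itself spells out our method; purely as a rough illustration of the general idea of achieving fairness through regularization (and not our actual formulation), one can fit a linear model with an added penalty that discourages the predictions from covarying with a sensitive attribute. Everything here, including the `fair_ridge` name and the penalty weight `lam`, is a hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
s = rng.integers(0, 2, n).astype(float)          # sensitive attribute (e.g. group membership)
X = rng.normal(size=(n, p)) + 0.8 * s[:, None]   # features correlated with s (proxies)
y = X @ np.array([1.0, -2.0, 0.5]) + 2.0 * s + rng.normal(size=n)

def fair_ridge(X, y, s, lam):
    # Minimize ||Xw - y||^2 + lam * (v^T w)^2, where v = X^T sc and sc is the
    # centered sensitive attribute; large lam forces predictions Xw to be
    # nearly uncorrelated with s. Normal equations: (X^T X + lam v v^T) w = X^T y.
    sc = (s - s.mean()) / len(s)
    v = X.T @ sc
    A = X.T @ X + lam * np.outer(v, v)
    return np.linalg.solve(A, X.T @ y)

w_plain = fair_ridge(X, y, s, 0.0)   # ordinary least squares
w_fair = fair_ridge(X, y, s, 1e6)    # heavy fairness penalty

sc = s - s.mean()
cov_plain = abs(sc @ (X @ w_plain)) / n
cov_fair = abs(sc @ (X @ w_fair)) / n
print(cov_fair < cov_plain)   # the penalty shrinks the prediction/s covariance
```

The tuning parameter `lam` trades predictive accuracy against fairness, the same kind of utility/fairness dial that any regularization-based approach exposes.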
Please try the accompanying software package out on your favorite fair ML datasets. Feedback, both on the method and on the software, would be greatly appreciated.