
CLASSIFICATION ALGORITHMS WITH DIFFERENTIAL PRIVACY AND FAIRNESS GUARANTEES


Abstract

Machine learning algorithms are increasingly used in high-stakes decision-making tasks, highlighting the need to evaluate their trustworthiness, especially regarding privacy and fairness. Models must protect individual privacy and avoid discriminating against demographic subgroups. Differential Privacy (DP) has become the standard for privacy-preserving machine learning. It is generally divided into central DP, which relies on a trusted curator, and local DP (LDP), where no trusted entity is assumed. The first part of this thesis investigates binary classification under the constraints of both central DP and fairness. We propose an algorithm based on the decoupling technique to learn a classifier that guarantees fairness. This algorithm takes classifiers trained on different demographic groups and produces a single classifier satisfying statistical parity. We then refine this algorithm to incorporate DP. The performance of the resulting algorithm is rigorously analyzed in terms of privacy, fairness, and utility guarantees. Empirical evaluations on the Adult and Credit Card datasets show that our algorithm outperforms state-of-the-art methods in fairness while maintaining the same levels of privacy and utility.

The second part of this thesis addresses the design of an optimal pre-processing method based on LDP mechanisms to minimize data unfairness and reduce classification unfairness. For binary sensitive attributes, we derive a closed-form expression for the "optimal" mechanism. For non-binary sensitive attributes, we formulate an optimization problem that, when solved algorithmically, yields the optimal mechanism. We theoretically prove that applying these pre-processing mechanisms leads to lower classification unfairness using the notion of discrimination-accuracy optimal classifiers. Empirical evaluations on multiple datasets demonstrate the effectiveness of these mechanisms in reducing classification unfairness, highlighting LDP's potential as a tool for enhancing fairness. This contrasts with central DP, which has been shown to adversely affect fairness.
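The decoupling idea from the first part can be illustrated with a minimal sketch (hypothetical function names; this is not the thesis' DP-aware algorithm, and the privacy step is omitted): given prediction scores from classifiers trained separately on each demographic group, choose a group-specific decision threshold so that every group shares a common positive-prediction rate, i.e. statistical parity.

```python
def group_positive_rate(scores, threshold):
    """Fraction of a group's examples predicted positive at a threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def parity_thresholds(scores_by_group, target_rate):
    """For each group, pick the score threshold whose positive rate is
    closest to a common target rate, so all groups are treated at
    (approximately) the same rate -- statistical parity."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        candidates = sorted(set(scores))
        thresholds[group] = min(
            candidates,
            key=lambda t: abs(group_positive_rate(scores, t) - target_rate),
        )
    return thresholds

# Toy scores from two hypothetical per-group classifiers.
scores_by_group = {
    "A": [0.9, 0.8, 0.7, 0.2, 0.1],
    "B": [0.6, 0.5, 0.4, 0.3, 0.2],
}
thresholds = parity_thresholds(scores_by_group, target_rate=0.4)
# Both groups now have a 0.4 positive rate at their own threshold.
```

In this toy setting the two groups end up with identical positive rates; the thesis' algorithm additionally has to trade this fairness constraint off against utility and central-DP noise.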
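The thesis' closed-form optimal mechanism is not reproduced here; as a minimal illustration of the kind of LDP pre-processing involved for a binary sensitive attribute, the classic randomized-response mechanism satisfies epsilon-LDP by reporting the true bit with probability e^eps / (e^eps + 1) and flipping it otherwise, and population statistics can still be recovered with an unbiased estimator.

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """epsilon-LDP randomized response for a binary attribute:
    keep the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true mean of the bits from noisy reports.
    Since E[report] = (2p - 1) * mu + (1 - p) with p = e^eps / (e^eps + 1),
    invert that affine map on the observed mean."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```

With epsilon = ln(3), each bit is flipped with probability 1/4, which perturbs the group labels seen by any downstream classifier while still allowing aggregate rates to be estimated.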
