Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/30648
Full metadata record
DC Field [Language]: Value
dc.contributor.advisor: Asoodeh, Shahab
dc.contributor.author: Ghoukasian, Hrad
dc.date.accessioned: 2024-12-19T18:46:34Z
dc.date.available: 2024-12-19T18:46:34Z
dc.date.issued: 2025
dc.identifier.uri: http://hdl.handle.net/11375/30648
dc.description.abstract [en_US]: Machine learning algorithms are increasingly used in high-stakes decision-making tasks, highlighting the need to evaluate their trustworthiness, especially regarding privacy and fairness. Models must protect individual privacy and avoid discriminating against demographic subgroups. Differential Privacy (DP) has become the standard for privacy-preserving machine learning. It is generally divided into central DP, which relies on a trusted curator, and local DP (LDP), where no trusted entity is assumed. The first part of this thesis investigates binary classification under the constraints of both central DP and fairness. We propose an algorithm based on the decoupling technique to learn a classifier that guarantees fairness. This algorithm takes classifiers trained on different demographic groups and produces a single classifier satisfying statistical parity. We then refine this algorithm to incorporate DP. The performance of the resulting algorithm is rigorously analyzed in terms of privacy, fairness, and utility guarantees. Empirical evaluations on the Adult and Credit Card datasets show that our algorithm outperforms state-of-the-art methods in fairness while maintaining the same levels of privacy and utility. The second part of this thesis addresses the design of an optimal pre-processing method based on LDP mechanisms to minimize data unfairness and reduce classification unfairness. For binary sensitive attributes, we derive a closed-form expression for the "optimal" mechanism. For non-binary sensitive attributes, we formulate an optimization problem that, when solved algorithmically, yields the optimal mechanism. Using the notion of discrimination-accuracy optimal classifiers, we theoretically prove that applying these pre-processing mechanisms leads to lower classification unfairness. Empirical evaluations on multiple datasets demonstrate the effectiveness of these mechanisms in reducing classification unfairness, highlighting LDP’s potential as a tool for enhancing fairness. This contrasts with central DP, which has been shown to adversely affect fairness.
dc.language.iso [en_US]: en
dc.subject [en_US]: differential privacy
dc.subject [en_US]: fairness
dc.title [en_US]: CLASSIFICATION ALGORITHMS WITH DIFFERENTIAL PRIVACY AND FAIRNESS GUARANTEES
dc.type [en_US]: Thesis
dc.contributor.department [en_US]: Computing and Software
dc.description.degreetype [en_US]: Thesis
dc.description.degree [en_US]: Master of Science (MSc)
dc.description.layabstract [en_US]: Fairness and privacy are two key concepts in trustworthy machine learning. In high-stakes scenarios, models must protect individual privacy while avoiding discrimination against demographic subgroups. Differential privacy (DP), the standard notion of privacy in machine learning today, is divided into two main categories: central DP and local DP (LDP). The first part of this thesis examines the interplay between central DP and fairness in binary classification, presenting an algorithm that guarantees both privacy and fairness while providing theoretical performance guarantees. This algorithm is evaluated on real-world datasets, showing improved fairness without compromising privacy or utility. The second part introduces an optimal data pre-processing method using LDP to minimize unfairness, demonstrating an application of LDP in reducing unfairness in model predictions. Experiments on various datasets show that this optimal pre-processing outperforms existing LDP-based pre-processing fairness intervention methods and state-of-the-art fairness post-processing, achieving better fairness while maintaining comparable utility, even when compared to non-private scenarios.
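
The abstracts above refer to statistical parity as the fairness criterion and to LDP mechanisms applied to a binary sensitive attribute. As a rough illustration of those two notions only, and not the thesis's decoupling algorithm or its derived optimal pre-processing mechanism, the sketch below measures a statistical parity gap and applies standard epsilon-LDP randomized response; the function names, the epsilon value, and the toy data are assumptions made for this example.

import numpy as np

def statistical_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def randomized_response(bit, epsilon, rng=None):
    """Standard epsilon-LDP randomized response for one binary attribute:
    keep the true bit with probability e^eps / (e^eps + 1), otherwise flip it.
    (Illustrative mechanism; not the optimal mechanism derived in the thesis.)"""
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy binary sensitive attribute and predictions whose positive rate depends on the group.
    sensitive = rng.integers(0, 2, size=1000)
    y_pred = (rng.random(1000) < 0.4 + 0.2 * sensitive).astype(int)
    print("statistical parity gap:", statistical_parity_gap(y_pred, sensitive))
    # Locally privatize the sensitive attribute before any downstream use.
    noisy = np.array([randomized_response(s, epsilon=1.0, rng=rng) for s in sensitive])
    print("fraction of bits flipped:", (noisy != sensitive).mean())
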
Appears in Collections: Open Access Dissertations and Theses

Files in This Item:
File: Ghoukasian_Hrad_2024November_MSc.pdf
Description: Open Access
Size: 1.36 MB
Format: Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
