Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/29695
Title: | Towards eXplainable Artificial Intelligence (XAI) in cybersecurity |
Authors: | Lopez, Eduardo |
Advisor: | Archer, Norm; Sartipi, Kamran |
Department: | Business Administration |
Keywords: | XAI; cybersecurity; IT governance |
Publication Date: | 2024 |
Abstract: | A 2023 cybersecurity research study highlighted the risk that increased technology investment is not matched by a proportional investment in cybersecurity, exposing organizations to greater identity-compromise vulnerabilities and risk. A survey of security professionals found that respondents expected 240% growth in digital identities; 68% were concerned about insider threats from employee layoffs and churn; 99% expected identity compromise due to financial cutbacks, geopolitical factors, cloud adoption and hybrid work; and 74% were concerned about confidential data loss through employees, ex-employees and third-party vendors. In light of the continuing growth of this type of criminal activity, those responsible for keeping such risks under control have no alternative but to adopt ever stronger defensive measures to prevent these incidents and the business losses they cause. This research project explores a real-life case study: an Artificial Intelligence (AI) information systems solution implemented in a mid-size organization facing significant cybersecurity threats. A holistic approach was taken, in which AI was complemented with key non-technical elements such as organizational structures, business processes, standard operating documentation and training, all oriented towards driving behaviours conducive to a strong cybersecurity posture for the organization. Using Design Science Research (DSR) guidelines, the process of conceptualizing, designing, planning and implementing the AI project is richly described from both a technical and an information systems perspective. In alignment with DSR, key artifacts are documented in this research, such as a model for AI implementation that can create significant value for practitioners.
The research results illustrate how essential an iterative, data-driven approach to development and operations is, with explainability and interpretability taking centre stage in driving adoption and trust. This case study highlights how critical communication, training and cost-containment strategies can be to the success of an AI project in a mid-size organization. |
URI: | http://hdl.handle.net/11375/29695 |
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Lopez_Eduardo_A_finalsubmission2024April_PhD.pdf | | 12.45 MB | Adobe PDF | View/Open |
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.