Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/25113
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Wassyng, Alan | - |
dc.contributor.advisor | Lawford, Mark | - |
dc.contributor.author | Deevy, Spencer | - |
dc.date.accessioned | 2019-12-12T21:16:38Z | - |
dc.date.available | 2019-12-12T21:16:38Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://hdl.handle.net/11375/25113 | - |
dc.description.abstract | Trends in the automotive industry indicate rapid adoption of artificial intelligence techniques such as machine learning algorithms, enabling increasingly capable autonomous vehicles. However, the major focus has been on improving the performance and accuracy of these techniques, with comparatively little development of corresponding safety systems. Artificial intelligence techniques are characterized by high complexity, high variability, and low diagnosability. These issues all pose risks to the safety of autonomous vehicles and must be taken into consideration as we move towards fully autonomous vehicles. Sentinel, a fault-tolerant software architecture, is presented as the main contribution of this thesis. Sentinel has been designed to mitigate safety concerns surrounding artificial intelligence techniques employed by upcoming SAE J3016 Level 5 autonomous vehicles. The architecture design process involved careful consideration of issues inherent to artificial intelligence techniques used in autonomous vehicles, along with their corresponding mitigation strategies. Following this, a survey of software architectures was conducted, drawing inspiration from existing autonomous vehicle architectures as well as architectures in the related domains of artificial intelligence, organic computing, and robotics. These existing architectures were then iteratively combined, guided by an autonomous vehicle hazard analysis, resulting in the final architecture. Additionally, an assurance case was constructed to delineate the assumptions and evidence required to justify the continued safety of autonomous vehicles employing the Sentinel architecture. This work provides a safety-oriented framework towards fully autonomous vehicles. | en_US |
dc.language.iso | en | en_US |
dc.title | Sentinel: A Software Architecture for Safe Artificial Intelligence in Autonomous Vehicles | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Computing and Software | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Master of Applied Science (MASc) | en_US |
dc.description.layabstract | Artificial intelligence techniques are enabling increased autonomy in vehicles; however, a major concern that has yet to be addressed is how to ensure vehicle safety while utilizing potentially volatile artificial intelligence. This issue, together with the lack of existing high-level roadmaps towards fully autonomous vehicles, is the driving factor behind this work. We provide Sentinel, a safety-oriented software architecture for autonomous vehicles utilizing artificial intelligence techniques. The literature at the intersection of artificial intelligence and autonomous vehicles was reviewed to identify unique safety considerations and the mitigation strategies that could be applied to them. The Sentinel architecture was then synthesized from the findings of this review, while also taking into consideration design choices that would increase industry adoption of Sentinel. An assurance case was then constructed to determine the assumptions and evidence required to justify that the architecture does not negatively impact the safety or reliability of a vehicle employing the Sentinel design. | en_US |
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Deevy_Spencer_R_201912_MASc.pdf | | 10.25 MB | Adobe PDF | View/Open |
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.