Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/31980
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Ahmed, Ryan | - |
dc.contributor.author | Ismail, Mohanad | - |
dc.date.accessioned | 2025-07-16T20:13:02Z | - |
dc.date.available | 2025-07-16T20:13:02Z | - |
dc.date.issued | 2025 | - |
dc.identifier.uri | http://hdl.handle.net/11375/31980 | - |
dc.description.abstract | EVs live and die by their batteries. To keep drivers safe and confident in their vehicles, we need efficient, accurate, and private ways to track each battery's state of health (SoH). But labelled EV data is scarce, sharing raw data raises privacy concerns, and large models strain on-board hardware. This thesis tackles all three problems with a two-step remedy. 1. Learn data representations without labels: each car trains a small autoencoder to reconstruct its own collected sensor data after randomly hiding parts of the signal. 2. Share knowledge, not data: instead of uploading the raw collected data, every car sends only its trained model parameters to a remote cloud server, which aggregates the parameters from all cars and sends the improved model back. (Minimal sketches of both steps follow the metadata table below.) Four simple questions guide our work: 1. Does this use of unlabelled data improve the model's performance? 2. How much of the signal should be hidden to get the best representation learning? 3. What is the best strategy for incorporating the limited labelled data available into the model? 4. Does aggregating separately trained models hurt accuracy compared with a fully centralized approach? Our experiments show a 17% lower average MAE, with up to a 60% improvement in the best cases, when we make use of the available unlabelled data versus training exclusively on labelled data. Hiding 30-40% of the signal strikes the best balance between challenge and clarity. Finally, aggregating models stays within 0.05 Ah of centralized training on average, virtually no loss, with zero raw-data exposure. This thesis combines cloud computing, self-supervised learning (SSL), and federated learning (FL) into a lightweight, privacy-friendly pipeline for fleet-wide SoH estimation; presents evidence that unfrozen fine-tuning outperforms frozen variants; offers the first systematic look at how the masking ratio shapes battery time-series representation learning; and gives practical proof that sharing model weights instead of data leaves accuracy essentially untouched and privacy intact. | en_US |
dc.language.iso | en | en_US |
dc.subject | Electric Vehicle | en_US |
dc.subject | State-of-Health Estimation | en_US |
dc.subject | Self-Supervised Learning | en_US |
dc.subject | Masked Autoencoding | en_US |
dc.subject | Federated Learning | en_US |
dc.subject | Cloud Computing | en_US |
dc.subject | Edge Computing | en_US |
dc.subject | Fine-Tuning Strategies | en_US |
dc.subject | Masking Ratio Optimization | en_US |
dc.subject | Data Scarcity and Heterogeneity | en_US |
dc.subject | Privacy-Preserving Machine Learning | en_US |
dc.title | Self-Supervised Masked Autoencoding Meets Federated Learning for Electric Vehicle Battery State-of-Health Estimation | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Mechanical Engineering | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Master of Applied Science (MASc) | en_US |
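
The abstract's first step is masked autoencoding over on-board sensor signals. Below is a minimal sketch of that idea, assuming a PyTorch setup; the `SensorAutoencoder` class, the window length, and the 35% masking ratio (inside the 30-40% band the abstract reports as best) are illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch of the masked-autoencoding step described in the abstract.
# All names (SensorAutoencoder, mask_ratio, window sizes) are illustrative.
import torch
import torch.nn as nn

class SensorAutoencoder(nn.Module):
    """Small autoencoder over fixed-length windows of battery sensor signals."""
    def __init__(self, window: int = 128, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, window)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def masked_reconstruction_loss(model, x, mask_ratio=0.35):
    """Hide a random fraction of each window, reconstruct, score on the hidden part."""
    mask = torch.rand_like(x) < mask_ratio   # True where the signal is hidden
    x_masked = x.masked_fill(mask, 0.0)      # zero out the hidden samples
    recon = model(x_masked)
    # Loss only on masked positions: the model must infer what it never saw.
    return ((recon - x) ** 2)[mask].mean()

# Usage: one local training step on a car's own unlabelled sensor windows.
model = SensorAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(16, 128)                 # stand-in for real sensor windows
opt.zero_grad()
loss = masked_reconstruction_loss(model, batch)
loss.backward()
opt.step()
```

Because the loss is computed only on the hidden positions, no labels are needed; the signal itself supervises the training, which is what lets each car learn from its abundant unlabelled data.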
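The abstract's second step is sharing model weights rather than data. Here is a minimal FedAvg-style aggregation sketch under the same assumptions, reusing `SensorAutoencoder` from the sketch above; the `aggregate` helper and three-car fleet are hypothetical stand-ins for the thesis's cloud server and fleet.

```python
# Minimal FedAvg-style aggregation sketch (hypothetical; the thesis's actual
# aggregation scheme may differ). Reuses SensorAutoencoder from above.
import torch

def aggregate(state_dicts):
    """Element-wise average of the model parameters uploaded by each car."""
    return {
        name: torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
        for name in state_dicts[0]
    }

# Usage: three cars train locally, the server averages their weights, and
# every car loads the shared result -- no raw sensor data leaves any car.
cars = [SensorAutoencoder() for _ in range(3)]
global_weights = aggregate([car.state_dict() for car in cars])
for car in cars:
    car.load_state_dict(global_weights)
```

Only the parameter tensors cross the network, which is why the abstract can claim zero raw-data exposure while staying within 0.05 Ah of centralized training.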
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
ismail_mohanad_2025july_masc.pdf | | 15.74 MB | Adobe PDF |
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.