
Neural network training loss optimization utilizing the sliding innovation filter

dc.contributor.author: Alsadi N
dc.contributor.author: Hilal W
dc.contributor.author: Surucu O
dc.contributor.author: Giuliano A
dc.contributor.author: Gadsden SA
dc.contributor.author: Yawney J
dc.contributor.author: Al-Shabi M
dc.contributor.department: Mechanical Engineering
dc.contributor.editor: Pham T
dc.contributor.editor: Solomon L
dc.contributor.editor: Hohil ME
dc.date.accessioned: 2025-03-03T17:26:07Z
dc.date.available: 2025-03-03T17:26:07Z
dc.date.issued: 2022-06-06
dc.date.updated: 2025-03-03T17:26:07Z
dc.description.abstract: Artificial feedforward neural networks (ANNs) have traditionally been trained by backpropagation algorithms based on gradient descent, which optimize the network's weights and parameters during the training phase to minimize the out-of-sample error during testing. However, gradient descent (GD) has been shown to be slow and computationally inefficient compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF) used as optimizers in ANNs. In this paper, a new method of training ANNs is proposed utilizing the sliding innovation filter (SIF). The SIF, introduced by Gadsden et al., has been demonstrated to be a more robust predictor-corrector than the Kalman filters, especially in ill-conditioned situations or in the presence of modelling uncertainties. We propose implementing the SIF as an optimizer for training ANNs. The proposed ANN is trained with the SIF to predict the Mackey-Glass chaotic series, and results demonstrate that the proposed method improves computation time compared to current estimation strategies for training ANNs while achieving accuracy comparable to a UKF-trained neural network.
dc.identifier.doi: https://doi.org/10.1117/12.2619029
dc.identifier.isbn: 978-1-5106-5102-9
dc.identifier.issn: 0277-786X
dc.identifier.issn: 1996-756X
dc.identifier.uri: http://hdl.handle.net/11375/31309
dc.publisher: SPIE, the international society for optics and photonics
dc.subject: 4006 Communications Engineering
dc.subject: 40 Engineering
dc.subject: 4009 Electronics, Sensors and Digital Hardware
dc.title: Neural network training loss optimization utilizing the sliding innovation filter
dc.type: Article
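The abstract describes replacing gradient descent with the SIF's predictor-corrector update, treating the network weights as the filter state and the training target as the measurement. A minimal single-step sketch of that idea follows; it is not the paper's implementation, and the gain form K = H⁺·sat(|e|/δ), the one-neuron tanh model, and δ = 0.5 are all illustrative assumptions:

```python
import numpy as np

def sat(x):
    """Saturation function, clipping to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

def predict(w, x):
    """Single tanh neuron: the 'measurement model' whose state is the weights w."""
    return np.tanh(x @ w)

rng = np.random.default_rng(0)
w = rng.normal(size=3)           # network weights, treated as the filter state
delta = 0.5                      # sliding boundary layer width (tuning parameter)

x = np.array([0.2, -0.1, 0.4])   # one input sample
y = 0.3                          # its target output

e_before = y - predict(w, x)               # innovation (prediction error)
H = (1.0 - predict(w, x) ** 2) * x         # Jacobian of the output w.r.t. w
# SIF-style gain (assumed form): pseudoinverse of H scaled by the saturated,
# normalized innovation, which bounds the correction near the sliding boundary.
K = np.linalg.pinv(H[None, :]) * sat(abs(e_before) / delta)
w = w + K[:, 0] * e_before                 # corrected state (weight) estimate
e_after = y - predict(w, x)
```

Because the saturation factor lies in (0, 1], the correction is a damped step along the pseudoinverse direction, so a single update shrinks the innovation rather than overshooting it; in a training loop this step would be repeated over the dataset.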

Files

Original bundle

Name: 124-121131Z.pdf
Size: 1.71 MB
Format: Adobe Portable Document Format
Description: Published version