Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/20893
Title: A Scaled Gradient Descent Method for Unconstrained Optimization Problems With A Priori Estimation of the Minimum Value
Authors: D'Alves, Curtis
Advisor: Anand, Christopher
Department: Computer Science
Keywords: Unconstrained Continuous Optimization;Conjugate Gradient;Gradient Descent;Iterative Optimization
Publication Date: 2017
Abstract: This research proposes a novel method of improving the gradient descent method, with the aim of being competitive with applications of the conjugate gradient method while reducing computation per iteration. Iterative methods for unconstrained optimization have found widespread application in digital signal processing for large inverse problems, such as the use of conjugate gradient for parallel image reconstruction in MR imaging. In these problems, very good estimates of the minimum value of the objective function can be obtained by estimating the noise variance in the signal or by using additional measurements. The proposed method uses an estimate of the minimum to derive a scaling for gradient descent at each iteration, thus avoiding the need for a computationally expensive line search. A sufficient condition for convergence is stated and proved, along with an analysis of convergence rates for problems of varying condition number. The method is compared against the gradient descent and conjugate gradient methods. The resulting scaling factor is computationally inexpensive, and the method converges linearly for well-conditioned problems. On difficult non-linear problems the method proves unsuccessful against gradient descent without line-search augmentation; with that augmentation, however, it still outperforms gradient descent in iteration count. The method is also benchmarked against conjugate gradient on linear problems, where it achieves similar convergence for well-conditioned problems even without a line search. (A sketch of the scaling idea follows the record below.)
Description: A scaled gradient descent method intended to compete with applications of the conjugate gradient method, using an a priori estimate of the minimum value
URI: http://hdl.handle.net/11375/20893
Appears in Collections:Open Access Dissertations and Theses
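
The scaling the abstract describes, a step length derived from the gap between the current objective value and an a priori estimate of the minimum, is closely related to the classical Polyak step size. The Python sketch below illustrates that general idea only; the use of the Polyak formula, the function names, and the quadratic test problem are illustrative assumptions, not the thesis's exact derivation or its convergence conditions.

    import numpy as np

    def scaled_gradient_descent(f, grad, x0, f_min_estimate, tol=1e-8, max_iter=10000):
        # Gradient descent scaled by the estimated gap to the minimum value,
        # in the spirit of the classical Polyak step size:
        #   alpha_k = (f(x_k) - f_min) / ||grad f(x_k)||^2
        # No line search is performed. This is a sketch of the general idea,
        # not the method derived in the thesis.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            gnorm2 = float(g @ g)
            if gnorm2 <= tol ** 2:
                break  # gradient is numerically zero: stop
            alpha = (f(x) - f_min_estimate) / gnorm2
            x = x - alpha * g
        return x

    # Usage on a well-conditioned quadratic whose true minimum value is 0:
    A = np.array([[3.0, 0.5],
                  [0.5, 2.0]])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    x_star = scaled_gradient_descent(f, grad, [4.0, -3.0], f_min_estimate=0.0)
    print(x_star)  # converges toward the minimizer [0, 0]

The appeal of this family of methods, as the abstract notes, is that each iteration needs only one gradient and one objective evaluation, with no line search.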

Files in This Item:
File: thesis.pdf (Open Access)
Size: 1.64 MB
Format: Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
