Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/30992
Full metadata record

dc.contributor.advisor: Kratsios, Anastasis
dc.contributor.author: Hong, Ruiyang
dc.date.accessioned: 2025-01-29T20:11:00Z
dc.date.available: 2025-01-29T20:11:00Z
dc.date.issued: 2024
dc.identifier.uri: http://hdl.handle.net/11375/30992
dc.description.abstract (en_US): The foundations of deep learning are supported by the seemingly opposing perspectives of approximation or learning theory. The former advocates for large/expressive models that need not generalize, while the latter considers classes that generalize but may be too small/constrained to be universal approximators. Motivated by real-world deep learning implementations that are both expressive and statistically reliable, we ask: "Is there a class of neural networks that is both large enough to be universal but structured enough to generalize?" This paper constructively provides a positive answer to this question by identifying a highly structured class of ReLU multilayer perceptrons (MLPs), which are optimal function approximators and are statistically well-behaved. We show that any L-Lipschitz function from [0,1]ᵈ to [−n,n] can be approximated to a uniform Ld/(2n) error on [0,1]ᵈ with a sparsely connected L-Lipschitz ReLU MLP of width 𝒪(dnᵈ), depth 𝒪(log(d)), with 𝒪(dnᵈ) nonzero parameters, and whose weights and biases take values in {0, ±1/2} except in the first and last layers, which instead have magnitude at most n. Unlike previously known "large" classes of universal ReLU MLPs, the empirical Rademacher complexity of our class remains bounded even when its depth and width become arbitrarily large. Further, our class of MLPs achieves a near-optimal sample complexity of 𝒪(log(N)/√N) when given N i.i.d. normalized sub-Gaussian training samples. We achieve this by avoiding the standard approach to constructing optimal ReLU approximators, which sacrifices regularity by relying on small spikes. Instead, we introduce a new construction that perfectly fits together linear pieces using Kuhn triangulations and avoids these small spikes.
dc.language.iso (en_US): en
dc.subject (en_US): Lipschitz Neural Networks
dc.subject (en_US): Optimal Approximation
dc.subject (en_US): Generalization Bounds
dc.subject (en_US): Optimal Interpolation
dc.subject (en_US): Optimal Lipschitz Constant
dc.subject (en_US): Kuhn Triangulation
dc.subject (en_US): Universal Approximation
dc.title (en_US): Bridging the Gap Between Approximation and Learning via Optimal Approximation by ReLU MLPs of Maximal Regularity
dc.type (en_US): Thesis
dc.contributor.department (en_US): Mathematics and Statistics
dc.description.degreetype (en_US): Thesis
dc.description.degree (en_US): Master of Science (MSc)
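
A compact LaTeX restatement of the guarantees described in the abstract above (a summary only, not the thesis's formal theorem statement; L, d, n, and N are as in the abstract, and \hat{f} denotes the approximating network, a symbol introduced here for readability):

    % For every L-Lipschitz f : [0,1]^d \to [-n,n], the abstract asserts a
    % sparsely connected L-Lipschitz ReLU MLP \hat{f} of width \mathcal{O}(d n^d),
    % depth \mathcal{O}(\log d), with \mathcal{O}(d n^d) nonzero parameters and
    % hidden weights and biases in \{0, \pm 1/2\}, such that
    \sup_{x \in [0,1]^d} \left| f(x) - \hat{f}(x) \right| \le \frac{L d}{2 n},
    % together with a near-optimal sample complexity, given N i.i.d.
    % normalized sub-Gaussian training samples, of
    \mathcal{O}\!\left( \frac{\log N}{\sqrt{N}} \right).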
Appears in Collections: Open Access Dissertations and Theses

Files in This Item:
File                               Description   Size     Format
Hong_Ruiyang_December2024_MSc.pdf  Open Access   1.04 MB  Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
