Please use this identifier to cite or link to this item: http://hdl.handle.net/11375/32482

Full metadata record (DC field: value [language]):

dc.contributor.advisor: Kratsios, Anastasis
dc.contributor.author: Arabpour Dahoei, Reza
dc.date.accessioned: 2025-10-02T16:16:51Z
dc.date.available: 2025-10-02T16:16:51Z
dc.date.issued: 2025-11
dc.identifier.uri: http://hdl.handle.net/11375/32482

dc.description.abstract [en_US]:
This thesis presents two research contributions: one improves the adaptation of large language models (LLMs) through parameter-efficient fine-tuning (PEFT), and the other addresses the effective modelling of history-dependent stochastic processes, specifically the Volterra processes commonly applied in quantitative finance.

In the first part, I introduce a user-friendly adaptation pipeline that boosts the performance of a standard foundation model, bringing it much closer to a fully fine-tuned, task-specific version, while using significantly less compute and memory and keeping data private. The pipeline leverages low-rank adapters (LoRA) already trained on known datasets and predicts adapter weights for new datasets from this readily available information. Its main advantage is that it runs on a standard laptop without a GPU, so data never leaves the device. The method closes about half of the performance gap between an untuned base model and a fully fine-tuned one, making specialized models accessible to researchers, practitioners, and everyday users who lack expensive infrastructure or who work with sensitive data on devices such as smartphones.

The second part addresses a computational challenge in translating the non-Markovian Volterra process into a form suitable for computation: the dimension of the path history that affects the current state grows with the length of the path. I propose a two-step approach to make this manageable: first, the Volterra process is mapped onto a simpler, lower-dimensional manifold; then a geometric deep learning model, a "hypernetwork" designed for the manifold's structure, is applied. We provide both mathematical and computational evidence of the model's effectiveness and practicality (with proofs developed by co-authors available in the main paper), along with extensive testing of each parameter to validate the approach.

(Illustrative sketches of the adapter-prediction idea and of Volterra dynamics follow this metadata record.)

dc.language.iso: en [en_US]
dc.subject: Machine Learning [en_US]
dc.subject: Deep Learning [en_US]
dc.subject: Geometric Deep Learning [en_US]
dc.subject: Financial Mathematics [en_US]
dc.subject: Time Series [en_US]
dc.subject: Financial Time Series [en_US]
dc.subject: Foundation Models [en_US]
dc.subject: Large Language Models [en_US]
dc.subject: LLMs [en_US]
dc.subject: Hypernetworks [en_US]
dc.subject: Mathematics [en_US]
dc.subject: Artificial Intelligence [en_US]
dc.title: Geometric Deep Learning For Financial Time Series and Efficient Fine-Tuning of Foundation Models [en_US]
dc.title.alternative: Geometric Deep Learning for Time Series and Foundation Models [en_US]
dc.type: Thesis [en_US]
dc.contributor.department: Computational Engineering and Science [en_US]
dc.description.degreetype: Thesis [en_US]
dc.description.degree: Master of Science (MSc) [en_US]

dc.description.layabstract [en_US]:
This thesis presents two contributions at the intersection of artificial intelligence and mathematics. First, I introduce a novel method for adapting large language models on widely available hardware. The approach recovers half of the performance lost when an untuned base model is used in place of a fully fine-tuned one, while running on a single laptop with minimal cost and energy consumption. It makes specialized models more accessible, preserves privacy by keeping data local, and promotes environmentally responsible computing. Second, I develop a practical framework for working with the history-dependent stochastic processes commonly used in quantitative finance. Such processes are often too high-dimensional to compute with efficiently. The method proposed here compresses them into a low-dimensional representation and then applies a computational model to it, enabling efficient simulation, estimation, and practical application. Together, these contributions introduce novel algorithms that address real-world problems from fresh perspectives.
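
To make the first contribution concrete, the sketch below illustrates one plausible reading of the adapter-prediction step described in the abstract: given LoRA adapters already trained on known datasets, estimate an adapter for a new dataset instead of fine-tuning from scratch. The embedding function, the similarity-weighted combination, and every name in this sketch are illustrative assumptions, not the thesis's actual pipeline.

    import numpy as np

    def embed_dataset(texts, dim=64):
        # Toy dataset embedding: hashed bag-of-words average. A real
        # pipeline would use a learned text encoder.
        vec = np.zeros(dim)
        count = 0
        for text in texts:
            for token in text.lower().split():
                vec[hash(token) % dim] += 1.0
                count += 1
        return vec / max(count, 1)

    def predict_lora_adapter(new_texts, adapter_library, temperature=0.1):
        # adapter_library: list of (embedding, (A, B)) pairs, where A and B
        # are the low-rank LoRA factors trained on a known dataset; the
        # adapted weight matrix is W_base + B @ A.
        z = embed_dataset(new_texts)
        sims = np.array([
            float(z @ e) / (np.linalg.norm(z) * np.linalg.norm(e) + 1e-9)
            for e, _ in adapter_library
        ])
        weights = np.exp(sims / temperature)
        weights /= weights.sum()
        # Predicted adapter: similarity-weighted mix of known adapters.
        A_hat = sum(w * A for w, (_, (A, _)) in zip(weights, adapter_library))
        B_hat = sum(w * B for w, (_, (_, B)) in zip(weights, adapter_library))
        return A_hat, B_hat

Whatever form the actual predictor takes, the key property claimed in the abstract holds for a scheme of this kind: prediction costs only a handful of vector and matrix operations, so it runs on a CPU-only laptop and the new dataset never leaves the device.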
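
For the second contribution, a standard textbook form of a stochastic Volterra process (shown here for context; the thesis's exact specification may differ) makes the history dependence explicit: the kernel K couples the current value to the entire past path, so no finite-dimensional Markovian state suffices.

    X_t = X_0 + \int_0^t K(t,s)\, b(X_s)\, \mathrm{d}s
              + \int_0^t K(t,s)\, \sigma(X_s)\, \mathrm{d}W_s,
    \qquad \text{e.g.}\quad K(t,s) = (t-s)^{H - 1/2},\quad H \in (0, 1/2).

With a singular kernel such as the rough fractional one above, a discretization must carry the entire simulated path, which is exactly the growing history dimension that the manifold-compression step, followed by the hypernetwork, is meant to tame.
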
Appears in Collections: Open Access Dissertations and Theses

Files in This Item:
File: ArabpourDahoei_Reza_202509_MSc.pdf
Description: Open Access
Size: 3.67 MB
Format: Adobe PDF

Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
