Title: Geometric Deep Learning for Financial Time Series and Efficient Fine-Tuning of Foundation Models
Other Titles: Geometric Deep Learning for Time Series and Foundation Models
Authors: Arabpour Dahoei, Reza
Advisor: Kratsios, Anastasis
Department: Computational Engineering and Science
Keywords: Machine Learning;Deep Learning;Geometric Deep Learning;Financial Mathematics;Time Series;Financial Time Series;Foundation Models;Large Language Models;LLMs;Hypernetworks;Mathematics;Artificial Intelligence
Publication Date: Nov-2025
Abstract: This thesis presents two research contributions: the first improves the adaptation of large language models (LLMs) through parameter-efficient fine-tuning (PEFT), and the second addresses the modelling of history-dependent stochastic processes, specifically the Volterra processes commonly used in quantitative finance. In the first part, I introduce a user-friendly adaptation pipeline that lifts the performance of a standard foundation model much closer to that of a fully fine-tuned, task-specific version, while using significantly less compute and memory and keeping data private. The pipeline leverages existing learnable low-rank adapters (LoRA) trained on known datasets and predicts adapter values for new datasets from this readily available information. Its main advantage is that it runs on a standard laptop without a GPU, so data never leaves the device. The method closes roughly half of the performance gap between an untuned base model and a fully fine-tuned one, making specialized models accessible to researchers, practitioners, and everyday users who lack expensive infrastructure or who work with sensitive data on devices such as smartphones. The second part addresses the challenge of translating the non-Markovian Volterra process into a computationally tractable form; this is difficult because the dimension of the history affecting the current state grows with the length of the path. I propose a two-step approach: first, the Volterra process is mapped onto a simpler, lower-dimensional manifold; then a geometric deep learning model, a "hypernetwork" designed for the manifold's structure, is applied. We provide both mathematical and computational evidence of the model's effectiveness and practicality (with proofs developed by co-authors available in the main paper), along with extensive testing of each parameter to validate the approach.
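
The abstract does not spell out how adapter values are predicted for a new dataset. The sketch below is one illustrative reading only, in which a library of trained LoRA factors is blended by dataset similarity into a guess for an unseen dataset; predict_lora_factors, the dataset embeddings, and all tensor shapes are hypothetical and not taken from the thesis.

import torch
import torch.nn.functional as F

def predict_lora_factors(new_emb, known_embs, known_A, known_B, temperature=0.1):
    """Blend trained LoRA factors into a prediction for an unseen dataset.

    new_emb    -- (d,)   embedding of the new dataset
    known_embs -- (n, d) embeddings of the n datasets with trained adapters
    known_A    -- (n, r, in_dim)  stacked LoRA "A" factors
    known_B    -- (n, out_dim, r) stacked LoRA "B" factors
    """
    # Similarity of the new dataset to each known dataset.
    sims = F.cosine_similarity(known_embs, new_emb.unsqueeze(0), dim=1)  # (n,)
    # Softmax turns similarities into convex-combination weights.
    w = torch.softmax(sims / temperature, dim=0)
    # Weighted average of the stored low-rank factors; no gradients needed.
    A = torch.einsum("n,nri->ri", w, known_A)
    B = torch.einsum("n,nor->or", w, known_B)
    return A, B

# Toy usage: 8 known adapters of rank 4 for a 64x64 weight matrix.
torch.manual_seed(0)
n, d, r, dim = 8, 32, 4, 64
known_embs = torch.randn(n, d)
known_A, known_B = torch.randn(n, r, dim), torch.randn(n, dim, r)
A, B = predict_lora_factors(torch.randn(d), known_embs, known_A, known_B)
delta_W = B @ A  # rank-r update added to the frozen base weight
print(delta_W.shape)  # torch.Size([64, 64])

Because the blend is a weighted average over precomputed tensors, it involves no backpropagation, which is consistent with the abstract's claim that the pipeline runs on a standard laptop with data kept local.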
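For the second contribution, the abstract names the two steps but not their concrete form. The sketch below is a minimal illustration under two assumptions: that the lower-dimensional representation can be stood in for by a Markovian lift built from a bank of exponential kernels, and that "hypernetwork" is taken in its usual sense of a network that emits the weights of another network. ExpKernelLift and HyperLinear are hypothetical names, not the thesis's architecture.

import torch
import torch.nn as nn

class ExpKernelLift(nn.Module):
    """Compress a path history into a fixed-size state via exponential
    kernels, so the input dimension no longer grows with path length."""
    def __init__(self, rates):
        super().__init__()
        self.register_buffer("rates", torch.tensor(rates))

    def forward(self, increments, dt=1.0):
        # increments: (T,) path increments observed on a grid of step dt
        T = increments.shape[0]
        t = torch.arange(T, dtype=increments.dtype) * dt
        # Each kernel discounts older increments at its own rate.
        weights = torch.exp(-self.rates.unsqueeze(1) * (t[-1] - t).unsqueeze(0))
        return weights @ increments  # (len(rates),) lifted state

class HyperLinear(nn.Module):
    """A hypernetwork: a small MLP emits the weights and bias of a linear
    readout, which is then applied to the lifted state."""
    def __init__(self, state_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim + 1))

    def forward(self, state):
        params = self.net(state)
        w, b = params[:-1], params[-1]
        return state @ w + b  # readout with generated, state-dependent weights

# Toy usage: a 500-step path summarized by three kernels, then read out.
torch.manual_seed(0)
increments = 0.1 * torch.randn(500)
state = ExpKernelLift([0.1, 1.0, 10.0])(increments, dt=0.01)
print(HyperLinear(state_dim=3)(state).item())

The point of the lift is that downstream models see a fixed-dimensional input no matter how long the path grows; in the thesis, the manifold encoding plays that role, with the hypernetwork adapted to the manifold's geometry rather than to a flat state as here.
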
URI: http://hdl.handle.net/11375/32482
Appears in Collections: Open Access Dissertations and Theses

Files in This Item:
File: ArabpourDahoei_Reza_202509_MSc.pdf (Open Access)
Size: 3.67 MB
Format: Adobe PDF


Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.
