Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/30699
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Zheng, Rong | - |
dc.contributor.author | Quansah, Bodee | - |
dc.date.accessioned | 2025-01-07T21:06:49Z | - |
dc.date.available | 2025-01-07T21:06:49Z | - |
dc.date.issued | 2024 | - |
dc.identifier.uri | http://hdl.handle.net/11375/30699 | - |
dc.description.abstract | For rendering realistic binaural and spatial audio, it is important to have accurate Head Related Transfer Functions (HRTFs). HRTFs are directional filters that model how sound diffracts and reflects around a subject’s head and ears. Because they depend on a subject’s head and ear morphology, HRTFs are unique to the individual and should be measured on a per-subject basis. Simulation is an attractive alternative to measurement because it does not require special facilities, only a 3D mesh. For simulation to work, the simulator needs a high-quality mesh of the subject's head and ears as input, but 3D capture techniques produce meshes that have artifacts. This thesis proposes three semi-automated non-rigid registration pipelines that use both global and part-based approaches to generate meshes that are watertight and manifold and thus suitable for simulation. The pipelines are referred to as the hybrid, global+ear-refine, and model-part pipelines. Each pipeline non-rigidly registers a template to an artifact-laden 3D scan, morphing the template mesh to resemble the scan while remaining free of the artifacts that cause simulation to fail. All pipelines were tested on the scans of 15 subjects. The global+ear-refine pipeline was found to produce meshes with the lowest average vertex error; the maximum average vertex error across subjects was 0.8 mm. The pipeline produced a maximum average landmark error of 3 mm for the left ear and 2.5 mm for the right ear. The global+ear-refine pipeline was also found to produce the smoothest meshes in the forehead region, with a maximum roughness of 17.28. The morphed template was used as input to the Mesh2HRTF simulator to generate HRTFs. The simulated HRTFs were found to be comparable to the ground truth up to 3 kHz, above which the simulations suffer from large discrepancies. | en_US |
dc.language.iso | en | en_US |
dc.subject | HRTF | en_US |
dc.subject | Acoustics | en_US |
dc.subject | Graphics | en_US |
dc.subject | Simulation | en_US |
dc.title | Semi-Automated Shape Model Fitting for HRTF Simulation using Mesh2HRTF | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Biomedical Engineering | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Master of Applied Science (MASc) | en_US |
dc.description.layabstract | Head Related Transfer Functions (HRTFs) are crucial in creating realistic spatial audio. They model how sound is filtered by a subject's unique head shape. Realistic spatial audio requires personalized HRTFs, which are often obtained through direct measurement. However, the need for special equipment and facilities can make direct measurement infeasible. Simulation is an attractive alternative to measuring HRTFs because it only requires a 3D capture of the head and ears; however, 3D scanning technologies often leave artifacts that impede numerical simulation. To mitigate these common artifacts, this thesis presents three semi-automated shape fitting pipelines for generating meshes suitable for simulation. Each pipeline registers a template to a 3D scan such that the output mesh is suitable for numerical simulation (i.e., watertight and manifold). The morphed template is used as input to the Mesh2HRTF simulator. The proposed pipelines are evaluated for reconstruction accuracy and the accuracy of their simulated HRTFs. | en_US |
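The abstract's requirement that a mesh be "watertight" (every edge shared by exactly two triangles, so the surface has no holes) can be illustrated with a small check. This is not code from the thesis — it is a minimal sketch of the standard closed-manifold edge test, using a hypothetical face list:

```python
from collections import Counter

def is_watertight(faces):
    """A triangle mesh is watertight (closed) when every undirected
    edge is shared by exactly two faces; a boundary edge appears once."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[frozenset((u, v))] += 1
    return all(n == 2 for n in edge_counts.values())

# A tetrahedron is closed: each of its six edges belongs to two triangles.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tet))      # True
# Removing one face opens a hole, so the mesh is no longer watertight.
print(is_watertight(tet[:3]))  # False
```

Boundary-element solvers such as Mesh2HRTF require this property because an open or non-manifold surface does not enclose a well-defined volume for the acoustic simulation.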
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Quansah_Bodee_A_2024Dec_MASc.pdf | | 9.53 MB | Adobe PDF | View/Open
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.