Please use this identifier to cite or link to this item:
http://hdl.handle.net/11375/32011
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Gonsamo, Alemu | - |
dc.contributor.author | So, Kangyu | - |
dc.date.accessioned | 2025-07-21T17:49:30Z | - |
dc.date.available | 2025-07-21T17:49:30Z | - |
dc.date.issued | 2025 | - |
dc.identifier.uri | http://hdl.handle.net/11375/32011 | - |
dc.description.abstract | Canada’s vast forests play a substantial role in the global carbon balance but require laborious and expensive forest inventory campaigns to monitor changes in aboveground biomass (AGB). Light detection and ranging (LiDAR) or reflectance observations onboard airborne or unoccupied aerial vehicles (UAV) may address the scalability limitations of traditional forest inventory but require simple forest structures or large sets of manually delineated crowns. Here, we introduce a deep learning approach for crown delineation and AGB estimation that is reproducible for complex forest structures without relying on hand annotations for training. First, we detect treetops and delineate crowns from the LiDAR point cloud using marker-controlled watershed segmentation (MCWS). Then we train a deep learning model on annotations derived from MCWS to make crown predictions on UAV red, green and blue (RGB) tiles. Finally, we estimate AGB metrics from allometric equations based on tree height and crown diameter, all derived from UAV data. We validate our approach using 14 ha of mixed forest stands with various experimental tree densities in Southern Ontario, Canada. Our results demonstrate an 18% improvement in AGB estimation accuracy when the unsupervised LiDAR-only algorithm is followed by a self-supervised RGB deep learning model. In unharvested stands, the self-supervised RGB model performs well for height (R^2 = 0.79) and AGB (R^2 = 0.80) estimation. In thinned stands, the performance of both unsupervised and self-supervised methods varied with stand density, crown clumping, canopy height variation, and species diversity. These findings suggest that MCWS can be supplemented with self-supervised deep learning to directly estimate biomass components in complex forest structures as well as atypical forest conditions where stand density and spatial patterns are manipulated. | en_US |
dc.language.iso | en | en_US |
dc.subject | LiDAR | en_US |
dc.subject | UAV | en_US |
dc.subject | biomass | en_US |
dc.subject | unmanned aerial vehicle | en_US |
dc.subject | crown delineation | en_US |
dc.subject | self-supervised deep learning | en_US |
dc.title | Direct estimation of forest aboveground biomass from UAV LiDAR and RGB observations in forest stands with various tree densities | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | Earth and Environmental Sciences | en_US |
dc.description.degreetype | Thesis | en_US |
dc.description.degree | Master of Science (MSc) | en_US |
dc.description.layabstract | The effects of forest thinning practices on biomass regeneration are not well understood, as traditional field methods for measuring forest characteristics are costly and impractical over large spatial extents. To monitor and report on biomass components more effectively, we used unoccupied aerial vehicle (UAV) imagery, laser scanning observations, segmentation algorithms, and a deep learning predictive model for a 14-ha mixed forest stand in Southern Ontario. Laser scanning observations were segmented into tree crowns to train the deep learning model, which then output the crown size, height, and biomass of individual trees from UAV imagery. Our results indicate that a combined segmentation and modelling approach can provide accurate estimates of biomass components in forests, even where stand density and spatial patterns are manipulated. | en_US |
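The abstract's final step estimates per-tree AGB from UAV-derived tree height and crown diameter via allometric equations. As a minimal illustration of that step only, the sketch below uses a generic power-law allometric form with placeholder coefficients `a` and `b`; these are hypothetical values for demonstration, not the fitted equations from the thesis.

```python
def estimate_agb(height_m, crown_diameter_m, a=0.05, b=2.0):
    """Estimate aboveground biomass (kg) for one tree from UAV-derived
    height and crown diameter, using a generic power-law allometric
    model AGB = a * (H * CD)^b. Coefficients here are placeholders,
    not the values fitted in the thesis."""
    if height_m <= 0 or crown_diameter_m <= 0:
        raise ValueError("height and crown diameter must be positive")
    return a * (height_m * crown_diameter_m) ** b

# Sum per-tree estimates over a delineated stand
# (hypothetical heights and crown diameters, in metres).
trees = [(18.5, 4.2), (22.1, 5.0), (15.3, 3.6)]
stand_agb_kg = sum(estimate_agb(h, cd) for h, cd in trees)
```

In practice the height and crown-diameter inputs would come from the MCWS-delineated LiDAR crowns and the deep learning model's RGB crown predictions described above, and the coefficients would be species- or site-specific.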
Appears in Collections: | Open Access Dissertations and Theses |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
So_Kangyu_M_202506_MSc.pdf | 1.74 MB | Adobe PDF | View/Open |
Items in MacSphere are protected by copyright, with all rights reserved, unless otherwise indicated.