
Cooperative Shape from Shading and Stereo for 3D Reconstruction

dc.contributor.advisor: Capson, David
dc.contributor.author: Fortuna, Jeff
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2021-05-06T16:17:51Z
dc.date.available: 2021-05-06T16:17:51Z
dc.date.issued: 2001-04
dc.description.abstract: This thesis presents a survey of techniques to obtain the depth component from two-dimensional (2D) images. Two common techniques - stereo and shape from shading - are examined here. Their performance is compared with an emphasis on noting the fundamental limitations of each technique. An argument is presented which suggests an adjustment of the paradigm with which stereo and shape from shading have been treated in three-dimensional vision. The theoretical development of the stereo and lighting models is followed by experiments illustrating the use of these models for a variety of objects in a scene. A comparison of the results provides a motivation for combining them in a particular way. This combination is developed, and its application is examined. Using the model that is consistent for both shape and lighting, significant improvement over either stereo or lighting models alone is shown.
dc.description.degree: Master of Engineering (ME)
dc.description.degreetype: Thesis
dc.identifier.uri: http://hdl.handle.net/11375/26428
dc.language.iso: en
dc.subject: cooperative shape
dc.subject: shading
dc.subject: stereo
dc.subject: 3D reconstruction
dc.title: Cooperative Shape from Shading and Stereo for 3D Reconstruction
dc.type: Thesis

Files

Original bundle

Name: fortuna_jeff_2001Apr_masters.pdf
Size: 15.2 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.68 KB
Description: Item-specific license agreed upon to submission