Learned Improvements to the Visual Egomotion Pipeline

The ability to estimate egomotion is at the heart of safe and reliable mobile autonomy. By inferring pose changes from sequential sensor measurements, egomotion estimation forms the basis of mapping and navigation pipelines, and permits mobile robots to self-localize within environments where external localization information may be intermittent or unavailable. Visual egomotion estimation, also known as visual odometry, has become ubiquitous in mobile robotics due to the availability of high-quality, compact, and inexpensive cameras that capture rich representations of the world. Classical visual odometry pipelines make simplifying assumptions that, while permitting reliable operation in ideal conditions, often lead to systematic error. In this dissertation, we present four ways in which conventional pipelines can be improved through the addition of a learned hyper-parametric model. By combining traditional pipelines with learning, we retain the performance of conventional techniques in nominal conditions while leveraging modern high-capacity data-driven models to improve uncertainty quantification, correct for systematic bias, and improve robustness to deleterious effects by extracting latent information in existing visual data. We demonstrate the improvements derived from our approach on data collected in sundry settings such as urban roads, indoor labs, and planetary analogue sites in the Canadian High Arctic.
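The abstract's central mechanism, chaining frame-to-frame pose changes into a global estimate, is what makes systematic error compound over time. The minimal Python sketch below illustrates this dead-reckoning composition with plain SE(3) matrices; it is an illustrative toy, not code from the thesis, and all names and values (rot_z, T_w_c, the 0.01 rad per-frame bias) are hypothetical.

```python
import numpy as np

def rot_z(theta):
    # 4x4 homogeneous transform: rotation of theta radians about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    return T

def translate(x, y, z):
    # 4x4 homogeneous transform: pure translation.
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Frame-to-frame estimates as a VO front end might report them: one metre
# forward per frame, plus a small systematic rotational bias (hypothetical).
relative_poses = [translate(1.0, 0.0, 0.0) @ rot_z(0.01) for _ in range(100)]

# Dead reckoning: the global pose is the running product of relative poses,
# so per-frame bias compounds -- the failure mode the dissertation targets
# with learned bias correction and uncertainty quantification.
T_w_c = np.eye(4)
for T_rel in relative_poses:
    T_w_c = T_w_c @ T_rel

print(np.round(T_w_c[:3, 3], 2))  # final position, drifted off the true path
```

Running the sketch shows how even a small, constant per-frame bias produces large lateral drift after 100 frames, whereas an unbiased front end would keep the trajectory on the straight-line ground truth.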

Bibliographic Details
Main Author: Peretroukhin, Valentin
Other Authors: Kelly, Jonathan S.; Aerospace Science and Engineering
Format: Thesis (Ph.D.)
Language: unknown
Published: 2020
Subjects: computer vision; deep learning; machine learning; mobile autonomy; SLAM; visual odometry; 0771
Online Access:http://hdl.handle.net/1807/101014
author Peretroukhin, Valentin
author2 Kelly, Jonathan S
Aerospace Science and Engineering
collection University of Toronto: Research Repository T-Space
format Thesis
genre Arctic
geographic Arctic
id ftunivtoronto:oai:localhost:1807/101014
institution Open Polar
language unknown
op_collection_id ftunivtoronto
op_relation http://hdl.handle.net/1807/101014
op_rights Attribution 4.0 International
http://creativecommons.org/licenses/by/4.0/
op_rightsnorm CC-BY
publishDate 2020
record_format openpolar
title Learned Improvements to the Visual Egomotion Pipeline
topic computer vision
deep learning
machine learning
mobile autonomy
SLAM
visual odometry
0771