Towards Visual Route Following for Mobile Robots…Forever!


Bibliographic Details
Main Author: Barfoot, Tim
Other Authors: Georgia Institute of Technology. Center for Robotics and Intelligent Machines, University of Toronto. Institute for Aerospace Studies
Format: Lecture
Language: English
Published: Georgia Institute of Technology 2013
Subjects:
Online Access: http://hdl.handle.net/1853/46502
Description
Summary: Presented on March 20, 2013 from 12:00 pm - 1:00 pm in Room 1116 of the Marcus Nanotechnology building.

Dr. Timothy Barfoot (Associate Professor, University of Toronto Institute for Aerospace Studies -- UTIAS) holds the Canada Research Chair (Tier II) in Autonomous Space Robotics and works in the area of guidance, navigation, and control of mobile robots for space and terrestrial applications. He is interested in developing methods that allow mobile robots to operate in large-scale, unstructured, three-dimensional environments using rich onboard sensing (e.g., cameras and laser rangefinders, not the Global Positioning System) and computation. Dr. Barfoot's Autonomous Space Robotics Lab (ASRL) is the only university lab in Canada to focus primarily on planetary rover technology. His approach is both theoretical and experimental, as demonstrated by recent field-testing campaigns on Devon Island in the Canadian High Arctic. Dr. Barfoot took up his position at UTIAS in May 2007, after spending four years at MDA Space Missions, where he developed autonomous vehicle navigation technologies for both planetary rovers and terrestrial applications such as underground mining. He is an Ontario Early Researcher Award holder and a licensed Professional Engineer in the Province of Ontario.

Runtime: 61:32 minutes.

In this talk I will describe a particular approach to visual route following for mobile robots that we have developed, called Visual Teach & Repeat (VT&R), and what I think the next steps are to make this system usable in real-world applications. We can think of VT&R as a simple form of simultaneous localization and mapping (without the loop closures) combined with a path-tracking controller: the idea is to pilot a robot manually along a route once and then have it repeat the route (in its own tracks) autonomously many times using only visual feedback.
VT&R is useful for applications such as load delivery (mining), sample return (space exploration), and perimeter patrol (security). Despite ...
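The teach-then-repeat idea described in the abstract can be sketched in a few lines. This is a hypothetical toy illustration, not Dr. Barfoot's actual VT&R system: the keyframe list stands in for the visual map built during the teach pass, the nearest-keyframe lookup stands in for visual localization, and the steering law is a generic path-tracking correction; all names, the error model, and the gain are illustrative assumptions.

```python
import math

# Toy sketch of Teach & Repeat (illustrative only, not the real VT&R):
# teach pass stores a chain of keyframe poses; repeat pass localizes
# against the nearest keyframe and steers back onto the taught path.

class TeachRepeat:
    def __init__(self):
        self.keyframes = []  # taught route: list of (x, y, heading)

    def teach(self, pose):
        """Record a pose while the robot is piloted manually (teach pass)."""
        self.keyframes.append(pose)

    def nearest_keyframe(self, pose):
        """Closest taught keyframe (stand-in for visual feature matching)."""
        x, y, _ = pose
        return min(self.keyframes,
                   key=lambda kf: (kf[0] - x) ** 2 + (kf[1] - y) ** 2)

    def repeat_step(self, pose, gain=0.5):
        """Heading correction steering the robot back onto the taught path."""
        kx, ky, ktheta = self.nearest_keyframe(pose)
        x, y, theta = pose
        # Lateral offset of the robot, expressed in the keyframe's frame.
        lateral = -math.sin(ktheta) * (x - kx) + math.cos(ktheta) * (y - ky)
        heading_err = ktheta - theta
        return heading_err - gain * lateral  # simple path-tracking law

# Teach a straight route, then repeat from a laterally offset pose.
vtr = TeachRepeat()
for i in range(10):
    vtr.teach((float(i), 0.0, 0.0))
correction = vtr.repeat_step((3.0, 0.4, 0.0))
print(round(correction, 2))  # negative: steer back toward the taught track
```

In a real system the localization step would match live camera features against those stored at each keyframe, giving a metrically local (rather than globally consistent) estimate, which is why the abstract describes VT&R as SLAM without loop closures.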