Object segmentation methods for online model acquisition to guide robotic grasping

A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning, and also assists with guiding the robot's grasping and manipulation tasks. As increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping.

First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects in visually complex and cluttered environments without utilizing any a priori appearance or geometric information. The segmentation method is based on the Conditional Random Fields (CRF) framework and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm, which overcomes graph cuts' reliance on user interaction, is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighting the various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving its ability to predict which points are likely to belong to the object and which to the background.

Experimental validation of all methods is performed: each method is evaluated in a realistic setting, using scenarios of various complexities. The experimental results demonstrate the effectiveness of both the handle localization method and the object segmentation methods.
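The abstract's final method scores 3D points by their algebraic distance from an ellipsoid fitted to the object. A minimal sketch of that distance for an axis-aligned ellipsoid is given below; the ellipsoid-fitting step and the general (rotated) quadric form are omitted, and all names and values are illustrative rather than taken from the thesis:

```python
import numpy as np

def algebraic_distance(points, center, semi_axes):
    """Algebraic distance of 3D points from an axis-aligned ellipsoid.

    For an ellipsoid (x/a)^2 + (y/b)^2 + (z/c)^2 = 1 centred at `center`,
    the algebraic distance is the left-hand side minus 1: negative inside
    the surface, zero on it, and growing positive outside.
    """
    scaled = (points - center) / semi_axes  # broadcasts over the N points
    return np.sum(scaled ** 2, axis=1) - 1.0

# Example: an object roughly enclosed by semi-axes (2, 1, 1) at the origin.
pts = np.array([[0.0, 0.0, 0.0],   # centre point (inside)
                [2.0, 0.0, 0.0],   # on the surface
                [4.0, 0.0, 0.0]])  # far outside (likely background)
d = algebraic_distance(pts, center=np.zeros(3), semi_axes=np.array([2.0, 1.0, 1.0]))
# d = [-1., 0., 3.]
```

Thresholding such distances gives a simple prior on object membership: points with small or negative distances are likely object points, while large positive distances suggest background.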

Bibliographic Details
Main Author: Dmitri Ignakov (10856973)
Format: Thesis
Language: unknown
Published: 2013
Subjects: Robots -- Motion -- Mathematical models; Autonomous robots; Robot hands -- Design and construction
Online Access:https://doi.org/10.32920/ryerson.14655186.v1
Related: https://figshare.com/articles/thesis/Object_segmentation_methods_for_online_model_acquisition_to_guide_robotic_grasping/14655186
Rights: In Copyright
DOI: https://doi.org/10.32920/ryerson.14655186.v1