Development of deep learning modules for autonomous navigation in marine and aerial robotic applications

Bibliographic Details
Main Author: Balasooriya, Narmada M.
Format: Thesis
Language: English
Published: Memorial University of Newfoundland 2023
Online Access: https://research.library.mun.ca/15900/
https://research.library.mun.ca/15900/1/converted.pdf
Description
Summary: This thesis develops two studies on deep learning-based autonomous navigation systems for marine and aerial field robotic applications. The first study involves developing a sea ice detection module to support the autonomous navigation of icebreakers using image semantic segmentation. This module aims to distinguish sea ice from water, sky, and the ship's body in images received from a shipborne camera onboard an icebreaker. The study compares the performance of previous work on sea ice detection based on the PSPNet model against a more recent state-of-the-art image semantic segmentation model, DeepLabv3. To evaluate the DeepLabv3 model, it is transfer-learned on the same image data used for the PSPNet model. The performance of both models is tested on a navigation module equipped with a Jetson AGX Xavier developer kit using standard evaluation metrics. The second study presents the development of a landing zone detection pipeline using Lidar semantic segmentation to support the autonomy of vertical take-off and landing vehicles. The study evaluates different point cloud semantic segmentation approaches for their compatibility with the landing zone detection task. The main objective of this study is to detect safe, landable zones using only Lidar data and deep learning-based architectures, and to achieve an accuracy-runtime trade-off suitable for real-time operation. The performance of the neural network models for point cloud semantic segmentation is evaluated using standard metrics and different variations of aerial Lidar data. The study also assesses the feasibility of integrating the landing zone detection module into a visual Lidar odometry and mapping pipeline for faster inference by the neural network models.
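As an illustration of the transfer-learning approach described for the first study, the sketch below shows how a DeepLabv3 model could be adapted to shipborne sea ice segmentation with PyTorch/torchvision. The ResNet-101 backbone, the four-class label set (sea ice, water, sky, ship's body), and the training hyperparameters are assumptions made for this sketch; they are not taken from the thesis.

```python
# Minimal sketch: transfer-learning DeepLabv3 for sea ice segmentation.
# Backbone choice, class list, and hyperparameters are assumptions, not thesis details.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101

NUM_CLASSES = 4  # assumed: sea ice, water, sky, ship's body

# Start from pretrained weights and replace the classification heads
# so the model predicts the sea ice class set instead of the original labels.
model = deeplabv3_resnet101(weights="DEFAULT", aux_loss=True)
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, masks):
    """One transfer-learning step on a batch of shipborne camera frames.

    images: float tensor (N, 3, H, W); masks: long tensor (N, H, W) of class ids.
    """
    model.train()
    outputs = model(images)
    # Combine the main and auxiliary losses, a common DeepLabv3 training setup.
    loss = criterion(outputs["out"], masks) + 0.4 * criterion(outputs["aux"], masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the thesis, the transfer-learned DeepLabv3 model and the PSPNet baseline are then benchmarked on a Jetson AGX Xavier-based navigation module using standard evaluation metrics (mean IoU is a typical choice for semantic segmentation, though the specific metrics are not listed in this summary).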