Multi-Sensor Fusion for Localization of a Lunar Micro Rover Using Non-Vision Sensors

Bibliographic Details
Main Author: Koshy, Anitha (author)
Other Authors: Rajan, R.T. (mentor), Delft University of Technology (degree granting institution)
Format: Master Thesis
Language: English
Published: 2024
Subjects:
IMU
Online Access: http://resolver.tudelft.nl/uuid:edbbfeba-6b57-4402-b34b-44db06bebeb7
Description
Summary: Autonomous navigation is a critical aspect of robotic systems, particularly in hostile and uncertain environments, and robot localization is central to navigation. Localization establishes a robot's position within its surroundings. This thesis addresses the challenge of robot localization in a lunar-like environment, under constraints such as limited computational resources and the use of non-visual sensors.

Three sensors are employed for localization: wheel encoders (WE), a Sun sensor (SS), and an inertial measurement unit (IMU). Each sensor contributes distinct information about position and orientation, but individual sensor measurements suffer from inherent inaccuracies and errors; in particular, the IMU's reliance on integration over time leads to significant drift. To mitigate these challenges, three fusion methods are explored: sensor selection based on predefined thresholds, Kalman filtering, and weighted fusion. Results indicate substantial improvements in localization accuracy compared to individual sensor measurements. The weighted fusion method in particular demonstrates superior performance by weighting each sensor's information according to its accuracy, resulting in significantly reduced positioning errors. The maximum localization error using this method is 92 m, smaller than values reported in the literature. Further, the maximum localization percentage error over 65 m is around 8%, comparable to results in the literature obtained with visual sensors. The weighted fusion method introduces only a marginal increase in computational complexity; it thus stands out for its simplicity while delivering results superior to those documented in the existing literature for non-visual sensors.

Despite promising results, the research faces certain hurdles, notably the availability and consistency of datasets. The reliance on existing datasets, such as the Devon Island Rover dataset, highlights the need for standardized and ...
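
Note: the abstract describes the weighted fusion only at a high level. A common realization of accuracy-weighted fusion is inverse-variance weighting, sketched below in Python purely as an illustration; the function name, the example position estimates, and the variance values are assumptions for the sketch, not values taken from the thesis.

    import numpy as np

    def weighted_fusion(estimates, variances):
        # Fuse independent position estimates by weighting each sensor
        # inversely to its error variance: more accurate sensors get
        # proportionally more influence on the fused estimate.
        estimates = np.asarray(estimates, dtype=float)      # shape (n_sensors, dim)
        weights = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
        weights /= weights.sum()                            # normalize to sum to 1
        return weights @ estimates                          # weighted-average estimate

    # Illustrative 2-D position estimates (in metres) from the three sensors:
    # wheel encoders (WE), Sun sensor (SS), and IMU. The variances are assumed,
    # with the drift-prone IMU assigned the largest one.
    fused = weighted_fusion(
        estimates=[[10.2, 5.1],   # WE
                   [10.5, 4.8],   # SS
                   [11.9, 6.0]],  # IMU
        variances=[1.0, 2.0, 25.0],
    )
    print(fused)  # lands close to the low-variance WE and SS estimates

Under this scheme, a drifting IMU is never discarded outright (as it would be under threshold-based selection); its contribution is simply downweighted as its error variance grows, which is consistent with the abstract's description of assigning importance according to accuracy.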