Optical navigation for Lunar landing based on Convolutional Neural Network crater detector
| Published in | Aerospace Science and Technology |
| --- | --- |
| Main Authors | , , , , , , , , |
| Format | Article in Journal/Newspaper |
| Language | English |
| Published | 2022 |
| Subjects | |
| Online Access | http://hdl.handle.net/11311/1205476 https://doi.org/10.1016/j.ast.2022.107503 |
Summary: Traditional vision-based navigation algorithms are highly affected by non-nominal conditions, such as adverse illumination and environmental uncertainties. Thanks to their outstanding generalization capability and flexibility, deep neural networks (and AI algorithms in general) are excellent candidates to address this shortcoming of navigation algorithms. The paper presents a vision-based navigation system that uses a Convolutional Neural Network to perform pinpoint landing on the Moon with absolute navigation, namely with respect to the Mean Earth/Polar Axis reference frame. The Moon landing scenario consists of the spacecraft's descent over the South Pole, from a parking orbit through the powered descent phase. The architecture features an Object Detection Convolutional Neural Network (ODN) trained with a supervised learning approach. The CNN extracts features of the observed craters, which are then processed by standard image processing algorithms to provide pseudo-measurements for the navigation filter. The detected craters are matched against a database containing the inertial locations of known craters. An Extended Kalman Filter with time-delayed measurement integration is developed to fuse optical and altimeter information.
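To make the pipeline described in the abstract more concrete, the sketch below shows, in schematic form, how crater detections could be matched to a catalogue of known crater locations and how a generic Extended Kalman Filter measurement update could fuse an altimeter reading with a crater-based position pseudo-measurement. This is a minimal illustration under stated assumptions, not the paper's implementation: the crater catalogue values, the six-state position/velocity vector, the spherical-Moon altimeter model, and the functions `match_craters` and `ekf_update` are all hypothetical, and the paper's time-delayed measurement integration is deliberately omitted.

```python
import numpy as np

# Hypothetical crater catalogue: crater centres in a Moon-fixed frame [m].
# The paper's catalogue and its CNN crater detector are not reproduced here.
catalog = np.array([
    [10_000.0,  2_000.0, 1_735_000.0],
    [-4_000.0,  8_000.0, 1_736_500.0],
    [ 1_500.0, -6_000.0, 1_734_800.0],
])

def match_craters(detected_xyz, catalog, gate=500.0):
    """Nearest-neighbour matching of detected crater centres to the catalogue.

    Assumes detections are already re-projected into the catalogue frame
    (a simplification; the real pipeline works with image-plane detections).
    Returns (catalogue index, detection) pairs within the distance gate [m].
    """
    matches = []
    for det in detected_xyz:
        d = np.linalg.norm(catalog - det, axis=1)
        i = int(np.argmin(d))
        if d[i] < gate:
            matches.append((i, det))
    return matches

def ekf_update(x, P, z, h_fun, H, R):
    """Standard EKF measurement update (illustrative skeleton only)."""
    y = z - h_fun(x)                   # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

R_MOON = 1_737_400.0  # mean lunar radius [m]; spherical Moon assumed for the altimeter model

# Hypothetical state: position (3) and velocity (3) in the Moon-fixed frame.
x = np.array([0.0, 0.0, R_MOON + 15_000.0, 0.0, 0.0, -50.0])
P = np.diag([1e4, 1e4, 1e4, 1.0, 1.0, 1.0])

# Altimeter model: height above the reference sphere, with its Jacobian.
def h_alt(x):
    return np.array([np.linalg.norm(x[:3]) - R_MOON])

def H_alt(x):
    H = np.zeros((1, 6))
    H[0, :3] = x[:3] / np.linalg.norm(x[:3])
    return H

x, P = ekf_update(x, P, np.array([15_050.0]), h_alt, H_alt(x), np.diag([25.0]))

# Crater matching on mock detections; turning the matches into an optical
# pseudo-measurement (e.g. via a pose solve) is beyond this sketch.
detections = catalog[:2] + np.array([[40.0, -25.0, 10.0], [-15.0, 30.0, -20.0]])
print("matched catalogue indices:", [i for i, _ in match_craters(detections, catalog)])

# Optical pseudo-measurement, simplified here to a direct position fix.
H_opt = np.hstack([np.eye(3), np.zeros((3, 3))])
z_opt = x[:3] + np.array([20.0, -15.0, 30.0])  # mock crater-based fix [m]
x, P = ekf_update(x, P, z_opt, lambda s: H_opt @ s, H_opt, np.eye(3) * 900.0)
print("updated position [m]:", x[:3])
```

In the actual system described by the abstract, the optical pseudo-measurements come from image processing of the CNN detections and arrive with a processing delay, so the filter must fold them in at the epoch the image was taken rather than at the current time; that delayed-fusion bookkeeping is the part the sketch above leaves out.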