Visual localization for asteroid touchdown operation based on local image features
visual navigation, structure from motion, asteroid, touchdown, Hayabusa2
In an asteroid sample-return mission, accurate position estimation of the spacecraft relative to the asteroid is essential for landing at the target point. During the missions of Hayabusa and Hayabusa2, the main part of the visual position estimation procedure was performed by human operators on Earth based on a sequence of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation, so that the time and cost of human intervention may be reduced. In this paper, we propose a method for estimating the relative position of the spacecraft and asteroid during the descent phase for touchdown from an image sequence using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real Ryugu images that were taken by Hayabusa2 from altitudes of 20 km to 500 m. It is demonstrated that the method has practical relevance for altitudes within the range of 5 km to 1 km. This result indicates that our method could improve the efficiency of the ground operation in the global mapping and navigation during the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
Tsinghua University Press
Yoshiyuki Anzai, Takehisa Yairi, Naoya Takeishi, et al. Visual localization for asteroid touchdown operation based on local image features. Astrodyn. 2020, 4(2): 149–161.