Lunar Autonomy Challenge
With Stanford NAV Lab (Stanford University)

We developed a full-stack autonomous agent for lunar rover navigation and mapping for the Lunar Autonomy Challenge, a competition hosted by NASA and the Johns Hopkins University Applied Physics Laboratory (APL). The challenge directly supports NASA’s Lunar Surface Innovation Initiative (LSII), which seeks to develop the foundational technologies and approaches needed for the Artemis missions.
The goal of the challenge is to virtually explore and map the lunar surface using a digital twin of NASA’s lunar In-Situ Resource Utilization (ISRU) Pilot Excavator (IPEx) robot. The challenge provides a high-fidelity lunar simulation environment based on Unreal Engine and built using CARLA (Dosovitskiy et al., 2017), which renders photorealistic imagery and simulates vehicle dynamics over complex terrain.
We leverage lightweight learning-based perception models for real-time segmentation and feature tracking, and use a factor-graph backend to maintain globally consistent localization. High-level waypoint planning is designed to promote mapping coverage while encouraging frequent loop closures, and local motion planning uses arc sampling with geometric obstacle checks for efficient, reactive control. We evaluate our approach in the competition’s high-fidelity lunar simulator, demonstrating centimeter-level localization accuracy, high-fidelity map generation, and strong repeatability across random seeds and rock distributions.
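To illustrate the local motion planning idea, here is a minimal sketch of arc sampling with geometric obstacle checks. This is not the competition code; the function names, speeds, clearances, and circular rock obstacles are all illustrative assumptions. The planner samples a fan of constant-curvature arcs in the rover frame, rolls each out over a short horizon, rejects arcs that pass too close to any rock, and picks the surviving arc whose endpoint lands nearest the current waypoint.

```python
import math

def rollout_arc(curvature, speed=0.3, horizon=5.0, dt=0.2):
    """Forward-integrate a constant-curvature arc from the rover-frame origin."""
    x = y = theta = 0.0
    points = []
    for _ in range(int(round(horizon / dt))):
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        theta += speed * curvature * dt
        points.append((x, y))
    return points

def arc_is_safe(points, obstacles, clearance=0.1):
    """Geometric check: every arc point keeps `clearance` beyond each rock's radius."""
    return all(
        math.hypot(px - ox, py - oy) >= radius + clearance
        for px, py in points
        for ox, oy, radius in obstacles
    )

def select_arc(goal, obstacles, num_arcs=15, max_curvature=1.0):
    """Pick the collision-free arc whose endpoint lands nearest the waypoint."""
    best, best_cost = None, float("inf")
    for i in range(num_arcs):
        kappa = -max_curvature + 2.0 * max_curvature * i / (num_arcs - 1)
        points = rollout_arc(kappa)
        if not arc_is_safe(points, obstacles):
            continue
        cost = math.hypot(goal[0] - points[-1][0], goal[1] - points[-1][1])
        if cost < best_cost:
            best, best_cost = kappa, cost
    return best  # None if every sampled arc collides

# A rock directly ahead of the rover forces a curved arc around it.
rocks = [(1.2, 0.0, 0.3)]  # (x, y, radius) in the rover frame
print(select_arc(goal=(3.0, 0.0), obstacles=rocks))
```

Because each candidate is just a short rollout plus point-in-circle tests, the whole fan can be re-evaluated every control cycle, which is what makes this style of planner cheap and reactive.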
Our solution achieved first place in the final competition evaluation.
Related Publication:
[C23] Dai, A., Wu, A., Iiyama, K., Vila, G.C., Coimbra, K., Deng, T., Gao, G., “Full Stack Navigation, Mapping, and Planning for the Lunar Autonomy Challenge”, Proceedings of the Institute of Navigation GNSS+ conference (ION GNSS+ 2025), 2025 [Slides]
