Dataset · 2025 · License: CC BY · Data source: ZENODO

StereoLunar dataset

Authors: Grethen, Clémentine; Gasparini, Simone; Morin, Géraldine; Lebreton, Jérémy; Marti, Lucas; Sanchez-Gestido, Manuel

Abstract

StereoLunar Dataset: Photorealistic Stereo Image Pairs for Lunar 3D Reconstruction

If you find our work to be useful in your research, please consider citing our paper:

@InProceedings{Grethen_2025_ICCV,
    author    = {Grethen, Cl\'ementine and Gasparini, Simone and Morin, G\'eraldine and Lebreton, J\'er\'emy and Marti, Lucas and Sanchez-Gestido, Manuel},
    title     = {Adapting Stereo Vision From Objects To 3D Lunar Surface Reconstruction with the StereoLunar Dataset},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {3751-3760}
}

The StereoLunar Dataset is the first publicly available high-fidelity dataset designed for learning-based 3D reconstruction of lunar surfaces. It contains simulated stereo image pairs of the Moon, generated through physically-based rendering using high-resolution topography and reflectance models. These images are ideal for training and evaluating 3D vision models, especially for lunar applications such as descent and landing hazard detection, autonomous navigation, and topographic mapping.

GitHub: https://github.com/clementinegrethen/StereoLunar

Dataset Overview

The StereoLunar Dataset covers a broad range of lunar terrains and illumination conditions, offering diverse altitudes, viewpoints, and camera trajectories around the lunar South Pole. The dataset contains over 50,000 stereo image pairs, each with dense ground-truth supervision, including pixel-level depth maps and accurate camera poses. These pairs were generated using a combination of:
- high-resolution Digital Elevation Models (DEMs),
- a Bidirectional Reflectance Distribution Function (BRDF) for surface reflectance,
- realistic solar illumination configurations,
- a parametric camera model simulating various orbital and descent trajectories.
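For readers unfamiliar with the reflectance model mentioned above, the following is a deliberately simplified, illustrative sketch of a Hapke-style reflectance function: isotropic multiple scattering via an approximate Chandrasekhar H-function, a one-parameter Henyey-Greenstein phase function, and a shadow-hiding opposition term. All parameter values are placeholders for illustration, not the values used to render the dataset:

```python
import math

def hg_phase(g: float, b: float = 0.3) -> float:
    """Backscattering Henyey-Greenstein phase function; peaks at phase angle g = 0."""
    return (1.0 - b * b) / (1.0 - 2.0 * b * math.cos(g) + b * b) ** 1.5

def opposition_surge(g: float, B0: float = 1.0, h: float = 0.05) -> float:
    """Shadow-hiding opposition effect: a narrow brightness surge near g = 0."""
    return B0 / (1.0 + math.tan(g / 2.0) / h)

def h_function(x: float, w: float) -> float:
    """Approximate Chandrasekhar H-function used for multiple scattering."""
    gamma = math.sqrt(1.0 - w)
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * gamma * x)

def hapke_reflectance(mu0: float, mu: float, g: float, w: float = 0.3) -> float:
    """Simplified Hapke-style bidirectional reflectance.

    mu0, mu: cosines of the incidence and emission angles; g: phase angle (radians).
    """
    single = (1.0 + opposition_surge(g)) * hg_phase(g)
    multiple = h_function(mu0, w) * h_function(mu, w) - 1.0
    return (w / (4.0 * math.pi)) * mu0 / (mu0 + mu) * (single + multiple)
```

The opposition term makes the surface markedly brighter near zero phase angle, one of the lunar-specific photometric effects the dataset's rendering reproduces.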
Key Features

Physical Realism: The dataset employs the Hapke BRDF model to simulate lunar surface reflectance, accounting for unique phenomena like the opposition effect and anisotropic scattering. The illumination setup includes side, overhead, and backlighting to generate varying shadow patterns and photometric changes, mimicking real-world conditions.

Geometric Supervision: Every stereo pair comes with a dense ground-truth depth map and precise camera extrinsic parameters, providing fully supervised data for 3D reconstruction tasks.

Diverse Trajectories: The dataset includes three types of camera motion:
- Nadir (vertical descent) with minimal parallax.
- Oblique (tilted camera views) with varying altitudes and baselines.
- Dynamic motion featuring additional camera variations, such as altitude and viewpoint changes, to simulate real lunar descent sequences.

High Coverage: Stereo pairs are distributed across 10 altitude bands, ranging from 3.5 km to 30.5 km above the lunar surface. This ensures a broad range of ground sampling distances (GSD), from 5.7 m/px to 49.3 m/px.

Applications and Evaluation

This dataset is specifically tailored to address the unique challenges of lunar 3D reconstruction:
- Sparse texture: lunar imagery features highly repetitive, low-texture surfaces, which makes traditional 3D reconstruction techniques such as Structure-from-Motion (SfM) and Multi-View Stereo (MVS) difficult to apply.
- Lighting and terrain variations: strong lighting gradients, shadows, and terrain roughness further complicate 3D reconstruction tasks, making this dataset crucial for developing models that can handle these challenges effectively.

The StereoLunar Dataset has been used to fine-tune the MASt3R model, which demonstrates remarkable improvements in lunar terrain reconstruction over traditional methods. By adapting MASt3R to this dataset, we significantly improved slope estimation and relative accuracy, paving the way for robust lunar surface modeling and navigation.
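As a quick sanity check on the coverage figures above: under a simple nadir pinhole model, GSD scales linearly with altitude as GSD = altitude / focal_length_in_pixels. The sketch below (helper names are ours, and the dataset's actual calibration is not restated here) shows that the quoted altitude and GSD endpoints are mutually consistent with a single effective focal length of roughly 615 px:

```python
def gsd_m_per_px(altitude_m: float, focal_px: float) -> float:
    """Ground sampling distance for a nadir-pointing pinhole camera.

    altitude_m: camera height above the surface, in metres.
    focal_px:   focal length expressed in pixels.
    """
    return altitude_m / focal_px


def implied_focal_px(altitude_m: float, gsd_m: float) -> float:
    """Focal length (in pixels) implied by an altitude/GSD pair."""
    return altitude_m / gsd_m


# Endpoints quoted in the dataset description:
f_low = implied_focal_px(3_500.0, 5.7)      # ~614 px at 3.5 km and 5.7 m/px
f_high = implied_focal_px(30_500.0, 49.3)   # ~619 px at 30.5 km and 49.3 m/px
```

Both endpoints imply nearly the same focal length (within about 1%), which is what one would expect if the full GSD range comes purely from varying altitude with a fixed camera.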
Dataset Details

- Stereo pairs: over 50,000 pairs of images with pixel-level depth maps.
- Resolution: 512 x 512 px per image.
- Camera configuration: pinhole model with known intrinsic and extrinsic parameters.
- Georeferenced metadata: includes absolute altitude, ground sampling distance (GSD), and camera trajectory metadata.

Dataset Availability

This dataset will be publicly released to foster further research in lunar 3D reconstruction, terrain analysis, and autonomous exploration. It provides a rich resource for the development and benchmarking of deep learning-based stereo vision models, particularly for space exploration applications.
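With a pinhole model and known intrinsics, each depth map can be backprojected into a camera-frame point cloud. A minimal NumPy sketch; the intrinsic matrix below uses placeholder values for a 512 x 512 image, not the dataset's actual calibration:

```python
import numpy as np

def backproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Backproject an (H, W) depth map into (H, W, 3) camera-frame points.

    Depth is interpreted as distance along the optical axis (z-depth).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # (H, W, 3)
    rays = pix @ np.linalg.inv(K).T   # apply K^-1 to each homogeneous pixel
    return rays * depth[..., None]    # scale each ray by its depth

# Placeholder intrinsics (assumed focal length and centred principal point):
K = np.array([[615.0,   0.0, 256.0],
              [  0.0, 615.0, 256.0],
              [  0.0,   0.0,   1.0]])
```

Camera-frame points can then be moved into a shared world frame with each image's extrinsic parameters; the exact transform depends on whether the stored pose is camera-to-world or world-to-camera.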

Keywords

Vision-based navigation, Space simulation, Computer vision, Physically-based rendering, Moon

BIP! impact indicators: selected citations: 0 · popularity: Average · influence: Average · impulse: Average