
Abstract

This paper presents SubPipe, an underwater dataset for SLAM, object detection, and image segmentation. SubPipe was recorded using a lightweight autonomous underwater vehicle (LAUV), operated by OceanScan-MST and carrying a sensor suite that includes two cameras, a side-scan sonar, and an inertial navigation system, among other sensors. The AUV was deployed in a pipeline inspection environment with a submarine pipe partially covered by sand. The AUV's pose ground truth is estimated from the navigation sensors. The side-scan sonar images include object detection annotations, and the RGB images include segmentation annotations. State-of-the-art segmentation, object detection, and SLAM methods are benchmarked on SubPipe to demonstrate the dataset's challenges and opportunities for leveraging computer vision algorithms. To the authors' knowledge, this is the first annotated underwater dataset providing a real pipeline inspection scenario. The dataset and experiments are publicly available online.

On Zenodo we provide three versions of SubPipe: the full version (SubPipe.zip, ~80 GB unzipped) and two subsamples, SubPipeMini.zip (~12 GB unzipped) and SubPipeMini2.zip (~16 GB unzipped). Both subsamples are subsets of the full dataset (SubPipe.zip). SubPipeMini contains the semantic segmentation data, including camera footage of the underwater pipeline. SubPipeMini2, on the other hand, focuses on underwater side-scan sonar images of the seabed, including ground-truth object detection bounding boxes of the pipeline.

For (re-)using or publishing SubPipe, please include the following copyright text: "SubPipe is a public dataset of a submarine outfall pipeline, property of Oceanscan-MST. This dataset was acquired with a Light Autonomous Underwater Vehicle by Oceanscan-MST, within the scope of Challenge Camp 1 of the H2020 REMARO project." More information about OceanScan-MST can be found at this link.
Cam0 — GoPro Hero 10

Camera parameters:
- Resolution: 1520×2704
- fx = 1612.36, fy = 1622.56
- cx = 1365.43, cy = 741.27
- Distortion (k1, k2, p1, p2) = [−0.247, 0.0869, −0.006, 0.001]

Side-scan Sonars

Each sonar image was created after 20 pings (i.e., after every 20 new scan lines), which corresponds to approximately one image per second. For the object detection annotations, we provide both COCO and YOLO formats. A single COCO annotation file is provided per chunk and per frequency (low frequency vs. high frequency), whereas YOLO annotations are provided per SSS image file.

Metadata about the side-scan sonar images contained in this dataset:

Images for object detection
- Low frequency (LF): 5000 images, 2500 × 500 pixels each
- High frequency (HF): 5030 images, 5000 × 500 pixels each
- Total number of images: 10030

Annotations
- Low frequency: 3163
- High frequency: 3172
- Total number of annotations: 6335
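The Cam0 intrinsics and distortion coefficients above can be used directly in a pinhole projection. The sketch below, assuming the standard Brown-Conrady (OpenCV-style) distortion model with coefficients (k1, k2, p1, p2), projects a 3-D point in camera coordinates to distorted pixel coordinates; the helper name `project` is illustrative, not part of the dataset tooling.

```python
# Cam0 intrinsics as listed above (GoPro Hero 10); the distortion model is
# assumed to be the standard Brown-Conrady model with k1, k2, p1, p2.
fx, fy = 1612.36, 1622.56
cx, cy = 1365.43, 741.27
k1, k2, p1, p2 = -0.247, 0.0869, -0.006, 0.001

def project(point_cam):
    """Project a 3-D point (X, Y, Z) in camera coordinates to pixel coords."""
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z                      # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * x_d + cx, fy * y_d + cy      # distorted pixel coordinates

# Sanity check: a point on the optical axis maps to the principal point.
u, v = project((0.0, 0.0, 1.0))              # → (1365.43, 741.27)
```

The strongly negative k1 reflects the wide-angle GoPro lens, which compresses points toward the principal point at the image periphery.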
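Since annotations are shipped in both COCO and YOLO formats, the two can be converted into each other. A minimal sketch of the COCO-to-YOLO bounding-box conversion, assuming the listed SSS image sizes are width × height in pixels (the bbox values in the example are hypothetical):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO bbox [x_min, y_min, width, height] in pixels to the
    YOLO format [x_center, y_center, width, height], normalized to [0, 1]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

# Hypothetical pipeline detection in a low-frequency SSS image
# (LF images are 2500 x 500 pixels, as listed above).
print(coco_to_yolo([1000, 200, 500, 100], 2500, 500))  # → [0.5, 0.5, 0.2, 0.2]
```

Note that COCO stores the top-left corner in absolute pixels, while YOLO stores the normalized box center, which is why the half-width and half-height are added before dividing.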
Keywords: side-scan sonar, SLAM, RGB and grayscale camera, pipeline, object detection, underwater dataset, semantic segmentation
