Overview

3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.

Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.

Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
- Python tools to read, generate, and visualize the dataset,
- the 3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.

The DevKit is available here: https://github.com/volkswagen/3DHD_devkit.

The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.

When using our dataset, you are welcome to cite:

@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}

Acknowledgements

We thank the following interns for their exceptional contributions to our work.
- Benjamin Sertolli: major contributions to our DevKit during his master's thesis
- Niels Maier: measurement campaign for data collection and data preparation

The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.

The Dataset

After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

1. Dataset

This directory contains the training, validation, and test set definitions (train.json, val.json, test.json) used in our publications. The respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map. During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements within reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.

To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)

2. HD_Map

Map items are stored as lists of items in JSON format. In particular, we provide:
- traffic signs,
- traffic lights,
- pole-like objects,
- construction site locations,
- construction site obstacles (point-like such as cones, and line-like such as fences),
- line-shaped markings (solid, dashed, etc.),
- polygon-shaped markings (arrows, stop lines, symbols, etc.),
- lanes (ordinary and temporary),
- relations between elements (only for construction sites, e.g., sign-to-lane associations).

A minimal sketch for iterating over these files is given below, after the HD_Map_MetaData description.

3. HD_Map_MetaData

Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON. Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shape file" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format used in our Python API.
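The following minimal sketch (not part of the DevKit) illustrates how the element files in HD_Map can be inspected. The path is a placeholder, and the sketch assumes that each JSON file holds a top-level list of map items of one element class, as described above.

import json
from pathlib import Path

# Placeholder path; adjust to your local copy of the dataset.
map_dir = Path(r"E:\3DHD_CityScenes\HD_Map")

# Assumption: each JSON file holds a top-level list of items of one element class.
for json_file in sorted(map_dir.glob("*.json")):
    with open(json_file) as jf:
        items = json.load(jf)
    print(f"{json_file.name}: {len(items)} items")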
4. HD_PointCloud_Tiles

The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays: first all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
- x-coordinates: 4-byte integer
- y-coordinates: 4-byte integer
- z-coordinates: 4-byte integer
- intensity of reflected beams: 2-byte unsigned integer
- ground classification flag: 1-byte unsigned integer

After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.

import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['<i4', '<i4', '<i4', '<u2', 'u1']

with open(file_path, "rb") as fid:
    # Number of points in the tile (first 4 bytes)
    num_points = np.fromfile(fid, count=1, dtype='<u4')[0]

    # Init
    for k, dtype in zip(key_list, type_list):
        pc_dict[k] = np.zeros([num_points], dtype=dtype)

    # Read all arrays
    for k, t in zip(key_list, type_list):
        pc_dict[k] = np.fromfile(fid, count=num_points, dtype=t)

# Unnormalize the integer values to UTM32N meters
pc_dict['x'] = (pc_dict['x'] / 1000) + 500000
pc_dict['y'] = (pc_dict['y'] / 1000) + 5000000
pc_dict['z'] = (pc_dict['z'] / 1000)
pc_dict['intensity'] = pc_dict['intensity'] / 2**16
pc_dict['is_ground'] = pc_dict['is_ground'].astype(np.bool_)

print(pc_dict)

# Visualization
# Normalize (due to large UTM values)
x_utm = pc_dict['x'] - np.mean(pc_dict['x'])
y_utm = pc_dict['y'] - np.mean(pc_dict['y'])
z_utm = pc_dict['z']
xyz = np.column_stack((x_utm, y_utm, z_utm))

viewer = pptk.viewer(xyz)
viewer.attributes(pc_dict['intensity'])
viewer.set(point_size=0.03)

5. Trajectories

We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approx. 30 Hz and are encoded in JSON. These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
- OP1 – OP5 cover the majority of the map with 5 trajectories.
- RH1 – RH10 cover the majority of the map with 10 trajectories.

Note that OP5 is split into three separate parts (a-c), and RH9 is split into two parts (a-b). Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
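As a minimal sketch (not part of the DevKit), a trajectory file can be read like any other JSON file in the dataset. The file name below is a placeholder, and the sketch assumes that the file holds a list of trajectory samples.

import json

# Placeholder file name; adjust to one of the provided trajectory files.
trajectory_path = r"E:\3DHD_CityScenes\Trajectories\OP1.json"
with open(trajectory_path) as jf:
    samples = json.load(jf)

# At approx. 30 Hz, the number of samples indicates the recording duration.
print(f"Number of trajectory samples: {len(samples)}")
print(f"Approx. duration: {len(samples) / 30:.1f} s")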
Keywords: LiDAR, lanes, traffic lights, automated driving, point clouds, traffic signs, construction sites, map deviation detection, map verification, markings, high-definition (HD) maps, poles