Dataset Introduction

The advent of neural networks capable of learning salient features from the variance in radar data has expanded the breadth of radar applications, often as an alternative sensor or a complementary modality to camera vision. Gesture recognition for command-and-control is arguably the most commonly explored application. Nevertheless, more suitable benchmarking datasets than those currently available are needed to assess and compare the merits of the different proposed solutions, and to explore a broader range of scenarios than simple hand gestures a few centimeters away from a radar transmitter/receiver. Most publicly available radar datasets used in gesture recognition offer limited diversity, do not provide access to the raw ADC data, and are not particularly challenging. To address these shortcomings, we created and make available a new dataset that combines FMCW radar and dynamic vision (event-based) camera recordings of 10 whole-body aircraft marshalling signals, performed at several distances and angles from the sensors and recorded from 13 people. The two modalities are hardware-synchronized using the radar's PRI signal. Moreover, in the supporting publication we propose a sparse encoding of the time-domain (ADC) signals that achieves a dramatic data-rate reduction (>76%) while retaining the efficacy of the downstream FFT processing.

Identifiers

Each capture is stored in a .rad file named ofxRadar8Ghz_yyyy-mm-dd_HH-MM-SS.rad and is organized according to the following identifiers:
- stage: [train, test]
- room: [conference_room, foyer, open_space]
- subject: [0-9]. Note that 0 stands for no person, and 1 for an unlabeled, random person (only present in test).
- gesture: ['none', 'emergency_stop', 'move_ahead', 'move_back_v1', 'move_back_v2', 'slow_down', 'start_engines', 'stop_engines', 'straight_ahead', 'turn_left', 'turn_right']
- distance: ['xxx', '100', '150', '200', '250', '300', '350', '400', '450'] (in cm). Note that xxx is used for 'none' gestures when there is no person in front of the radar (i.e. background samples), or when a person is walking in front of the radar at varying distances while performing no gesture.

The test data contain captures both from subjects that appear in the train data and from previously unseen subjects. Similarly, the test data contain captures from the spaces in which the train data were recorded, as well as from a new, unseen open space.

Files List

- radar8Ghz-DVS-marshaling_signals_20220901_publication_anonymized.7z: the archive bundle with the data captures (recordings).
- rad_file_parser_2.py: parser for individual .rad files, which contain the capture data.
- loader.py: a convenience PyTorch Dataset loader (partly Tonic compatible). You practically only need this to quick-start if you don't want to delve too much into code reading. When you initialize a DvsRadarAircraftMarshallingSignals object, it automatically downloads the dataset archive and the .rad file parser, unpacks the archive, and imports the .rad parser to load the data. One can then request from it a training set, a validation set, and a test set as torch Datasets to work with (see the quick-start sketch below).
- aircraft_marshalling_signals_howto.ipynb: Jupyter notebook demonstrating basic use of loader.py.

Contact

For further information or questions, please first contact M. Sifalakis or F. Corradi.
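Quick Start

The following is a minimal quick-start sketch of how loader.py is intended to be used, based on the description above. The accessor method names (get_train_dataset, get_validation_dataset, get_test_dataset) are assumptions for illustration only; consult loader.py or the how-to notebook for the actual API.

```python
# Quick-start sketch (accessor names are assumptions; see loader.py / the notebook).
from torch.utils.data import DataLoader

from loader import DvsRadarAircraftMarshallingSignals

# Instantiating the class downloads the dataset archive and the .rad parser,
# unpacks the archive, and prepares the data for loading.
data = DvsRadarAircraftMarshallingSignals()

# Request the splits as torch Datasets (illustrative accessor names).
train_set = data.get_train_dataset()
val_set = data.get_validation_dataset()
test_set = data.get_test_dataset()

# Wrap a split in a standard PyTorch DataLoader for batched iteration.
# A custom collate_fn may be needed depending on how the radar/DVS samples are returned.
train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
```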
If you use this dataset, please also cite our accompanying paper:

@INPROCEEDINGS{mueller2023aircraft,
  author={Müller, Leon and Sifalakis, Manolis and Eissa, Sherif and Yousefzadeh, Amirreza and Detterer, Paul and Stuijk, Sander and Corradi, Federico},
  booktitle={2023 IEEE Radar Conference (RadarConf23)},
  title={Aircraft Marshaling Signals Dataset of FMCW Radar and Event-Based Camera for Sensor Fusion},
  year={2023},
  pages={01-06},
  doi={10.1109/RadarConf2351548.2023.10149465}
}
Keywords: fmcw radar, sensor fusion, gesture recognition, event-radar, neuromorphic dataset, sensory fusion, dvs, radar