ZENODO
Dataset · 2022
License: CC BY
Data sources: Datacite

Cross-Camera View-Overlap Recognition

Authors: Xompero, Alessio; Cavallaro, Andrea

Abstract

Data accompanying the paper titled Cross-Camera View-Overlap Recognition, published in the proceedings of the European Conference on Computer Vision Workshops and used for the evaluation of the framework presented in the publication. The dataset consists of image sequence pairs from four scenarios: two scenarios that were collected with both hand-held and chest-mounted cameras – gate and backyard, of four sequences each – and two scenarios drawn from publicly available datasets – office from TUM RGB-D SLAM and courtyard from CoSLAM – for a total of ∼28,000 frames (∼25 minutes). The data consist of images, annotations, and scripts to process the existing public sequences.

Image sequences are provided for the collected scenarios gate and backyard. We sub-sampled backyard from 30 to 10 fps for annotation purposes. Image sequences for the scenario office can be found in TUM RGB-D SLAM (fr1_desk, fr1_desk2, fr1_room); scripts to process these sequences as used in the work are provided. The courtyard scenario consists of four sequences. We sub-sampled courtyard from 50 to 25 fps for annotation purposes. The original sequences are available on the CoSLAM project website.

For all scenarios, we provide i) the annotation of angular distances, Euclidean distances, and overlap ratio of each view pair across camera sequences; ii) the annotation of the calibration (intrinsic) parameters; and iii) the annotation of the camera poses over time for each camera sequence, as automatically reconstructed with the structure-from-motion pipeline COLMAP, or by exploiting the depth data for the office scenario.

Camera poses are saved as a .txt file for each sequence using the KITTI format. The pose of each frame is represented as a 3x4 matrix (12 parameters) that is converted into a vector by horizontally concatenating the rows of the matrix:

[ r11 r12 r13 tx ]
[ r21 r22 r23 ty ]  =>  [ r11 r12 r13 tx  r21 r22 r23 ty  r31 r32 r33 tz ]
[ r31 r32 r33 tz ]

Values of the parameters are saved as 6-digit floating-point numbers in exponential notation; a sketch of how these files can be parsed follows the abstract.

Along with the dataset, we also provide the global features computed by using DeepBit [code] and NetVLAD [code] for each image of all camera sequences.

If you use the data, please cite: A. Xompero and A. Cavallaro, Cross-camera view-overlap recognition, International Workshop on Distributed Smart Cameras (IWDSC), European Conference on Computer Vision Workshops, 24 October 2022. ArXiv: https://arxiv.org/abs/2208.11661. Webpage: http://www.eecs.qmul.ac.uk/~ax300/xview/
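
As a minimal sketch (not the authors' reference code) of how the KITTI-format pose files can be read and how Euclidean and angular distances between two camera views can be computed: it assumes, as in KITTI, camera-to-world poses whose last column is the camera centre, and the file names below are hypothetical.

```python
import numpy as np

def load_kitti_poses(path):
    # One line per frame, 12 values: the 3x4 [R|t] matrix flattened row by row.
    mats = np.loadtxt(path).reshape(-1, 3, 4)
    poses = np.tile(np.eye(4), (mats.shape[0], 1, 1))
    poses[:, :3, :] = mats  # promote each pose to a 4x4 homogeneous transform
    return poses

def view_pair_distances(pose_a, pose_b):
    # Euclidean distance between the camera centres (translation columns),
    # and angular distance from the trace of the relative rotation:
    # cos(theta) = (trace(Ra^T Rb) - 1) / 2.
    euclidean = np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3])
    r_rel = pose_a[:3, :3].T @ pose_b[:3, :3]
    cos_theta = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    angular = np.degrees(np.arccos(cos_theta))
    return euclidean, angular

# Hypothetical file names; see the dataset layout for the actual paths.
poses_1 = load_kitti_poses("gate/camera1_poses.txt")
poses_2 = load_kitti_poses("gate/camera2_poses.txt")
dist, angle = view_pair_distances(poses_1[0], poses_2[0])
print(f"Euclidean: {dist:.3f}, angular: {angle:.1f} deg")
```

The provided annotations already contain these distances for every view pair; the sketch only illustrates how they relate to the stored pose matrices.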

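The global features lend themselves to a view-overlap candidate search: for each query frame of one camera, frames of the other camera can be ranked by descriptor similarity. Below is a hedged sketch that assumes the descriptors are stored as .npy matrices with one row per frame (the file names and storage format are assumptions, not the dataset's documented layout); cosine similarity suits real-valued NetVLAD descriptors, while Hamming distance suits DeepBit binary codes.

```python
import numpy as np

def cosine_scores(query, refs):
    # Cosine similarity between one real-valued descriptor (e.g. NetVLAD)
    # and every row of a reference matrix.
    q = query / np.linalg.norm(query)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    return r @ q

def hamming_scores(query_bits, ref_bits):
    # Hamming distance between one binary code (e.g. DeepBit, stored as
    # 0/1 values) and every row of a reference matrix of codes.
    return np.count_nonzero(ref_bits != query_bits, axis=1)

# Hypothetical feature files; the shipped format may differ.
feats_1 = np.load("netvlad/gate_camera1.npy")  # (num_frames_1, D)
feats_2 = np.load("netvlad/gate_camera2.npy")  # (num_frames_2, D)

# For each frame of camera 2, the index of the most similar camera-1 view.
matches = [int(np.argmax(cosine_scores(q, feats_1))) for q in feats_2]
```

Such a ranking only shortlists candidate overlapping views; deciding actual overlap is the task addressed by the paper's framework and can be checked against the annotated distances and overlap ratios.
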
Metrics
  • Selected citations (BIP!): 0
  • Popularity (BIP!): Average
  • Influence (BIP!): Average
  • Impulse (BIP!): Average
  • Views (OpenAIRE UsageCounts): 8
  • Downloads (OpenAIRE UsageCounts): 3