ZENODO
Dataset, 2024
License: CC BY-NC-SA
Data sources: ZENODO; Datacite

VHAKG: Multi-modal Knowledge Graphs with Multi-view Videos of Daily Activities

Authors: Egami, Shusaku; Ugai, Takanori; Htun, Swe Nwe Nwe; Fukuda, Ken

Abstract

Outline

This dataset is a multimodal knowledge graph (MMKG) of daily activity videos. It integrates a KG with embedded multi-view videos created by VirtualHome-AIST, an extended version of the VirtualHome simulator, and an event-centric KG generated by VirtualHome2KG. We named this dataset VHAKG (VirtualHome-AIST-KG).

Details

VHAKG describes 2D bounding boxes of objects every five frames, compositional activities, primitive actions, target objects, object states, 3D bounding boxes, and their time-series changes. The videos are encoded in base64 and embedded as literal values. VHAKG consists of 706 daily activity scenarios (e.g., clean desk, cook fried bread, and relax on sofa) and 3,530 videos captured by five synchronized cameras per scenario. The file format is RDF (Turtle), which can be loaded into various triplestores. VHAKG's vocabularies are defined as an ontology and can be found in vh2kg_schema_v2.0.0.ttl.

Contents

  • vh2kg_video_base64.tar.gz
    {activity name}{scene}_{camera}_2dbbox.ttl: KGs with videos embedded in base64 format, including 2D bounding box data every 5 frames. To learn more about {scene}, check here. To learn more about {camera}, check here.
  • vh2kg_event.tar.gz
    {activity name}_{scene}.ttl: Event-centric KGs representing video content as sequences of events.
  • vh2kg_schema_v2.0.0.ttl: The ontology file of this dataset.
  • affordance.ttl: Affordance data of objects, created by crowdsourcing. Please see Section III.B of this paper for more information.
  • add_places.ttl: Events in which agents moved from one room to another.

Tools

A set of tools for searching and extracting videos from VHAKG is available; a minimal querying sketch is shown below.
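Because the KGs are distributed as RDF (Turtle) with the videos embedded as base64 literals, any RDF toolkit can inspect them. The following is a minimal sketch using Python's rdflib, not taken from the dataset documentation: the input file name is hypothetical, and it assumes the video literals are typed as xsd:base64Binary. Consult vh2kg_schema_v2.0.0.ttl for the actual property names and datatypes.

    # Minimal sketch: load one VHAKG Turtle file and extract embedded videos.
    # Assumptions (not from the dataset docs): the file name below is hypothetical,
    # video literals are typed xsd:base64Binary, and the decoded bytes are an MP4
    # container. Check vh2kg_schema_v2.0.0.ttl for the real vocabulary.
    import base64
    from rdflib import Graph

    g = Graph()
    g.parse("clean_desk_scene1_camera1_2dbbox.ttl", format="turtle")  # hypothetical name

    # Find every literal typed as base64Binary and write it out as a video file.
    query = """
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    SELECT ?s ?video WHERE {
      ?s ?p ?video .
      FILTER(DATATYPE(?video) = xsd:base64Binary)
    }
    """
    for i, (subject, video) in enumerate(g.query(query)):
        with open(f"video_{i}.mp4", "wb") as out:
            out.write(base64.b64decode(str(video)))
        print(f"wrote video_{i}.mp4 from {subject}")

The event-centric KGs in vh2kg_event.tar.gz can be loaded and queried the same way (SPARQL over the parsed graph), for example to retrieve the sequence of events for a given activity; the exact classes and properties to match are defined in the ontology file.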

Keywords

knowledge graph, computer vision, human activity, daily life, video, Semantic Web

  • BIP! impact indicators: selected citations 0; popularity: Average; influence: Average; impulse: Average.