ZENODO · Dataset · 2021 · License: CC BY · Data sources: Datacite
TransProteus: Predicting 3D shapes, masks, and properties of materials, liquids, and objects inside transparent containers from images

Authors: Sagi Eppel

Abstract

We present TransProteus, a dataset for predicting the 3D structure and properties of materials, liquids, and objects inside transparent vessels from a single image, without prior knowledge of the image source or camera parameters. Manipulating materials in transparent containers is essential in many fields and depends heavily on vision. This work supplies a new procedurally generated dataset consisting of 50k images of liquids and solid objects inside transparent containers. The image annotations include 3D models and material properties (color, transparency, roughness, ...) for the vessel and its content. The synthetic (CGI) part of the dataset was procedurally generated using 13k different objects, 500 different environments (HDRI), and 1,450 material textures (PBR), combined with simulated liquids and procedurally generated vessels. In addition, we supply 104 real-world images of objects inside transparent vessels, with depth maps of both the vessel and its content.

Note that two files are available here: Transproteus_SimulatedLiquids2_New_No_Shift.7z and TranProteus2.7z, which contain a subset of the virtual CGI dataset: https://zenodo.org/api/files/12b013ca-36be-4156-afd4-c93b5fa22093/Tansproteus_SimulatedLiquids2_New_No_Shift.7z

TransProteus_RealSense_RealPhotos.7z contains real-world photos scanned with a RealSense camera, with depth maps of both the vessel and its content. See the ReadMe file inside the downloaded files for more details.

The full dataset (>100 GB) can be found here: https://e.pcloud.link/publink/show?code=kZfx55Zx1GOrl4aUwXDrifAHUPSt7QUAIfV and https://icedrive.net/1/6cZbP5dkNG

See https://arxiv.org/pdf/2109.07577.pdf for more details.

This dataset is complementary to the LabPics dataset, which contains 8k real images of materials in vessels in chemistry labs, medical labs, and other settings. The LabPics dataset can be downloaded here: https://zenodo.org/record/4736111#.YVOAx3tE1H4

Transproteus_SimulatedLiquids2_New_No_Shift.7z and TranProteus2.7z contain relatively similar data styles. The data in No_Shift consists of images generated with no camera shift in the camera parameters; if you want to predict a 3D model from an image as a depth map, this version is easier to use (otherwise, you need to adapt the image using the shift). For all other purposes, both folders are the same, and you can use either or both. In addition, a real-image dataset for testing is given in the RealSense file.
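As a starting point for working with the depth annotations, the sketch below loads an RGB image together with its depth map and normalizes the depth for visualization. It is a minimal sketch under stated assumptions: the directory layout and file names are hypothetical, and the depth encoding (assumed here to be a 16-bit single-channel PNG) should be checked against the ReadMe shipped inside each archive.

```python
# Minimal sketch for loading a TransProteus RGB image with its depth map.
# Assumptions (hypothetical, not confirmed by the dataset description):
#   - depth maps are stored as 16-bit single-channel PNGs
#   - the folder layout and file names below match the extracted archive
import cv2
import numpy as np

rgb_path = "TranProteus2/Example/RGB/000001.png"      # hypothetical path
depth_path = "TranProteus2/Example/Depth/000001.png"  # hypothetical path

rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)          # HxWx3, uint8 (BGR)
depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)  # HxW, raw depth values

if rgb is None or depth is None:
    raise FileNotFoundError("Adjust the paths to the extracted archive layout.")

# Normalize depth to [0, 1] for display; the true metric scale depends on the
# dataset's depth encoding (see the ReadMe inside each archive).
depth_f = depth.astype(np.float32)
rng = float(depth_f.max() - depth_f.min())
depth_vis = (depth_f - depth_f.min()) / (rng if rng > 0 else 1.0)

cv2.imshow("RGB", rgb)
cv2.imshow("Depth (normalized)", depth_vis)
cv2.waitKey(0)
```

For the No_Shift subset, a depth map loaded this way can be treated directly as a view-aligned depth image; for the shifted data, the camera shift described above must be compensated first.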

Keywords

Transparent, Liquids, Laboratory, chemistry, computer vision

Metrics: 0 citations, Average popularity, Average influence, Average impulse (BIP!) · 96 views, 18 downloads (OpenAIRE UsageCounts)