Advanced search in Research products
227 Research products, page 1 of 23

Filters: Research data · Dataset · IT · ZENODO
Sorted by: Relevance (10 results per page)
  • Open Access English
    Authors: 
    Bardi, Alessia; Kuchma, Iryna; Brobov, Evgeny; Truccolo, Ivana; Monteiro, Elizabete; Casalegno, Carlotta; Clary, Erin; Romanowski, Andrew; Pavone, Gina; Artini, Michele; +19 more
    Publisher: Zenodo
    Countries: Germany, Italy
    Project: EC | OpenAIRE Nexus (101017452), EC | OpenAIRE-Advance (777541)

    This dump provides access to the metadata records of publications, research data, software, and projects that may be relevant to the fight against Corona Virus Disease (COVID-19). The dump contains the records of the OpenAIRE COVID-19 Gateway, identified via full-text mining and inference techniques applied to the OpenAIRE Research Graph. The Graph is one of the largest Open Access collections of metadata records and links between publications, datasets, software, projects, funders, and organizations, aggregating 12,000+ scientific data sources worldwide, among which the COVID-19 data sources Zenodo COVID-19 Community, WHO (World Health Organization), BIP! Finder for COVID-19, Protein Data Bank, Dimensions, ScienceOpen, and RSNA. The dump consists of a tar archive containing gzip files with one JSON record per line. Each JSON record complies with the schema available at https://doi.org/10.5281/zenodo.4723499.
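    As a rough orientation for users of the dump, a minimal Python sketch of how such a tar-of-gzipped-JSON-lines archive could be read; the archive name and the record fields printed below are assumptions, not part of the record:

      import gzip
      import json
      import tarfile

      # Hypothetical archive name; the record only states that the dump is a tar
      # archive of gzip files with one JSON record per line.
      with tarfile.open("covid19_openaire_dump.tar") as tar:
          for member in tar.getmembers():
              if not member.name.endswith(".gz"):
                  continue
              with gzip.open(tar.extractfile(member), mode="rt", encoding="utf-8") as fh:
                  for line in fh:
                      record = json.loads(line)
                      # Field names below are guesses; consult the schema at
                      # https://doi.org/10.5281/zenodo.4723499 for the real ones.
                      print(record.get("id"), record.get("maintitle"))
              break  # remove this to walk the whole archive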

  • Open Access English
    Authors: 
    Foszner, Paweł; Staniszewski, Michał; Szczęsna, Agnieszka; Michał Cogiel; Golba, Dominik; Ciampi, Luca; Messina, Nicola; Gennaro, Claudio; Falchi, Fabrizio; Amato, Giuseppe; +1 more
    Publisher: Zenodo
    Country: Italy
    Project: EC | AI4Media (951911)

    Dataset. The Bus Violence dataset is a large-scale collection of videos depicting violent and non-violent situations in public transport environments. This benchmark was gathered from multiple cameras located inside a moving bus where several people simulated violent actions, such as stealing an object from another person, fighting between passengers, etc. It contains 1,400 video clips manually annotated as containing or not containing violent scenes, making it one of the biggest benchmarks for video violence detection in the literature. Specifically, videos are recorded from three cameras at 25 Frames Per Second (FPS): two cameras located in the corners of the bus (with resolution 960x540 px) and one fisheye camera in the middle (1280x960 px). The clips have a minimum length of 16 frames and a maximum of 48 frames, capturing a very precise action (either violence or non-violence). The dataset is perfectly balanced, containing 700 videos of violence and 700 videos of non-violence. The Bus Violence dataset is intended as a test data benchmark. However, for researchers interested in using our data also for training purposes, we provide training and test splits. In this repository, we provide the 1,400 video clips divided into two folders named Violence/NoViolence, containing clips of violent and non-violent situations, respectively, and two txt files containing the names of the videos belonging to the training and test splits, respectively (see the sketch after this description).

    Citing our work. If you found this dataset useful, please cite the following paper:
    @inproceedings{bus_violence_dataset_2022, title = {Bus Violence: An Open Benchmark for Video Violence Detection on Public Transport}, doi = {10.3390/s22218345}, url = {https://doi.org/10.3390%2Fs22218345}, year = 2022, month = {oct}, publisher = {{MDPI} {AG}}, volume = {22}, number = {21}, pages = {8345}, author = {Luca Ciampi and Pawe{\l} Foszner and Nicola Messina and Micha{\l} Staniszewski and Claudio Gennaro and Fabrizio Falchi and Gianluca Serao and Micha{\l} Cogiel and Dominik Golba and Agnieszka Szcz{\k{e}}sna and Giuseppe Amato}, journal = {Sensors} }
    and this Zenodo dataset:
    @dataset{pawel_bus_violence_zenodo, author = {Paweł Foszner, Michał Staniszewski, Agnieszka Szczęsna, Michał Cogiel, Dominik Golba, Luca Ciampi, Nicola Messina, Claudio Gennaro, Fabrizio Falchi, Giuseppe Amato, Gianluca Serao}, title = {{Bus Violence: a large-scale benchmark for video violence detection in public transport}}, month = sep, year = 2022, publisher = {Zenodo}, version = {1.0.0}, doi = {10.5281/zenodo.7044203}, url = {https://doi.org/10.5281/zenodo.7044203} }

    Contact information. Blees Sp. z o.o., Gliwice, Poland, mstaniszewski@blees.co

    Acknowledgments. The presented dataset was supported by: European Union funds awarded to Blees Sp. z o.o. under grant POIR.01.01.01-00-0952/20-00 ("Development of a system for analysing vision data captured by public transport vehicles interior monitoring, aimed at detecting undesirable situations/behaviours and passenger counting (including their classification by age group) and the objects they carry"); EC H2020 project "AI4Media: a Centre of Excellence delivering next generation AI Research and Training at the service of Media, Society and Democracy" under GA 951911; and research project INAROS (INtelligenza ARtificiale per il mOnitoraggio e Supporto agli anziani), Tuscany POR FSE CUP B53D21008060008.

    License. The Bus Violence dataset was acquired by Blees Sp. z o.o. and is released under a Creative Commons Attribution license for non-commercial use.
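    A minimal Python sketch of how the splits could be wired into a loader, under stated assumptions: the record names the Violence/NoViolence folders but not the exact split-file names, so the paths below are hypothetical:

      from pathlib import Path

      root = Path("bus_violence")               # hypothetical extraction folder
      split_file = root / "train_split.txt"     # assumed name of one split file

      # Label clips by the folder they live in (1 = Violence, 0 = NoViolence).
      labels = {p.name: 1 for p in (root / "Violence").iterdir()}
      labels.update({p.name: 0 for p in (root / "NoViolence").iterdir()})

      # Keep only the clips listed in the chosen split.
      with open(split_file, encoding="utf-8") as fh:
          split_clips = [line.strip() for line in fh if line.strip()]

      train_set = [(name, labels[name]) for name in split_clips if name in labels]
      print(len(train_set), "clips in this split")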

  • Open Access English
    Authors: 
    Matiu, Michael; Jacob, Alexander; Notarnicola, Claudia;
    Publisher: Zenodo
    Project: EC | CliRSnow (795310)

    NOTE: We discovered some errors in the data for images after February 2019. They will be fixed in version >= 1.1.x; until then, use of the data after February 2019 is not advised. The rest of the data is fine. This is the data accompanying the same-titled Data paper, which can be found at https://doi.org/10.3390/data5010001, along with auxiliary files for the cloudremoval package and example scripts showing how to access chunks of the data. The files contain:
    python-cloudremoval-aux-data.tar.gz : auxiliary data (altitude, aspect, ...) to run the cloudremoval module, which can be found at https://gitlab.inf.unibz.it/earth_observation_public/modis_snow_cloud_removal
    python-example-data-access.html : example script showing how to access parts of the data using Python
    R-example-data-access.html : example script showing how to access parts of the data using R
    zenodo_01_original.tar.gz : time series of snow cover maps, developed at the Institute for Earth Observation, Eurac Research, Bolzano, Italy. More information in the same-titled Data paper (https://doi.org/10.3390/data5010001) and, for the algorithm, at https://doi.org/10.3390/rs5010110.
    zenodo_02_cloudremoval.tar.gz : time series of cloud-filtered maps, derived from the original snow cover maps above using the cloudremoval module and its auxiliary data. More information in the same-titled Data paper.
    The maps are GeoTIFF with integer-coded values: 0 = no data; 1 = snow; 2 = land; 3 = cloud; 4 & 5 = water bodies / no data.
    Version history: 1.0.0 : initial upload; 1.0.1 : changes after revision of the Data paper; 1.0.2 : added example scripts
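    A minimal Python sketch of how the class-coded GeoTIFFs could be inspected; rasterio is one common reader choice (not prescribed by the record) and the file name is hypothetical:

      import numpy as np
      import rasterio

      # Hypothetical name for one map extracted from the archives above.
      with rasterio.open("snow_map_example.tif") as src:
          classes = src.read(1)  # single integer-coded band

      class_names = {0: "no data", 1: "snow", 2: "land", 3: "cloud",
                     4: "water/nodata", 5: "water/nodata"}
      values, counts = np.unique(classes, return_counts=True)
      for value, count in zip(values, counts):
          print(class_names.get(int(value), "unknown"), int(count))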

  • Open Access
    Authors: 
    Fernando Guiomar;
    Publisher: Zenodo
    Project: EC | Flex-ON (653412)

    Data set containing experimental results on the propagation of probabilistically shaped optical signals over pure silica core fiber (PSCF) inside a recirculating loop. The net bit rate is fixed at 250 Gb/s.

  • Open Access English
    Authors: 
    Gaias, Gabriella; Colombo, Camilla; Lara, Martin;
    Publisher: Zenodo
    Project: EC | COMPASS (679086)

    The data sets provided here can be used to recreate the plots of the paper “Analytical Framework for Precise Relative Motion in Low Earth Orbits”. That paper presents a practical and efficient analytical framework for the precise modelling of relative motion in low Earth orbits.
    Reference: Gaias, G., Colombo, C., & Lara, M. (2020). Analytical Framework for Precise Relative Motion in Low Earth Orbits [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3734154

  • Research data . 2021
    Open Access English
    Authors: 
    Ferrarini, Federica; J Ramón Arrowsmith; Brozzetti, Francesco; De Nardis, Rita; Cirillo, Daniele; Kelin X Whipple; Lavecchia, Giusy;
    Publisher: Zenodo
    Project: EC | COLOSSEO (795396)

    This dataset contains the tables and the vector data (shapefiles, ESRI format) produced, interpreted, and used to support the findings of the study reported in the paper: "Late-Quaternary tectonics along the peri-Adriatic sector of the Apenninic chain (central-southern Italy): inspecting active shortening through topographic relief and fluvial network analysis", Federica Ferrarini (f.ferrarini@unich.it), J Ramón Arrowsmith, Francesco Brozzetti, Rita de Nardis, Daniele Cirillo, Kelin X Whipple, Giusy Lavecchia (2021), Lithosphere, https://doi.org/10.2113/2020/7866617
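    A minimal Python sketch for opening one of the shapefiles; geopandas is one common choice (not prescribed by the record) and the file name is hypothetical:

      import geopandas as gpd

      gdf = gpd.read_file("fault_traces.shp")  # stand-in for any shapefile in the dataset
      print(gdf.crs)                   # coordinate reference system
      print(gdf.columns.tolist())      # attribute table fields
      print(gdf.head())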

  • Open Access English
    Authors: 
    Cabanes, Simon; Spiga, Aymeric; Young, Roland M. B.;
    Publisher: Zenodo
    Project: EC | JUMP (797012)

    We conduct an in-depth analysis of statistical flow properties from a Global Circulation Model that reproduces Saturn's macroturbulence, namely its large-scale zonal winds. We use a high-performance Global Climate Model (GCM), named DYNAMICO, to model the atmospheric circulation of gas giants with appropriate physical parametrizations for Saturn's atmosphere. The high-resolution model DYNAMICO solves the 3D primitive equations of motion. We ran a Saturn simulation covering 15 Saturn years using the Saturn DYNAMICO GCM. Wind fields are output every 20 Saturn days at 32 pressure levels onto 1/2° latitude-longitude grid maps. Details on this Saturn reference simulation are given in Spiga et al. (2020). In addition, to diagnose the relevant 3D dynamical mechanisms in Saturn's turbulent atmosphere, we ran a set of four simulations using an idealized version of our Global Climate Model devoid of radiative transfer, with a well-defined Taylor-Green forcing and over several rotation rates (4, 1, 0.5, and 0.25 times Saturn's rotation rate). Here, we deliver a full data set, including velocity maps at different pressure levels and time steps, from which it is possible to recompute the statistical analysis detailed in Cabanes et al. (2020). The delivered data set includes files of our (1) data collection and (2) numerical codes that lead to the statistical analysis:
    (1) Data collection:
    A PDF file named JUMP-zonal-jets-data-collection-Icarus.pdf that describes in depth the data set and the associated nomenclature.
    A NetCDF file of velocity fields from our Saturn Reference Simulation (SRS), uvData-SRS-istep-312000-nstep-50-niz-12.nc
    StatisticalData.nc
    A NetCDF file of velocity fields from the idealized simulation at 4 times Saturn's rotation rate, uvData-Omega-4-istep-21026.0-nstep-20-niz-8.nc
    A NetCDF file of velocity fields from the idealized simulation at Saturn's rotation rate, uvData-Omega-1-istep-21026.0-nstep-20-niz-8.nc
    A NetCDF file of velocity fields from the idealized simulation at 0.5 times Saturn's rotation rate, uvData-Omega-0.5-istep-20626.0-nstep-20-niz-8.nc
    A NetCDF file of velocity fields from the idealized simulation at 0.25 times Saturn's rotation rate, uvData-Omega-0.25-istep-21026.0-nstep-20-niz-8.nc
    (2) Numerical codes:
    Codes for statistical analysis in spherical geometry are on GitHub: https://github.com/scabanes/POST
    Acknowledgments: The authors acknowledge exceptional computing support from Grand Équipement National de Calcul Intensif (GENCI) and Centre Informatique National de l’Enseignement Supérieur (CINES). All the simulations presented in this paper were carried out on the Occigen cluster hosted at CINES. This work was granted access to the High-Performance Computing (HPC) resources of CINES under the allocations A001-0107548, A003-0107548, and A004-0110391 made by GENCI. The authors acknowledge funding from Agence Nationale de la Recherche (ANR), project HEAT ANR-14-CE23-0010 and project EMERGIANT ANR-17-CE31-0007. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement N° 797012. Fruitful discussions with Sandrine Guerlet, Ehouarn Millour, Thomas Dubos, Frédéric Hourdin and Alexandre Boissinot from our team helped refine some discussions in the paper.
    References: Spiga, Aymeric, et al. "Global climate modeling of Saturn's atmosphere. Part II: Multi-annual high-resolution dynamical simulations." Icarus 335 (2020): 113377. Cabanes, Simon, et al. "Global climate modeling of Saturn's atmosphere. Part III: Global statistical picture of zonostrophic turbulence in high-resolution 3D-turbulent simulations." arXiv preprint arXiv:2001.02473 (2020).

  • Research data . 2021
    Open Access English
    Authors: 
    Camisasca, Gaia; Tinti, Antonio; Giacomello, Alberto;
    Publisher: Zenodo
    Project: EC | HyGate (803213)

    Research data associated with the publication: J. Phys. Chem. Lett. 2020, 11, 21, 9171–9177, https://doi.org/10.1021/acs.jpclett.0c02600. A description of the data format can be found inside the folders.

  • Open Access English
    Authors: 
    Alessandro; Gabriel;
    Publisher: Zenodo
    Project: EC | GenPercept (832813), EC | PUPILTRAITS (801715)

    Humans possess the ability to extract highly organized perceptual structures from sequences of temporal stimuli. For instance, we can organize specific rhythmical patterns into hierarchical, or metrical, systems. Despite the evidence of a fundamental influence of the motor system in achieving this skill, few studies have attempted to investigate the organization of our motor representation of rhythm. To this aim, we studied, in musicians and non-musicians, the ability to perceive and reproduce different rhythms. In a first experiment, participants performed a temporal order-judgment task for rhythmical sequences presented via the auditory or tactile modality. In a second experiment, they were asked to reproduce the same rhythmic sequences while their tapping force and timing were recorded. We demonstrate that tapping force encodes the metrical aspect of the rhythm, and that the strength of the coding correlates with the individual's perceptual accuracy. We suggest that the similarity between perception and tapping-force organization indicates a common representation of rhythm, shared between the perceptual and motor systems.

  • Open Access English
    Authors: 
    Arezoumandan, Morteza; Ghannadrad, Ali; Candela, Leonardo; Castelli, Donatella;
    Publisher: Zenodo
    Country: Italy
    Project: EC | SoBigData-PlusPlus (871042), EC | EOSC-Pillar (857650), EC | Blue Cloud (862409)

    This dataset accompanies the "Recommender system for science: A basic taxonomy" paper published at the IRCDL 2022 conference. The study applied a systematic mapping approach to recommender systems for science. In particular, it aims at answering four questions on recommender systems in science cases, concerning the representation of users and their interests, item typologies and their representation, recommendation algorithms, and evaluation, and then at providing a taxonomy. The dataset contains 209 papers of interest published between 2015 and 2022 and has 11 columns, organised as follows:
    Column Title: the title of the paper.
    Column DOI: the DOI of the paper.
    Column Publication_year: the year in which the paper was published.
    Column DB: the repository from which the paper was retrieved.
    Column Keywords: the keywords provided for the paper.
    Column Content_type: the paper type, which can be Article, Conference, or Review.
    Column Citing_paper_count: the number of citations of the paper.
    Column Recommended_artefact: the scientific product recommended to users, which can be paper, workflow, collaborator, dataset, or others.
    Column User_type: the type of user who receives the recommendation, which can be an Individual user or a Group of users.
    Column Algorithm: the recommendation algorithm the paper proposes, which can be HB (Hybrid-based), CB (Content-based), CFB (Collaborative-filtering-based), or GB (Graph-based).
    Column Evaluation_method: the method of algorithm evaluation, which can be OFFLINE, ONLINE, BOTH, or NO_EVALUATION.
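    A minimal Python sketch for tallying the taxonomy columns; the record does not state the file format, so a CSV export with the column names above is assumed, and the file name is hypothetical:

      import pandas as pd

      papers = pd.read_csv("recsys_for_science_taxonomy.csv")  # assumed CSV export

      print(papers["Algorithm"].value_counts())           # HB / CB / CFB / GB
      print(papers["Evaluation_method"].value_counts())   # OFFLINE / ONLINE / BOTH / NO_EVALUATION
      print(papers.groupby("Publication_year").size())    # papers per year, 2015-2022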
