Advanced search in Research products
2,691 Research products, page 1 of 270

Filters:
  • 2013-2022
  • Open Access
  • Preprint
  • AE
    Authors: 
    Aaron A. Dutton; Andrea V. Macciò; Jonas Frings; Liang Wang; G. S. Stinson; Camilla Penzo; Xi Kang;
    Publisher: Oxford University Press (OUP)
    Project: EC | MW-DISK (321035)

    We compare the half-light circular velocities, V_{1/2}, of dwarf galaxies in the Local Group to the predicted circular velocity curves of galaxies in the NIHAO suite of LCDM simulations. We use a subset of 34 simulations in which the central galaxy has a stellar luminosity in the range 0.5 x 10^5 < L_V < 2 x 10^8 L_{sun}. The NIHAO galaxy simulations reproduce the relation between stellar mass and halo mass from abundance matching, as well as the observed half-light size vs luminosity relation. The corresponding dissipationless simulations over-predict the V_{1/2}, recovering the problem known as too big to fail (TBTF). By contrast, the NIHAO simulations have expanded dark matter haloes, and provide an excellent match to the distribution of V_{1/2} for galaxies with L_V > 2 x 10^6 L_{sun}. For lower luminosities our simulations predict very little halo response, and tend to over-predict the observed circular velocities. In the context of LCDM, this could signal the increased stochasticity of star formation in haloes below M_{halo} \sim 10^{10} M_{sun}, or the role of environmental effects. Thus, haloes that are "too big to fail" do not fail LCDM, but haloes that are "too small to pass" (the galaxy formation threshold) provide a future test of LCDM. 6 pages, 3 figures, accepted to MNRAS Letters
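For reference, the half-light circular velocity compared here follows directly from the mass enclosed within the half-light radius. A minimal sketch (the mass and radius below are invented illustration values, not NIHAO results):

```python
import math

# Gravitational constant in convenient galactic units: kpc (km/s)^2 / Msun
G = 4.301e-6

def circular_velocity(m_enclosed_msun, r_kpc):
    """V_c = sqrt(G * M(<r) / r), returned in km/s."""
    return math.sqrt(G * m_enclosed_msun / r_kpc)

# Hypothetical dwarf: 1e9 Msun enclosed within r_1/2 = 1 kpc
v_half = circular_velocity(1e9, 1.0)
print(f"V_1/2 = {v_half:.1f} km/s")
```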

  • Publication . Conference object . Preprint . Article . 2019 . Embargo End Date: 01 Jan 2019
    Open Access
    Authors: 
    Breton Minnehan; Andreas Savakis;
    Publisher: arXiv

    We propose a data-driven approach for deep convolutional neural network compression that achieves high accuracy with high throughput and low memory requirements. Current network compression methods either find a low-rank factorization of the features that requires more memory, or select only a subset of features by pruning entire filter channels. We propose the Cascaded Projection (CaP) compression method that projects the output and input filter channels of successive layers to a unified low dimensional space based on a low-rank projection. We optimize the projection to minimize classification loss and the difference between the next layer's features in the compressed and uncompressed networks. To solve this non-convex optimization problem we propose a new optimization method of a proxy matrix using backpropagation and Stochastic Gradient Descent (SGD) with geometric constraints. Our cascaded projection approach leads to improvements in all critical areas of network compression: high accuracy, low memory consumption, low parameter count and high processing speed. The proposed CaP method demonstrates state-of-the-art results compressing VGG16 and ResNet networks with over 4x reduction in the number of computations and excellent performance in top-5 accuracy on the ImageNet dataset before and after fine-tuning.

  • Publication . Article . Preprint . Other literature type . 2013
    Open Access English
    Authors: 
    Sarah E. Medland; Jaime Derringer; Jian Yang; Tõnu Esko; Nicolas W. Martin; Konstantin Shakhbazov; Abdel Abdellaoui; Arpana Agrawal; Eva Albrecht; Behrooz Z. Alizadeh; +173 more
    Countries: Netherlands, United States, United Kingdom, Croatia, Australia
    Project: WT , NIH | FINANCIAL STATUS--RETIREM... (2P01AG005842-04), NIH | ECONOMICS OF AGING TRAINI... (5T32AG000186-10), EC | DEVHEALTH (269874), NSF | EAGER Proposal: Workshop ... (1064089), EC | GMI (230374), NIH | NBER Center for Aging and... (5P30AG012810-15)

    A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R2 ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.
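The linear polygenic score mentioned above is simply a weighted sum of effect-allele dosages. A sketch with placeholder genotypes and effect sizes (random numbers, not study data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 100, 1000

# Dosage matrix: individuals x SNPs, each entry 0/1/2 copies of the effect allele
dosages = rng.integers(0, 3, size=(n_people, n_snps))

# Per-SNP effect sizes; small, in the spirit of the tiny effects reported
effects = rng.normal(0.0, 0.01, size=n_snps)

# One polygenic score per individual
scores = dosages @ effects
print(scores.shape)
```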

  • Open Access
    Authors: 
    Fumiki Yoshihara; Tomoko Fuse; Sahel Ashhab; Kosuke Kakuyanagi; Shiro Saito; Kouichi Semba;
    Publisher: Springer Science and Business Media LLC

    The interaction between an atom and the electromagnetic field inside a cavity has played a crucial role in the historical development of our understanding of light-matter interaction and is a central part of various quantum technologies, such as lasers and many quantum computing architectures. The emergence of superconducting qubits has allowed the realization of strong and ultrastrong coupling between artificial atoms and cavities. If the coupling strength $g$ becomes as large as the atomic and cavity frequencies ($\Delta$ and $\omega_{\rm o}$ respectively), the energy eigenstates including the ground state are predicted to be highly entangled. This qualitatively new regime can be called the deep strong-coupling regime, and there has been an ongoing debate over whether it is fundamentally possible to realize this regime in realistic physical systems. By inductively coupling a flux qubit and an LC oscillator via Josephson junctions, we have realized circuits with $g/\omega_{\rm o}$ ranging from 0.72 to 1.34 and $g/\Delta\gg 1$. Using spectroscopy measurements, we have observed unconventional transition spectra, with patterns resembling masquerade masks, that are characteristic of this new regime. Our results provide a basis for ground-state-based entangled-pair generation and open a new direction of research on strongly correlated light-matter states in circuit-quantum electrodynamics. Comment: 3 figures, Methods, and Supplementary Information
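The regime described can be stated against the standard quantum Rabi Hamiltonian (one common convention, written with the paper's symbols and $\hbar = 1$):

```latex
H = \omega_{\rm o}\, a^\dagger a \;+\; \frac{\Delta}{2}\,\sigma_z
    \;+\; g\,\sigma_x\,(a + a^\dagger),
\qquad \text{deep strong coupling: } g/\omega_{\rm o} \gtrsim 1 .
```

When $g$ is comparable to or exceeds $\omega_{\rm o}$, the coupling term can no longer be treated perturbatively, which is why the ground state itself becomes entangled.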

  • Open Access
    Authors: 
    Richard M. Plotkin; Elena Gallo; Peter G. Jonker; James Miller-Jones; Jeroen Homan; T. Muñoz-Darias; Sera Markoff; Montserrat Armas Padilla; Rob Fender; A. Rushton; +2 more
    Country: Netherlands

    We present coordinated multiwavelength observations of the high Galactic latitude (b=+50 deg) black hole X-ray binary (XRB) J1357.2-0933 in quiescence. Our broadband spectrum includes strictly simultaneous radio and X-ray observations, and near-infrared, optical, and ultraviolet data taken 1-2 days later. We detect Swift J1357.2-0933 at all wavebands except for the radio (f_5GHz < 3.9 uJy/beam). Given current constraints on the distance (2.3-6.3 kpc), its 0.5-10 keV X-ray flux corresponds to an Eddington ratio Lx/Ledd = 4e-9 -- 3e-8 (assuming a black hole mass of 10 Msun). The broadband spectrum is dominated by synchrotron radiation from a relativistic population of outflowing thermal electrons, which we argue to be a common signature of short-period quiescent BHXBs. Furthermore, we identify the frequency where the synchrotron radiation transitions from optically thick-to-thin (approximately 2-5e14 Hz), which is the most robust determination of a 'jet break' for a quiescent BHXB to date. Our interpretation relies on the presence of steep curvature in the ultraviolet spectrum, a frequency window made observable by the low amount of interstellar absorption along the line of sight. High Galactic latitude systems like Swift J1357.2-0933 with clean ultraviolet sightlines are crucial for understanding black hole accretion at low luminosities. 12 pages, 5 Figures, 1 Table. Accepted for publication in MNRAS
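The quoted Eddington ratios follow from the usual flux-to-luminosity conversion and the standard scaling L_Edd ≈ 1.26 × 10^38 (M/M_sun) erg/s. A sketch (the input flux below is an invented example, not the measured value):

```python
import math

MSUN_EDD = 1.26e38   # Eddington luminosity per solar mass, erg/s
KPC_CM = 3.086e21    # centimetres per kiloparsec

def eddington_ratio(flux_cgs, distance_kpc, mass_msun=10.0):
    """L_x / L_Edd from an observed X-ray flux in erg/s/cm^2."""
    lx = 4.0 * math.pi * (distance_kpc * KPC_CM) ** 2 * flux_cgs
    return lx / (MSUN_EDD * mass_msun)

# Bracketing the distance range quoted in the text (2.3-6.3 kpc)
f = 1e-13  # hypothetical 0.5-10 keV flux
print(eddington_ratio(f, 2.3), eddington_ratio(f, 6.3))
```

The factor-of-several spread in the quoted ratio comes almost entirely from the squared distance uncertainty.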

  • Publication . Article . Preprint . 2020 . Embargo End Date: 01 Jan 2020
    Open Access
    Authors: 
    Khaled Ai Thelaya; Marco Agus; Jens Schneider;
    Publisher: arXiv

    In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leaves) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178× speed-up over naïve parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions. Comment: To appear in IEEE Transactions on Visualization and Computer Graphics (IEEE Vis 2020)
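The central factorization, rewriting any convex mixture as nested two-operand linear interpolations, can be illustrated with a toy recursion. This is only the algebraic idea, not the paper's DAG construction, node pruning, or compression:

```python
def factor(mixture):
    """mixture: dict id -> weight, weights summing to 1. Returns a nested
    ('lerp', left, right, t) tree whose leaves are single segment IDs."""
    items = sorted(mixture.items())
    if len(items) == 1:
        return items[0][0]
    (head, w), rest = items[0], dict(items[1:])
    total = sum(rest.values())
    rest = {k: v / total for k, v in rest.items()}   # renormalize the remainder
    # value = (1 - t) * head + t * rest, with t the total weight of the rest
    return ('lerp', head, factor(rest), 1.0 - w)

def evaluate(node):
    """Expand a lerp tree back into a dict of per-ID weights."""
    if not isinstance(node, tuple):
        return {node: 1.0}
    _, left, right, t = node
    out = {}
    for k, v in evaluate(left).items():
        out[k] = out.get(k, 0.0) + (1.0 - t) * v
    for k, v in evaluate(right).items():
        out[k] = out.get(k, 0.0) + t * v
    return out

mix = {'bone': 0.5, 'muscle': 0.25, 'skin': 0.25}
tree = factor(mix)
print(tree)
print(evaluate(tree))   # recovers the original mixture
```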

  • Open Access English
    Authors: 
    H. B. Benaoum; S. H. Shaglel;

    We propose a new scaling ansatz in the neutrino Dirac mass matrix to explain the low energy neutrino oscillations data, baryon number asymmetry and neutrinoless double beta decay. In this work, a full reconstruction of the neutrino Dirac mass matrix has been realized from the low energy neutrino oscillations data based on type-I seesaw mechanism. A concrete model based on $A_4$ flavor symmetry has been considered to generate such a neutrino Dirac mass matrix and imposes a relation between the two scaling factors. In this model, the right-handed Heavy Majorana neutrino masses are quasi-degenerate at TeV mass scales. Extensive numerical analysis studies have been carried out to constrain the parameter space of the model from the low energy neutrino oscillations data. It has been found that the parameter space of the Dirac mass matrix elements lies near or below the MeV region and the scaling factor $|\kappa_1|$ has to be less than 10. Furthermore, we have examined the possibility for simultaneous explanation of both neutrino oscillations data and the observed baryon number asymmetry in the Universe. Such an analysis gives further restrictions on the parameter space of the model, thereby explaining the correct neutrino data as well as the baryon number asymmetry via a resonant leptogenesis scenario. Finally, we show that the allowed space for the effective Majorana neutrino mass $m_{ee}$ is also constrained in order to account for the observed baryon asymmetry. Comment: 25 pages, 10 figures, revised version
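For context, the type-I seesaw relation underlying such a reconstruction is the standard one connecting the light-neutrino mass matrix to the Dirac matrix $m_D$ and the heavy right-handed Majorana matrix $M_R$:

```latex
m_\nu \;\simeq\; -\, m_D\, M_R^{-1}\, m_D^{T}
```

With $M_R$ at the TeV scale, Dirac mass entries near or below the MeV region yield light-neutrino masses at the sub-eV scale required by oscillation data.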

  • Open Access English
    Authors: 
    Maurizio Capra; Beatrice Bussolino; Alberto Marchisio; Guido Masera; Maurizio Martina; Muhammad Shafique;
    Country: Italy

    Machine Learning (ML) is becoming ubiquitous in everyday life. Deep Learning (DL) is already present in many applications, ranging from computer vision for medicine to autonomous driving of modern cars, as well as in security, healthcare, and finance. However, to achieve impressive performance these algorithms employ very deep networks, requiring significant computational power during both training and inference. A single inference of a DL model may require billions of multiply-and-accumulate operations, making DL extremely compute- and energy-hungry. When several sophisticated algorithms must be executed with limited energy and low latency, cost-effective hardware platforms capable of energy-efficient DL execution become necessary. This paper first introduces the key properties of two brain-inspired models, the Deep Neural Network (DNN) and the Spiking Neural Network (SNN), and then analyzes techniques for producing efficient, high-performance designs. It summarizes and compares state-of-the-art work on the four leading execution platforms (CPU, GPU, FPGA and ASIC), giving particular prominence to the last two, since they offer greater design flexibility and the potential for high energy efficiency, especially for inference. Beyond hardware solutions, the paper discusses important security issues that DNN and SNN models may face during execution, and offers a comprehensive section on benchmarking, explaining how to assess the quality of different networks and of the hardware systems designed for them. Accepted for publication in IEEE Access

  • Open Access
    Authors: 
    Jesus M. Corral-Santana; Jorge Casares; Teo Muñoz-Darias; Franz E. Bauer; I. G. Martínez-Pais; David M. Russell;
    Publisher: EDP Sciences

    During the last ~50 years, the population of black hole candidates in X-ray binaries has increased considerably, with 59 Galactic objects detected in transient low-mass X-ray binaries plus a few in persistent systems (including ~5 extragalactic binaries). We collect near-infrared, optical and X-ray information spread over hundreds of references in order to study the population of black holes in X-ray transients as a whole. We present the most up-to-date catalogue of black hole transients, containing X-ray, optical and near-infrared observations together with their astrometric and dynamical properties. It provides new and useful statistical and observational information, giving a thorough and complete overview of the black hole population in the Milky Way. Analysing the distances and spatial distribution of the observed systems, we estimate a total population of ~1300 Galactic black hole transients, meaning that we have so far discovered less than ~5% of the total Galactic population. The complete version of this catalogue will be continuously updated online and in the Virtual Observatory, including finding charts and data at other wavelengths. Comment: http://www.astro.puc.cl/BlackCAT - Accepted for publication in Astronomy & Astrophysics. 20 pages, 8 figures, 5 Tables

  • Publication . Preprint . Conference object . Article . 2020
    Open Access
    Authors: 
    Ismail Shahin;
    Publisher: IEEE

    This research aims at identifying the unknown emotion using speaker cues. In this study, we identify the unknown emotion using a two-stage framework. The first stage focuses on identifying the speaker who uttered the unknown emotion, while the next stage focuses on identifying the unknown emotion uttered by the recognized speaker in the prior stage. This proposed framework has been evaluated on an Arabic Emirati-accented speech database uttered by fifteen speakers per gender. Mel-Frequency Cepstral Coefficients (MFCCs) have been used as the extracted features and Hidden Markov Model (HMM) has been utilized as the classifier in this work. Our findings demonstrate that emotion recognition accuracy based on the two-stage framework is greater than that based on the one-stage approach and the state-of-the-art classifiers and models such as Gaussian Mixture Model (GMM), Support Vector Machine (SVM), and Vector Quantization (VQ). The average emotion recognition accuracy based on the two-stage approach is 67.5%, while the accuracy reaches 61.4%, 63.3%, 64.5%, and 61.5% based on the one-stage approach, GMM, SVM, and VQ, respectively. The achieved results based on the two-stage framework are very close to those attained in subjective assessment by human listeners. 5 pages
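The two-stage decision rule reduces to a cascade of argmax choices: pick the speaker whose model best explains the utterance, then pick the emotion under that speaker's models. A toy sketch with made-up log-likelihood tables standing in for the trained HMMs:

```python
speakers = ['spk1', 'spk2']
emotions = ['neutral', 'happy', 'angry']

# Stage 1: log-likelihood of the utterance under each speaker model
# (invented numbers for illustration)
speaker_ll = {'spk1': -120.0, 'spk2': -95.0}

# Stage 2: emotion log-likelihoods, conditioned on the recognized speaker
emotion_ll = {
    'spk1': {'neutral': -40.0, 'happy': -55.0, 'angry': -50.0},
    'spk2': {'neutral': -60.0, 'happy': -42.0, 'angry': -48.0},
}

best_speaker = max(speakers, key=lambda s: speaker_ll[s])
best_emotion = max(emotions, key=lambda e: emotion_ll[best_speaker][e])
print(best_speaker, best_emotion)
```

Conditioning the emotion models on the recognized speaker is what distinguishes this cascade from a one-stage classifier over emotions alone.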
