Advanced search in Research products
Include: Publications, Research data, Research software, Open Access, Preprint, AE
2,781 Research products, page 1 of 279 (10 per page, sorted by relevance)
  • Publication . Preprint . Conference object . Article . 2019 . Embargo End Date: 01 Jan 2019
    Open Access
    Authors: 
    Yichao Yan; Qiang Zhang; Bingbing Ni; Wendong Zhang; Minghao Xu; Xiaokang Yang;
    Publisher: arXiv

    Person re-identification has achieved great progress with deep convolutional neural networks. However, most previous methods focus on learning individual appearance feature embedding, and it is hard for the models to handle difficult situations with different illumination, large pose variance and occlusion. In this work, we take a step further and consider employing context information for person search. For a probe-gallery pair, we first propose a contextual instance expansion module, which employs a relative attention module to search and filter useful context information in the scene. We also build a graph learning framework to effectively employ context pairs to update target similarity. These two modules are built on top of a joint detection and instance feature learning framework, which improves the discriminativeness of the learned features. The proposed framework achieves state-of-the-art performance on two widely used person search datasets. Comment: To appear in CVPR 2019
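    A toy sketch of the general idea only (not the paper's architecture; all names and the mixing weight are hypothetical): a probe-gallery similarity can be refined by blending in the similarities of co-occurring context pairs, weighted by a softmax attention over relevance scores.

        # Toy sketch, not the paper's model: refine a probe-gallery similarity
        # with attention-weighted similarities of co-occurring context pairs.
        import numpy as np

        def refined_similarity(s_target, s_context, relevance, alpha=0.5):
            """s_target: scalar target similarity; s_context: (k,) similarities
            of context pairs; relevance: (k,) attention logits (hypothetical)."""
            w = np.exp(relevance - relevance.max())
            w /= w.sum()                       # softmax over context pairs
            return (1 - alpha) * s_target + alpha * float(w @ s_context)

        print(refined_similarity(0.4, np.array([0.9, 0.2]), np.array([2.0, -1.0])))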

  • Open Access English
    Authors: 
    Cornelius A. Rietveld; Sarah E. Medland; Jaime Derringer; Jian Yang; Tõnu Esko; Harm-Jan Westra; Abdel Abdellaoui; Arpana Agrawal; Eva Albrecht; Behrooz Z. Alizadeh; +173 more
    Countries: Netherlands, Croatia, Australia, United Kingdom, United States
    Project: EC | GMI (230374), EC | DEVHEALTH (269874), NSF | EAGER Proposal: Workshop ... (1064089), NIH | NBER Center for Aging and... (5P30AG012810-15), NIH | ECONOMICS OF AGING TRAINI... (5T32AG000186-10), NIH | FINANCIAL STATUS--RETIREM... (2P01AG005842-04), WT

    A genome-wide association study (GWAS) of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent single-nucleotide polymorphisms (SNPs) are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (coefficient of determination R² ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.
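    A minimal sketch of the polygenic-score construction mentioned above (illustrative only; the genotypes and effect sizes below are synthetic, not the study's data): a linear polygenic score is the dosage-weighted sum of per-SNP effect estimates from the discovery GWAS.

        # Illustrative sketch with synthetic data, not the study's pipeline.
        import numpy as np

        def polygenic_score(dosages, betas):
            """dosages: (n_individuals, n_snps) allele counts in {0, 1, 2};
            betas: (n_snps,) per-allele effect estimates from the discovery GWAS."""
            return dosages @ betas

        rng = np.random.default_rng(0)
        dosages = rng.integers(0, 3, size=(2000, 500)).astype(float)
        betas = rng.normal(0.0, 0.01, size=500)
        score = polygenic_score(dosages, betas)
        # Add noise so the score explains roughly 2% of phenotype variance,
        # mirroring the abstract's figure; R^2 is the squared correlation.
        phenotype = score + rng.normal(0.0, 7 * score.std(), size=2000)
        print(np.corrcoef(score, phenotype)[0, 1] ** 2)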

  • Publication . Preprint . Article . 2019 . Embargo End Date: 01 Jan 2019
    Open Access
    Authors: 
    Wheatcroft, Edward; Wynn, Henry; Dent, Chris J.; Smith, Jim Q.; Copeland, Claire L.; Ralph, Daniel; Zachary, Stan;
    Publisher: arXiv

    Scenario Analysis is a risk assessment tool that aims to evaluate the impact of a small number of distinct plausible future scenarios. In this paper, we provide an overview of important aspects of Scenario Analysis including when it is appropriate, the design of scenarios, uncertainty and encouraging creativity. Each of these issues is discussed in the context of climate, energy and legal scenarios.

  • Open Access English
    Authors: 
    Aysha Hamad Alneyadi; Iltaf Shah; Synan F. AbuQamar; S. Salman Ashraf;
    Publisher: Preprints

    Enzymatic degradation of organic pollutants is a new and promising remediation approach. Peroxidases are one of the most commonly used classes of enzymes to degrade organic pollutants. However, it is generally assumed that all peroxidases behave similarly and produce similar degradation products. In this study, we conducted detailed studies of the degradation of a model aromatic pollutant, Sulforhodamine B dye (SRB dye), using two peroxidases—soybean peroxidase (SBP) and chloroperoxidase (CPO). Our results show that these two related enzymes had different optimum conditions (pH, temperature, H2O2 concentration, etc.) for efficiently degrading SRB dye. High-performance liquid chromatography and LC-mass spectrometry analyses confirmed that both SBP and CPO transformed the SRB dye into low molecular weight intermediates. While most of the intermediates produced by the two enzymes were the same, the CPO treatment produced at least one different intermediate. Furthermore, toxicological evaluation using lettuce (Lactuca sativa) seeds demonstrated that the SBP-based treatment was able to eliminate the phytotoxicity of SRB dye, but the CPO-based treatment did not. Our results show, for the first time, that while both of these related enzymes can be used to efficiently degrade organic pollutants, they have different optimum reaction conditions and may not be equally efficient in detoxification of organic pollutants.

  • Publication . Article . Preprint . 2014 . Embargo End Date: 01 Jan 2014
    Open Access
    Authors: 
    Rahwan, Talal; Michalak, Tomasz P.;
    Publisher: arXiv

    Two fundamental algorithm-design paradigms are Tree Search and Dynamic Programming. The techniques used therein have been shown to complement one another when solving the complete set partitioning problem, also known as the coalition structure generation problem [5]. Inspired by this observation, we develop in this paper an algorithm to solve the coalition structure generation problem on graphs, where the goal is to identify an optimal partition of a graph into connected subgraphs. More specifically, we develop a new depth-first search algorithm, and combine it with an existing dynamic programming algorithm due to Vinyals et al. [9]. The resulting hybrid algorithm is empirically shown to significantly outperform both its constituent parts when the subset-evaluation function happens to have certain intuitive properties.
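    A brute-force sketch of the problem being solved (not the paper's hybrid DFS/DP algorithm; the evaluation function below is hypothetical): enumerate all partitions of a small graph into connected subgraphs and keep the one maximizing the summed subset values.

        # Brute-force reference sketch; only practical for very small graphs.
        def partitions(items):
            if not items:
                yield []
                return
            first, rest = items[0], items[1:]
            for part in partitions(rest):
                for i in range(len(part)):            # add `first` to an existing block
                    yield part[:i] + [part[i] + [first]] + part[i + 1:]
                yield part + [[first]]                # or start a new singleton block

        def connected(block, adj):
            block_set, seen, todo = set(block), set(), [block[0]]
            while todo:
                v = todo.pop()
                if v not in seen:
                    seen.add(v)
                    todo.extend(adj[v] & (block_set - seen))
            return seen == block_set

        def best_connected_partition(vertices, adj, value):
            valid = (p for p in partitions(list(vertices))
                     if all(connected(b, adj) for b in p))
            best = max(valid, key=lambda p: sum(value(frozenset(b)) for b in p))
            return best, sum(value(frozenset(b)) for b in best)

        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # a path graph 0-1-2-3
        print(best_connected_partition(list(adj), adj, lambda b: len(b) ** 2))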

  • Publication . Conference object . Preprint . Article . 2019
    Open Access
    Authors: 
    Breton Minnehan; Andreas Savakis;
    Publisher: IEEE

    We propose a data-driven approach for deep convolutional neural network compression that achieves high accuracy with high throughput and low memory requirements. Current network compression methods either find a low-rank factorization of the features that requires more memory, or select only a subset of features by pruning entire filter channels. We propose the Cascaded Projection (CaP) compression method that projects the output and input filter channels of successive layers to a unified low dimensional space based on a low-rank projection. We optimize the projection to minimize classification loss and the difference between the next layer's features in the compressed and uncompressed networks. To solve this non-convex optimization problem we propose a new optimization method of a proxy matrix using backpropagation and Stochastic Gradient Descent (SGD) with geometric constraints. Our cascaded projection approach leads to improvements in all critical areas of network compression: high accuracy, low memory consumption, low parameter count and high processing speed. The proposed CaP method demonstrates state-of-the-art results compressing VGG16 and ResNet networks with over 4x reduction in the number of computations and excellent performance in top-5 accuracy on the ImageNet dataset before and after fine-tuning.
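    A minimal sketch of the underlying low-rank idea (a plain truncated-SVD channel projection; the paper instead optimizes a proxy projection matrix with SGD under geometric constraints and a classification loss): project one layer's output channels to rank r and fold the projection into the next layer so the composed mapping is approximately preserved.

        # SVD-based channel projection; 1x1 convolutions viewed as matrices.
        import numpy as np

        def project_channels(W1, W2, r):
            """W1: (c_out, c_in) weights of layer 1; W2: (c_next, c_out) weights
            of layer 2. Returns compressed (W1p, W2p) with r intermediate channels."""
            U, _, _ = np.linalg.svd(W1, full_matrices=False)
            P = U[:, :r]              # (c_out, r) low-rank projection basis
            return P.T @ W1, W2 @ P   # layer 1 emits r channels; layer 2 absorbs P

        rng = np.random.default_rng(1)
        W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(128, 64))
        W1p, W2p = project_channels(W1, W2, r=16)
        x = rng.normal(size=32)
        print(np.linalg.norm(W2 @ (W1 @ x) - W2p @ (W1p @ x)))  # approximation error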

  • Open Access
    Authors: 
    Khaled Ai Thelaya; Marco Agus; Jens Schneider;
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)

    In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leaves) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178× speed-up over naïve parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions. Comment: To appear in IEEE Transactions on Visualization and Computer Graphics (IEEE Vis 2020)
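    A minimal sketch of the factorization step (a simplification under stated assumptions, not the paper's full data structure): a convex mixture of segmentation IDs can be rewritten as a chain of binary linear interpolations, and interning identical nodes turns repeated sub-mixtures into shared DAG nodes.

        # Factorize a per-voxel ID histogram into nested binary lerps with node sharing.
        nodes = {}  # node tuple -> integer id; identical nodes are shared (DAG)

        def intern(node):
            return nodes.setdefault(node, len(nodes))

        def factorize(mixture):
            """mixture: list of (segmentation_id, weight) with weights summing to 1.
            Returns the node id of a lerp chain encoding the mixture."""
            (seg, w), *rest = sorted(mixture, key=lambda p: p[1])
            node, total = intern(("leaf", seg)), w
            for seg2, w2 in rest:
                total += w2
                # lerp(a, b, t) = (1 - t) * a + t * b reproduces the running mixture
                node = intern(("lerp", node, intern(("leaf", seg2)), round(w2 / total, 6)))
            return node

        voxel = [(3, 0.5), (7, 0.25), (9, 0.25)]   # hypothetical child-ID histogram
        print(factorize(voxel), nodes)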

  • Publication . Article . Preprint . 2020
    Open Access English
    Authors: 
    Desmond Alexander Johnston; Ranasinghe P. K. C. M. Ranasinghe;

    A characteristic feature of the 3d plaquette Ising model is its planar subsystem symmetry. The quantum version of this model has been shown to be related via a duality to the X-Cube model, which has been paradigmatic in the new and rapidly developing field of fractons. The relation between the 3d plaquette Ising and the X-Cube model is similar to that between the 2d quantum transverse spin Ising model and the Toric Code. Gauging the global symmetry in the case of the 2d Ising model and considering the gauge invariant sector of the high temperature phase leads to the Toric Code, whereas gauging the subsystem symmetry of the 3d quantum transverse spin plaquette Ising model leads to the X-Cube model. A non-standard dual formulation of the 3d plaquette Ising model which utilises three flavours of spins has recently been discussed in the context of dualising the fracton-free sector of the X-Cube model. In this paper we investigate the classical spin version of this non-standard dual Hamiltonian and discuss its properties in relation to the more familiar Ashkin-Teller-like dual and further related dual formulations involving both link and vertex spins and non-Ising spins. Reviews results in arXiv:1106.0325 and arXiv:1106.4664 in light of more recent simulations and fracton literature. Published in special issue of Entropy dedicated to the memory of Professor Ian Campbell
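    For reference, a hedged sketch of the classical Hamiltonian usually meant by the 3d plaquette (gonihedric) Ising model, since coupling conventions vary:

        H = -J \sum_{[i,j,k,l]} \sigma_i \sigma_j \sigma_k \sigma_l , \qquad \sigma_i = \pm 1 ,

    where the sum runs over the elementary plaquettes of a cubic lattice. Flipping every spin in a single lattice plane leaves each plaquette product unchanged, which is the planar subsystem symmetry referred to above.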

  • Open Access English
    Authors: 
    Aaron A. Dutton; Andrea V. Macciò; Jonas Frings; Liang Wang; G. S. Stinson; Camilla Penzo; Xi Kang;
    Project: EC | MW-DISK (321035)

    We compare the half-light circular velocities, V_{1/2}, of dwarf galaxies in the Local Group to the predicted circular velocity curves of galaxies in the NIHAO suite of LCDM simulations. We use a subset of 34 simulations in which the central galaxy has a stellar luminosity in the range 0.5 x 10^5 < L_V < 2 x 10^8 L_{sun}. The NIHAO galaxy simulations reproduce the relation between stellar mass and halo mass from abundance matching, as well as the observed half-light size vs luminosity relation. The corresponding dissipationless simulations over-predict the V_{1/2}, recovering the problem known as too big to fail (TBTF). By contrast, the NIHAO simulations have expanded dark matter haloes, and provide an excellent match to the distribution of V_{1/2} for galaxies with L_V > 2 x 10^6 L_{sun}. For lower luminosities our simulations predict very little halo response, and tend to over predict the observed circular velocities. In the context of LCDM, this could signal the increased stochasticity of star formation in haloes below M_{halo} \sim 10^{10} M_{sun}, or the role of environmental effects. Thus, haloes that are "too big to fail", do not fail LCDM, but haloes that are "too small to pass" (the galaxy formation threshold) provide a future test of LCDM. 6 pages, 3 figures, accepted to MNRAS letters
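    For reference, the half-light circular velocity used in this comparison is conventionally defined (a standard relation, not a quantity derived in this abstract) as

        V_{1/2} \equiv V_{\rm circ}(r_{1/2}) = \sqrt{\frac{G\,M(<r_{1/2})}{r_{1/2}}} ,

    with r_{1/2} the half-light radius; for dispersion-supported dwarfs it is commonly estimated from the line-of-sight velocity dispersion via V_{1/2} ≈ \sqrt{3}\,\sigma_{\rm los} (Wolf et al. 2010).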

  • Open Access
    Authors: 
    H. B. Benaoum; S. H. Shaglel;
    Publisher: World Scientific Pub Co Pte Lt

    We propose a new scaling ansatz in the neutrino Dirac mass matrix to explain the low energy neutrino oscillations data, baryon number asymmetry and neutrinoless double beta decay. In this work, a full reconstruction of the neutrino Dirac mass matrix has been realized from the low energy neutrino oscillations data based on type-I seesaw mechanism. A concrete model based on $A_4$ flavor symmetry has been considered to generate such a neutrino Dirac mass matrix and imposes a relation between the two scaling factors. In this model, the right-handed Heavy Majorana neutrino masses are quasi-degenerate at TeV mass scales. Extensive numerical analysis studies have been carried out to constrain the parameter space of the model from the low energy neutrino oscillations data. It has been found that the parameter space of the Dirac mass matrix elements lies near or below the MeV region and the scaling factor $|\kappa_1|$ has to be less than 10. Furthermore, we have examined the possibility for simultaneous explanation of both neutrino oscillations data and the observed baryon number asymmetry in the Universe. Such an analysis gives further restrictions on the parameter space of the model, thereby explaining the correct neutrino data as well as the baryon number asymmetry via a resonant leptogenesis scenario. Finally, we show that the allowed space for the effective Majorana neutrino mass $m_{ee}$ is also constrained in order to account for the observed baryon asymmetry. Comment: 25 pages, 10 figures, revised version
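    For reference, the type-I seesaw relation underlying the reconstruction (standard form; sign and basis conventions vary):

        m_\nu \simeq - m_D\, M_R^{-1}\, m_D^{T} ,

    where $m_D$ is the neutrino Dirac mass matrix being reconstructed and $M_R$ is the right-handed Majorana mass matrix, here quasi-degenerate at the TeV scale.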
