Applied Sciences · Article · 2025 · Peer-reviewed
License: CC BY
Data source: Crossref

An Efficient Dropout for Robust Deep Neural Networks

Authors: Yavuz Çapkan, Aydın Yeşildirek

Abstract

Overfitting remains a major difficulty in training deep neural networks, especially when attempting to achieve good generalization in complex classification tasks. Standard dropout is often employed to address this issue; however, its uniform random inactivation of neurons typically leads to instability and insufficient performance gains. This paper proposes an enhanced regularization technique that merges adaptive sigmoidal dropout with weight amplification, dynamically adjusting neuron deactivation based on weight statistics, activation patterns, and neuron history. The proposed dropout process uses a sigmoid function driven by a temperature parameter to determine deactivation likelihood and incorporates a “neuron recovery” step to restore important activations. Simultaneously, the method amplifies high-magnitude weights to emphasize crucial features during learning. The approach is evaluated on the CIFAR-10 and CIFAR-100 datasets using four distinct CNN architectures, including deep and residual-based models. Results demonstrate that the proposed technique consistently outperforms both standard dropout and baseline models without dropout, yielding higher validation accuracy and lower, more stable validation loss on both datasets. In particular, it shows superior convergence and generalization performance on the more challenging CIFAR-100 dataset. These findings demonstrate the potential of the proposed technique to improve model robustness and training efficiency, offering a viable alternative for complex classification tasks.
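
This record contains no code, so the following is an illustrative PyTorch-style sketch of the mechanism the abstract describes: a temperature-scaled sigmoid sets each neuron's deactivation likelihood, a recovery step restores strongly activated neurons, and a separate pass amplifies high-magnitude weights. The names (AdaptiveSigmoidalDropout, amplify_weights), the use of batch z-scores as a stand-in for the paper's "weight statistics, activation patterns, and neuron history", and all threshold and gain values are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class AdaptiveSigmoidalDropout(nn.Module):
    """Illustrative sketch: drop probability comes from a temperature-scaled
    sigmoid of each neuron's standardized activation, and strongly activated
    neurons are "recovered" (never dropped). Not the paper's exact method."""

    def __init__(self, temperature: float = 1.0, recovery_threshold: float = 2.0):
        super().__init__()
        self.temperature = temperature                # controls sigmoid sharpness
        self.recovery_threshold = recovery_threshold  # |z| above this is always kept

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x
        # Standardize activations over the batch so the sigmoid sees z-scores.
        z = (x - x.mean(dim=0, keepdim=True)) / (x.std(dim=0, keepdim=True) + 1e-8)
        # Weakly activated neurons receive a higher deactivation likelihood.
        p_drop = torch.sigmoid(-z / self.temperature)
        keep = (torch.rand_like(x) > p_drop).float()
        # "Neuron recovery": restore important (strongly activated) neurons.
        keep = torch.where(z.abs() > self.recovery_threshold,
                           torch.ones_like(keep), keep)
        # Inverted-dropout rescaling preserves the expected activation magnitude.
        return x * keep / keep.mean().clamp(min=1e-8)

def amplify_weights(layer: nn.Linear, gain: float = 1.05, q: float = 0.9) -> None:
    """Illustrative weight amplification: scale up the highest-magnitude
    weights so salient features are emphasized during training."""
    with torch.no_grad():
        w = layer.weight
        mask = w.abs() > w.abs().quantile(q)
        w[mask] *= gain

In a training loop, such a module could replace nn.Dropout, with amplify_weights applied periodically to selected layers; the batch z-score statistic and the quantile threshold here are placeholders for whatever statistics the paper actually uses.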

Impact indicators (BIP!)
  • Selected citations: 4. Derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Popularity: Top 10%. Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
  • Influence: Average. Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Impulse: Top 10%. Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.