Powered by OpenAIRE graph
IET Image Processing
Article . 2022 . Peer-reviewed
License: CC BY NC
Data sources: Crossref
IET Image Processing
Article
License: CC BY NC
Data sources: UnpayWall
IET Image Processing
Article . 2022
Data sources: DOAJ

This Research product is the result of merged Research products in OpenAIRE.


The local ternary pattern encoder–decoder neural network for dental image segmentation

Authors: Omran Salih; Kevin Jan Duffy;

Abstract

Recent advances in medical imaging analysis, especially the use of deep learning, are helping to identify, detect, classify, and quantify patterns in radiographs. At the centre of these advances is the ability to explore hierarchical feature representations learned from data. Deep learning has become an invaluable and widely sought-after technique, leading to enhanced performance in the analysis of medical applications and systems. Deep learning techniques have achieved improved results in dental image segmentation, a crucial step that helps dentists diagnose dental caries. However, the performance of the deep networks used for these analyses is constrained by various challenging features found in dental carious lesions. Segmentation of dental images is often difficult due to the wide variety of topologies, the intricacies of medical structures, and poor image quality caused by conditions such as low contrast, noise, and irregular, fuzzy border edges. These issues are exacerbated by the small number of images available for any particular analysis. A robust local ternary pattern encoder–decoder network (LTPEDN) is proposed to overcome these dental image segmentation challenges and minimise the computational resources required. This new architecture is a modification of existing methods using a local ternary pattern (LTP). Images are preprocessed via augmentation and normalisation techniques to enlarge and prepare the datasets, and the resulting data are then fed to the LTPEDN for training and testing. Segmentation is performed using the non-learnable layers (the LTP layers) and the learnable layers (standard convolution layers) to extract the region of interest (ROI) of the teeth. The method was evaluated on an augmented dataset of 11,000 dental images, trained on 8,800 training images and tested on 2,200 testing images. The new method is shown to be 94.32% accurate.
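The non-learnable layers the abstract refers to compute a local ternary pattern. As a rough illustration only, here is a minimal NumPy sketch of the standard split-LTP operator (not the authors' exact layer; the threshold `t`, the neighbourhood ordering, and the function name are assumptions):

```python
import numpy as np

def local_ternary_pattern(image, t=5):
    """Split-LTP: code each pixel's 8-neighbourhood into two 8-bit maps.

    Each neighbour n of a centre pixel c is ternary-coded:
        +1 if n > c + t,  -1 if n < c - t,  0 otherwise,
    and the +1s / -1s are packed separately into an 'upper' and a
    'lower' binary pattern, as in the usual split-LTP formulation.
    """
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    upper = np.zeros_like(centre, dtype=np.uint8)
    lower = np.zeros_like(centre, dtype=np.uint8)
    # The 8 neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= np.uint8(1 << bit) * (neigh > centre + t)  # +1 branch
        lower |= np.uint8(1 << bit) * (neigh < centre - t)  # -1 branch
    return upper, lower
```

In an encoder–decoder network such as the one described, maps like these can serve as fixed (non-learnable) feature channels feeding the standard convolution layers, which is the division of labour the abstract outlines.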

Related Organizations
Keywords

QA76.75-76.765, Photography, Computer software, TR1-1050

  • BIP! impact indicators
    selected citations: 5
    These citations are derived from selected sources. This is an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    popularity: Top 10%
    This indicator reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
    influence: Average
    This indicator reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
    impulse: Top 10%
    This indicator reflects the initial momentum of an article directly after its publication, based on the underlying citation network.