https://dx.doi.org/10.20372/na...
Thesis . 2025
License: CC BY
Data sources: Datacite

KAFI-NOONOO TEXT SUMMARIZATION WITH A DEEP LEARNING APPROACH

Authors: KOCHITO, TESHALE MENGESHA

Abstract

Text summarization is the technique of reducing a lengthy text to a manageable length while preserving its essential concepts and points. Its goal is to provide a concise synopsis that captures the main ideas of the original work. There are two main approaches to text summarization: extractive and abstractive. Extractive text summarization identifies the important details by selecting key sentences or phrases from the source text to produce a succinct summary. Abstractive text summarization builds an internal semantic representation of the source text and rewrites it in new words using natural language processing. This study focuses on extractive text summarization. No text summarization research is currently available for the Kafi-noonoo language, so the main objective of this study is to develop Kafi-noonoo text summarizer models with a deep learning approach. For this study, 402 Kafi-noonoo texts with summaries were used as input documents. Three deep learning models were proposed: a CNN (convolutional neural network), an LSTM (long short-term memory) network, and a Bi-LSTM (bi-directional long short-term memory) network, and a comparative analysis was carried out on the Kafi-noonoo dataset. The developed models for the Kafi-noonoo language address the problems of content selection bias, information overload, and the time, effort, and materials wasted by manual summarization. In our experiments, the LSTM model achieves 98.2% precision, 98.6% recall, 98.1% F1 score, 93% accuracy, 98.5% validation accuracy, and 96.7% training accuracy; the Bi-LSTM model scores 98.3% precision, 99.2% recall, 98.6% F1 score, 98% accuracy, 98.6% validation accuracy, and 97.8% training accuracy; and the CNN model scores 88% precision, 87% recall, 93.9% F1 score, 93.5% accuracy, 94% validation accuracy, and 93.6% training accuracy.
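This record does not include the thesis code, but the abstract frames extractive summarization as a deep learning task over Kafi-noonoo sentences. The sketch below is a minimal illustration of one common way to set up such a Bi-LSTM model in Keras, treating summarization as binary sentence classification (keep a sentence in the summary or not). The variable names, hyperparameters, and preprocessing below are illustrative assumptions, not the configuration used in the thesis.

```python
# A minimal, illustrative sketch (not the thesis's actual code) of extractive
# summarization framed as binary sentence classification with a Bi-LSTM:
# each sentence is scored for whether it belongs in the summary.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense
from tensorflow.keras.metrics import Precision, Recall

# Hypothetical training data: Kafi-noonoo sentences paired with 0/1 labels
# that mark whether the sentence appears in the reference summary.
sentences = ["sentence one ...", "sentence two ..."]   # placeholders
labels = np.array([1, 0])                              # placeholders

MAX_WORDS, MAX_LEN = 20000, 60        # assumed vocabulary and sequence limits
tokenizer = Tokenizer(num_words=MAX_WORDS, oov_token="<unk>")
tokenizer.fit_on_texts(sentences)
X = pad_sequences(tokenizer.texts_to_sequences(sentences), maxlen=MAX_LEN)

model = Sequential([
    Embedding(MAX_WORDS, 128),          # learned word embeddings
    Bidirectional(LSTM(64)),            # reads each sentence in both directions
    Dropout(0.3),
    Dense(1, activation="sigmoid"),     # P(sentence belongs in the summary)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", Precision(), Recall()])

# model.fit(X, labels, validation_split=0.2, epochs=10, batch_size=32)
# At inference time, the highest-scoring sentences form the extractive summary.
```

Under this framing, the LSTM and CNN variants compared in the abstract would differ only in the encoder layer, which keeps the comparison across the three architectures straightforward.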

Keywords

Automatic text summarization, Natural language processing, Abstractive summarization, Extractive summarization, Kafi-noonoo language
