Text summarization is the technique of reducing a lengthy text to a manageable length while preserving its essential concepts and points. Its goal is to provide a concise synopsis that captures the main ideas of the original work. There are two main approaches to text summarization: extractive and abstractive. Extractive text summarization identifies the important details by selecting key sentences or phrases from the source text to produce a succinct summary. Abstractive text summarization builds an internal semantic representation of the source text and rewrites it in new words using natural language processing. This study focuses on extractive text summarization. No text summarization research is available for the Kafi-noonoo language, so the main objective of this study is to develop Kafi-noonoo text summarizer models with a deep learning approach. For this study, 402 Kafi-noonoo texts with summaries were used as input documents. Three deep learning models were proposed: CNN (convolutional neural network), LSTM (long short-term memory), and Bi-LSTM (bi-directional long short-term memory), and a comparative analysis was performed on the Kafi-noonoo dataset. The developed models for the Kafi-noonoo language address the problems of content selection bias, information overload, and the waste of time, effort, and materials. In our experiments, the LSTM model achieves 98.2% precision, 98.6% recall, 98.1% F1 score, 93% accuracy, 98.5% validation accuracy, and 96.7% training accuracy; the Bi-LSTM model scores 98.3% precision, 99.2% recall, 98.6% F1 score, 98% accuracy, 98.6% validation accuracy, and 97.8% training accuracy; and the CNN model scores 88% precision, 87% recall, 93.9% F1 score, 93.5% accuracy, 94% validation accuracy, and 93.6% training accuracy.
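To illustrate the extractive approach described above — scoring and selecting key sentences from the source text — here is a minimal, language-agnostic sketch using a classical word-frequency heuristic. This toy baseline is for illustration only; it is not the CNN, LSTM, or Bi-LSTM models developed in this study, and the naive sentence splitter is an assumption in place of a real tokenizer.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Toy extractive summarizer: pick the n top-scoring sentences."""
    # Naive sentence split on ., !, ? (a real system would use a tokenizer)
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    # Document-level word frequencies
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sent):
        # Average frequency of the sentence's words
        toks = re.findall(r'\w+', sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the selected sentences in their original order
    return ' '.join(s for s in sentences if s in top)
```

Frequency scoring rewards sentences whose words recur across the document; a deep-learning extractive model replaces this hand-crafted score with a learned sentence classifier.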
Automatic text summarization, Natural language processing, Abstractive summarization, Extractive summarization, Kafi-noonoo language