doi: 10.3389/frai.2023.1023281 , 10.48550/arxiv.2005.07503 , 10.5281/zenodo.7074653 , 10.5281/zenodo.7074654
pmid: 36998290
pmc: PMC10043293
arXiv: 2005.07503
Introduction
This study presents COVID-Twitter-BERT (CT-BERT), a transformer-based model pre-trained on a large corpus of COVID-19-related Twitter messages. CT-BERT is specifically designed for COVID-19 content, particularly from social media, and can be used for various natural language processing (NLP) tasks such as classification, question answering, and chatbots. This paper evaluates the performance of CT-BERT on different classification datasets and compares it with BERT-LARGE, its base model.

Methods
The study uses CT-BERT, pre-trained on a large corpus of COVID-19-related Twitter messages. The authors evaluated its performance on five classification datasets, including one in the target domain, and compared it against BERT-LARGE to measure the marginal improvement. They also provide detailed information on the training process and the technical specifications of the model.

Results
CT-BERT outperforms BERT-LARGE with a marginal improvement of 10-30% on all five classification datasets, with the largest improvements observed in the target domain. The authors provide detailed performance metrics and discuss the significance of these results.

Discussion
The study demonstrates the potential of pre-trained transformer models such as CT-BERT for COVID-19-related NLP tasks. The results indicate that CT-BERT improves classification performance on COVID-19-related content, especially from social media. These findings have implications for applications such as monitoring public sentiment and developing chatbots that provide COVID-19-related information. The study also highlights the value of domain-specific pre-trained models for specific NLP tasks. Overall, this work is a valuable contribution to the development of COVID-19-related NLP models.
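The classification setup described in the Methods can be sketched with the Hugging Face `transformers` library. This is a minimal sketch, not the authors' training code: the model id `digitalepidemiologylab/covid-twitter-bert-v2` is the publicly released CT-BERT checkpoint, and the two-label head is an illustrative assumption (the record does not specify label sets for the five datasets).

```python
# Hedged sketch: loading CT-BERT with a classification head for tweet
# classification. The checkpoint id is an assumption about distribution;
# this record does not state it.

CT_BERT_ID = "digitalepidemiologylab/covid-twitter-bert-v2"

def load_ct_bert(num_labels: int = 2):
    """Load CT-BERT with a fresh sequence-classification head (downloads weights)."""
    # Imported inside the helper so it can be defined without transformers installed.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(CT_BERT_ID)
    model = AutoModelForSequenceClassification.from_pretrained(
        CT_BERT_ID, num_labels=num_labels
    )
    return tokenizer, model

def classify(texts, tokenizer, model):
    """Return the argmax label index for each input text."""
    import torch
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()
```

In practice the classification head would be fine-tuned on each labeled dataset before `classify` produces meaningful predictions; out of the box the head is randomly initialized.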
Social and Information Networks (cs.SI), FOS: Computer and information sciences, Computer Science - Machine Learning, text classification, Computer Science - Computation and Language, COVID-19, Computer Science - Social and Information Networks, QA75.5-76.95, vaccines, Machine Learning (cs.LG), covid-19, Artificial Intelligence, Electronic computers. Computer science, natural language processing (NLP), Computation and Language (cs.CL), language model (LM), BERT, Natural Language Processing
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 111 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 1% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 0.1% |
| Views | | 6 |
| Downloads | | 13 |

Views provided by UsageCounts
Downloads provided by UsageCounts