Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection

2021-11-15 | preprint


Cite this publication

Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection
Wahle, J. P.; Ashok, N.; Ruas, T.; Meuschke, N.; Ghosal, T. & Gipp, B. (2021)

Documents & Media

License

GRO License

Details

Authors
Wahle, Jan Philip; Ashok, Nischal; Ruas, Terry; Meuschke, Norman; Ghosal, Tirthankar; Gipp, Bela
Abstract
A drastic rise in potentially life-threatening misinformation has been a by-product of the COVID-19 pandemic. Computational support to identify false information within the massive body of data on the topic is crucial to prevent harm. Researchers have proposed many methods for flagging online misinformation related to COVID-19. However, these methods predominantly target specific content types (e.g., news) or platforms (e.g., Twitter), and their ability to generalize has so far remained largely unclear. To fill this gap, we evaluate fifteen Transformer-based models on five COVID-19 misinformation datasets that include social media posts, news articles, and scientific papers. We show that tokenizers and models tailored to COVID-19 data do not provide a significant advantage over general-purpose ones. Our study provides a realistic assessment of models for detecting COVID-19 misinformation. We expect that evaluating a broad spectrum of datasets and models will benefit future research in developing misinformation detection systems.
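To illustrate the kind of evaluation the abstract describes, the sketch below compares a general-purpose and a COVID-adapted Transformer on a binary misinformation classification task. It is not the authors' code; the model names and the two toy examples are illustrative assumptions, and in a real evaluation the classification head would be fine-tuned on each dataset before scoring.

```python
# Minimal sketch (not the authors' pipeline): probing a general-purpose and a
# domain-adapted Transformer on a toy binary misinformation task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical labeled examples (1 = misinformation, 0 = reliable).
texts = [
    "5G towers spread the coronavirus.",           # expected label: 1
    "Vaccines were tested in randomized trials.",  # expected label: 0
]

# Assumed model choices: one general-purpose, one tailored to COVID-19 text.
for model_name in ["bert-base-uncased",
                   "digitalepidemiologylab/covid-twitter-bert-v2"]:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                               num_labels=2)

    # Tokenize and run a forward pass; fine-tuning on each misinformation
    # dataset would precede this step in an actual benchmark.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    print(model_name, logits.argmax(dim=-1).tolist())
```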
Issue Date
15-November-2021
