
Training datasets for the IberAuTexTification shared task at IberLEF 2024. The task aims to boost research on the detection and attribution of text generated automatically by large language models, in a setup that is multilingual (languages of the Iberian Peninsula: Spanish, English, Catalan, Galician, Basque, and Portuguese), multi-domain (news, reviews, emails, essays, dialogues, Wikipedia, WikiHow, tweets, etc.), and multi-model (GPT, LLaMA, Mistral, Cohere, Anthropic, MPT, Falcon, etc.). This dataset includes only the training sets for the two subtasks of the competition; the test sets will be released on April 21st. Once you request the data through Zenodo, a password to decompress it will be sent to your email within 24 hours. Please make sure your email address is written correctly, since we will send the password, as well as future notifications, to that address.
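Once the password arrives, the archive can be decompressed programmatically. The sketch below is a minimal example using Python's standard-library `zipfile`, assuming the release is a password-protected ZIP (ZipCrypto); the archive filename and password shown are hypothetical placeholders, not the actual values distributed by the organizers. For a self-contained demonstration it builds a small unencrypted zip and extracts it the same way (`pwd=` is simply ignored for unencrypted entries).

```python
import os
import tempfile
import zipfile

def extract_with_password(archive_path: str, password: str, dest: str) -> list:
    """Extract a (possibly ZipCrypto-encrypted) zip archive into dest.

    zipfile.ZipFile.extractall accepts a pwd= argument (bytes) that is
    used only for encrypted entries.
    """
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(path=dest, pwd=password.encode("utf-8"))
        return zf.namelist()

with tempfile.TemporaryDirectory() as tmp:
    # Hypothetical archive name; the real filename comes from the Zenodo record.
    demo = os.path.join(tmp, "iberautextification_train.zip")
    with zipfile.ZipFile(demo, "w") as zf:
        zf.writestr("subtask1/train.tsv", "id\ttext\tlabel\n")
    # "password-from-email" stands in for the password sent by the organizers.
    names = extract_with_password(demo, "password-from-email", tmp)
    print(names)  # ['subtask1/train.tsv']
```

If the actual release uses a different format (e.g. 7z), a tool such as `7z x -p<password> <archive>` would be the equivalent command-line step.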
