This repository contains the raw results (by-word information-theoretic measures for the experimental stimuli) and the LSTM models analyzed in "Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment". The models from the synthetic experiments, along with the training data generation script, are given in the synthetic archive; an included README gives more details on recreating and evaluating the results of those experiments.

The naming convention for each model in the models directory is (a parsing sketch follows the list):

[Language]_hidden[Hidden Units]_batch[Batch Size]_dropout[Dropout Rate]_lr[Learning Rate]_[Model Number].pt

- Language: en for English, es for Spanish
- Hidden Units: all models have two layers with 650 hidden units per layer
- Batch Size: the size of the batch (128 for English, 64 for Spanish)
- Dropout Rate: all models used a dropout rate of 0.2
- Learning Rate: all models used a learning rate of 20
- Model Number: identifier of the model (English model 0 is the best model from Gulordava et al. (2018))
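The snippet below is a minimal sketch, not part of the repository, showing how a filename following the convention above could be decomposed into its hyperparameters. The helper name `parse_model_filename` and the example filename are illustrative assumptions; check the models directory for the actual files.

```python
import re

# Pattern mirroring the naming convention described above
# (assumed: language code, then hidden/batch/dropout/lr fields, then model number).
FILENAME_PATTERN = re.compile(
    r"(?P<language>en|es)"
    r"_hidden(?P<hidden_units>\d+)"
    r"_batch(?P<batch_size>\d+)"
    r"_dropout(?P<dropout>[\d.]+)"
    r"_lr(?P<learning_rate>[\d.]+)"
    r"_(?P<model_number>\d+)\.pt"
)

def parse_model_filename(name: str) -> dict:
    """Return the hyperparameters encoded in a model filename (hypothetical helper)."""
    match = FILENAME_PATTERN.fullmatch(name)
    if match is None:
        raise ValueError(f"Filename does not follow the naming convention: {name}")
    fields = match.groupdict()
    return {
        "language": fields["language"],
        "hidden_units": int(fields["hidden_units"]),
        "batch_size": int(fields["batch_size"]),
        "dropout": float(fields["dropout"]),
        "learning_rate": float(fields["learning_rate"]),
        "model_number": int(fields["model_number"]),
    }

# Illustrative filename consistent with the convention (English model 0):
print(parse_model_filename("en_hidden650_batch128_dropout0.2_lr20_0.pt"))
# {'language': 'en', 'hidden_units': 650, 'batch_size': 128,
#  'dropout': 0.2, 'learning_rate': 20.0, 'model_number': 0}
```

Once a filename is parsed, the `.pt` checkpoint itself can presumably be loaded with PyTorch's `torch.load`, though the exact loading procedure depends on how the models were saved; see the included README for the evaluation details.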
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | | 11 |
| Downloads | | 3 |

Views and downloads provided by UsageCounts.