Token-level metrics from the paper "Black-box language model explanation by context length probing". The metrics were computed on the UD_English_LinES development set using the preds_to_metrics script from the repository. The archives were created with PyTorch 1.11.0 and can be loaded using torch.load. Each file contains a dictionary mapping metric names to PyTorch tensors; the first two dimensions of each tensor correspond to the target token position (within the whole dataset) and the context length, respectively. Code for processing the metrics is included in the process_metrics notebook. The metrics are provided for research purposes, in particular to enable reproducing the results from the paper without having to recompute or store the model predictions.
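For illustration, a minimal sketch of loading one of the archives and inspecting its contents is given below; the file name metrics.pt is a placeholder for one of the provided files, and the available metric names depend on the archive.

    import torch

    # Load one of the metric archives (created with PyTorch 1.11.0);
    # "metrics.pt" is a placeholder for the actual archive path.
    metrics = torch.load("metrics.pt", map_location="cpu")

    # Each archive is a dictionary mapping metric names to PyTorch tensors.
    for name, tensor in metrics.items():
        print(name, tuple(tensor.shape))

    # Dimension 0 indexes the target token position within the whole dataset
    # and dimension 1 indexes the context length. For example, to inspect one
    # metric for the token at dataset position 100 across all context lengths:
    name, tensor = next(iter(metrics.items()))
    values_for_token_100 = tensor[100]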
Transformer, explainability, GPT, language model, interpretability