
Understanding data, models and predictions is important for machine learning applications. Due to the limitations of our spatial perception and intuition, analysing high-dimensional data is inherently difficult. Furthermore, black-box models achieving high predictive accuracy are widely used, yet the logic behind their predictions is often opaque. Textualisation -- a natural language narrative of selected phenomena -- can tackle these shortcomings. When extended with argumentation theory, we could envisage machine learning models and predictions arguing persuasively for their choices.
