
</script>Algorithmic statistics considers the following problem: given a binary string $x$ (e.g., some experimental data), find a "good" explanation of this data. It uses algorithmic information theory to define formally what is a good explanation. In this paper we extend this framework in two directions. First, the explanations are not only interesting in themselves but also used for prediction: we want to know what kind of data we may reasonably expect in similar situations (repeating the same experiment). We show that some kind of hierarchy can be constructed both in terms of algorithmic statistics and using the notion of a priori probability, and these two approaches turn out to be equivalent. Second, a more realistic approach that goes back to machine learning theory, assumes that we have not a single data string $x$ but some set of "positive examples" $x_1,\ldots,x_l$ that all belong to some unknown set $A$, a property that we want to learn. We want this set $A$ to contain all positive examples and to be as small and simple as possible. We show how algorithmic statistic can be extended to cover this situation.
22 pages
FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Computer Science - Information Theory (cs.IT), algorithmic information theory, Kolmogorov complexity, minimal description length, prediction, learning, ddc:004
| Indicator | Description | Value |
| --- | --- | --- |
| citations | An alternative to the "Influence" indicator; also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| popularity | Reflects the "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Average |
| influence | Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of the article directly after its publication, based on the underlying citation network. | Average |
