
Within psychology, neuroscience, and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, that it infers a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is it possible, even in principle, to infer a probabilistic model from a sample? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterised by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyse the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
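The paper's constructions are not reproduced here, but the flavour of the first result can be sketched. The Python toy below identifies, in the limit, an i.i.d. source over a finite alphabet from a small enumerated hypothesis class: it returns the first candidate whose probabilities all lie within a tolerance of the empirical frequencies, where the tolerance shrinks with sample size more slowly than the frequencies' sampling fluctuations. The strong law of large numbers guarantees that the empirical frequencies converge almost surely, so the guess eventually stabilises on the true distribution when it is in the class. The hypothesis class, tolerance schedule, and all names below are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch of identification in the limit for an i.i.d. source
# (a toy under stated assumptions, not the paper's algorithm).
import random

ALPHABET = [0, 1]

# Hypothetical enumerated hypothesis class: symbol -> probability.
HYPOTHESES = [
    {0: 0.50, 1: 0.50},
    {0: 0.25, 1: 0.75},
    {0: 0.75, 1: 0.25},
]

def empirical_freqs(sample):
    """Relative frequency of each symbol in the sample."""
    n = len(sample)
    return {a: sample.count(a) / n for a in ALPHABET}

def identify(sample):
    """Index of the first hypothesis matching the empirical frequencies
    within a tolerance that shrinks with sample size; None if no match."""
    n = len(sample)
    # Tolerance shrinks to 0, but slower than the ~n^{-1/2} fluctuation
    # of the frequencies, so the true hypothesis is eventually, and then
    # permanently, the unique survivor (almost surely, by the SLLN).
    tol = n ** -0.25
    freqs = empirical_freqs(sample)
    for i, h in enumerate(HYPOTHESES):
        if all(abs(freqs[a] - h[a]) <= tol for a in ALPHABET):
            return i
    return None  # withhold a guess until some candidate fits

if __name__ == "__main__":
    random.seed(0)
    true_dist = HYPOTHESES[1]
    sample = random.choices(ALPHABET,
                            weights=[true_dist[a] for a in ALPHABET],
                            k=10_000)
    for n in (10, 100, 1_000, 10_000):
        print(n, identify(sample[:n]))
```

On small prefixes the loose tolerance may admit a wrong candidate, but as n grows the guess settles on index 1 and never changes thereafter, which is exactly the identification-in-the-limit criterion. The paper's second result, for dependent sequences and Martin-Löf typicality, relies on Kolmogorov complexity and does not admit a directly computable sketch of this kind.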
31 pages LaTeX. arXiv admin note: substantial text overlap with arXiv:1311.7385
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Keywords: computable measure, computable probability, strong law of large numbers, Markov chain, computational methods in Markov chains, Kolmogorov complexity, Martin-Löf randomness, typicality, identification, learning, Bayesian brain, memory and learning in psychology, learning and adaptive systems in artificial intelligence
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "influence" indicator. | 11 |
| Popularity | The "current" impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | The overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | The initial momentum of the article directly after its publication, based on the underlying citation network. | Top 10% |
