Probabilistic inductive inference: a survey

Article / Preprint, English, Open Access
Ambainis, Andris (2001)
  • Publisher: Elsevier BV
  • Journal: Theoretical Computer Science, volume 264, issue 1, pages 155-167 (issn: 0304-3975)
  • Related identifiers: doi: 10.1016/s0304-3975(00)00218-8
  • Subject: Theoretical Computer Science | Mathematics - Logic | Computer Science (all) | Computer Science - Computational Complexity | F.1.1, F.4.1, I.2.3, I.2.6 | Computer Science - Logic in Computer Science | Computer Science - Learning
    acm: Theory of Computation - Mathematical Logic and Formal Languages

Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
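To give a feel for the "finite inference" paradigm the abstract refers to: a finite (one-shot) learner reads a prefix of the graph f(0), f(1), ... of an unknown function and then commits to a single program as its final conjecture; probabilistic variants only need that conjecture to be correct with some probability p. The toy sketch below is purely illustrative and is not a construction from the survey: the concept class (integer polynomials of degree at most 2), the function names, and the interpolation-based hypothesis are all assumptions made for the example.

```python
# Toy illustration of finite ("one-shot") inference: the learner reads a
# finite prefix of f(0), f(1), ... and then commits to a single hypothesis
# that it never revises. The concept class (degree <= 2 polynomials) and all
# names here are illustrative assumptions, not material from the survey.

from fractions import Fraction
from typing import Callable, Iterator


def fin_learn_quadratic(values: Iterator[int]) -> Callable[[int], Fraction]:
    """Read exactly three data points f(0), f(1), f(2), then output one
    hypothesis (via Lagrange interpolation) as the final, unchangeable guess."""
    xs = [0, 1, 2]
    ys = [next(values) for _ in xs]

    def hypothesis(x: int) -> Fraction:
        # Standard Lagrange interpolation through the three observed points.
        total = Fraction(0)
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = Fraction(yi)
            for j, xj in enumerate(xs):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total

    return hypothesis


if __name__ == "__main__":
    f = lambda x: 3 * x * x - 2 * x + 5          # unknown target function
    stream = (f(n) for n in range(100))          # its graph, fed value by value
    h = fin_learn_quadratic(stream)              # single, final conjecture
    assert all(h(x) == f(x) for x in range(10))  # the conjecture is correct
    print("finite learner succeeded with its first and only conjecture")
```

The interesting questions surveyed in the paper start where this toy ends: for classes of recursive functions in general, no single deterministic learner of this kind suffices, and allowing the learner to flip coins (succeeding only with probability p) yields strict hierarchies of learning power as p varies.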
