Publication · Preprint · Article · 2018

Linear Maximum Margin Classifier for Learning from Uncertain Data

Ioannis Patras; Christos Tzelepis; Vasileios Mezaris
Open Access English
  • Published: 01 Dec 2018
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence. © 2017 IEEE. DOI: 10.1109/TPAMI.2017.2772235. Author's accepted version; the final publication is available from IEEE via the DOI above.
ACM Computing Classification System: Computing Methodologies: Pattern Recognition
free text keywords: Computer Science - Learning, Classification, Convex optimization, Gaussian anisotropic uncertainty, Large margin methods, Learning with uncertainty, Pattern recognition, Statistical learning theory, Stochastic gradient descent, Support vector machine, Gaussian process, Artificial intelligence, Mathematics, MNIST database, Margin classifier, Margin (machine learning), Uncertain data
Funded by
Training towards a society of data-savvy information professionals to enable open leadership innovation
  • Funder: European Commission (EC)
  • Project Code: 693092
  • Funding stream: H2020 | RIA
Television Linked To The Web
  • Funder: European Commission (EC)
  • Project Code: 287911
  • Funding stream: FP7 | SP1 | ICT
Concise Preservation by combining Managed Forgetting and Contextualized Remembering
  • Funder: European Commission (EC)
  • Project Code: 600826
  • Funding stream: FP7 | SP1 | ICT
37 references, page 1 of 3

[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the Fifth Annual Workshop on Computational Learning Theory. ACM, 1992, pp. 144-152.

[2] V. N. Vapnik and A. J. Chervonenkis, “Theory of pattern recognition,” 1974.

[3] V. N. Vapnik, “Statistical learning theory (adaptive and learning systems for signal processing, communications and control series),” 1998.

[4] A. J. Smola, B. Schölkopf, and K.-R. Müller, “The connection between regularization operators and support vector kernels,” Neural Networks, vol. 11, no. 4, pp. 637-649, 1998.

[5] T. Evgeniou, M. Pontil, and T. Poggio, “Regularization networks and support vector machines,” Advances in Computational Mathematics, vol. 13, no. 1, pp. 1-50, 2000.

[6] P. L. Bartlett and S. Mendelson, “Rademacher and Gaussian complexities: Risk bounds and structural results,” The Journal of Machine Learning Research, vol. 3, pp. 463-482, 2003.

[7] J. Bi and T. Zhang, “Support vector classification with input data uncertainty.” in NIPS, 2004.

[8] F. Alizadeh and D. Goldfarb, “Second-order cone programming,” Mathematical programming, vol. 95, no. 1, pp. 3-51, 2003.

[9] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” Mathematics of Operations Research, vol. 23, no. 4, pp. 769-805, 1998.

[10] D. Bertsimas, D. B. Brown, and C. Caramanis, “Theory and applications of robust optimization,” SIAM review, vol. 53, no. 3, pp. 464-501, 2011.

[11] G. R. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan, “A robust minimax approach to classification,” The Journal of Machine Learning Research, vol. 3, pp. 555-582, 2003.

[12] A. W. Marshall and I. Olkin, “Multivariate Chebyshev inequalities,” The Annals of Mathematical Statistics, vol. 31, no. 4, pp. 1001-1014, 1960.

[13] P. K. Shivaswamy, C. Bhattacharyya, and A. J. Smola, “Second order cone programming approaches for handling missing and uncertain data,” The Journal of Machine Learning Research, vol. 7, pp. 1283-1314, 2006.

[14] C. Bhattacharyya, P. K. Shivaswamy, and A. J. Smola, “A second order cone programming formulation for classifying missing data,” in NIPS, 2004.

[15] H. Xu, C. Caramanis, and S. Mannor, “Robustness and regularization of support vector machines,” The Journal of Machine Learning Research, vol. 10, pp. 1485-1510, 2009.
