Volume 44, Issue 5, 1998, Pages 1926-1940

Structural risk minimization over data-dependent hierarchies

Author keywords

Computational learning theory, fat-shattering dimension, learning machines, maximal margin, probable smooth luckiness, probably approximately correct learning, support vector machines, uniform convergence, Vapnik-Chervonenkis dimension

Indexed keywords

COMPUTATIONAL METHODS; CONVERGENCE OF NUMERICAL METHODS; ERROR ANALYSIS; LEARNING ALGORITHMS;

EID: 0032166068     PISSN: 00189448     EISSN: None     Source Type: Journal    
DOI: 10.1109/18.705570     Document Type: Article
Times cited : (414)

References (51)
  • 2
    • J. Assoc. Comput. Mach., vol. 44, no. 4, pp. 615-631, 1997.
  • 3
    • M. Anthony and P. Bartlett, "Function learning from interpolation," NeuroCOLT Tech. Rep. NC-TR-94-013. [Online] Available FTP: ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports; an extended abstract appeared in Computational Learning Theory, Proc. 2nd European Conf., EuroCOLT'95 (Lecture Notes in Artificial Intelligence, vol. 904, P. Vitányi, Ed.). Berlin, Germany: Springer-Verlag, 1995, pp. 211-221.
  • 5
    • M. Anthony and J. Shawe-Taylor, "A result of Vapnik with applications," Discr. Appl. Math., vol. 47, pp. 207-217, 1993.
  • 6
    • M. Anthony and J. Shawe-Taylor, "A sufficient condition for polynomial distribution-dependent learnability," Discr. Appl. Math., vol. 77, pp. 1-12, 1997.
  • 7
    • A. R. Barron, "Approximation and estimation bounds for artificial neural networks," Mach. Learning, vol. 14, pp. 115-133, 1994.
  • 9
    • A. R. Barron and T. M. Cover, "Minimum complexity density estimation," IEEE Trans. Inform. Theory, vol. 37, pp. 1034-1054, 1738, 1991.
  • 10
    • P. L. Bartlett, "The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network," IEEE Trans. Inform. Theory, 1998, to be published.
  • 11
    • P. L. Bartlett and P. M. Long, "Prediction, learning, uniform convergence, and scale-sensitive dimensions," submitted to J. Comp. Syst. Sci., 1998.
  • 12
    • P. L. Bartlett, P. M. Long, and R. C. Williamson, "Fat-shattering and the learnability of real-valued functions," J. Comp. Syst. Sci., vol. 52, no. 3, pp. 434-452, 1996.
  • 13
    • G. M. Benedek and A. Itai, "Nonuniform learnability," J. Comp. Syst. Sci., vol. 48, pp. 311-323, 1994.
  • 16
    • K. L. Buescher and P. R. Kumar, "Learning by canonical smooth estimation, Part I: Simultaneous estimation," IEEE Trans. Automat. Contr., vol. 41, no. 4, p. 545, 1996.
  • 17
    • C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learning, vol. 20, pp. 273-297, 1995.
  • 20
    • S. Floyd and M. Warmuth, "Sample compression, learnability, and the Vapnik-Chervonenkis dimension," Mach. Learning, vol. 21, pp. 269-304, 1995.
  • 21
    • F. Girosi, M. Jones, and T. Poggio, "Regularization theory and neural networks architecture," Neural Comp., vol. 7, pp. 219-269, 1995.
  • 22
    • L. Gurvits and P. Koiran, "Approximation and learning of convex superpositions," in Computational Learning Theory, Proc. 2nd European Conf., EuroCOLT'95 (Lecture Notes in Artificial Intelligence, vol. 904, P. Vitányi, Ed.). Berlin, Germany: Springer-Verlag, 1995, pp. 222-236.
  • 25
    • D. Haussler, "Decision theoretic generalizations of the PAC model for neural net and other learning applications," Inform. Comp., vol. 100, pp. 78-150, 1992.
  • 28
    • P. Koiran and E. D. Sontag, "Neural networks with quadratic VC dimension," in Neural Information Processing Systems 7 (NIPS95); to appear in J. Comp. Syst. Sci.; also available as NeuroCOLT Tech. Rep. NC-TR-95-044. [Online] Available FTP: ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports.
  • 29
    • P. R. Kumar and K. L. Buescher, "Learning by canonical smooth estimation, Part 2: Learning and choice of model complexity," IEEE Trans. Automat. Contr., vol. 41, no. 4, p. 557, 1996.
  • 30
    • N. Linial, Y. Mansour, and R. L. Rivest, "Results on learnability and the Vapnik-Chervonenkis dimension," Inform. Comp., vol. 90, pp. 33-49, 1991.
  • 31
    • N. Littlestone, "Learning quickly when irrelevant attributes abound: A new linear threshold algorithm," Mach. Learning, vol. 2, pp. 285-318, 1988.
  • 38
    • G. Lugosi and K. Zeger, "Nonparametric estimation via empirical risk minimization," IEEE Trans. Inform. Theory, vol. 41, pp. 677-687, May 1995.
  • 39
    • G. Lugosi and K. Zeger, "Concept learning using complexity regularization," IEEE Trans. Inform. Theory, vol. 42, pp. 48-54, Jan. 1996.
  • 40
    • D. J. C. MacKay, "Bayesian model comparison and backprop nets," in Adv. Neural Inform. Processing Syst. 4, J. E. Moody et al., Eds. San Mateo, CA: Morgan Kaufmann, 1992, pp. 839-846.
  • 43
    • J. Shawe-Taylor, M. Anthony, and N. Biggs, "Bounding sample size with the Vapnik-Chervonenkis dimension," Discr. Appl. Math., vol. 42, pp. 65-73, 1993.
  • 45
    • E. D. Sontag, "Shattering all sets of k points in 'general position' requires (k-1)/2 parameters," Rutgers Center for Systems and Control (SYCON), Rep. 96-01; also NeuroCOLT Tech. Rep. NC-TR-96-042. [Online] Available FTP: ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports.
  • 48
    • _, "Principles of risk minimization for learning theory," in Advances in Neural Information Processing Systems 4, J. E. Moody et al., Eds. San Mateo, CA: Morgan Kaufmann, 1992, pp. 831-838.
  • 50
    • V. N. Vapnik and A. J. Chervonenkis, "On the uniform convergence of relative frequencies of events to their probabilities," Theory of Probability and Applications, vol. 16, pp. 264-280, 1971.
  • 51
    • V. N. Vapnik and A. J. Chervonenkis, "Ordered risk minimization (I and II)," Automat. Remote Contr., vol. 34, pp. 1226-1235 and 1403-1412, 1974.


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.