Volume 2600, 2003, Pages 235-257

Online learning of linear classifiers

Author keywords

[No Author keywords available]

Indexed keywords

ALGORITHMS; ARTIFICIAL INTELLIGENCE; LEARNING SYSTEMS

EID: 35248832956     PISSN: 0302-9743     EISSN: 1611-3349     Source Type: Book Series
DOI: 10.1007/3-540-36434-x_8     Document Type: Article
Times cited: 11

References (27)
  • 3. L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7:200-217, 1967.
  • 4. I. Csiszar. Why least squares and maximum entropy? An axiomatic approach for linear inverse problems. The Annals of Statistics, 19:2032-2066, 1991.
  • 5. Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37:277-296, 1999.
  • 6. C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213-242, 2001.
  • 8. C. Gentile and M. K. Warmuth. Hinge loss and average margin. In M. S. Kearns, S. A. Solla and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 225-231. MIT Press, Cambridge, MA, 1998.
  • 9. A. J. Grove, N. Littlestone and D. Schuurmans. General convergence results for linear discriminant updates. Machine Learning, 43:173-210, 2001.
  • 14. L. Jones and C. Byrne. General entropy criteria for inverse problems, with applications to data compression, pattern classification and cluster analysis. IEEE Transactions on Information Theory, 36:23-30, 1990.
  • 15. J. Kivinen, A. J. Smola and R. C. Williamson. Online learning with kernels. In T. G. Dietterich, S. Becker and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 785-792. MIT Press, Cambridge, MA, 2002.
  • 16. J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132:1-64, 1997.
  • 17. J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45:301-329, 2001.
  • 18. J. Kivinen, M. K. Warmuth and P. Auer. The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant. Artificial Intelligence, 97:325-343, 1997.
  • 20. N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
  • 22. C. Mesterharm. Tracking linear-threshold concepts with Winnow. In J. Kivinen and R. H. Sloan, editors, Proc. 15th Annual Conference on Computational Learning Theory, pages 138-152. Springer, Berlin, 2002.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.