Volume 11, 2010, Pages 815-848

Iterative scaling and coordinate descent methods for maximum entropy models

Author keywords

Coordinate descent; Iterative scaling; Maximum entropy; Natural language processing; Optimization

Indexed keywords

CONVERGENCE RESULTS; COORDINATE DESCENT; LINEAR SVM; MAXIMUM ENTROPY; MAXIMUM ENTROPY MODELS; NATURAL LANGUAGE PROCESSING; SCALING METHOD; UNIFIED FRAMEWORK

EID: 77949528697     PISSN: 1532-4435     EISSN: 1533-7928     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 31

References (35)
  • 5. Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.
  • 6. Léon Bottou. Stochastic learning. In Olivier Bousquet and Ulrike von Luxburg, editors, Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence, LNAI 3176, pages 146-168. Springer Verlag, 2004.
  • 7. Kai-Wei Chang, Cho-Jui Hsieh, and Chih-Jen Lin. Coordinate descent method for large-scale L2-loss linear SVM. Journal of Machine Learning Research, 9:1369-1398, 2008. URL http://www.csie.ntu.edu.tw/cjlin/papers/cdl2.pdf.
  • 9. Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1-3):253-285, 2002.
  • 10. Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775-1822, 2008.
  • 11. John N. Darroch and Douglas Ratcliff. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43(5):1470-1480, 1972.
  • 18. Alexandar Genkin, David D. Lewis, and David Madigan. Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(3):291-304, 2007.
  • 20. Luigi Grippo and Marco Sciandrone. Globally convergent block-coordinate techniques for unconstrained optimization. Optimization Methods and Software, 10:587-637, 1999.
  • 23. S. Sathiya Keerthi, Kaibo Duan, Shirish Shevade, and Aun Neow Poo. A fast dual algorithm for kernel logistic regression. Machine Learning, 61:151-165, 2005.
  • 24. Kwangmoo Koh, Seung-Jean Kim, and Stephen Boyd. An interior-point method for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 8:1519-1555, 2007. URL http://www.stanford.edu/boyd/l1-logistic-reg.html.
  • 26. Chih-Jen Lin, Ruby C. Weng, and S. Sathiya Keerthi. Trust region Newton method for large-scale logistic regression. Journal of Machine Learning Research, 9:627-650, 2008. URL http://www.csie.ntu.edu.tw/cjlin/papers/logistic.pdf.
  • 27. Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503-528, 1989.
  • 28. Zhi-Quan Luo and Paul Tseng. On the convergence of coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
  • 29. Robert Malouf. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 6th Conference on Natural Language Learning, pages 1-7. Association for Computational Linguistics, 2002.
  • 35. Zhihua Zhang, James T. Kwok, and Dit-Yan Yeung. Surrogate maximization/minimization algorithms and extensions. Machine Learning, 69(1):1-33, October 2007.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.