



Volume 11, 2010, Pages 2387-2422

Composite binary losses

Author keywords

Bregman divergence; Classification; Classification calibrated; Convexity; Fisher consistency; Misclassification noise; Probability estimation; Proper scoring rule; Regret bound; Robustness; Surrogate loss


EID: 78649418936     PISSN: 1532-4435     EISSN: 1533-7928     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 237

References (58)
  • 2. J. Aczel and J. Pfanzagl. Remarks on the measurement of subjective probability and information. Metrika, 11(1):91-105, December 1967.
  • 6. P. J. Bartlett, B. Schölkopf, D. Schuurmans, and A. J. Smola, editors. Advances in Large-Margin Classifiers. MIT Press, 2000.
  • 7. P. L. Bartlett and A. Tewari. Sparseness vs estimating conditional probabilities: Some asymptotic results. The Journal of Machine Learning Research, 8:775-790, 2007.
  • 10. A. Beygelzimer, J. Langford, and B. Zadrozny. Machine learning techniques - reductions between prediction quality metrics. In Z. Liu and C. H. Xia, editors, Performance Modeling and Engineering, pages 3-28. Springer US, April 2008. URL http://hunch.net/~j1/projects/reductions/tutorial/paper/chapter.pdf.
  • 11. A. Buja, W. Stuetzle, and Y. Shen. Loss functions for binary class probability estimation and classification: Structure and applications. Technical report, University of Pennsylvania, November 2005.
  • 12. P. F. Christoffersen and F. X. Diebold. Optimal prediction under asymmetric loss. Econometric Theory, 13(6):808-817, 1997.
  • 13. I. Cohen and M. Goldszmidt. Properties and benefits of calibrated classifiers. Technical Report HPL-2004-22(R.1), HP Laboratories, Palo Alto, July 2004.
  • 14. C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
  • 16. S. Fidler, D. Skocaj, and A. Leonardis. Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(3):337-350, 2006.
  • 17. Y. Freund. A more robust boosting algorithm. arXiv:0905.2138v1 [stat.ML], May 2009. URL http://arxiv.org/abs/0905.2138.
  • 20. C. W. J. Granger and M. J. Machina. Forecasting and decision theory. In G. Elliot, C. W. J. Granger, and A. Timmermann, editors, Handbook of Economic Forecasting, volume 1, pages 82-98. North-Holland, Amsterdam, 2006.
  • 21. P. D. Grünwald and A. P. Dawid. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367-1433, 2004.
  • 23. D. J. Hand and V. Vinciotti. Local versus global models for classification problems: Fitting models where it matters. The American Statistician, 57(2):124-131, 2003.
  • 27. Y. Kalnishkan, V. Vovk, and M. V. Vyugin. Loss functions, complexities, and the Legendre transformation. Theoretical Computer Science, 313(2):195-207, 2004.
  • 29. M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 45(6):983-1006, November 1998.
  • 30. J. Kivinen and M. K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45:301-329, 2001.
  • 34. Y. Lin. A note on margin-based loss functions in classification. Technical Report 1044, Department of Statistics, University of Wisconsin, Madison, February 2002.
  • 35. P. M. Long and R. A. Servedio. Random classification noise defeats all convex potential boosters. In William W. Cohen, Andrew McCallum, and Sam T. Roweis, editors, ICML, pages 608-615, 2008. doi: 10.1145/1390156.1390233.
  • 36. H. Masnadi-Shirazi and N. Vasconcelos. On the design of loss functions for classification: Theory, robustness to outliers, and SavageBoost. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1049-1056, 2009.
  • 39. R. Nock and F. Nielsen. On the efficient minimization of classification calibrated surrogates. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1201-1208. MIT Press, 2009b.
  • 40. J. Platt. Probabilities for SV machines. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-71. MIT Press, 2000.
  • 41. F. Provost and T. Fawcett. Robust classification for imprecise environments. Machine Learning, 42(3):203-231, 2001.
  • 45. L. J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783-801, 1971.
  • 46. M. J. Schervish. A general method for comparing probability assessors. The Annals of Statistics, 17(4):1856-1879, 1989.
  • 49. E. Shuford, A. Albert, and H. E. Massengill. Admissible probability measurement procedures. Psychometrika, 31(2):125-145, June 1966.
  • 51. I. Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225-287, August 2007.
  • 52. I. Steinwart. Two oracle inequalities for regularized boosting classifiers. Statistics and Its Interface, 2:271-284, 2009.
  • 54. T. B. Trafalis and R. C. Gilbert. Robust classification and regression using support vector machines. European Journal of Operational Research, 173(3):893-909, 2006.
  • 55. A. Zellner. Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association, 81(394):446-451, June 1986.
  • 56. J. Zhang. Divergence function, duality, and convex analysis. Neural Computation, 16(1):159-195, 2004a.
  • 57. T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56-134, 2004b.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.