1997, Pages 155-158

Why Does Bagging Work? A Bayesian Account and its Implications

Author keywords

[No Author keywords available]

Indexed keywords

ARTIFICIAL INTELLIGENCE; BAYESIAN NETWORKS; CLASSIFICATION (OF INFORMATION); STATISTICAL TESTS;

EID: 85172419807     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (69)

References (12)
  • 2
    • Breiman, L. 1996a. Bagging predictors. Machine Learning 24:123-140.
  • 3
    • Breiman, L. 1996b. Bias, variance and arcing classifiers. Technical Report 460, Statistics Department, University of California at Berkeley, Berkeley, CA.
  • 4
    • Buntine, W. L. 1990. A Theory of Learning Classification Rules. Ph.D. Dissertation, School of Computing Science, University of Technology, Sydney, Australia.
  • 6
    • Cheeseman, P. 1990. On finding the most probable model. In Shrager, J., and Langley, P., eds., Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann.
    • Domingos, P. 1997. Knowledge acquisition from examples via multiple models. In Proc. Fourteenth International Conference on Machine Learning. Nashville, TN: Morgan Kaufmann.
    • Freund, Y., and Schapire, R. E. 1996. Experiments with a new boosting algorithm. In Proc. Thirteenth International Conference on Machine Learning, 148-156. Bari, Italy: Morgan Kaufmann.
  • 7
    • Friedman, J. H. 1996. On bias, variance, 0/1 loss, and the curse-of-dimensionality. Technical report, Department of Statistics and Stanford Linear Accelerator Center, Stanford University, Stanford, CA.
    • Hyafil, L., and Rivest, R. L. 1976. Constructing optimal binary decision trees is NP-complete. Information Processing Letters 5:15-17.
    • Kohavi, R., and Wolpert, D. H. 1996. Bias plus variance decomposition for zero-one loss functions. In Proc. Thirteenth International Conference on Machine Learning, 275-283. Bari, Italy: Morgan Kaufmann.
  • 12
    • Wolpert, D. 1992. Stacked generalization. Neural Networks 5:241-259.


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.