Volume 41, Issue 1, 2000, Pages 5-25

Technical note: Naive Bayes for regression

Author keywords

[No Author keywords available]

Indexed keywords

DATA STRUCTURES; MATHEMATICAL MODELS; MEASUREMENT ERRORS; PROBABILITY DISTRIBUTIONS; REGRESSION ANALYSIS; STATISTICAL TESTS;

EID: 0034300831     PISSN: 0885-6125     EISSN: None     Source Type: Journal
DOI: 10.1023/A:1007670802811     Document Type: Article
Times cited: 171

References (32)
  • 1. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Proceedings of the 2nd International Symposium on Information Theory (pp. 267-281). Budapest: Akadémiai Kiadó.
  • 3. Blake, C., Keogh, E., & Merz, C. J. (1998). UCI repository of machine learning databases. Irvine, CA: University of California, Department of Information and Computer Science. [http://www.ics.uci.edu/~mlearn/MLRepository.html].
  • 4. Cestnik, B. (1990). Estimating probabilities: A crucial task in machine learning. In Proceedings of the 9th European Conference on Artificial Intelligence, Stockholm, Sweden (pp. 147-149). London: Pitman.
  • 5. Clark, P. & Niblett, T. (1989). The CN2 induction algorithm. Machine Learning, 3(4), 261-283.
  • 6. Domingos, P. & Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2/3), 103-130.
  • 8. Fayyad, U. M. & Irani, K. B. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambery, France (pp. 1022-1027). San Mateo, CA: Morgan Kaufmann.
  • 10. Friedman, J. (1997). On bias, variance, 0/1-loss, and the curse-of-dimensionality. Data Mining and Knowledge Discovery, 1, 55-77.
  • 12. Ghahramani, Z. & Jordan, M. I. (1994). Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems 6 (pp. 120-127). San Mateo, CA: Morgan Kaufmann.
  • 13. Holte, R. C. (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11, 63-91.
  • 14. John, G. H. & Kohavi, R. (1997). Wrappers for feature subset selection. Artificial Intelligence, 97(1/2), 273-324.
  • 17. Kilpatrick, D. & Cameron-Jones, M. (1998). Numeric prediction using instance-based learning with encoding length selection. In Progress in Connectionist-Based Information Systems, Dunedin, New Zealand (pp. 984-987). Singapore: Springer-Verlag.
  • 19. Kononenko, I. (1998). Personal communication.
  • 21. Langley, P. (1993). Induction of recursive Bayesian classifiers. In Proceedings of the 8th European Conference on Machine Learning, Vienna, Austria (pp. 153-164). Berlin: Springer-Verlag.
  • 24. Pazzani, M. (1996). Searching for dependencies in Bayesian classifiers. In Learning from Data: Artificial Intelligence and Statistics V (pp. 343-348). New York: Springer-Verlag.
  • 30. Smyth, P., Gray, A., & Fayyad, U. M. (1995). Retrofitting decision tree classifiers using kernel density estimation. In Proceedings of the 12th International Conference on Machine Learning, Tahoe City, CA (pp. 506-514). San Francisco, CA: Morgan Kaufmann.
  • 31. StatLib (1999). Department of Statistics, Carnegie Mellon University. [http://lib.stat.cmu.edu].
  • 32. Wang, Y. & Witten, I. H. (1997). Induction of model trees for predicting continuous classes. In Proceedings of the Poster Papers of the European Conference on Machine Learning, Prague (pp. 128-137). Prague: University of Economics, Faculty of Informatics and Statistics.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.