Volume 73, Issue 1, 2008, Pages 87-106

A bias/variance decomposition for models using collective inference

Author keywords

Collective inference; Evaluation; Statistical relational learning

Indexed keywords

ARTIFICIAL INTELLIGENCE; COMPUTER PROGRAMMING; EDUCATION; ERRORS; INFERENCE ENGINES; LEARNING ALGORITHMS; LEARNING SYSTEMS; LOGIC PROGRAMMING;

EID: 50649124443     PISSN: 08856125     EISSN: 15730565     Source Type: Journal    
DOI: 10.1007/s10994-008-5066-6     Document Type: Article
Times cited: 15
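As background for the title topic, the expression below is the classical bias/variance decomposition of expected squared error; it is generic textbook material that the paper's collective-inference decomposition builds on, not the paper's own formulation:

\mathbb{E}_{D,\varepsilon}\big[(y - \hat{f}_D(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\big[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2_\varepsilon}_{\text{noise}}

where f is the true function, \hat{f}_D is the model fit on training set D, and \sigma^2_\varepsilon is the irreducible noise in y.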

References (23)
  • 4. Domingos, P., & Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29, 103-130.
  • 6. Friedman, J. (1997). On bias, variance, 0/1-loss, and the curse-of-dimensionality. Data Mining and Knowledge Discovery, 1(1), 55-77.
  • 11. Hill, S., Provost, F., & Volinsky, C. (2006). Network-based marketing: Identifying likely adopters via consumer networks. Statistical Science, 22(2).
  • 12. Holte, R. (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11, 63-91.
  • 13. James, G. (2003). Variance and bias for general loss functions. Machine Learning, 51, 115-135.
  • 16. Macskassy, S., & Provost, F. (2007). Classification in networked data: A toolkit and a univariate case study. Journal of Machine Learning Research, 8, 935-983.
  • 18. Murphy, K., Weiss, Y., & Jordan, M. (1999). Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the 15th conference on uncertainty in artificial intelligence (pp. 467-479).
  • 23. Wainwright, M. (2005). Estimating the "wrong" Markov random field: Benefits in the computation-limited setting. In Advances in neural information processing systems.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.