2007, Pages 421-426

Latent Dirichlet conditional naive-Bayes models

Author keywords

[No Author keywords available]

Indexed keywords

BAYES MODELS; BAYESIAN MODELLING; CONDITIONAL DISTRIBUTIONS; DATA POINTS; DATA-SETS; DIRICHLET; DISCRETE DISTRIBUTIONS; E-M ALGORITHMS; EXPONENTIAL FAMILIES; GAUSSIAN; INTERNATIONAL CONFERENCES; LATENT STRUCTURES; MIXTURE MODELLING; PROBABILISTIC MIXTURE MODELS; SPARSE DATA; VARIATIONAL INFERENCE;

EID: 49749117072     PISSN: 1550-4786     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1109/ICDM.2007.55     Document Type: Conference Paper
Times cited: 11

References (14)
  • 4
    • P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103-130, 1997.
  • 6
    • T. Hofmann. Probabilistic latent semantic indexing. In UAI, 1999.
  • 8
    • MovieLens. http://movielens.umn.edu.
  • 9
    • R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. MIT Press, 1998.
  • 10
    • A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NIPS, 2001.
  • 11
    • K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103-134, 2000.
  • 12
    • R. Redner and H. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195-239, 1984.
  • 13
    • E. Saund. Unsupervised learning of mixtures of multiple causes in binary data. In NIPS, 1994.
  • 14
    • M. Sahami, M. A. Hearst, and E. Saund. Applying the multiple cause mixture model to text categorization. In ICML, 1996.

* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.