Volume , Issue , 2009, Pages 585-592

Approximate inference for planning in stochastic relational worlds

Author keywords

[No Author keywords available]

Indexed keywords

ACTION SEQUENCES; APPROXIMATE INFERENCE; DYNAMIC BAYESIAN NETWORK; EFFICIENT PLANNING; EMPIRICAL RESULTS; EXISTING METHOD; ON-LINE PLANNING; PROBABILISTIC RELATIONAL RULES; REALISTIC PHYSICS; STOCHASTIC DOMAINS; WORLD MODEL;

EID: 71149086468     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited : (15)

References (13)
  • 3
    • Dayan, P., & Hinton, G. E. (1997). Using expectation-maximization for reinforcement learning. Neural Computation, 9, 271-278.
  • 6
    • Kearns, M. J., Mansour, Y., & Ng, A. Y. (2002). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49, 193-208.
  • 12
    • Toussaint, M., & Storkey, A. (2006). Probabilistic inference for solving discrete and continuous state Markov decision processes. Proc. of the Int. Conf. on Machine Learning (ICML) (pp. 945-952).


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.