Volume 2, 1999, Pages 740-747

Efficient reinforcement learning in factored MDPs

Author keywords

[No Author keywords available]

Indexed keywords

DYNAMIC BAYESIAN NETWORKS; GLOBAL STATE; GRAPHICAL STRUCTURES; MARKOV DECISION PROCESSES; NEAR-OPTIMAL ALGORITHMS; RUNNING TIME; TRANSITION MODEL;

EID: 84880677563     PISSN: 10450823     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (183)

References (13)
  • 2. X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. UAI, pages 33-42, 1998.
  • 6. R. A. Howard and J. E. Matheson. Influence diagrams. In R. A. Howard and J. E. Matheson, editors, Readings on the Principles and Applications of Decision Analysis, pages 721-762. Strategic Decisions Group, Menlo Park, California, 1984.
  • 8. M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22:1087-1116, 1993.
  • 9. M. Kearns and S. P. Singh. Near-optimal performance for reinforcement learning in polynomial time. In Proc. ICML, pages 260-268, 1998.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.