Volume , Issue , 2009, Pages 755-761

Reinforcement learning for spatial processes

Author keywords

Markov decision processes (MDPs); Planning under risk and uncertainty; Reinforcement Learning (RL); Spatial processes

Indexed keywords

AGRICULTURAL ROBOTS; BEHAVIORAL RESEARCH; CONSERVATION; ENVIRONMENTAL MANAGEMENT; FORESTRY; GRAPH ALGORITHMS; GRAPH STRUCTURES; GRAPHIC METHODS; MARKOV PROCESSES; MULTI AGENT SYSTEMS; NATURAL RESOURCES MANAGEMENT; STOCHASTIC SYSTEMS; UNCERTAINTY ANALYSIS;

EID: 85086267296     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 9

References (17)
  • 2. Blennow, K. & Sallnäs, O. (2004). WINDA-a system of models for assessing the probability of wind damage to forest stands within a landscape. Ecological Modelling 175, 87-99.
  • 3. Boutilier, C., Dearden, R. & Goldszmidt, M. (2000). Stochastic Dynamic Programming with Factored Representations. Artificial Intelligence 121(1), 49-107.
  • 4. Claus, C. & Boutilier, C. (1998). The Dynamics of Reinforcement Learning in Cooperative Multiagent Systems. In: Proceedings of AAAI/IAAI, pp. 746-752.
  • 6. Forsell, N. & Sabbadin, R. (2006). Approximate Linear-Programming Algorithms for Graph-Based Markov Decision Processes. In: Proceedings of ECAI, Riva del Garda, Italy, pp. 590-599.
  • 8. Gardiner, B.A., Marshall, B., Achim, A., Belcher, R.E. & Wood, C.J. (2005). The stability of different silvicultural systems: a wind-tunnel investigation. Forestry 78(5), 471-484.
  • 11. Kok, J.R. & Vlassis, N.A. (2006). Collaborative Multiagent Reinforcement Learning by Payoff Propagation. Journal of Machine Learning Research 7, 1789-1828.
  • 13. Peyrard, N. & Sabbadin, R. (2006). Mean Field Approximation of the Policy Iteration Algorithm for Graph-Based Markov Decision Processes. In: Proceedings of ECAI, pp. 595-599.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.