Volume, Issue, 2012, Pages 38-43

POMCoP: Belief space planning for sidekicks in cooperative games

Author keywords

[No Author keywords available]

Indexed keywords

Belief space; Cooperative game; Human intentions; In-buildings; Information gathering; On-line planning

EID: 84883114920 · PISSN: None · EISSN: None · Source Type: Conference Proceeding
DOI: None · Document Type: Conference Paper
Times cited : (55)

References (19)
  • 1
    • Barrett, S.; Stone, P.; and Kraus, S. 2011. Empirical evaluation of ad hoc teamwork in the pursuit domain. In AAMAS.
  • 3
    • Fern, A., and Lewis, P. 2011. Ensemble Monte-Carlo planning: An empirical study. In ICAPS.
  • 5
    • Ha, E. Y.; Rowe, J. P.; Mott, B. W.; and Lester, J. C. 2011. Goal recognition with Markov logic networks for player-adaptive games. In AIIDE.
  • 6
    • Isla, D. 2005. Handling complexity in the Halo 2 AI. GDC.
  • 7
    • Kaelbling, L.; Littman, M.; and Cassandra, A. 1995. Planning and acting in partially observable stochastic domains. AIIOI.
  • 8
    • Kocsis, L., and Szepesvári, C. 2006. Bandit based Monte-Carlo planning. In ECML.
  • 9
    • Kurniawati, H.; Hsu, D.; and Lee, W. S. 2008. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems.
  • 10
    • Littman, M. L.; Cassandra, A. R.; and Kaelbling, L. P. 1995. Learning policies for partially observable environments: Scaling up. In ICML.
  • 13
    • Orkin, J. 2004. Symbolic representation of game world state: Towards real-time planning in games. In AAAI Workshop on Challenges in Game AI.
  • 14
    • Orkin, J. 2008. Automatic learning and generation of social behavior from collective human gameplay. In AAMAS.
  • 15
    • Silver, D., and Veness, J. 2010. Monte-Carlo planning in large POMDPs. In NIPS.
  • 17
    • Synnaeve, G., and Bessière, P. 2011. A Bayesian model for plan recognition in RTS games applied to StarCraft. In AIIDE.
  • 18
    • Tastan, B., and Sukthankar, G. 2012. Learning policies for first person shooter games using inverse reinforcement learning. In AIIDE.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.