2009, Pages 94-99

IMPLANT: An Integrated MDP and POMDP Learning Agent for Adaptive Games

Author keywords

[No Author keywords available]

Indexed keywords

Abstract modeling; Action policies; Agent implementation; Game environment; Learning agents; Optimal policies; Player modeling; Proof of concept

EID: 77958607662     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 9

References (21)
  • 1
    • Burago, D.; Rougemont, M. D.; Slissenko, A. 1996. On the complexity of partially observed Markov decision processes. Theoretical Computer Science 157:161-183.
  • 3
    • Christian, J.; Darken, G. H. P. 2006. Finding Cover in Dynamic Environments. In AI Game Programming Wisdom 3. Hingham, Massachusetts: Charles River Media, first edition.
  • 4
    • Donkers, J.; Spronck, P. 2006. Preference-Based Player Modeling. In AI Game Programming Wisdom 3. Hingham, Massachusetts: Charles River Media, first edition.
  • 5
    • Doshi, F.; Roy, N. 2008. The permutable POMDP: Fast solutions to POMDPs for preference elicitation. In AAMAS, 493-500.
  • 8
    • Kaelbling, L. P.; Littman, M. L.; Cassandra, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence 101:99-134.
  • 9
    • Kurniawati, H.; Hsu, D.; Lee, W. 2008. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proc. Robotics: Science and Systems.
  • 11
    • Matsuzaki, C. 2004. Tennis Fundamentals. United States: Human Kinetics Publishers, first edition.
  • 12
    • Pineau, J.; Gordon, G.; Thrun, S. 2003. Point-based value iteration: An anytime algorithm for POMDPs. In IJCAI, 1025-1032.
  • 13
    • Remco Straatman, A. B.; van der Sterren, W. 2006. Dynamic Tactical Position Evaluation. In AI Game Programming Wisdom 3. Hingham, Massachusetts: Charles River Media, first edition.
  • 14
    • Shani, G.; Brafman, R. I.; Shimony, S. E. 2007. Forward search value iteration for POMDPs. In IJCAI.
  • 17
    • Tan, C. T.; Cheng, H. 2008. TAP: An Effective Personality Representation for Inter-Agent Adaptation in Games. In AIIDE.
  • 18
    • Thue, D.; Bulitko, V. 2006. Modeling goal-directed players in digital games. In AIIDE, 285-298.
  • 19
    • van der Sterren, W. 2006. Being a Better Buddy: Interpreting the Player's Behavior. In AI Game Programming Wisdom 3. Hingham, Massachusetts: Charles River Media, first edition.
  • 20
    • White, C.; Brogan, D. 2006. The self organization of context for multi agent games. In AIIDE.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.