IEEE Transactions on Computational Intelligence and AI in Games, Volume 3, Issue 1, 2011, Pages 43-56

Reinforcement learning in first person shooter games

Author keywords

Artificial intelligence (AI); computer games; reinforcement learning (RL)

Indexed keywords

COMPUTER GAMES; FIRST PERSON SHOOTER GAMES; INDUSTRY STANDARDS; MACHINE LEARNING TECHNIQUES; NAVIGATION CONTROLS; PATH-FINDING ALGORITHMS; RULE BASED;

EID: 79952925038     PISSN: 1943-068X     EISSN: None     Source Type: Journal
DOI: 10.1109/TCIAIG.2010.2100395     Document Type: Article
Times cited: 56

References (24)
  • 1
    • G. Tesauro, "Temporal difference learning and TD-gammon," Commun. ACM, vol. 38, no. 3, pp. 58-68, 1995.
  • 2
    • N. Sturtevant and A. White, "Feature construction for reinforcement learning in hearts," in Proc. 5th Int. Conf. Comput. Games, 2006, pp. 122-134.
  • 6
    • C. A. Overholtzer and S. Levy, "Evolving AI opponents in a first-person-shooter video game," in Proc. 20th Nat. Conf. Artif. Intell. (AAAI-05), Pittsburgh, PA, 2005, pp. 1620-1621.
  • 7
    • S. Bakkes, P. Spronck, and E. Postma, "TEAM: The team-oriented evolutionary adaptability mechanism," in Proc. Entertain. Comput., 2004, pp. 296-307.
  • 9
    • M. McPartland and M. Gallagher, "Creating a multi-purpose first person shooter bot with reinforcement learning," in Proc. IEEE Symp. Comput. Intell. Games, 2008, pp. 143-150.
  • 11
    • J. Manslow, "Using reinforcement learning to solve AI control problems," in AI Game Programming Wisdom 2, S. Rabin, Ed. Hingham, MA: Charles River Media, 2004.
  • 12
    • A. H. Tan and D. Xiao, "Self-organizing cognitive agents and reinforcement learning in multi-agent environment," in Proc. IEEE/WIC/ACM Int. Conf. Intell. Agent Technol. (IAT'05), Compiegne, France, 2005, pp. 351-357, DOI: 10.1109/IAT.2005.125.
  • 13
    • J. Bradley and G. Hayes, "Group utility functions: Learning equilibria between groups of agents in computer games by modifying the reinforcement signal," in Proc. IEEE Congr. Evol. Comput., 2005, vol. 2, pp. 1914-1921.
  • 14
    • S. Nason and J. E. Laird, "Soar-RL: Integrating reinforcement learning with Soar," Cogn. Syst. Res., vol. 6, no. 1, pp. 51-59, 2005, DOI: 10.1016/j.cogsys.2004.09.006.
  • 15
    • J. H. Lee, S. Y. Oh, and D. H. Choi, "TD based reinforcement learning using neural networks in control problems with continuous action space," in Proc. IEEE World Congr. Comput. Intell., Anchorage, AK, 1998, vol. 3, pp. 2028-2033.
  • 16
    • K. Merrick and M. L. Maher, "Motivated reinforcement learning for non-player characters in persistent computer game worlds," in Proc. ACM SIGCHI Int. Conf. Adv. Comput. Entertain. Technol., Los Angeles, CA, 2006.
  • 18
    • K. Merrick, "Modeling motivation for adaptive nonplayer characters in dynamic computer game worlds," ACM Comput. Entertain., vol. 5, no. 4, 2008, DOI: 10.1145/1324198.1324203.
  • 20
    • T. Watanabe and Y. Takahashi, "Hierarchical reinforcement learning using a modular fuzzy model for multi-agent problem," in Proc. IEEE Int. Conf. Syst. Man Cybern., Montreal, QC, Canada, 2007, pp. 1681-1686, DOI: 10.1109/ICSMC.2007.4414013.
  • 21
    • M. Huber, "A hybrid architecture for hierarchical reinforcement learning," in Proc. IEEE Int. Conf. Robot. Autom., San Francisco, CA, 2000, vol. 4, pp. 3290-3295.
  • 23
    • M. Smith, S. Lee-Urban, and H. Munoz-Avila, "RETALIATE: Learning winning policies in first-person shooter games," in Proc. 22nd AAAI Conf. Artif. Intell. / 19th Innovative Appl. Artif. Intell. Conf. (AAAI-07/IAAI-07), 2007, vol. 2, pp. 1801-1806.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.