Volume , Issue , 2009, Pages 518-521

Partially Observable Markov Decision Process for Managing Robot Collaboration with Human

Author keywords

[No Author keywords available]

Indexed keywords

APPROXIMATE ALGORITHMS; DECISION PROCESS; HUMAN-ROBOT COLLABORATION; OPTIMAL ALGORITHM; PARTIALLY OBSERVABLE MARKOV DECISION PROCESS

EID: 77949520122     PISSN: 10823409     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/ICTAI.2009.61     Document Type: Conference Paper
Times cited : (27)

References (12)
  • 2
    • C. Boutilier. Sequential optimality and coordination in multiagent systems. In IJCAI, 1999.
  • 3
    • A. Cassandra, L. Kaelbling, and M. Littman. Acting optimally in partially observable stochastic domains. In AAAI, 1994.
  • 5
    • J. Dibangoye, B. Chaib-draa, and A. Mouaddib. Topological orders based planning for solving POMDPs. In AAAI, 2008.
  • 6
    • M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 2000.
  • 7
    • M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In Proceedings of the Twelfth International Conference on Machine Learning, 1995.
  • 8
    • J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In IJCAI, 2003.
  • 9
    • J. Pineau, G. Gordon, and S. Thrun. Policy-contingent abstraction for robust robot control. In UAI, 2003.
  • 11
    • G. Shani, R. Brafman, and S. Shimony. Forward search value iteration for POMDPs. In IJCAI, 2007.
  • 12
    • T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In UAI, 2004.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.