Volume 2016-November, 2016, Pages 66-73

Information gathering actions over human internal state

Author keywords

[No Author keywords available]

Indexed keywords

ROBOT PROGRAMMING; ROBOTS; STATE ESTIMATION

EID: 85006372826 | PISSN: 21530858 | EISSN: 21530866 | Source Type: Conference Proceeding
DOI: 10.1109/IROS.2016.7759036 | Document Type: Conference Paper
Times cited: 199

References (23)
  • 4
    • Chris L. Baker, Rebecca Saxe, and Joshua B. Tenenbaum. Action understanding as inverse planning. Cognition, 113(3), 2009.
  • 9
    • Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1):99-134, 1998.
  • 10
    • Dana Kulic and Elizabeth A. Croft. Affective state estimation for human-robot interaction. IEEE Transactions on Robotics, 2007.
  • 17
    • Stefanos Nikolaidis, Anton Kuznetsov, David Hsu, and Siddhartha Srinivasa. Formalizing human-robot mutual adaptation via a bounded memory based model. In Human-Robot Interaction, March 2016.
  • 18
    • Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In IJCAI, 2007.
  • 22
    • Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433-1438, 2008.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.