Volume 2, 2010, Pages 1089-1095

PUMA: Planning under uncertainty with macro-actions

Author keywords

[No Author keywords available]

Indexed keywords

ANY-TIME ALGORITHMS; ε-OPTIMAL; EXTENDED SEQUENCES; FUTURE OBSERVATIONS; OPEN-LOOP; PLANNING UNDER UNCERTAINTY; STATE OF THE ART

EID: 77958563254     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 35

References (18)
  • 1. Cassandra, A.; Kaelbling, L.; and Kurien, J. 1996. Acting under uncertainty: Discrete Bayesian models for mobile robot navigation. In IROS.
  • 3. Hsiao, K.; Lozano-Perez, T.; and Kaelbling, L. 2008. Robust belief-based execution of manipulation programs. In WAFR.
  • 4. Hsu, D.; Lee, W.; and Rong, N. 2008. A point-based POMDP planner for target tracking. In ICRA.
  • 5. Iba, G. 1989. A heuristic approach to the discovery of macro-operators. Machine Learning 3(4):285-317.
  • 6. Kurniawati, H.; Du, Y.; Hsu, D.; and Lee, W. 2009. Motion planning under uncertainty for robotic tasks with long time horizons. In ISRR.
  • 7. Kurniawati, H.; Hsu, D.; and Lee, W. 2008. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In RSS.
  • 8. McGovern, A., and Barto, A. 2001. Automatic discovery of subgoals in reinforcement learning using diverse density. In ICML.
  • 9. McGovern, A. 1998. AcQuire-macros: An algorithm for automatically learning macro-actions. In NIPS AHRL.
  • 10. Pineau, J.; Gordon, G.; and Thrun, S. 2003. Policy-contingent abstraction for robust robot control. In UAI.
  • 12. Ross, S.; Pineau, J.; Paquet, S.; and Chaib-draa, B. 2008. Online planning algorithms for POMDPs. JAIR 32:663-704.
  • 13. Smith, T., and Simmons, R. 2004. Heuristic search value iteration for POMDPs. In UAI.
  • 15. Sutton, R.; Precup, D.; and Singh, S. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112(1):181-211.
  • 16. Theocharous, G., and Kaelbling, L. 2003. Approximate planning in POMDPs with macro-actions. In NIPS.
  • 17. Toussaint, M.; Charlin, L.; and Poupart, P. 2008. Hierarchical POMDP controller optimization by likelihood maximization. In UAI.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.