1. Cassandra, A.; Kaelbling, L.; and Kurien, J. 1996. Acting under uncertainty: Discrete Bayesian models for mobile robot navigation. In IROS.
3. Hsiao, K.; Lozano-Perez, T.; and Kaelbling, L. 2008. Robust belief-based execution of manipulation programs. In WAFR.
4. Hsu, D.; Lee, W.; and Rong, N. 2008. A point-based POMDP planner for target tracking. In ICRA.
5. Iba, G. 1989. A heuristic approach to the discovery of macro-operators. Machine Learning 3(4):285-317.
6. Kurniawati, H.; Du, Y.; Hsu, D.; and Lee, W. 2009. Motion planning under uncertainty for robotic tasks with long time horizons. In ISRR.
7. Kurniawati, H.; Hsu, D.; and Lee, W. 2008. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In RSS.
8. McGovern, A., and Barto, A. 2001. Automatic discovery of subgoals in reinforcement learning using diverse density. In ICML.
9. McGovern, A. 1998. acQuire-macros: An algorithm for automatically learning macro-actions. In NIPS AHRL.
10. Pineau, J.; Gordon, G.; and Thrun, S. 2003. Policy-contingent abstraction for robust robot control. In UAI.
12. Ross, S.; Pineau, J.; Paquet, S.; and Chaib-draa, B. 2008. Online planning algorithms for POMDPs. JAIR 32:663-704.
13. Smith, T., and Simmons, R. 2004. Heuristic search value iteration for POMDPs. In UAI.
15. Sutton, R.; Precup, D.; and Singh, S. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112(1):181-211.
16. Theocharous, G., and Kaelbling, L. 2003. Approximate planning in POMDPs with macro-actions. In NIPS.
17. Toussaint, M.; Charlin, L.; and Poupart, P. 2008. Hierarchical POMDP controller optimization by likelihood maximization. In UAI.
18. Yu, C.; Chuang, J.; Gerkey, B.; Gordon, G.; and Ng, A. 2005. Open-loop plans in multi-robot POMDPs. Technical report, Stanford CS Dept.