Hoey, J., and Poupart, P. 2005. Solving POMDPs with continuous or large discrete observation spaces. In IJCAI, 1332-1338.
Madani, O.; Hanks, S.; and Condon, A. 1999. On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems. In AAAI.
Pineau, J.; Gordon, G.; and Thrun, S. 2003a. Point-based value iteration: An anytime algorithm for POMDPs. In IJCAI.
Pineau, J.; Gordon, G.; and Thrun, S. 2003b. Policy-contingent abstraction for robust robot control. In UAI, 477-484.
Poupart, P., and Boutilier, C. 2003. Value-directed compression of POMDPs. In NIPS, volume 15.
Roy, N.; Gordon, G.; and Thrun, S. 2005. Finding approximate POMDP solutions through belief compression. JAIR 23:1-40.
Singh, S.; Litman, D.; Kearns, M.; and Walker, M. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. JAIR 16:105-133.
Theocharous, G.; Rohanimanesh, K.; and Mahadevan, S. 2001. Learning hierarchical partially observable Markov decision process models for robot navigation. In ICRA, 511-516.
Williams, J. D.; Poupart, P.; and Young, S. 2005. Partially observable Markov decision processes with continuous observations for dialogue management. In SigDial Workshop on Discourse and Dialogue.