Volume 39, 2010, Pages 1-49

Planning with noisy probabilistic relational rules

Author keywords

[No Author keywords available]

Indexed keywords

ACTION SEQUENCES; APPROXIMATE INFERENCE; DECISION-THEORETIC; DYNAMIC BAYESIAN NETWORK; EMPIRICAL RESULTS; EXISTING METHOD; LOOK-AHEAD; PROBABILISTIC PLANNING; PROBABILISTIC RELATIONAL RULES; REALISTIC PHYSICS; ROBOT MANIPULATION; UPPER CONFIDENCE BOUND; WORLD MODEL;

EID: 78651517373     PISSN: None     EISSN: 1076-9757     Source Type: Journal
DOI: 10.1613/jair.3093     Document Type: Article
Times cited: 53

References (54)
  • 3
    • Boutilier, C., Dean, T., & Hanks, S. (1999). Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11, 1-94.
  • 5
    • Buffet, O., & Aberdeen, D. (2009). The factored policy-gradient planner. Artificial Intelligence Journal, 173(5-6), 722-747.
  • 8
    • Domshlak, C., & Hoffmann, J. (2007). Probabilistic planning via heuristic forward search and weighted model counting. Journal of Artificial Intelligence Research, 30, 565-620.
  • 9
    • Driessens, K., Ramon, J., & Gärtner, T. (2006). Graph kernels and Gaussian processes for relational reinforcement learning. Machine Learning, 64(1-3), 91-119.
  • 10
    • Džeroski, S., De Raedt, L., & Driessens, K. (2001). Relational reinforcement learning. Machine Learning, 43(1-2), 7-52.
  • 11
    • Fern, A., Yoon, S., & Givan, R. (2006). Approximate policy iteration with a policy language bias: Solving relational Markov decision processes. Journal of Artificial Intelligence Research, 25(1), 75-118.
  • 17
    • Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences, 27, 377-442.
  • 19
    • Hesslow, G. (2002). Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6(6), 242-247.
  • 20
    • Hoffmann, J., & Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14, 253-302.
  • 26
    • Kearns, M. J., Mansour, Y., & Ng, A. Y. (2002). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2-3), 193-208.
  • 27
    • Kersting, K., & Driessens, K. (2008). Non-parametric policy gradients: A unified treatment of propositional and relational domains. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 456-463.
  • 30
    • Kushmerick, N., Hanks, S., & Weld, D. (1995). An algorithm for probabilistic planning. Artificial Intelligence, 78(1-2), 239-286.
  • 34
    • Little, I., & Thiébaux, S. (2007). Probabilistic planning vs. replanning. In ICAPS Workshop on the International Planning Competition: Past, Present and Future.
  • 41
    • Sanner, S., & Boutilier, C. (2009). Practical solution techniques for first-order MDPs. Artificial Intelligence, 173(5-6), 748-788.
  • 42
    • Shachter, R. (1988). Probabilistic inference and influence diagrams. Operations Research, 36, 589-605.
  • 45
    • Toussaint, M., & Storkey, A. (2006). Probabilistic inference for solving discrete and continuous state Markov decision processes. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 945-952.
  • 46
    • Toussaint, M., Storkey, A., & Harmeling, S. (2010). Expectation-maximization methods for solving (PO)MDPs and optimal control problems. In Chiappa, S., & Barber, D. (Eds.), Inference and Learning in Dynamic Models. Cambridge University Press.
  • 50
    • Weld, D. S. (1999). Recent advances in AI planning. AI Magazine, 20(2), 93-123.


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.