Volume 6489 LNAI, 2011, Pages 206-213

Incremental learning of relational action models in noisy environments

Author keywords

[No Author keywords available]

Indexed keywords

ACTION MODELS; DATA-DRIVEN; EMPIRICAL EVALUATIONS; FIRST ORDER; INCREMENTAL LEARNING; NOISY ENVIRONMENT; RELATIONAL REINFORCEMENT LEARNING; TRANSITION FUNCTIONS

EID: 79959308007     PISSN: 0302-9743     EISSN: 1611-3349     Source Type: Book Series
DOI: 10.1007/978-3-642-21295-6_24     Document Type: Conference Paper
Times cited: 13

References (18)
  • 1
    • Benson, S.: Inductive learning of reactive action models. In: ICML 1995, pp. 47-54 (1995)
  • 2
    • Croonenborghs, T., Ramon, J., Blockeel, H., Bruynooghe, M.: Online learning and exploiting relational models in reinforcement learning. In: IJCAI, pp. 726-731 (2007)
  • 3
    • Driessens, K., Ramon, J.: Relational instance based regression for relational reinforcement learning. In: ICML, pp. 123-130 (2003)
  • 4
    • Driessens, K., Ramon, J., Blockeel, H.: Speeding up relational reinforcement learning through the use of an incremental first order decision tree learner. In: Flach, P.A., De Raedt, L. (eds.) ECML 2001. LNCS (LNAI), vol. 2167, pp. 97-108. Springer, Heidelberg (2001)
  • 5
    • Dzeroski, S., De Raedt, L., Driessens, K.: Relational reinforcement learning. Machine Learning 43(1-2), 7-52 (2001). DOI 10.1023/A:1007694015589
  • 6
    • Esposito, F., Semeraro, G., Fanizzi, N., Ferilli, S.: Multistrategy theory revision: Induction and abduction in INTHELEX. Machine Learning 38(1-2), 133-156 (2000)
  • 7
    • Gil, Y.: Learning by experimentation: Incremental refinement of incomplete planning domains. In: ICML, pp. 87-95 (1994)
  • 8
    • Li, L., Littman, M.L., Walsh, T.J.: Knows what it knows: A framework for self-aware learning. In: ICML, pp. 568-575 (2008)
  • 12
    • Shen, W.M.: Discovery as autonomous learning from the environment. Machine Learning 12(1-3), 143-165 (1993)
  • 13
    • Sutton, R.S.: Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In: ICML, pp. 216-224 (1990)
  • 15
    • Walsh, T.J., Littman, M.L.: Efficient learning of action schemas and web-service descriptions. In: AAAI, pp. 714-719 (2008)
  • 16
    • Walsh, T.J., Szita, I., Diuk, M., Littman, M.L.: Exploring compact reinforcement-learning representations with linear regression. In: UAI, pp. 714-719 (2009)
  • 17
    • Wang, X.: Learning by observation and practice: An incremental approach for planning operator acquisition. In: ICML, pp. 549-557 (1995)
  • 18
    • Yang, Q., Wu, K., Jiang, Y.: Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence 171(2-3), 107-143 (2007). DOI 10.1016/j.artint.2006.11.005


* This information was extracted by KISTI through analysis of Elsevier's Scopus database.