



Volume , Issue , 2010, Pages 257-262

Competence progress intrinsic motivation

Author keywords

[No Author keywords available]

Indexed keywords

EFFICIENT LEARNING; INTRINSIC MOTIVATION; LEARNING EFFORTS; MODEL LEARNING; NEW APPROACHES

EID: 78149251512     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/DEVLRN.2010.5578835     Document Type: Conference Paper
Times cited : (33)

References (18)
  • 3. P.-Y. Oudeyer and F. Kaplan, "What is intrinsic motivation? A typology of computational approaches," Frontiers in Neurorobotics, vol. 1, no. 2, 2007.
  • 4. F. Kaplan and P.-Y. Oudeyer, "Maximizing learning progress: An internal reward system for development," in Embodied Artificial Intelligence, F. Iida and Y. Kuniyoshi, Eds. Springer-Verlag, 2004, pp. 259-270.
  • 5. L. Steels, "The Autotelic Principle," in Embodied Artificial Intelligence, ser. Lecture Notes in Artificial Intelligence, vol. 3139. Springer-Verlag, 2004, pp. 231-242.
  • 8. R. S. Sutton, D. Precup, and S. Singh, "Between MDPs and Semi-MDPs: A framework for temporal abstraction in reinforcement learning," Artificial Intelligence Journal, vol. 112, pp. 181-211, 1999.
  • 9. A. G. Barto and S. Mahadevan, "Recent advances in hierarchical reinforcement learning," Discrete Event Dynamic Systems, vol. 13, no. 4, pp. 341-379, 2003.
  • 14. P. Whittle, "Multi-armed bandits and the Gittins index," Jrnl. Royal Stat. Soc. Ser. B (Methodology), vol. 42, no. 2, pp. 143-149, 1980.
  • 15. B. Bakker and J. Schmidhuber, "Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization," in Proc. 8th Conf. on Intelligent Autonomous Systems (IAS-8), 2004.
  • 16. M. Schembri, M. Mirolli, and G. Baldassarre, "Evolving internal reinforcers for an intrinsically motivated reinforcement learning robot," in Proc. of the 6th Intl. Conf. on Development and Learning, Y. Demiris, D. Mareschal, B. Scassellati, and J. Weng, Eds., 2007.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.