Volume , Issue , 2007, Pages 1681-1686

Hierarchical reinforcement learning using a modular fuzzy model for multi-agent problem

Author keywords

[No Author keywords available]

Indexed keywords

FUZZY SYSTEMS; HIERARCHICAL SYSTEMS; MATHEMATICAL MODELS; MOBILE ROBOTS; MULTI AGENT SYSTEMS; PROBLEM SOLVING;

EID: 40949162812     PISSN: 1062-922X     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1109/ICSMC.2007.4414013     Document Type: Conference Paper
Times cited: 14

References (16)
  • 2
    • C. J. H. Watkins and P. Dayan, "Technical Note: Q-Learning", Machine Learning, Vol. 8, pp. 58-68, 1992.
  • 3
    • J. J. Grefenstette, "Credit Assignment in Rule Discovery Systems Based on Genetic Algorithms", Machine Learning, Vol. 3, pp. 225-245, 1988.
  • 4
    • K. Miyazaki, H. Kimura, and S. Kobayashi, "Theory and Application of Reinforcement Learning Based on Profit Sharing", J. of JSAI, Vol. 14, No. 5, pp. 800-807, 1999.
  • 5
    • K. Miyazaki, S. Arai, and S. Kobayashi, "A Theory of Profit Sharing in Multi-agent Reinforcement Learning", J. of JSAI, Vol. 14, No. 6, pp. 1156-1164, 1999.
  • 6
    • M. Benda, V. Jagannathan, and R. Dodhiawalla, "On Optimal Cooperation of Knowledge Sources", Technical Report BCS-G2010-28, Boeing AI Center, 1985.
  • 7
    • A. Ito and M. Kanabuchi, "Speeding up Multi-Agent Reinforcement Learning by Coarse-Graining of Perception -Hunter Game as an Example-", Trans. of IEICE, Vol. J84-D-1, No. 3, pp. 285-293, 2001.
  • 8
    • Y. Takahashi and M. Asada, "Behavior Acquisition by Multi-Layered Reinforcement Learning", Proc. of the 1999 IEEE Int. Conf. on Syst., Man, and Cybern., pp. 716-721, 1999.
  • 9
    • J. Morimoto and K. Doya, "Acquisition of Stand-up Behavior by a Real Robot using Hierarchical Reinforcement Learning", Proc. of International Conference on Machine Learning, pp. 623-630, 2000.
  • 11
    • K. Fujita and H. Matsuo, "Multi-agent Reinforcement Learning with the Partly High-Dimensional State Space", Trans. of IEICE, Vol. J88-D-1, No. 4, pp. 864-872, 2005.
  • 13
    • T. Hamagami, S. Koakutsu, and H. Hirata, "An Adjustment Method of the Number of States on Q-Learning Segmenting State Space Adaptively", Trans. of IEICE, Vol. J86-D-1, No. 7, pp. 490-499, 2003.
  • 14
    • H. Seki, H. Ishii, and M. Mizumoto, "On the Generalization of Single Input Rule Modules Connected Type Fuzzy Reasoning Method", Proc. of the SCIS&ISIS2006, pp. 30-34, 2006.
  • 15
    • N. Yubazaki, J. Yi, M. Otani, and K. Hirota, "SIRMs Dynamically Connected Fuzzy Inference Model and Its Applications", Proc. IFSA '97, Vol. 3, pp. 410-415, 1997.
  • 16
    • Y. Takahashi and T. Watanabe, "Learning of Agent Behavior Based on Hierarchical Modular Reinforcement Learning", Proc. of the SCIS&ISIS2006, pp. 90-94, 2006.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.