Volume 5, Issue 8, 2011, Pages 789-797

Stochastic optimal generation command dispatch based on improved hierarchical reinforcement learning approach

Author keywords

[No Author keywords available]

Indexed keywords

AGC SYSTEM; AUTOMATIC GENERATION CONTROL; CONTROL LAYERS; CONTROL PERFORMANCE; CONTROL PERFORMANCE STANDARDS; CONVERGENCE TIME; CORE PROBLEMS; CURSE OF DIMENSIONALITY; ENGINEERING METHODS; HIERARCHICAL REINFORCEMENT LEARNING; HYDRO CAPACITY; LEARNING EFFICIENCY; MARKOV DECISION PROCESSES; OPTIMAL GENERATION; OPTIMISATIONS; PARTICIPATION FACTORS; POWER GRID MODELS; Q-LEARNING; REWARD FUNCTION; SOLUTION ALGORITHMS; SUBTASKS; TIME VARYING;

EID: 79960776326     PISSN: 1751-8687     EISSN: None     Source Type: Journal
DOI: 10.1049/iet-gtd.2010.0600     Document Type: Article
Times cited : (63)

References (23)
  • 1. Jaleeli, N., and VanSlyck, L.S.: 'NERC's new control performance standards', IEEE Trans. Power Syst., 1999, 14, (3), pp. 1092-1099. DOI: 10.1109/59.780932, ISSN 0885-8950
  • 13. Watkins, J.C.H., and Dayan, P.: 'Q-learning', Mach. Learn., 1992, 8, (3-4), pp. 279-292. DOI: 10.1007/BF00992698, ISSN 0885-6125
  • 18. Parr, R.: 'Hierarchical control and learning for Markov decision processes', PhD thesis, University of California, Berkeley, CA, 1998


* This information was analysed and extracted by KISTI from Elsevier's SCOPUS database.