Volume 3248, 2005, Pages 1-11

Fast reinforcement learning of dialogue policies using stable function approximation

Author keywords

[No Author keywords available]

Indexed keywords

APPROXIMATION THEORY; FUNCTIONS; INTERPOLATION; MATHEMATICAL MODELS; NATURAL LANGUAGE PROCESSING SYSTEMS; OPTIMIZATION;

EID: 26444573745     PISSN: 0302-9743     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1007/978-3-540-30211-7_1     Document Type: Conference Paper
Times cited: 2

References (11)
  • 1. S. Singh, D. Litman, M. Kearns, and M. Walker. 2002. Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System. Journal of Artificial Intelligence Research, 16:105-133.
  • 3. E. Levin and R. Pieraccini. 1997. A Stochastic Model of Human Computer Interaction for Learning Dialog Strategies. In Proceedings of Eurospeech, Rhodos, Greece.
  • 5. M. Walker, J. Fromer, and S. Narayanan. 1998. Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email. In Proceedings of ACL/COLING 98.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.