Volume , Issue , 2006, Pages 272-279

Comparing the utility of state features in spoken dialogue using reinforcement learning

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTATIONAL LINGUISTICS; MACHINE LEARNING; SPEECH PROCESSING;

EID: 74049119541     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.3115/1220835.1220870     Document Type: Conference Paper
Times cited: 20

References (16)
  • 1. K. Bhatt, M. Evens, and S. Argamon. 2004. Hedged responses and expressions of affect in human/human and human computer tutorial interactions. In Proc. Cognitive Science.
  • 2. K. Forbes-Riley and D. Litman. 2005. Using bigrams to identify relationships between student certainness states and tutor responses in a spoken dialogue corpus. In SIGDial.
  • 6. E. Levin and R. Pieraccini. 1997. A stochastic model of computer-human interaction for learning dialogues. In Proc. of EUROSPEECH '97.
  • 8. D. Litman and S. Silliman. 2004. ITSPOKE: An intelligent tutoring spoken dialogue system. In HLT/NAACL.
  • 12. J. Tetreault and D. Litman. 2006. Using reinforcement learning to build a better model of dialogue state. In EACL.
  • 14. M. Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. JAIR, 12.
  • 16. J. Williams, P. Poupart, and S. Young. 2005b. Partially observable Markov decision processes with continuous observations for dialogue management. In SIGDial.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.