2005, Pages 284-286

Interaction techniques using prosodic features of speech and audio localization

Author keywords

Gesture; Interaction; Speech; Voice I/O

Indexed keywords

AUDIO ACOUSTICS; GESTURE RECOGNITION; PARAMETER ESTIMATION; USER INTERFACES;

EID: 33644593119     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/1040830.1040900     Document Type: Conference Paper
Times cited: 13

References (6)
  • 2 Igarashi, T. and Hughes, J.F. Voice as sound: Using non-verbal voice input for interactive control. Proc. UIST 2001, 2001, 155-156.
  • 4 Olwal, A. and Feiner, S. Unit: Modular development of distributed interaction techniques for highly interactive user interfaces. Proc. GRAPHITE 2004, 2004, 131-138.
  • 5 Oviatt, S. Mutual disambiguation of recognition errors in a multimodal architecture. Proc. CHI '99, 1999, 576-583.
  • 6 Tsukahara, W. and Ward, N. Responding to subtle, fleeting changes in the user's internal state. Proc. CHI 2001, 2001, 77-84.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.