
Volume, Issue, 2008, Pages 93-96

An integrative recognition method for speech and gestures

Author keywords

Gesture recognition; Integrative recognition; Multimodal interface; Speech recognition

Indexed keywords

GESTURE RECOGNITION; INTERACTIVE COMPUTER SYSTEMS; MOTION ESTIMATION; PROBABILITY DISTRIBUTIONS; SPEECH

EID: 63449109269     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1145/1452392.1452411     Document Type: Conference Paper
Times cited : (6)

References (13)
  • 3
    • EID: 84963877980
    • S. Kettebekov, M. Yeasin, and R. Sharma, "Prosody based co-analysis for continuous recognition of co-verbal gestures," Proc. ICME, 2002.
  • 4
    • EID: 0028510966
    • M. Fukumoto, Y. Suenaga, and K. Mase, "Finger-pointer: pointing interface by image processing," Comput. Graph., Vol. 18, No. 5, pp. 633-642, 1994.
  • 5
    • EID: 0019038072
    • R. A. Bolt, "Put-that-there: Voice and gesture at the graphics interface," ACM Computer Graphics, Vol. 14, No. 3, pp. 262-270, 1980.
  • 6
    • EID: 34547501327
    • P. Hui and H. M. Meng, "Joint interpretation of input speech and pen gestures for multimodal human computer interaction," Proc. INTERSPEECH-2006, pp. 1197-1200, Sept. 2006.
  • 7
    • EID: 34547181265
    • S. Qu and J. Y. Chai, "Salience modeling based on non-verbal modalities for spoken language understanding," Proc. ICMI'06, pp. 193-200, 2006.
  • 8
    • EID: 84963813554
    • N. Krahnstoever, S. Kettebekov, M. Yeasin, and R. Sharma, "A real-time framework for natural multimodal interaction with large screen displays," Proc. ICMI 2002, Oct. 2002.
  • 10
    • EID: 0010586181
    • M. Johnston and S. Bangalore, "Finite-state multimodal parsing and understanding," Proc. COLING 2000, 2000.
  • 11
    • EID: 0005031031
    • M. Johnston, "Unification-based multimodal parsing," Proc. COLING-ACL'98, 1998.
  • 12
    • EID: 0001259029
    • L. Wu, S. L. Oviatt, and P. R. Cohen, "Multimodal integration - A statistical view," IEEE Trans. Multimedia, Vol. 1, No. 4, pp. 334-341, 1999.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.