2010, Pages 5254-5257

Classifying laughter and speech using audio-visual feature prediction

Author keywords

Audiovisual speech laughter feature relationship; Laughter vs speech discrimination; Prediction based classification

Indexed keywords

CLASSIFICATION (OF INFORMATION); SIGNAL PROCESSING; SPEECH

EID: 78049406136    PISSN: 1520-6149    EISSN: None    Source Type: Conference Proceeding
DOI: 10.1109/ICASSP.2010.5494992    Document Type: Conference Paper
Times cited: 14

References (12)
  • 1. S. Petridis and M. Pantic, "Fusion of audio and visual cues for laughter detection," in Proc. ACM CIVR, 2008, pp. 329-337.
  • 2. S. Petridis and M. Pantic, "Audiovisual laughter detection based on temporal features," in Proc. ACM ICMI, 2008, pp. 37-44.
  • 3. B. Reuderink, M. Poel, K. Truong, R. Poppe, and M. Pantic, "Decision-level fusion for audio-visual laughter detection," LNCS, vol. 5237, pp. 137-148, 2008.
  • 4. H. C. Yehia, T. Kuratate, and E. Vatikiotis-Bateson, "Linking facial animation, head motion and speech acoustics," Journal of Phonetics, vol. 30, no. 3, pp. 555-568, 2002.
  • 5. C. Busso and S. Narayanan, "Interrelation between speech and facial gestures in emotional utterances: A single subject study," IEEE Trans. Audio, Speech and Language Proc., vol. 15, no. 8, pp. 2331-2347, 2007.
  • 6. M. S. Craig, P. Lieshout, and W. Wong, "A linear model of acoustic-to-facial mapping: Model parameters, data set size, and generalization across speakers," J. Acoustical Soc. America, vol. 124, no. 5, pp. 3183-3190, 2008.
  • 11. D. G. Jimenez and J. L. A. Castro, "Toward pose-invariant 2-D face recognition through point distribution models and facial symmetry," IEEE Trans. Inform. Forensics and Security, vol. 2, no. 3, pp. 413-429, 2007.
  • 12. S. Petridis, H. Gunes, S. Kaltwang, and M. Pantic, "Static vs. Dynamic Modelling of Human Nonverbal Behavior from Multiple Cues and Modalities," in Proc. ACM ICMI, 2009, pp. 23-30.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.