Volume, Issue, 2011, Pages

Vowels formants analysis allows straightforward detection of high arousal emotions

Author keywords

affective speech; emotion detection; formant analysis

Indexed keywords

ACOUSTIC FEATURE VECTORS; AFFECTIVE SPEECH; CONTEXT DEPENDENT; EMOTION CLASSIFICATION; EMOTION DETECTION; EMOTION RECOGNITION; EMOTIONAL SPEECH SYNTHESIS; ERROR RATE; FORMANT ANALYSIS; FORMANT VALUES; HIGH QUALITY; HUMAN MACHINE INTERACTION; NEYMAN-PEARSON CRITERION; RECOGNITION METHODS; RESEARCH COMMUNITIES; USE CONTEXT

EID: 80155168973     PISSN: 1945-7871     EISSN: 1945-788X     Source Type: Conference Proceeding
DOI: 10.1109/ICME.2011.6012003     Document Type: Conference Paper
Times cited: 27

References (16)
  • 1
    • N. Fragopanagos and J. G. Taylor, "2005 special issue: Emotion recognition in human-computer interaction," Neural Netw., vol. 18, no. 4, pp. 389-405, 2005.
  • 2
    • A. Batliner, D. Seppi, S. Steidl, and B. Schuller, "Segmenting into adequate units for automatic recognition of emotion-related episodes: A speech-based approach," Advances in Human-Computer Interaction, 2010.
  • 3
    • M. Goudbeek, J. P. Goldman, and K. R. Scherer, "Emotion dimensions and formant position," in Interspeech 2009, Brighton, United Kingdom, 2009.
  • 4
    • Z. Inanoglu and S. Young, "Data-driven emotion conversion in spoken English," Speech Communication, vol. 51, no. 3, pp. 268-283, 2009.
  • 5
    • C. Busso, S. Lee, and S. Narayanan, "Using neutral speech models for emotional speech analysis," in Interspeech 2007, Antwerp, Belgium, 2007, pp. 2225-2228.
  • 7
    • B. Vlasenko and A. Wendemuth, "Heading toward to the natural way of human-machine interaction: The NIMITEK project," in IEEE ICME 2009, New York, 2009.
  • 8
    • "The Kiel corpus of read speech," http://www.ipds.uni-kiel.de/publikationen/kcrsp.de.html, 2002, vol. I.
  • 16
    • B. Schuller, B. Vlasenko, D. Arsic, G. Rigoll, and A. Wendemuth, "Combining speech recognition and acoustic word emotion models for robust text-independent emotion recognition," in IEEE ICME 2008, Hannover, Germany, 2008, pp. 1333-1336.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.