1. Batliner, A., Steidl, S., Schuller, B., Seppi, D., Laskowski, K., Vogt, T., Devillers, L., Vidrascu, L., Amir, N., Kessous, L., and Aharonson, V., "Combining Efforts for Improving Automatic Classification of Emotional User States", Proc. 1st Int. Language Technologies Conference IS-LTC, Ljubljana, Slovenia, pp. 240-245, 2006.
2. Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W., and Weiss, B., "A Database of German Emotional Speech", Proc. INTERSPEECH, ISCA, Lisbon, Portugal, pp. 1517-1520, 2005.
3. Jiang, D.N. and Cai, L.-H., "Speech emotion classification with the combination of statistic features and temporal features", Proc. ICME 2004, IEEE, Taipei, Taiwan, pp. 1967-1971, 2004.
4. Lee, Z. and Zhao, Y., "Recognizing emotions in speech using short-term and long-term features", Proc. ICSLP, pp. 2255-2558, 1998.
5. Murray, I.R. and Arnott, J.L., "Toward the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion", JASA, Vol. 93, Issue 2, pp. 1097-1108, 1993.
6. Polzin, T.S. and Waibel, A., "Detecting emotions in speech", Proc. 2nd Int. Conf. on Cooperative Multimodal Communication (CMC '98), 1998.
7. Schuller, B., Rigoll, G., and Lang, M., "Hidden Markov Model-Based Speech Emotion Recognition", Proc. ICASSP 2003, IEEE, Vol. II, Hong Kong, China, pp. 1-4, 2003.
8. Schuller, B. and Rigoll, G., "Timing Levels in Segment-Based Speech Emotion Recognition", Proc. INTERSPEECH 2006, ICSLP, ISCA, pp. 1818-1821, 2006.
9. Schuller, B., Seppi, D., Batliner, A., Maier, A., and Steidl, S., "Towards More Reality in the Recognition of Emotional Speech", Proc. ICASSP 2007, Vol. IV, pp. 941-944, 2007.
10. Schuller, B., Vlasenko, B., Minguez, R., Rigoll, G., and Wendemuth, A., "Comparing One and Two-Stage Acoustic Modeling in the Recognition of Emotion in Speech", Proc. ASRU 2007, pp. 596-600, 2007.
12. Busso, C., Lee, S., and Narayanan, S.S., "Using Neutral Speech Models for Emotional Speech Analysis", Proc. INTERSPEECH, ISCA, Antwerp, Belgium, pp. 2225-2228, 2007.
13. Young, S., Evermann, G., Kershaw, D., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., and Woodland, P., The HTK Book 3.4, Cambridge University, Cambridge, England, 2006.
14. Hansen, J.H.L. and Bou-Ghazale, S., "Getting Started with SUSAS: A Speech Under Simulated and Actual Stress Database", Proc. EUROSPEECH-97, Rhodes, Greece, Vol. 4, pp. 1743-1746, 1997.
15. Schuller, B., Müller, R., Hörnler, B., Höthker, A., Konosu, H., and Rigoll, G., "Audiovisual recognition of spontaneous interest within conversations", Proc. 9th Int. Conf. on Multimodal Interfaces (ICMI), Special Session on Multimodal Analysis of Human Spontaneous Behaviour, ACM SIGCHI, Nagoya, Japan, pp. 30-37, 2007.