[2] F. Quek, D. McNeill, R. Bryll, S. Duncan, X.-F. Ma, C. Kirbas, K. E. McCullough, and R. Ansari, "Multimodal human discourse: Gesture and speech," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 9, no. 3, pp. 171-193, 2002.
[3] P. Bernardis and M. Gentilucci, "Speech and gesture share the same communication system," Neuropsychologia, vol. 44, no. 2, pp. 178-190, 2006.
[4] S. D. Kelly, C. Kravitz, and M. Hopkins, "Neural correlates of bimodal speech and gesture comprehension," Brain and Language, vol. 89, no. 1, pp. 253-260, 2004.
[5] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Becket, B. Douville, S. Prevost, and M. Stone, "Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents," in Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques. ACM, 1994, pp. 413-420.
[6] S. Levine, C. Theobalt, and V. Koltun, "Real-time prosody-driven synthesis of body language," ACM Transactions on Graphics, vol. 28, no. 5, p. 172, 2009.
[7] S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, Z. Deng, S. Lee, S. Narayanan, and C. Busso, "An acoustic study of emotions expressed in speech," in INTERSPEECH, 2004.
[9] C.-M. Lee, S. Narayanan, and R. Pieraccini, "Recognition of negative emotions from the speech signal," in Proc. of ASRU, 2001, pp. 240-243.
[10] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, and J. G. Taylor, "Emotion recognition in human-computer interaction," IEEE Signal Processing Magazine, vol. 18, no. 1, pp. 32-80, 2001.
[11] G. Castellano, L. Kessous, and G. Caridakis, "Emotion recognition through multiple modalities: Face, body gesture, speech," in Affect and Emotion in Human-Computer Interaction. Springer, 2008, pp. 92-103.
[12] A. Metallinou, A. Katsamanis, and S. Narayanan, "Tracking continuous emotional trends of participants during affective dyadic interactions using body language and speech information," Image and Vision Computing, Special Issue on Continuous Affect Analysis, 2012.
[13] Z. Yang, A. Metallinou, E. Erzin, and S. Narayanan, "Analysis of interaction attitudes using data-driven hand gesture phrases," in Proc. of ICASSP, 2014.
[14] C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan, "Rigid head motion in expressive speech animation: Analysis and synthesis," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 1075-1086, 2007.
[15] C. Busso and S. Narayanan, "Interrelation between speech and facial gestures in emotional utterances: A single subject study," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 8, pp. 2331-2347, 2007.
[16] A. Metallinou, C.-C. Lee, C. Busso, S. Carnicke, and S. Narayanan, "The USC CreativeIT database: A multimodal database of theatrical improvisation," in Proc. of Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality (MMC), 2010.
[19] M. Sargin, Y. Yemez, E. Erzin, and A. Tekalp, "Analysis of head gesture and prosody patterns for prosody-driven head-gesture animation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1330-1345, 2008.
[20] P. Boersma, "Praat, a system for doing phonetics by computer," Glot International, vol. 5, no. 9/10, pp. 341-345, 2002.
[21] R. Cowie, E. Douglas-Cowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schröder, "'Feeltrace': An instrument for recording perceived emotion in real time," in ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion, 2000.
[23] J. M. Iverson and E. Thelen, "Hand, mouth and brain: The dynamic emergence of speech and gesture," Journal of Consciousness Studies, vol. 6, no. 11-12, 1999.
[24] A. V. Nefian, L. Liang, X. Pi, L. Xiaoxiang, C. Mao, and K. Murphy, "A coupled HMM for audio-visual speech recognition," in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, 2002, pp. II-2013.
[25] H. C. Yehia, T. Kuratate, and E. Vatikiotis-Bateson, "Linking facial animation, head motion and speech acoustics," Journal of Phonetics, vol. 30, no. 3, pp. 555-568, 2002.