1. I. Albrecht, J. Haber, and H. P. Seidel. Automatic generation of non-verbal facial expressions from speech. In Computer Graphics International (CGI 2002), pages 283-293, Bradford, U.K., July 2002.
2. K. S. Arun, T. S. Huang, and S. D. Blostein. Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):698-700, 1987.
3. P. Boersma and D. Weeninck. Praat, a system for doing phonetics by computer. Technical Report 132, Institute of Phonetic Sciences of the University of Amsterdam, Amsterdam, Netherlands, 1996. http://www.praat.org.
5. C. Bregler, M. Covell, and M. Slaney. Video rewrite: Driving visual speech with audio. In Proc. 24th Annual Conf. on Computer Graphics and Interactive Techniques (SIGGRAPH 1997), pages 353-360, Los Angeles, CA, August 1997.
6. C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan. Rigid head motion in expressive speech animation: Analysis and synthesis. IEEE Transactions on Audio, Speech and Language Processing, 15(3):1075-1086, March 2007.
7. C. Busso, Z. Deng, U. Neumann, and S. S. Narayanan. Natural head motion synthesis driven by acoustic prosodic features. Computer Animation and Virtual Worlds, 16(3-4):283-290, July 2005.
8. C. Busso and S. Narayanan. Interrelation between speech and facial gestures in emotional utterances: A single subject study. Accepted to appear in IEEE Transactions on Audio, Speech and Language Processing, 2007.
9. C. Busso and S. S. Narayanan. Interplay between linguistic and affective goals in facial expression during emotional utterances. In 7th International Seminar on Speech Production (ISSP 2006), pages 549-556, Ubatuba-SP, Brazil, December 2006.
10. J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Bechet, B. Douville, S. Prevost, and M. Stone. Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. In Computer Graphics (Proc. ACM SIGGRAPH '94), pages 413-420, Orlando, FL, 1994.
11. E. Chuang and C. Bregler. Mood swings: Expressive speech animation. ACM Transactions on Graphics, 24(2):331-347, April 2005.
12. M. M. Cohen and D. W. Massaro. Modeling coarticulation in synthetic visual speech. In N. Magnenat-Thalmann and D. Thalmann (Eds.), Models and Techniques in Computer Animation, pages 139-156, Springer-Verlag, Tokyo, 1993.
13. R. Cowie and R. R. Cornelius. Describing the emotional states that are expressed in speech. Speech Communication, 40(1-2):5-32, April 2003.
14. D. De Carlo, C. Revilla, M. Stone, and J. J. Venditti. Making discourse visible: Coding and animating conversational facial displays. In Computer Animation (CA 2002), pages 11-16, Geneva, Switzerland, June 2002.
15. Z. Deng, M. Bulut, U. Neumann, and S. Narayanan. Automatic dynamic expression synthesis for speech animation. In IEEE 17th International Conference on Computer Animation and Social Agents (CASA 2004), pages 267-274, Geneva, Switzerland, July 2004.
16. Z. Deng, C. Busso, S. Narayanan, and U. Neumann. Audio-based head motion synthesis for avatar-based telepresence systems. In ACM SIGMM 2004 Workshop on Effective Telepresence (ETP 2004), pages 24-30, ACM Press, New York, 2004.
17. Z. Deng, J. P. Lewis, and U. Neumann. Automated eye motion using texture synthesis. IEEE Computer Graphics and Applications, 25(2):24-30, March/April 2005.
18. Z. Deng, J. P. Lewis, and U. Neumann. Synthesizing speech animation by learning compact speech co-articulation models. In Computer Graphics International (CGI 2005), pages 19-25, Stony Brook, NY, June 2005.
19. Z. Deng, U. Neumann, J. P. Lewis, T. Y. Kim, M. Bulut, and S. Narayanan. Expressive facial animation synthesis by learning speech co-articulation and expression spaces. IEEE Transactions on Visualization and Computer Graphics (TVCG), 12(6):1523-1534, November/December 2006.
21. P. Ekman. Facial expression and emotion. American Psychologist, 48(4):384-392, April 1993.
23. H. P. Graf, E. Cosatto, V. Strom, and F. J. Huang. Visual prosody: Facial movements accompanying speech. In Proc. of IEEE International Conference on Automatic Face and Gesture Recognition, pages 396-401, Washington, DC, May 2002.
24. B. Granström and D. House. Audiovisual representation of prosody in expressive speech communication. Speech Communication, 46(3-4):473-484, July 2005.
25. J. Gratch and S. Marsella. Lessons from emotion psychology for the design of lifelike characters. Applied Artificial Intelligence, 19(3-4):215-233, March-April 2005.
26. D. Heylen. Challenges ahead: Head movements and other social acts in conversation. In Artificial Intelligence and Simulation of Behaviour (AISB 2005), Social Presence Cues for Virtual Humanoids Symposium, page 8, Hertfordshire, U.K., April 2005.
27. H. Hill and A. Johnston. Categorizing sex and identity from the biological motion of faces. Current Biology, 11(11):880-885, June 2001.
28. H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321-377, December 1936.
29. L. N. Jefferies, J. T. Enns, S. Di Paola, and A. Arya. Facial actions as visual cues for personality. Computer Animation and Virtual Worlds, 17(3-4):371-382, July 2006.
30. K. Kakihara, S. Nakamura, and K. Shikano. Speech-to-face movement synthesis based on HMMs. In IEEE International Conference on Multimedia and Expo (ICME), volume 1, pages 427-430, New York, April 2000.
31. S. Kettebekov, M. Yeasin, and R. Sharma. Prosody based audiovisual coanalysis for coverbal gesture recognition. IEEE Transactions on Multimedia, 7(2):234-242, April 2005.
32. T. Kuratate, K. G. Munhall, P. E. Rubin, E. Vatikiotis-Bateson, and H. Yehia. Audio-visual synthesis of talking faces from speech production correlates. In Sixth European Conference on Speech Communication and Technology (Eurospeech 1999), pages 1279-1282, Budapest, Hungary, September 1999.
33. S. Lee, S. Yildirim, A. Kazemzadeh, and S. Narayanan. An articulatory study of emotional speech production. In 9th European Conference on Speech Communication and Technology (Interspeech'2005-Eurospeech), pages 497-500, Lisbon, Portugal, September 2005.
34. Y. Linde, A. Buzo, and R. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28(1):84-95, January 1980.
35. Maya software, Alias Systems division of Silicon Graphics Limited. http://www.alias.com, 2005.
36. K. G. Munhall, J. A. Jones, D. E. Callan, T. Kuratate, and E. Vatikiotis-Bateson. Visual prosody and speech intelligibility: Head movement improves auditory speech perception. Psychological Science, 15(2):133-137, February 2004.
37. C. Pelachaud, N. Badler, and M. Steedman. Generating facial expressions for speech. Cognitive Science, 20(1):1-46, January 1996.
39. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, February 1989.
40. M. E. Sargin, O. Aran, A. Karpov, F. Ofli, Y. Yasinnik, S. Wilson, E. Erzin, Y. Yemez, and A. M. Tekalp. Combined gesture-speech analysis and speech driven gesture synthesis. In IEEE International Conference on Multimedia and Expo (ICME 2006), pages 893-896, Toronto, ON, Canada, July 2006.
41. K. R. Scherer. Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1-2):227-256, April 2003.
43. K. Smid, I. S. Pandzic, and V. Radman. Autonomous speaker agent. In IEEE 17th International Conference on Computer Animation and Social Agents (CASA 2004), pages 259-266, Geneva, Switzerland, July 2004.
44. E. Vatikiotis-Bateson, K. G. Munhall, Y. Kasahara, F. Garcia, and H. Yehia. Characterizing audiovisual information during speech. In Fourth International Conference on Spoken Language Processing (ICSLP 96), volume 3, pages 1485-1488, Philadelphia, PA, October 1996.
45. H. Yehia, T. Kuratate, and E. Vatikiotis-Bateson. Facial animation and head motion driven by speech acoustics. In 5th Seminar on Speech Production: Models and Data, pages 265-268, Kloster Seeon, Bavaria, Germany, May 2000.
46. H. Yehia, P. Rubin, and E. Vatikiotis-Bateson. Quantitative association of vocal-tract and facial behavior. Speech Communication, 26(1-2):23-43, 1998.
47. S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, C. Busso, Z. Deng, S. Lee, and S. S. Narayanan. An acoustic study of emotions expressed in speech. In 8th International Conference on Spoken Language Processing (ICSLP 04), Jeju Island, Korea, 2004.
48. S. Young, G. Evermann, T. Hain, D. Kershaw, G. Moore, J. Odell, D. Ollason, D. Povey, V. Valtchev, and P. Woodland. The HTK Book. Entropic Cambridge Research Laboratory, Cambridge, UK, 2002.