[1] N. F. Dixon and L. Spitz, "The detection of audiovisual desynchrony," Perception, vol. 9, pp. 719-721, 1980.
[2] H. McGurk and J. MacDonald, "Hearing lips and seeing voices," Nature, vol. 264, pp. 746-748, 1976.
[3] D. H. Whalen, "Coarticulation is largely planned," Journal of Phonetics, vol. 18, pp. 3-35, 1990.
[4] K. W. Grant, V. van Wassenhove, and D. Poeppel, "Discrimination of auditory-visual synchrony," presented at Audio Visual Speech Processing, St Jorioz, France, 2003.
[5] G. Bailly, M. Bérar, F. Elisei, and M. Odisio, "Audiovisual speech synthesis," International Journal of Speech Technology, vol. 6, pp. 331-346, 2003.
[6] J. Beskow, "Rule-based visual speech synthesis," presented at Eurospeech, Madrid, Spain, 1995.
[7] M. M. Cohen and D. W. Massaro, "Modeling coarticulation in synthetic visual speech," in Models and Techniques in Computer Animation, D. Thalmann and N. Magnenat-Thalmann, Eds. Tokyo: Springer-Verlag, 1993, pp. 141-155.
[8] G. Bailly, G. Gibert, and M. Odisio, "Evaluation of movement generation systems using the point-light technique," presented at IEEE Workshop on Speech Synthesis, Santa Monica, CA, 2002.
[9] T. Kaburagi and M. Honda, "A model of articulator trajectory formation based on the motor tasks of vocal-tract shapes," Journal of the Acoustical Society of America, vol. 99, pp. 3154-3170, 1996.
[10] C. Weiss, "Framework for data-driven video-realistic audio-visual speech synthesis," presented at Int. Conf. on Language Resources and Evaluation, Lisbon, 2004.
[11] M. Tamura, T. Masuko, T. Kobayashi, and K. Tokuda, "Visual speech synthesis based on parameter generation from HMM: speech-driven and text-and-speech-driven approaches," presented at Auditory-Visual Speech Processing Workshop, Terrigal, Sydney, Australia, 1998.
[12] H. Zen, K. Tokuda, and T. Kitamura, "An introduction of trajectory model into HMM-based speech synthesis," presented at ISCA Speech Synthesis Workshop, Pittsburgh, PA, 2004.
[13] G. Gravier, G. Potamianos, and C. Neti, "Asynchrony modeling for audio-visual speech recognition," presented at Human Language Technology Conference, San Diego, CA, 2002.
[14] T. J. Hazen, "Visual model structures and synchrony constraints for audio-visual speech recognition," IEEE Trans. on Speech and Audio Processing, 2005.
[15] T. Kuratate, K. G. Munhall, P. E. Rubin, E. Vatikiotis-Bateson, and H. Yehia, "Audio-visual synthesis of talking faces from speech production correlates," presented at Eurospeech, 1999.
[16] L. Revéret, G. Bailly, and P. Badin, "MOTHER: a new generation of talking heads providing a flexible articulatory control for video-realistic speech animation," presented at International Conference on Spoken Language Processing, Beijing, China, 2000.
[17] F. Elisei, M. Odisio, G. Bailly, and P. Badin, "Creating and controlling video-realistic talking heads," presented at Auditory-Visual Speech Processing Workshop, Scheelsminde, Denmark, 2001.
[18] G. Bailly, F. Elisei, P. Badin, and C. Savariaux, "Degrees of freedom of facial movements in face-to-face conversational speech," presented at International Workshop on Multimodal Corpora, Genoa, Italy, 2006.
[19] P. Badin, G. Bailly, L. Revéret, M. Baciu, C. Segebarth, and C. Savariaux, "Three-dimensional linear articulatory modeling of tongue, lips and face based on MRI and video images," Journal of Phonetics, vol. 30, pp. 533-553, 2002.
[20] R. Donovan, "Trainable speech synthesis," Ph.D. dissertation, Univ. Eng. Dept., University of Cambridge, Cambridge, UK, 1996.
[21] M. Tamura, S. Kondo, T. Masuko, and T. Kobayashi, "Text-to-audio-visual speech synthesis based on parameter generation from HMM," presented at Eurospeech, Budapest, Hungary, 1999.
[22] K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, "Speech parameter generation algorithms for HMM-based speech synthesis," presented at IEEE International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 2000.
[23] O. Govokhina, G. Bailly, G. Breton, and P. Bagshaw, "TDA: A new trainable trajectory formation system for facial animation," presented at InterSpeech, Pittsburgh, PA, 2006.