1. Attina, V., Beautemps, D., and Cathiard, M.-A. (2002). "Coordination of hand and orofacial movements for CV sequences in French Cued Speech," International Conference on Speech and Language Processing, Boulder, pp. 1945-1948.
2. Attina, V., Beautemps, D., and Cathiard, M.-A. (2003a). "Temporal organization of French Cued Speech production," International Congress of Phonetic Sciences, Barcelona, Spain.
3. Attina, V., Beautemps, D., Cathiard, M.-A., and Odisio, M. (2003b). "Towards an audiovisual synthesizer for Cued Speech: Rules for CV French syllables," Auditory-Visual Speech Processing, St Jorioz, France, pp. 227-232.
4. Attina, V., Beautemps, D., Cathiard, M.-A., and Odisio, M. (2004). "A pilot study of temporal organization in Cued Speech production of French syllables: Rules for a Cued Speech synthesizer," Speech Commun. 44, 197-214.
5. Badin, P., Bailly, G., Revéret, L., Baciu, M., Segebarth, C., and Savariaux, C. (2002). "Three-dimensional linear articulatory modeling of tongue, lips and face based on MRI and video images," J. Phonetics 30(3), 533-553.
6. Bailly, G., and Badin, P. (2002). "Seeing tongue movements from outside," International Conference on Speech and Language Processing, Boulder, Colorado, pp. 1913-1916.
7. Bailly, G., Gibert, G., and Odisio, M. (2002). "Evaluation of movement generation systems using the point-light technique," IEEE Workshop on Speech Synthesis, Santa Monica, CA, pp. 27-30.
8. Bailly, G., Holm, B., and Aubergé, V. (2004). "A trainable prosodic model: Learning the contours implementing communicative functions within a superpositional model of intonation," International Conference on Speech and Language Processing, Jeju, Korea, pp. 1425-1428.
9. Bailly, G., and Holm, B. (to be published). "SFC: A trainable prosodic model," Speech Commun.
10. Bailly, G., Bérar, M., Elisei, F., and Odisio, M. (2003). "Audiovisual speech synthesis," International Journal of Speech Technology 6(4), 331-346.
11. Bernstein, L. E., Demorest, M. E., and Tucker, P. E. (2000). "Speech perception without hearing," Percept. Psychophys. 62, 233-252.
12. Campbell, N. (1997). "Synthesizing spontaneous speech," in Computing Prosody: Computational Models for Processing Spontaneous Speech, edited by Y. Sagisaka, N. Campbell, and N. Higuchi (Springer-Verlag, Berlin), pp. 165-186.
13. Castiello, U., Paulignan, Y., and Jeannerod, M. (1991). "Temporal dissociation of motor responses and subjective awareness," Brain 114, 2639-2655.
14. Cornett, R. O. (1967). "Cued Speech," Am. Ann. Deaf 112, 3-13.
15. Cornett, R. O. (1988). "Cued Speech, manual complement to lipreading, for visual reception of spoken language. Principles, practice and prospects for automation," Acta Otorhinolaryngol. Belg. 42, 375-384.
16. Cornett, R. O., and Daisey, M. E. (1992). The Cued Speech Resource Book for Parents of Deaf Children (The National Cued Speech Association, Inc., Raleigh, NC).
17. Duchnowski, P., Lum, D. S., Krause, J. C., Sexton, M. G., Bratakos, M. S., and Braida, L. D. (2000). "Development of speechreading supplements based on automatic speech recognition," IEEE Trans. Biomed. Eng. 47(4), 487-496.
18. Elisei, F., Odisio, M., Bailly, G., and Badin, P. (2001). "Creating and controlling video-realistic talking heads," Auditory-Visual Speech Processing Workshop, Scheelsminde, Denmark, pp. 90-97.
19. Engwall, O., and Beskow, J. (2003). "Resynthesis of 3D tongue movements from facial data," EuroSpeech, Geneva.
20. Hunt, A. J., and Black, A. W. (1996). "Unit selection in a concatenative speech synthesis system using a large speech database," International Conference on Acoustics, Speech and Signal Processing, Atlanta, GA, pp. 373-376.
21. Jiang, J., Alwan, A., Bernstein, L., Keating, P., and Auer, E. (2000). "On the correlation between facial movements, tongue movements and speech acoustics," International Conference on Speech and Language Processing, Beijing, China, pp. 42-45.
22. Kuratate, T., Munhall, K. G., Rubin, P. E., Vatikiotis-Bateson, E., and Yehia, H. (1999). "Audio-visual synthesis of talking faces from speech production correlates," EuroSpeech, pp. 1279-1282.
23. Leybaert, J. (2000). "Phonology acquired through the eyes and spelling in deaf children," J. Exp. Child Psychol. 75, 291-318.
24. Leybaert, J. (2003). "The role of Cued Speech in language processing by deaf children: An overview," Auditory-Visual Speech Processing, St Jorioz, France, pp. 179-186.
25. Minnis, S., and Breen, A. P. (1998). "Modeling visual coarticulation in synthetic talking heads using a lip motion unit inventory with concatenative synthesis," International Conference on Speech and Language Processing, Sydney, Australia, pp. 759-762.
26. Nicholls, G., and Ling, D. (1982). "Cued Speech and the reception of spoken language," J. Speech Hear. Res. 25, 262-269.
27. Odisio, M., and Bailly, G. (2004). "Shape and appearance models of talking faces for model-based tracking," Speech Commun. 44, 63-82.
28. Owens, E., and Blazek, B. (1985). "Visemes observed by hearing-impaired and normal-hearing adult viewers," J. Speech Hear. Res. 28, 381-393.
29. Revéret, L., Bailly, G., and Badin, P. (2000). "MOTHER: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation," International Conference on Speech and Language Processing, Beijing, China, pp. 755-758.
30. Uchanski, R., Delhorne, L., Dix, A., Braida, L., Reed, C., and Durlach, N. (1994). "Automatic speech recognition to aid the hearing impaired: Prospects for the automatic generation of cued speech," J. Rehabil. Res. Dev. 31, 20-41.
31. Yehia, H. C., Rubin, P. E., and Vatikiotis-Bateson, E. (1998). "Quantitative association of vocal-tract and facial behavior," Speech Commun. 26, 23-43.