[1] C. Busso and S. Narayanan, "Joint analysis of the emotional fingerprint in the face and speech: A single subject study," in Proc. Int. Workshop Multimedia Signal Process. (MMSP 2007), Chania, Crete, Greece, Oct. 2007, pp. 43-47.
[2] K. Scherer, "Vocal communication of emotion: A review of research paradigms," Speech Commun., vol. 40, no. 1-2, pp. 227-256, Apr. 2003.
[3] H. McGurk and J. W. MacDonald, "Hearing lips and seeing voices," Nature, vol. 264, pp. 746-748, Dec. 1976.
[4] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, T. Bechet, B. Douville, S. Prevost, and M. Stone, "Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents," in Proc. Comput. Graphics (ACM SIGGRAPH '94), Orlando, FL, 1994, pp. 413-420.
[6] L. Valbonesi, R. Ansari, D. McNeill, F. Quek, S. Duncan, K. McCullough, and R. Bryll, "Multimodal signal analysis of prosody and hand motion: Temporal correlation of speech and gestures," in Proc. Eur. Signal Process. Conf. (EUSIPCO 02), Toulouse, France, Sep. 2002, pp. 75-78.
[7] J. Cassell, T. Bickmore, L. Campbell, H. Vilhjálmsson, and H. Yan, "More than just a pretty face: Conversational protocols and the affordances of embodiment," Knowl.-Based Syst., vol. 14, no. 1-2, pp. 55-64, Mar. 2001.
[8] E. Marsi and F. van Rooden, "Expressing uncertainty with a talking head," in Proc. Workshop Multimodal Output Generation (MOG 2007), Aberdeen, U.K., Jan. 2007, pp. 105-116.
[9] I. Poggi and C. Pelachaud, "Performative facial expressions in animated faces," in Embodied Conversational Agents, J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, Eds. Cambridge, MA: MIT Press, 2000, pp. 154-188.
[10] M. Foster, "Comparing rule-based and data-driven selection of facial displays," in Proc. Workshop Embodied Lang. Process., Assoc. for Comput. Linguist., Prague, Czech Republic, Jun. 2007, pp. 1-8.
[11] C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan, "Rigid head motion in expressive speech animation: Analysis and synthesis," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 3, pp. 1075-1086, Mar. 2007.
[12] J. Cassell, Y. Nakano, T. Bickmore, C. Sidner, and C. Rich, "Non-verbal cues for discourse structure," in Proc. 39th Annu. Meeting Assoc. for Comput. Linguist. (ACL 2001), Toulouse, France, Jul. 2001, pp. 114-123.
[13] M. Stone, D. DeCarlo, I. Oh, C. Rodriguez, A. Stere, A. Lees, and C. Bregler, "Speaking with hands: Creating animated conversational characters from recordings of human performance," ACM Trans. Graphics, vol. 23, pp. 506-513, Aug. 2004.
[14] C. Busso, Z. Deng, U. Neumann, and S. Narayanan, "Natural head motion synthesis driven by acoustic prosodic features," Comput. Animation and Virtual Worlds, vol. 16, no. 3-4, pp. 283-290, Jul. 2005.
[15] C. Busso, Z. Deng, U. Neumann, and S. Narayanan, "Learning expressive human-like head motion sequences from speech," in Data-Driven 3D Facial Animation, Z. Deng and U. Neumann, Eds. Surrey, U.K.: Springer-Verlag, 2007, pp. 113-131.
[16] C. Busso and S. Narayanan, "Interrelation between speech and facial gestures in emotional utterances: A single subject study," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 8, pp. 2331-2347, Nov. 2007.
[17] E. Vatikiotis-Bateson, K. Munhall, Y. Kasahara, F. Garcia, and H. Yehia, "Characterizing audiovisual information during speech," in Proc. 4th Int. Conf. Spoken Lang. Process. (ICSLP 96), Philadelphia, PA, Oct. 1996, vol. 3, pp. 1485-1488.
[18] H. Yehia, P. Rubin, and E. Vatikiotis-Bateson, "Quantitative association of vocal-tract and facial behavior," Speech Commun., vol. 26, no. 1-2, pp. 23-43, 1998.
[19] B. Granström and D. House, "Audiovisual representation of prosody in expressive speech communication," Speech Commun., vol. 46, no. 3-4, pp. 473-484, Jul. 2005.
[20] C. Cavé, I. Guaïtella, R. Bertrand, S. Santi, F. Harlay, and R. Espesser, "About the relationship between eyebrow movements and F0 variations," in Proc. Int. Conf. Spoken Lang. (ICSLP), Philadelphia, PA, Oct. 1996, vol. 4, pp. 2175-2178.
[21] M. L. Flecha-García, "Eyebrow raises in dialogue and their relation to discourse structure, utterance function and pitch accents in English," Speech Commun., vol. 52, pp. 542-554, Jun. 2010.
[22] H. P. Graf, E. Cosatto, V. Strom, and F. J. Huang, "Visual prosody: Facial movements accompanying speech," in Proc. IEEE Int. Conf. Autom. Face and Gesture Recognition, Washington, DC, May 2002, pp. 396-401.
[23] K. G. Munhall, J. A. Jones, D. E. Callan, T. Kuratate, and E. Vatikiotis-Bateson, "Visual prosody and speech intelligibility: Head movement improves auditory speech perception," Psychol. Sci., vol. 15, no. 2, pp. 133-137, Feb. 2004.
[24] J. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhjálmsson, and H. Yan, "Embodiment in conversational interfaces: Rea," in Proc. Int. Conf. Human Factors Comput. Syst. (CHI-99), Pittsburgh, PA, May 1999, pp. 520-527.
[25] C. Pelachaud, N. Badler, and M. Steedman, "Generating facial expressions for speech," Cognitive Sci., vol. 20, no. 1, pp. 1-46, Jan. 1996.
[26] E. Bevacqua, M. Mancini, R. Niewiadomski, and C. Pelachaud, "An expressive ECA showing complex emotions," in Proc. Artif. Intell. and Simulation of Behaviour (AISB 2007) Annu. Conv., Newcastle, U.K., Apr. 2007, pp. 208-216.
[27] D. DeCarlo, C. Revilla, M. Stone, and J. Venditti, "Making discourse visible: Coding and animating conversational facial displays," in Proc. Comput. Animat. (CA 2002), Geneva, Switzerland, Jun. 2002, pp. 11-16.
[28] H. Yehia, T. Kuratate, and E. Vatikiotis-Bateson, "Facial animation and head motion driven by speech acoustics," in Proc. 5th Seminar Speech Production: Models and Data, Kloster Seeon, Bavaria, Germany, May 2000, pp. 265-268.
[29] S. Morishima and H. Harashima, "A media conversion from speech to facial image for intelligent man-machine interface," IEEE J. Sel. Areas Commun., vol. 9, no. 4, pp. 594-600, May 1991.
[30] R. Rao, T. Chen, and R. Mersereau, "Audio-to-visual conversion for multimedia communication," IEEE Trans. Ind. Electron., vol. 45, no. 1, pp. 15-22, Feb. 1998.
[31] K. Choi, Y. Luo, and J. Hwang, "Hidden Markov model inversion for audio-to-visual conversion in an MPEG-4 facial animation system," J. VLSI Signal Process., vol. 29, no. 1-2, pp. 51-61, Aug. 2001.
[32] G. Zoric, "Hybrid approach to real-time speech driven facial gesturing of virtual characters," Ph.D. dissertation, Univ. of Zagreb, Zagreb, Croatia, Jul. 2010.
[33] A. V. Nefian, L. Liang, X. Pi, X. Liu, and K. Murphy, "Dynamic Bayesian networks for audio-visual speech recognition," EURASIP J. Appl. Signal Process., vol. 2002, pp. 1274-1288, Jan. 2002.
[34] J. Xue, J. Borgstrom, J. Jiang, L. Bernstein, and A. Alwan, "Acoustically-driven talking face synthesis using dynamic Bayesian networks," in Proc. IEEE Int. Conf. Multimedia and Expo (ICME 2006), Toronto, ON, Canada, Jul. 2006, pp. 1165-1168.
[35] Y. Cao, W. Tien, P. Faloutsos, and F. Pighin, "Expressive speech-driven facial animation," ACM Trans. Graphics, vol. 24, pp. 1283-1302, Oct. 2005.
[36] C. Busso, M. Bulut, C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. Chang, S. Lee, and S. Narayanan, "IEMOCAP: Interactive emotional dyadic motion capture database," J. Lang. Resources Eval., vol. 42, no. 4, pp. 335-359, Dec. 2008.
[38] K. Arun, T. Huang, and S. Blostein, "Least-squares fitting of two 3-D point sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. 9, no. 5, pp. 698-700, Sep. 1987.
[39] N. Sarris, N. Grammalidis, and M. Strintzis, "FAP extraction using three-dimensional motion estimation," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 10, pp. 865-876, Oct. 2002.
[40] P. Boersma and D. Weeninck, "Praat, a system for doing phonetics by computer," Inst. of Phonetic Sci., Univ. of Amsterdam, Amsterdam, The Netherlands, Tech. Rep. 132, 1996 [Online]. Available: http://www.praat.org
[42] Z. Ghahramani and M. I. Jordan, "Factorial hidden Markov models," Mach. Learn., vol. 29, no. 2-3, pp. 245-273, Nov. 1997.
[43] K. Murphy, "Dynamic Bayesian networks: Representation, inference and learning," Ph.D. dissertation, Univ. of California, Berkeley, Fall 2002.
[45] K. Balci, "Xface: MPEG-4 based open source toolkit for 3D facial animation," in Proc. Conf. Adv. Vis. Interfaces (AVI 2004), Gallipoli, Italy, May 2004, pp. 399-402.
[46] C. Dehon, P. Filzmoser, and C. Croux, "Robust methods for canonical correlation analysis," in Data Analysis, Classification, and Related Methods. Berlin, Germany: Springer-Verlag, 2000, pp. 321-326.
[47] C. Busso, S. Lee, and S. Narayanan, "Analysis of emotionally salient aspects of fundamental frequency for emotion detection," IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 4, pp. 582-596, May 2009.