1. S. Petridis and M. Pantic, "Fusion of audio and visual cues for laughter detection," in Proc. ACM CIVR, 2008, pp. 329-337.
2. S. Petridis and M. Pantic, "Audiovisual laughter detection based on temporal features," in Proc. ACM ICMI, 2008, pp. 37-44.
3. B. Reuderink, M. Poel, K. Truong, R. Poppe, and M. Pantic, "Decision-level fusion for audio-visual laughter detection," LNCS, vol. 5237, pp. 137-148, 2008.
4. H. C. Yehia, T. Kuratate, and E. Vatikiotis-Bateson, "Linking facial animation, head motion and speech acoustics," Journal of Phonetics, vol. 30, no. 3, pp. 555-568, 2002.
5. C. Busso and S. Narayanan, "Interrelation between speech and facial gestures in emotional utterances: A single subject study," IEEE Trans. Audio, Speech and Language Proc., vol. 15, no. 8, pp. 2331-2347, 2007.
6. M. S. Craig, P. Lieshout, and W. Wong, "A linear model of acoustic-to-facial mapping: Model parameters, data set size, and generalization across speakers," J. Acoustical Soc. America, vol. 124, no. 5, pp. 3183-3190, 2008.
8. I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, and V. Karaiskos, "The AMI meeting corpus," in Int'l. Conf. on Methods and Techniques in Behavioral Research, 2005, pp. 137-140.
9. E. Douglas-Cowie, R. Cowie, C. Cox, N. Amir, and D. Heylen, "The Sensitive Artificial Listener: An induction technique for generating emotionally coloured conversation," in Workshop on Corpora for Research on Emotion and Affect, 2008, pp. 1-4.
11. D. G. Jimenez and J. L. A. Castro, "Toward pose-invariant 2-D face recognition through point distribution models and facial symmetry," IEEE Trans. Inform. Forensics and Security, vol. 2, no. 3, pp. 413-429, 2007.
12. S. Petridis, H. Gunes, S. Kaltwang, and M. Pantic, "Static vs. dynamic modelling of human nonverbal behavior from multiple cues and modalities," in Proc. ACM ICMI, 2009, pp. 23-30.