1. A. Camurri, P. Coletta, G. Varni, and S. Ghisio. Developing multimodal interactive systems with EyesWeb XMI. In NIME '07: Proc. of the 7th International Conference on New Interfaces for Musical Expression, pages 305-308, New York, NY, USA, 2007. ACM.
2. F. Charles, D. Pizzi, M. Cavazza, T. Vogt, and E. André. Emotional input for character-based interactive storytelling. In The 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Budapest, Hungary, 2009.
4. S. W. Gilroy, M. Cavazza, R. Chaignon, S.-M. Mäkelä, M. Niranen, E. André, T. Vogt, J. Urbain, H. Seichter, M. Billinghurst, and M. Benayoun. An affective model of user experience for interactive art. In ACE '08: Proc. of the 2008 International Conference on Advances in Computer Entertainment Technology, pages 107-110, New York, NY, USA, 2008. ACM.
5. R. E. Kaliouby and P. Robinson. Real-time inference of complex mental states from facial expressions and head gestures. Computer Vision and Pattern Recognition Workshop, 10:154, 2004.
7. C. Küblbeck and A. Ernst. Face detection and tracking in video sequences using the modified census transformation. Image and Vision Computing, 24(6):564-572, 2006 (special issue on Face Processing in Video Sequences).
8. L. A. Liikkanen, G. Jacucci, E. Huvio, T. Laitinen, and E. André. Exploring emotions and multimodality in digitally augmented puppeteering. In AVI '08: Proc. of the Working Conference on Advanced Visual Interfaces, pages 339-342, New York, NY, USA, 2008. ACM.
9. L. Maat and M. Pantic. Gaze-X: Adaptive, affective, multimodal interface for single-user office scenarios. Pages 251-271, 2007.
11. M. Pantic and L. J. M. Rothkrantz. Toward an affect-sensitive multimodal human-computer interaction. Proc. of the IEEE, pages 1370-1390, 2003.
12. R. Picard. Affective computing. Technical Report 321, MIT Media Laboratory, Perceptual Computing Section, November 1995.
14. M. Serrano, L. Nigay, J.-Y. L. Lawson, A. Ramsay, R. Murray-Smith, and S. Denef. The OpenInterface framework: A tool for multimodal interaction. In CHI '08: CHI '08 Extended Abstracts on Human Factors in Computing Systems, pages 3501-3506, New York, NY, USA, 2008. ACM.
15. T. Vogt, E. André, and N. Bee. EmoVoice - A framework for online recognition of emotions from voice. In Proc. of Workshop on Perception and Interactive Technologies for Speech-Based Systems, Kloster Irsee, Germany, June 2008. Springer.