1. ALLOPENNA, P., MAGNUSON, J., AND TANENHAUS, M. 1998. Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. J. Mem. Lang. 38, 4, 419-439.
2. ARGYLE, M. AND DEAN, J. 1965. Eye contact, distance and affiliation. Sociometry 28, 3, 289-304.
3. ARGYLE, M. AND INGHAM, J. 1972. Gaze, mutual gaze, and proximity. Semiotica 6, 32-49.
4. BALDWIN, D. 1993. Early referential understanding: Infants' ability to recognize referential acts for what they are. Dev. Psych. 29, 5, 832-843.
5. BALLARD, D., HAYHOE, M., POOK, P., AND RAO, R. 1997. Deictic codes for the embodiment of cognition. Behav. Brain Sci. 20, 4, 723-742.
6. BEE, N., WAGNER, J., ANDRE, E., VOGT, T., CHARLES, F., PIZZI, D., AND CAVAZZA, M. 2010a. Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction.
7. BEE, N., WAGNER, J., ANDRE, E., VOGT, T., CHARLES, F., PIZZI, D., AND CAVAZZA, M. 2010b. Multimodal interaction with a virtual character in interactive storytelling. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems. 1535-1536.
8. BREAZEAL, C. AND SCASSELLATI, B. 2000. Infant-like social interactions between a robot and a human caregiver. Adapt. Behav. 8, 1, 49-74.
9. BRENNAN, S., CHEN, X., DICKINSON, C., NEIDER, M., AND ZELINSKY, G. 2008. Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition 106, 3, 1465-1477.
12. CLARK, H. AND KRYCH, M. 2004. Speaking while monitoring addressees for understanding. J. Mem. Lang. 50, 1, 62-81.
15. GRIFFIN, Z. AND BOCK, K. 2000. What the eyes say about speaking. Psych. Sci. 11, 4, 274-279.
17. HANNA, J. AND BRENNAN, S. 2007. Speakers' eye gaze disambiguates referring expressions early during face-to-face conversation. J. Mem. Lang. 57, 4, 596-615.
19. HAYHOE, M. AND BALLARD, D. 2005. Eye movements in natural behavior. Trends Cogn. Sci. 9, 4, 188-194.
20. HAYHOE, M. M., SHRIVASTAVA, A., MRUCZEK, R., AND PELZ, J. B. 2003. Visual memory and motor planning in a natural task. J. Vis. 3, 1, 49-63.
24. LAND, M., MENNIE, N., AND RUSTED, J. 1999. The roles of vision and eye movements in the control of activities of daily living. Perception 28, 11, 1311-1328.
25. MACDORMAN, K. AND ISHIGURO, H. 2006. The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. 7, 3, 297-337.
26. MEYER, A., SLEIDERINK, A., AND LEVELT, W. 1998. Viewing and naming objects: Eye movements during noun phrase production. Cognition 66, 2, B25-B33.
29. NAKANO, Y., REINSTEIN, G., STOCKY, T., AND CASSELL, J. 2003. Towards a model of face-to-face grounding. In Proceedings of the 41st Meeting of the Association for Computational Linguistics. 553-561.
30. PELZ, J., HAYHOE, M., AND LOEBER, R. 2001. The coordination of eye, head, and hand movements in a natural task. Exper. Brain Resear. 139, 3, 266-277.
31. QU, S. AND CHAI, J. 2010. Context-based word acquisition for situated dialogue in a virtual world. J. Artif. Intel. Resear. 37, 1, 247-278.
32. RAYNER, K. 1998. Eye movements in reading and information processing: 20 years of research. Psych. Bull. 124, 372-422.
34. SCHEUTZ, M., SCHERMERHORN, P., KRAMER, J., AND ANDERSON, D. 2007. First steps toward natural human-like HRI. Auton. Robots 22, 4, 411-423.
35. SHINTEL, H. AND KEYSAR, B. 2009. Less is more: A minimalist account of joint action in communication. Topics Cogn. Sci. 1, 2, 260-273.
36. SHOCKLEY, K., SANTANA, M., AND FOWLER, C. 2003. Mutual interpersonal postural constraints are involved in cooperative conversation. J. Exper. Psych. 29, 2, 326-332.
38. TANENHAUS, M., SPIVEY-KNOWLTON, M., EBERHARD, K., AND SEDIVY, J. 1995. Integration of visual and linguistic information in spoken language comprehension. Science 268, 5217, 1632-1634.
39. TRAFTON, J., CASSIMATIS, N., BUGAJSKA, M., BROCK, D., MINTZ, F., AND SCHULTZ, A. 2005. Enabling effective human-robot interaction using perspective-taking in robots. IEEE Trans. Syst. Man Cyber. 35, 4, 460-470.
40. VERTEGAAL, R. 2003. Attentive user interfaces. Comm. ACM 46, 3, 30-33.
41. YAMAZAKI, A., YAMAZAKI, K., KUNO, Y., BURDELSKI, M., KAWASHIMA, M., AND KUZUOKA, H. 2008. Precision timing in human-robot interaction: Coordination of head movement and utterance. In Proceedings of the Conference on Human Factors in Computing Systems.
42. YU, C. AND BALLARD, D. 2004. A multimodal learning interface for grounding spoken language in sensory perceptions. ACM Trans. Appl. Percept. 1, 1, 57-80.
44. YU, C., SMITH, L., SHEN, H., PEREIRA, A., AND SMITH, T. 2009a. Active information selection: Visual attention through the hands. IEEE Trans. Auton. Ment. Devel. 2, 141-151.
45. YU, C., SMITH, T., HIDAKA, S., SCHEUTZ, M., AND SMITH, L. 2010b. A data-driven paradigm to understand multimodal communication in human-human and human-robot interaction. In Advances in Intelligent Data Analysis, vol. 9, 232-244.
46. YU, C., ZHONG, Y., SMITH, T., PARK, I., AND HUANG, W. 2009b. Visual data mining of multimedia data for social and behavioral studies. Inf. Visual. 8, 1, 56-70.
47. ZHANG, H., FRICKER, D., SMITH, T., AND YU, C. 2010. Real-time adaptive behaviors in multimodal human-avatar interactions. In Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction.