1. M. Barthet, G. Fazekas, and M. Sandler. Multidisciplinary perspectives on music emotion recognition: Implications for content and context-based models. In Int'l Symp. Computer Music Modelling & Retrieval, pages 492-507, 2012.
3. R. Cowie, E. Douglas-Cowie, S. Savvidou, E. McMahon, M. Sawey, and M. Schröder. 'Feeltrace': An instrument for recording perceived emotion in real time, 2000.
4. P. Ekman. Basic Emotions, pages 45-60. John Wiley & Sons, Ltd, 2005.
6. K. Hevner. Experimental studies of the elements of expression in music. American Journal of Psychology, 48:246-268, 1936.
7. X. Hu, J. S. Downie, C. Laurier, M. Bay, and A. F. Ehmann. The 2007 MIREX audio mood classification task: Lessons learned. In Proc. Int. Soc. Music Info. Retrieval Conf., pages 462-467, 2008.
8. A. Huq, J. P. Bello, and R. Rowe. Automated music emotion recognition: A systematic evaluation. Journal of New Music Research, 39(3):227-244, 2010.
10. Y. E. Kim, E. M. Schmidt, R. Migneco, B. G. Morton, P. Richardson, J. Scott, J. Speck, and D. Turnbull. Music emotion recognition: A state of the art review. In Proc. Int. Soc. Music Info. Retrieval Conf., 2010.
11. A. Kittur, E. H. Chi, and B. Suh. Crowdsourcing user studies with Mechanical Turk. In Proc. Annual SIGCHI Conf. Human Factors in Computing Systems, CHI '08, pages 453-456, New York, NY, USA, 2008. ACM.
13. A. J. Lonsdale and A. C. North. Why do we listen to music? A uses and gratifications analysis. British Journal of Psychology, 102:108-134, 2011.
14. S. Marsella, J. Gratch, and P. Petta. Computational Models of Emotion, chapter 1.2, pages 21-41. Oxford University Press, Oxford, UK, 2010.
15. G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schröder. The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans. Affective Computing, 3(1):5-17, 2012.
17. M. Quirin, M. Kazén, and J. Kuhl. When nonsense sounds happy or helpless: The Implicit Positive and Negative Affect Test (IPANAT). Journal of Personality and Social Psychology, 97(3):500-516, 2009.
19. K. R. Scherer. What are emotions? And how can they be measured? Social Science Information, 44(4):695-729, 2005.
21. E. M. Schmidt, D. Turnbull, and Y. E. Kim. Feature selection for content-based, time-varying musical emotion regression. In Proc. ACM Int. Conf. Multimedia Information Retrieval, Philadelphia, PA, March 2010.
22. M. Soleymani and M. Larson. Crowdsourcing for affective annotation of video: Development of a viewer-reported boredom corpus. In Workshop on Crowdsourcing for Search Evaluation, SIGIR 2010, Geneva, Switzerland, 2010.
24. J.-C. Wang, Y.-H. Yang, H.-M. Wang, and S.-K. Jeng. The acoustic emotion Gaussians model for emotion-based music annotation and retrieval. In Proc. ACM Multimedia, pages 89-98, 2012.