2014, Pages 5412-5416

On-line continuous-time music mood regression with deep recurrent neural networks

Author keywords

emotion recognition; music information retrieval; recurrent neural networks

Indexed keywords

ARTIFICIAL INTELLIGENCE; CONTINUOUS TIME SYSTEMS; FEEDFORWARD NEURAL NETWORKS; RECURRENT NEURAL NETWORKS; SIGNAL PROCESSING;

EID: 84905226811     PISSN: 1520-6149     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1109/ICASSP.2014.6854637     Document Type: Conference Paper
Times cited: 63
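
The title and keywords of this record describe framewise ("continuous-time") regression of mood dimensions such as arousal and valence from audio features with a deep recurrent neural network. For orientation only, the sketch below shows what such a setup can look like; it is a minimal illustration assuming PyTorch, an LSTM of arbitrary size, and random stand-in tensors for acoustic features and mood annotations, not the architecture or data of the indexed paper.

```python
# Minimal illustrative sketch (not the indexed paper's implementation):
# framewise regression of two mood dimensions (arousal, valence) from
# acoustic feature frames with an LSTM recurrent network in PyTorch.
# Feature dimensionality, layer sizes, and data are hypothetical.
import torch
import torch.nn as nn

class MoodRegressor(nn.Module):
    def __init__(self, n_features=120, n_hidden=128, n_layers=2, n_targets=2):
        super().__init__()
        # Unidirectional LSTM: the prediction at frame t depends only on
        # frames 1..t, which permits on-line (causal) operation.
        self.rnn = nn.LSTM(n_features, n_hidden, num_layers=n_layers,
                           batch_first=True)
        self.out = nn.Linear(n_hidden, n_targets)  # arousal, valence per frame

    def forward(self, x):
        # x: (batch, time, n_features) -> (batch, time, n_targets)
        h, _ = self.rnn(x)
        return self.out(h)

# Toy training step on random data, standing in for per-frame acoustic
# features and continuous mood annotations.
model = MoodRegressor()
features = torch.randn(8, 60, 120)   # 8 clips, 60 frames, 120 features each
targets = torch.randn(8, 60, 2)      # framewise arousal/valence labels
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = criterion(model(features), targets)
loss.backward()
optimizer.step()
```

The causal (unidirectional) recurrence is the point of interest for an on-line setting: each frame's mood estimate is produced without access to future audio.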

References (21)
  • 1. Y. Feng, Y. Zhuang, and Y. Pan, "Popular music retrieval by detecting mood," in Proc. of SIGIR, 2003, pp. 375-376.
  • 3. S. Rho, B.J. Han, and E. Hwang, "SVR-based music mood classification and context-based music recommendation," in Proc. of ACM Multimedia, Beijing, China, 2009, pp. 713-716.
  • 4. B. Schuller, F. Weninger, and J. Dorfner, "Multi-modal non-prototypical music mood analysis in continuous space: Reliability and performances," in Proc. of the 12th International Society for Music Information Retrieval Conference (ISMIR), Miami, FL, 2011, pp. 759-764.
  • 5. E.M. Schmidt and Y.E. Kim, "Prediction of time-varying musical mood distributions from audio," in Proc. of ISMIR, 2010, pp. 465-470.
  • 6. E. Coutinho and A. Cangelosi, "A neural network model for the prediction of musical emotions," in Advances in Cognitive Systems, S. Nefti-Meziani and J. Grey, Eds., pp. 331-368. IET Publisher, London, UK, 2010, ISBN: 978-1849190756.
  • 8. E.M. Schmidt and Y.E. Kim, "Modeling musical emotion dynamics with conditional random fields," in Proc. of ISMIR, Miami, FL, USA, 2011, pp. 777-782.
  • 10. S. Bock and M. Schedl, "Polyphonic piano note transcription with recurrent neural networks," in Proc. of ICASSP, Kyoto, Japan, 2012, pp. 121-124.
  • 11. F. Weninger, F. Eyben, B.W. Schuller, M. Mortillaro, and K.R. Scherer, "On the acoustics of emotion in audio: What speech, music and sound have in common," Frontiers in Emotion Science, vol. 4, Article 292, pp. 1-12, May 2013.
  • 14. E.M. Schmidt, J. Scott, and Y.E. Kim, "Feature learning in dynamic environments: Modeling the acoustic structure of musical emotion," in Proc. of ISMIR, 2012, pp. 325-330.
  • 15. B. Schuller, S. Steidl, A. Batliner, A. Vinciarelli, K. Scherer, F. Ringeval, M. Chetouani, et al., "The INTERSPEECH 2013 Computational Paralinguistics Challenge: Social Signals, Conflict, Emotion, Autism," in Proc. of INTERSPEECH, Lyon, France, 2013, pp. 148-152.
  • 17. F. Gers, J. Schmidhuber, and F. Cummins, "Learning to forget: Continual prediction with LSTM," Neural Computation, vol. 12, no. 10, pp. 2451-2471, 2000.
  • 18. P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proc. of ICML, 2008, pp. 1096-1103.
  • 20. F. Weninger, F. Eyben, and B. Schuller, "The TUM approach to the MediaEval music emotion task using generic affective audio features," in Proc. of MediaEval 2013, held in conjunction with ACM MM, Barcelona, Spain, October 2013.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.