Volume , Issue , 2013, Pages 987-996

Static saliency vs. Dynamic saliency: A comparative study

Author keywords

Camera motion; Cinematography; Dynamic saliency; Static saliency

Indexed keywords

CAMERA MOTIONS; CINEMATOGRAPHY; COMPARATIVE STUDIES; DOCUMENTARY FILMS; HEARING IMPAIRMENTS; IMAGE SALIENCIES; STATE-OF-THE-ART METHODS; STATIC SALIENCY

EID: 84887455190     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/2502081.2502128     Document Type: Conference Paper
Times cited: (72)

References (33)
  • 1. S. Avidan and A. Shamir. Seam carving for content-aware image resizing. TOG, 2007.
  • 2. O. Boiman and M. Irani. Detecting irregularities in images and in video. IJCV, 2007.
  • 3. R. Carmi and L. Itti. Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 2006.
  • 4. M. Cerf, J. Harel, W. Einhäuser, and C. Koch. Predicting human gaze using low-level saliency combined with face detection. In NIPS, 2007.
  • 5. W. Cheng, W. Chu, J. Kuo, and J. Wu. Automatic video region-of-interest determination based on user attention model. In ISCAS, 2005.
  • 7. L. Elazary and L. Itti. A Bayesian model for efficient visual search and recognition. Vision Research, 2010.
  • 8. B. Ghanem, T. Zhang, and N. Ahuja. Robust video registration applied to field-sports video analysis. In ICASSP, 2012.
  • 9. J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In NIPS, 2006.
  • 10. R. Hong, M. Wang, M. Xu, S. Yan, and T. Chua. Dynamic captioning: Video accessibility enhancement for hearing impairment. In ACM Multimedia, 2010.
  • 12. W. Kienzle, M. Franz, B. Scholkopf, and F. Wichmann. Center-surround patterns emerge as optimal predictors for human saccade targets. Journal of Vision, 2009.
  • 14. O. Le Meur, P. Le Callet, and D. Barba. Predicting visual fixations on video based on low-level visual features. Vision Research, 2007.
  • 15. J. Li, Y. Tian, T. Huang, and W. Gao. Probabilistic multi-task learning for visual saliency estimation in video. IJCV, 2010.
  • 16. R. Likert. A technique for the measurement of attitudes. Archives of Psychology, 1932.
  • 17. S. Marat, M. Guironnet, and D. Pellerin. Video summarization using a visual attention model. In ESPC, 2007.
  • 19. S. Mathe and C. Sminchisescu. Dynamic eye movement datasets and learnt saliency models for visual action recognition. In ECCV, 2012.
  • 21. N. Murray, M. Vanrell, X. Otazu, and C. A. Párraga. Saliency estimation using a non-parametric low-level vision model. In CVPR, 2011.
  • 23. T. Nguyen, S. Liu, B. Ni, J. Tan, Y. Rui, and S. Yan. Towards decrypting attractiveness via multi-modality cues. TOMCCAP, 2013.
  • 24. A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. IJCV, 2001.
  • 25. R. Peters and L. Itti. Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention. In CVPR, 2007.
  • 27. B. Russell, A. Torralba, K. Murphy, and W. Freeman. LabelMe: A database and web-based tool for image annotation. IJCV, 2008.
  • 28. B. Tatler. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 2007.
  • 31. Y. Zhai and M. Shah. Visual attention detection in video sequences using spatiotemporal cues. In ACM Multimedia, 2006.
  • 33. Q. Zhao and C. Koch. Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost. Journal of Vision, 2012.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.