Volume , Issue , 2012, Pages 1194-1201

A database for fine grained activity detection of cooking activities

Author keywords

[No Author keywords available]

Indexed keywords

ACTIVITY DETECTION; ACTIVITY RECOGNITION; BODY MODELS; BODY MOTIONS; DATA SETS; FINE GRAINED; HIGH RESOLUTION; HOLISTIC APPROACH; VIDEO FEATURES;

EID: 84866710901     PISSN: 10636919     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/CVPR.2012.6247801     Document Type: Conference Paper
Times cited: 594

References (44)
  • 1. J. Aggarwal and M. Ryoo. Human activity analysis: A review. ACM Comput. Surv., 43, Apr. 2011.
  • 3. M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009.
  • 4. O. Aubert and Y. Prié. Advene: An open-source framework for integrating and visualising audiovisual metadata. In ACM Multimedia, 2007.
  • 5. W. Brendel and S. Todorovic. Learning spatiotemporal graphs of human activities. In ICCV, 2011.
  • 6. B. Chakraborty, M. B. Holte, T. B. Moeslund, J. Gonzalez, and F. X. Roca. A selective spatio-temporal interest point detector for human action recognition in complex scenes. In ICCV, 2011.
  • 7. N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. In ECCV, 2006.
  • 9. P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32, 2010.
  • 10. V. Ferrari, M. Marin, and A. Zisserman. Progressive search space reduction for human pose estimation. In CVPR, 2008.
  • 13. H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In ICCV, 2011.
  • 15. I. Laptev. On space-time interest points. IJCV, 2005.
  • 17. I. Laptev and P. Pérez. Retrieving actions in movies. In ICCV, 2007.
  • 18. J. G. Liu, J. B. Luo, and M. Shah. Recognizing realistic actions from videos "in the wild". In CVPR, 2009.
  • 20. R. Messing, C. Pal, and H. Kautz. Activity recognition using the velocity histories of tracked keypoints. In ICCV, 2009.
  • 21. P. Natarajan and R. Nevatia. View and scale invariant action recognition using multiview shape-flow models. In CVPR, 2008.
  • 22. J. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments for activity classification. In ECCV, 2010.
  • 23. S. Oh et al. A large-scale benchmark dataset for event recognition in surveillance video. In CVPR, 2011.
  • 25. M. D. Rodriguez, J. Ahmed, and M. Shah. Action MACH: A spatio-temporal maximum average correlation height filter for action recognition. In CVPR, 2008.
  • 26. D. Roggen et al. Collecting complex activity data sets in highly rich networked sensor environments. In ICNSS, 2010.
  • 27. M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011.
  • 28. M. S. Ryoo and J. K. Aggarwal. Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities. In ICCV, 2009.
  • 29. B. Sapp, A. Toshev, and B. Taskar. Cascaded models for articulated pose estimation. In ECCV, 2010.
  • 30. B. Sapp, D. Weiss, and B. Taskar. Parsing human motion with stretchable models. In CVPR, 2011.
  • 31. C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, 2004.
  • 32. Z. Si, M. Pei, B. Yao, and S.-C. Zhu. Unsupervised learning of event and-or grammar and semantics from video. In ICCV, 2011.
  • 33. V. Singh and R. Nevatia. Action recognition in cluttered dynamic scenes using pose-specific part models. In ICCV, 2011.
  • 34. E. H. Spriggs, F. de la Torre, and M. Hebert. Temporal segmentation and activity classification from first-person sensing. In Egoc. Vis. '09, 2009.
  • 36. M. Tenorth, J. Bandouch, and M. Beetz. The TUM Kitchen Data Set of everyday manipulation activities for motion tracking and action recognition. In THEMIS, 2009.
  • 37. A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. In CVPR, 2010.
  • 38. H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, 2011.
  • 39. H. Wang, M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In BMVC, 2009.
  • 40. W. Yang, Y. Wang, and G. Mori. Recognizing human actions from still images with latent poses. In CVPR, 2010.
  • 41. Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, 2011.
  • 42. L. Yeffet and L. Wolf. Local trinary patterns for human action recognition. In ICCV, 2009.
  • 43. J. S. Yuan, Z. C. Liu, and Y. Wu. Discriminative subvolume search for efficient action detection. In CVPR, 2009.
  • 44. A. Zinnen, U. Blanke, and B. Schiele. An analysis of sensor-oriented vs. model-based activity recognition. In ISWC, 2009.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.