Volume 2017-January, 2017, Pages 3671-3680

Global context-aware attention LSTM networks for 3D action recognition

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTER VISION; PATTERN RECOGNITION;

EID: 85041922534     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/CVPR.2017.391     Document Type: Conference Paper
Times cited: 651

References (73)
  • 1. J. K. Aggarwal and L. Xia. Human activity recognition from 3d data: A review. PR Letters, 2014.
  • 2. R. Anirudh, P. Turaga, J. Su, and A. Srivastava. Elastic functional coding of human actions: From vector-fields to latent variables. In CVPR, 2015.
  • 3. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
  • 4. R. Chaudhry, F. Ofli, G. Kurillo, R. Bajcsy, and R. Vidal. Bio-inspired dynamic 3d discriminative skeletal features for human action recognition. In CVPRW, 2013.
  • 5. C. Chen, R. Jafari, and N. Kehtarnavaz. Fusion of depth, skeleton, and inertial data for human action recognition. In ICASSP, 2016.
  • 6. H. Chen, G. Wang, J.-H. Xue, and L. He. A novel hierarchical framework for human action recognition. PR, 2016.
  • 8. R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In NIPSW, 2011.
  • 11. Y. Du, W. Wang, and L. Wang. Hierarchical recurrent neural network for skeleton based action recognition. In CVPR, 2015.
  • 12. G. Evangelidis, G. Singh, and R. Horaud. Skeletal quads: Human action recognition using joint quadruples. In ICPR, 2014.
  • 16. J.-F. Hu, W.-S. Zheng, J. Lai, and J. Zhang. Jointly learning heterogeneous features for rgb-d activity recognition. In CVPR, 2015.
  • 17. M. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat, and G. Mori. A hierarchical deep temporal model for group activity recognition. In CVPR, 2016.
  • 18. A. Jain, A. R. Zamir, S. Savarese, and A. Saxena. Structural-RNN: Deep learning on spatio-temporal graphs. In CVPR, 2016.
  • 19. Y. Ji, G. Ye, and H. Cheng. Interactive body part contrast mining for human interaction recognition. In ICMEW, 2014.
  • 22. P. Koniusz, A. Cherian, and F. Porikli. Tensor representations via kernel linearization for action recognition from 3d skeletons. In ECCV, 2016.
  • 24. W. Li, L. Wen, M. Choo Chuah, and S. Lyu. Category-blind human action recognition: A practical recognition system. In ICCV, 2015.
  • 25. Y. Li, C. Lan, J. Xing, W. Zeng, C. Yuan, and J. Liu. Online human action detection using joint classification-regression recurrent neural networks. In ECCV, 2016.
  • 26. I. Lillo, J. Carlos Niebles, and A. Soto. A hierarchical pose-based approach to complex action understanding using dictionaries of actionlets and motion poselets. In CVPR, 2016.
  • 27. J. Liu, A. Shahroudy, D. Xu, and G. Wang. Spatio-temporal lstm with trust gates for 3d human action recognition. In ECCV, 2016.
  • 28. J. Luo, W. Wang, and H. Qi. Group sparsity and geometry constrained dictionary learning for action recognition from depth maps. In ICCV, 2013.
  • 29. M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.
  • 30. S. Ma, L. Sigal, and S. Sclaroff. Learning activity progression in lstms for activity detection and early detection. In CVPR, 2016.
  • 31. M. Meng, H. Drira, M. Daoudi, and J. Boonaert. Human-object interaction recognition by learning the distances between the object and the skeleton joints. In FG, 2015.
  • 32. F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy. Sequence of the most informative joints (smij): A new representation for human skeletal action recognition. JVCIR, 2014.
  • 33. L. L. Presti and M. La Cascia. 3d skeleton-based human action classification: A survey. PR, 2016.
  • 34. H. Rahmani, A. Mahmood, D. Q. Huynh, and A. Mian. Real time action recognition using histograms of depth gradients and random decision forests. In WACV, 2014.
  • 35. A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. NTU RGB+D: A large scale dataset for 3d human activity analysis. In CVPR, 2016.
  • 36. A. Shahroudy, T.-T. Ng, Y. Gong, and G. Wang. Deep multimodal feature analysis for action recognition in rgb+d videos. TPAMI, 2017.
  • 37. A. Shahroudy, T.-T. Ng, Q. Yang, and G. Wang. Multimodal multipart learning for action recognition in depth videos. TPAMI, 2016.
  • 38. A. Shahroudy, G. Wang, and T.-T. Ng. Multi-modal feature fusion for action recognition in rgb-d sequences. In ISCCSP, 2014.
  • 40. K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
  • 41. R. Slama, H. Wannous, M. Daoudi, and A. Srivastava. Accurate 3d action recognition using learning on the Grassmann manifold. PR, 2015.
  • 43. N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using lstms. In ICML, 2015.
  • 44. M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In NIPS, 2014.
  • 47. L. Tao and R. Vidal. Moving poselets: A discriminative and interpretable skeletal motion representation for action recognition. In ICCVW, 2015.
  • 48. V. Veeriah, N. Zhuang, and G.-J. Qi. Differential recurrent neural networks for action recognition. In ICCV, 2015.
  • 49. R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3d skeletons as points in a Lie group. In CVPR, 2014.
  • 50. C. Wang, J. Flynn, Y. Wang, and A. L. Yuille. Recognizing actions in 3d using action-snippets and activated simplices. In AAAI, 2016.
  • 51. C. Wang, Y. Wang, and A. L. Yuille. Mining 3d key-pose motifs for action recognition. In CVPR, 2016.
  • 52. J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In CVPR, 2012.
  • 53. J. Wang, Z. Liu, Y. Wu, and J. Yuan. Learning actionlet ensemble for 3d human action recognition. TPAMI, 2014.
  • 54. J. Wang and Y. Wu. Learning maximum margin temporal warping for action recognition. In ICCV, 2013.
  • 55. P. Wang, W. Li, Z. Gao, Y. Zhang, C. Tang, and P. Ogunbona. Scene flow to action map: A new representation for rgb-d based action recognition with convolutional neural networks. In CVPR, 2017.
  • 56. P. Wang, W. Li, P. Ogunbona, Z. Gao, and H. Zhang. Mining mid-level features for action recognition based on effective skeleton representation. In DICTA, 2014.
  • 57. P. Wang, C. Yuan, W. Hu, B. Li, and Y. Zhang. Graph based skeleton motion representation and similarity measurement for action recognition. In ECCV, 2016.
  • 59. J. Weng, C. Weng, and J. Yuan. Spatio-temporal naive-bayes nearest-neighbor for skeleton-based action recognition. In CVPR, 2017.
  • 61. Z. Wu, X. Wang, Y.-G. Jiang, H. Ye, and X. Xue. Modeling spatial-temporal clues in a hybrid deep learning framework for video classification. In ACM MM, 2015.
  • 62. L. Xia, C.-C. Chen, and J. Aggarwal. View invariant human action recognition using histograms of 3d joints. In CVPRW, 2012.
  • 63. C. Xiong, S. Merity, and R. Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016.
  • 65. X. Yang and Y. Tian. Effective 3d action recognition using eigenjoints. JVCIR, 2014.
  • 68. S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In CVPR, 2016.
  • 70. K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras. Two-person interaction detection using body-pose features and multiple instance learning. In CVPRW, 2012.
  • 71. M. Zanfir, M. Leordeanu, and C. Sminchisescu. The moving pose: An efficient 3d kinematics descriptor for low-latency action recognition and detection. In ICCV, 2013.
  • 73. W. Zhu, C. Lan, J. Xing, W. Zeng, Y. Li, L. Shen, and X. Xie. Co-occurrence feature learning for skeleton based action recognition using regularized deep lstm networks. In AAAI, 2016.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.