Volume 35, Issue 6, 2016, Pages

EgoCap: Egocentric marker-less motion capture with two fisheye cameras

Author keywords

Crowded scenes; First-person vision; Inside-in; Large scale; Markerless; Motion capture; Optical

Indexed keywords

EXOSKELETON (ROBOTICS); VIRTUAL REALITY;

EID: 85044267747     PISSN: 0730-0301     EISSN: 1557-7368     Source Type: Journal
DOI: 10.1145/2980179.2980235     Document Type: Article
Times cited: 158

References (72)
  • 1
    • AMIN, S., ANDRILUKA, M., ROHRBACH, M., AND SCHIELE, B. 2009. Multi-view pictorial structures for 3D human pose estimation. In BMVC.
  • 2
    • ANDRILUKA, M., PISHCHULIN, L., GEHLER, P., AND SCHIELE, B. 2014. 2D human pose estimation: New benchmark and state of the art analysis. In CVPR.
  • 3
    • BAAK, A., MÜLLER, M., BHARAJ, G., SEIDEL, H.-P., AND THEOBALT, C. 2011. A data-driven approach for real-time full body pose reconstruction from a depth camera. In ICCV.
  • 5
    • BLANZ, V., AND VETTER, T. 1999. A morphable model for the synthesis of 3D faces. In SIGGRAPH.
  • 6
    • BREGLER, C., AND MALIK, J. 1998. Tracking people with twists and exponential maps. In CVPR.
  • 7
    • BURENIUS, M., SULLIVAN, J., AND CARLSSON, S. 2013. 3D pictorial structures for multiple view articulated pose estimation. In CVPR.
  • 9
    • CHAI, J., AND HODGINS, J. K. 2005. Performance animation from low-dimensional control signals. ACM Transactions on Graphics 24, 3, 686-696.
  • 10
    • CHEN, X., AND YUILLE, A. L. 2014. Articulated pose estimation by a graphical model with image dependent pairwise relations. In NIPS.
  • 11
    • EGOCAP, 2016. EgoCap dataset. http://gvv.mpi-inf.mpg.de/projects/EgoCap/.
  • 13
    • FATHI, A., FARHADI, A., AND REHG, J. M. 2011. Understanding egocentric activities. In ICCV.
  • 15
    • HA, S., BAI, Y., AND LIU, C. K. 2011. Human motion reconstruction from force sensors. In SCA.
  • 16
    • HE, K., ZHANG, X., REN, S., AND SUN, J. 2016. Deep residual learning for image recognition. In CVPR.
  • 17
    • HOLTE, M. B., TRAN, C., TRIVEDI, M. M., AND MOESLUND, T. B. 2012. Human pose estimation and activity recognition from multi-view videos: Comparative explorations of recent developments. IEEE Journal of Selected Topics in Signal Processing 6, 5, 538-552.
  • 18
  • 19
    • JAIN, A., TOMPSON, J., ANDRILUKA, M., TAYLOR, G. W., AND BREGLER, C. 2014. Learning human pose estimation features with convolutional networks. In ICLR.
  • 20
    • JAIN, A., TOMPSON, J., LECUN, Y., AND BREGLER, C. 2015. MoDeep: A deep learning framework using motion features for human pose estimation. In ACCV.
  • 22
    • JOHNSON, S., AND EVERINGHAM, M. 2011. Learning effective human pose estimation from inaccurate annotation. In CVPR.
  • 26
    • KITANI, K. M., OKABE, T., SATO, Y., AND SUGIMOTO, A. 2011. Fast unsupervised ego-action learning for first-person sports videos. In CVPR.
  • 27
    • LOPER, M., MAHMOOD, N., AND BLACK, M. J. 2014. MoSh: Motion and shape capture from sparse markers. ACM Transactions on Graphics 33, 6, 220:1-13.
  • 28
    • MA, M., FAN, H., AND KITANI, K. M. 2016. Going deeper into first-person activity recognition. In CVPR.
  • 32
    • MOULON, P., MONASSE, P., AND MARLET, R. 2013. Global fusion of relative motions for robust, accurate and scalable structure from motion. In ICCV.
  • 35
    • OHNISHI, K., KANEHIRA, A., KANEZAKI, A., AND HARADA, T. 2016. Recognizing activities of daily living with a wrist-mounted camera. In CVPR.
  • 36
    • PARK, S. I., AND HODGINS, J. K. 2008. Data-driven modeling of skin and muscle deformation. ACM Transactions on Graphics 27, 3, 96:1-6.
  • 37
    • PARK, H. S., JAIN, E., AND SHEIKH, Y. 2012. 3D social saliency from head-mounted cameras. In NIPS.
  • 38
    • PFISTER, T., CHARLES, J., AND ZISSERMAN, A. 2015. Flowing ConvNets for human pose estimation in videos. In ICCV.
  • 42
    • PONS-MOLL, G., FLEET, D. J., AND ROSENHAHN, B. 2014. Posebits for monocular human pose estimation. In CVPR.
  • 43
    • RHINEHART, N., AND KITANI, K. M. 2016. Learning action maps of large environments via first-person vision. In CVPR.
  • 44
    • RHODIN, H., ROBERTINI, N., RICHARDT, C., SEIDEL, H.-P., AND THEOBALT, C. 2015. A versatile scene model with differentiable visibility applied to generative pose estimation. In ICCV.
  • 47
    • SAPP, B., AND TASKAR, B. 2013. MODEC: Multimodal decomposable models for human pose estimation. In CVPR.
  • 48
    • SCARAMUZZA, D., MARTINELLI, A., AND SIEGWART, R. 2006. A toolbox for easily calibrating omnidirectional cameras. In IROS.
  • 51
    • SIGAL, L., BALAN, A. O., AND BLACK, M. J. 2010. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. International Journal of Computer Vision 87, 4-27.
  • 52
    • SIGAL, L., ISARD, M., HAUSSECKER, H., AND BLACK, M. J. 2012. Loose-limbed people: Estimating 3D human pose and motion using non-parametric belief propagation. International Journal of Computer Vision 98, 1, 15-48.
  • 53
    • SRIDHAR, S., MUELLER, F., OULASVIRTA, A., AND THEOBALT, C. 2015. Fast and robust hand tracking using detection-guided optimization. In CVPR.
  • 54
    • STOLL, C., HASLER, N., GALL, J., SEIDEL, H.-P., AND THEOBALT, C. 2011. Fast articulated motion tracking using a sums of Gaussians body model. In ICCV.
  • 55
    • SU, Y.-C., AND GRAUMAN, K. 2016. Detecting engagement in egocentric video. In ECCV.
  • 56
    • SUGANO, Y., AND BULLING, A. 2015. Self-calibrating head-mounted eye trackers using egocentric visual saliency. In UIST.
  • 58
    • TEKIN, B., ROZANTSEV, A., LEPETIT, V., AND FUA, P. 2016. Direct prediction of 3D body poses from motion compensated sequences. In CVPR.
  • 60
    • TOMPSON, J. J., JAIN, A., LECUN, Y., AND BREGLER, C. 2014. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS.
  • 61
    • TOSHEV, A., AND SZEGEDY, C. 2014. DeepPose: Human pose estimation via deep neural networks. In CVPR.
  • 62
    • URTASUN, R., FLEET, D. J., AND FUA, P. 2006. Temporal motion models for monocular and multiview 3D human body tracking. Computer Vision and Image Understanding 104, 2, 157-177.
  • 64
    • WANG, R. Y., AND POPOVIĆ, J. 2009. Real-time hand-tracking with a color glove. ACM Transactions on Graphics 28, 3, 63.
  • 65
    • WANG, J., CHENG, Y., AND FERIS, R. S. 2016. Walk and learn: Facial attribute representation learning from egocentric video and contextual data. In CVPR.
  • 66
    • WEI, X., ZHANG, P., AND CHAI, J. 2012. Accurate realtime full-body motion capture using a single depth camera. ACM Transactions on Graphics 31, 6, 188:1-12.
  • 69
    • YASIN, H., IQBAL, U., KRÜGER, B., WEBER, A., AND GALL, J. 2016. A dual-source approach for 3D pose estimation from a single image. In CVPR.
  • 70
    • YIN, K., AND PAI, D. K. 2003. FootSee: An interactive animation system. In SCA.
  • 72
    • ZHANG, P., SIU, K., ZHANG, J., LIU, C. K., AND CHAI, J. 2014. Leveraging depth cameras and wearable pressure sensors for full-body kinematics and dynamics capture. ACM Transactions on Graphics 33, 6, 221:1-14.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.