2016, Pages 265-283

TensorFlow: A system for large-scale machine learning

Author keywords

[No Author keywords available]

Indexed keywords

DEEP NEURAL NETWORKS; FLOW GRAPHS; MACHINE LEARNING; PROGRAM PROCESSORS; SYSTEMS ANALYSIS; TENSORS;

EID: 85075670920     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 19,270

References (76)
  • [3] A. Angelova, A. Krizhevsky, and V. Vanhoucke. Pedestrian detection with a large-field-of-view deep network. In Proceedings of ICRA, pages 704-711. IEEE, 2015. www.vision.caltech.edu/anelia/publications/Angelova15LFOV.pdf.
  • [4] Arvind and D. E. Culler. Dataflow architectures. In Annual Review of Computer Science, Vol. 1, pages 225-253. Annual Reviews Inc., 1986. www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA166235.
  • [8] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127-155, 2012. dx.doi.org/10.1007/s10107-012-0572-5.
  • [14] T. Chilimbi, Y. Suzue, J. Apacible, and K. Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In Proceedings of OSDI, pages 571-582, 2014. www.usenix.org/system/files/conference/osdi14/osdi14-paper-chilimbi.pdf.
  • [16] E. S. Chung, J. D. Davis, and J. Lee. LINQits: Big data on little clients. In Proceedings of ISCA, pages 261-272, 2013. www.microsoft.com/en-us/research/wp-content/uploads/2013/06/ISCA13-linqits.pdf.
  • [17] R. Collobert, S. Bengio, and J. Mariéthoz. Torch: A modular machine learning software library. Technical report, IDIAP, 2002. infoscience.epfl.ch/record/82802/files/rr02-46.pdf.
  • [18] H. Cui, H. Zhang, G. R. Ganger, P. B. Gibbons, and E. P. Xing. GeePS: Scalable deep learning on distributed GPUs with a GPU-specialized parameter server. In Proceedings of EuroSys, 2016. www.pdl.cmu.edu/PDL-FTP/CloudComputing/GeePS-cui-eurosys16.pdf.
  • [21] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In Proceedings of OSDI, pages 137-149, 2004. research.google.com/archive/mapreduce-osdi04.pdf.
  • [23] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011. jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.
  • [25] J. Gonzalez-Dominguez, I. Lopez-Moreno, P. J. Moreno, and J. Gonzalez-Rodriguez. Frame-by-frame language identification in short utterances using deep neural networks. Neural Networks, 64:49-58, 2015. research.google.com/pubs/archive/42929.pdf.
  • [27] Google Research. TensorFlow Serving, 2016. tensorflow.github.io/serving/.
  • [28] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of CVPR, pages 770-778, 2016. arxiv.org/abs/1512.03385.
  • [32] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. deeplearning.cs.cmu.edu/pdfs/Hochreiter97-lstm.pdf.
  • [33] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of ICML, pages 448-456, 2015. jmlr.org/proceedings/papers/v37/ioffe15.pdf.
  • [34] M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly. Dryad: Distributed data-parallel programs from sequential building blocks. In Proceedings of EuroSys, pages 59-72, 2007. www.microsoft.com/en-us/research/wp-content/uploads/2007/03/eurosys07.pdf.
  • [37] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL-IJCNLP, pages 1-10, July 2015. www.aclweb.org/anthology/P15-1001.
  • [42]
  • [44] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proceedings of NIPS, pages 1106-1114, 2012. papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
  • [46] A. Lavin and S. Gray. Fast algorithms for convolutional neural networks. In Proceedings of CVPR, pages 4013-4021, 2016. arxiv.org/abs/1509.09308.
  • [47] Q. Le, M. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proceedings of ICML, pages 81-88, 2012. research.google.com/archive/unsupervised-icml2012.pdf.
  • [53] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Proceedings of NIPS, pages 2204-2212, 2014. papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf.
  • [56] D. G. Murray, F. McSherry, M. Isard, R. Isaacs, P. Barham, and M. Abadi. Incremental, iterative data processing with timely dataflow. Commun. ACM, 59(10):75-83, Sept. 2016. dl.acm.org/citation.cfm?id=2983551.
  • [60] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of ICML, pages 1310-1318, 2013. jmlr.org/proceedings/papers/v28/pascanu13.pdf.
  • [61] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Proceedings of NIPS, pages 693-701, 2011. papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent.pdf.
  • [62] C. J. Rossbach, Y. Yu, J. Currey, J.-P. Martin, and D. Fetterly. Dandelion: A compiler and runtime for heterogeneous systems. In Proceedings of SOSP, pages 49-68, 2013. sigops.org/sosp/sosp13/papers/p49-rossbach.pdf.
  • [63] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. In Cognitive Modeling, Vol. 5, pages 213-220. MIT Press, 1988. www.cs.toronto.edu/~hinton/absps/naturebp.pdf.
  • [65] A. Smola and S. Narayanamurthy. An architecture for parallel topic models. Proc. VLDB Endow., 3(1-2):703-710, Sept. 2010. vldb.org/pvldb/vldb2010/papers/R63.pdf.
  • [66] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of ICML, pages 1139-1147, 2013. jmlr.org/proceedings/papers/v28/sutskever13.pdf.
  • [67] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104-3112, 2014. papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural.pdf.
  • [74] Y. Yu, M. Isard, D. Fetterly, M. Budiu, U. Erlingsson, P. K. Gunda, and J. Currey. DryadLINQ: A system for general-purpose distributed data-parallel computing using a high-level language. In Proceedings of OSDI, pages 1-14, 2008. www.usenix.org/legacy/event/osdi08/tech/full_papers/yu_y/yu_y.pdf.
  • [75] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of NSDI, pages 15-28, 2012. https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.