[1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, and Jeffrey Dean. "TensorFlow: Large-scale machine learning on heterogeneous distributed systems". In: arXiv preprint arXiv:1603.04467 (2016).
[4] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. "MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems". In: arXiv preprint arXiv:1512.01274 (2015).
[5] Trishul M. Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. "Project Adam: Building an efficient and scalable deep learning training system". In: USENIX Symposium on Operating Systems Design and Implementation (OSDI) 14 (2014), pp. 571–582.
[7] Elias De Coninck, Tim Verbelen, Bert Vankeirsbilck, Steven Bohez, Sam Leroux, and Pieter Simoens. "DIANNE: Distributed Artificial Neural Networks for the Internet of Things". (2015), pp. 19–24. URL: http://doi.acm.org/10.1145/2836127.2836130.
[8] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. "Large scale distributed deep networks". In: Advances in Neural Information Processing Systems (2012), pp. 1223–1231.
[9] Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. "1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs". In: INTERSPEECH (2014).
[14] Resource Efficient ML for Edge and Endpoint IoT Devices. URL: https://www.microsoft.com/en-us/research/project/resource-efficient-ml-for-the-edge-and-endpoint-iot-devices/ (visited on 05/02/2018).
[15] Mu Li, David G. Andersen, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. "Scaling distributed machine learning with the parameter server". In: OSDI (2014), pp. 583–598.
[16] Nikko Strom. "Scalable distributed DNN training using commodity GPU cloud computing". In: INTERSPEECH (2015).
[19] Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. "TernGrad: Ternary gradients to reduce communication in distributed deep learning". In: arXiv preprint arXiv:1705.07878 (2017).