1. Baldassi, C., Ingrosso, A., Lucibello, C., Saglietti, L., and Zecchina, R. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):1-5, 2015.
2. Bastien, F., Lamblin, P., Pascanu, R., et al. Theano: new features and speed improvements. In Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
3. Beauchamp, M. J., Hauck, S., Underwood, K. D., and Hemmert, K. S. Embedded floating-point units in FPGAs. In Proceedings of the 2006 ACM/SIGDA 14th International Symposium on Field Programmable Gate Arrays, pp. 12-20. ACM, 2006.
5. Bergstra, J., Breuleux, O., Bastien, F., et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral presentation.
6. Chen, T., Du, Z., Sun, N., et al. DianNao: a small-footprint high-throughput accelerator for ubiquitous machine learning. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 269-284. ACM, 2014.
7. Cheng, Z., Soudry, D., Mao, Z., and Lan, Z. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.
8. Coates, A., Huval, B., Wang, T., et al. Deep learning with COTS HPC systems. In Proceedings of the 30th International Conference on Machine Learning, pp. 1337-1345, 2013.
9. Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: a MATLAB-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
10. Courbariaux, M., Bengio, Y., and David, J.-P. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, December 2014.
11. Courbariaux, M., Bengio, Y., and David, J.-P. BinaryConnect: training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, November 2015.
12. Dieleman, S., Schlüter, J., Raffel, C., et al. Lasagne: first release, August 2015.
13. Esser, S. K., Appuswamy, R., Merolla, P., Arthur, J. V., and Modha, D. S. Backpropagation for energy-efficient neuromorphic computing. In Advances in Neural Information Processing Systems, pp. 1117-1125, 2015.
14. Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In AISTATS'2010, 2010.
15. Gong, Y., Liu, L., Yang, M., and Bourdev, L. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
16. Goodfellow, I. J., Warde-Farley, D., Lamblin, P., et al. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
17. Govindu, G., Zhuo, L., Choi, S., and Prasanna, V. Analysis of high-performance floating-point arithmetic on FPGAs. In Proceedings of the 18th International Parallel and Distributed Processing Symposium, p. 149. IEEE, 2004.
19. Han, S., Mao, H., and Dally, W. J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
20. Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015b.
21. Hinton, G. Neural networks for machine learning. Coursera, video lectures, 2012.
23. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Quantized neural networks: training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
24. Hwang, K. and Sung, W. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1-6. IEEE, 2014.
28. LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436-444, 2015.
29. Lee, C.-Y., Gallagher, P. W., and Tu, Z. Generalizing pooling functions in convolutional neural networks: mixed, gated, and tree. arXiv preprint arXiv:1509.08985, 2015.
30. Lin, Z., Courbariaux, M., Memisevic, R., and Bengio, Y. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, October 2015.
31. Soudry, D., Hubara, I., and Meir, R. Expectation backpropagation: parameter-free training of multilayer neural networks with continuous or discrete weights. In NIPS'2014, 2014.
32. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
33. Szegedy, C., Liu, W., Jia, Y., et al. Going deeper with convolutions. Technical report, arXiv:1409.4842, 2014.
34. Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using dropconnect. In ICML'2013, 2013.