2017

Loss-aware binarization of deep networks

Author keywords

[No Author keywords available]

Indexed keywords

APPROXIMATION ALGORITHMS; RECURRENT NEURAL NETWORKS

EID: 85050926955     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (121)

References (28)
  • 1
    • M. Courbariaux, Y. Bengio, and J.P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3105-3113, 2015.
  • 2
    • Y. Dauphin, H. de Vries, and Y. Bengio. Equilibrated adaptive learning rates for non-convex optimization. In NIPS, pp. 1504-1512, 2015.
  • 4
    • J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
  • 5
    • X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pp. 249-256, 2010.
  • 8
    • S. Han, H. Mao, and W.J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR, 2016.
  • 11
    • A. Karpathy, J. Johnson, and F.-F. Li. Visualizing and understanding recurrent networks. In ICLR, 2016.
  • 12
    • Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In ICLR, 2016.
  • 13
    • D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • 14
    • Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
  • 15
    • J.D. Lee, Y. Sun, and M.A. Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420-1443, 2014.
  • 18
    • J. Martens and I. Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pp. 479-535. Springer, 2012.
  • 20
    • R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014.
  • 21
    • R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, pp. 1310-1318, 2013.
  • 23
    • M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
  • 26
    • A.L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). NIPS, 2:1033-1040, 2002.


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.