2017

Nonparametric neural networks

Author keywords

[No Author keywords available]

Indexed keywords

GRADIENT METHODS

EID: 85052107291     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 11

References (42)
  • 1
    • Jose M. Alvarez and Mathieu Salzmann. Learning the number of neurons in deep networks. In NIPS, 2016.
  • 3
    • Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPS, 2014.
  • 4
    • Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2:183-202, 2009.
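For context on the method this entry indexes: the accelerated proximal-gradient (FISTA) iteration can be sketched on the ℓ1-regularized least-squares problem. This is an illustrative sketch only; the function names `fista_lasso` and `soft_threshold` are hypothetical, not the paper's reference code.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With `A` the identity the prox step is exact, so the iterate equals the soft-thresholded data, which makes the momentum bookkeeping easy to check by hand.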
  • 5
    • James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 13:281-305, 2012.
  • 6
    • Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. In ICLR, 2016.
  • 7
    • Maxwell D. Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014.
  • 8
    • George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, 2013.
  • 10
    • John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159, 2011.
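The per-coordinate update this reference introduces (AdaGrad) can be sketched in a few lines. A minimal illustrative sketch, assuming a plain dense parameter vector; the name `adagrad_step` is hypothetical.

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.1, eps=1e-8):
    # Accumulate squared gradients, then scale each coordinate's step
    # by the inverse root of its accumulated magnitude.
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum
```

Coordinates with large historical gradients get small steps, which is the adaptivity the paper analyzes; on a 1-D quadratic the iterates shrink monotonically toward zero.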
  • 11
    • Scott Fahlman and Christian Lebiere. The cascade-correlation learning architecture. In NIPS, 1990.
  • 12
    • Jiashi Feng and Trevor Darrell. Learning the structure of deep convolutional networks. In ICCV, 2015.
  • 13
    • Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In NIPS, 2016.
  • 15
    • Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In NIPS, 2016.
  • 18
    • Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration (extended version). Tech. Rep. TR-2009-01, University of British Columbia, Department of Computer Science, 2009.
  • 19
    • Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • 20
    • Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
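The single-parameter update this reference describes (Adam) can likewise be sketched. An illustrative sketch with a hypothetical function name `adam_update`; `t` is the 1-based step count used for bias correction.

```python
import numpy as np

def adam_update(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its elementwise square,
    # with bias correction for their zero initialization.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias-corrected ratio reduces to grad / |grad|, so the update moves by roughly `lr` in the descent direction regardless of the gradient's scale.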
  • 21
    • Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve prediction with Bayesian neural networks. In ICLR, 2017.
  • 23
    • Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In ICML, 2016.
  • 24
    • Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.
  • 25
    • David J.C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4:448-472, 1992.
  • 26
    • Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for efficient inference. In ICLR, 2017.
  • 27
    • Gaurav Pandey and Ambedkar Dukkipati. Learning by stretching deep networks. In ICML, 2014.
  • 30
    • Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • 31
    • Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
  • 33
    • Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. In NIPS, 2016.
  • 34
    • Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
  • 38
    • Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In NIPS, 2016.
  • 39
    • Christopher K. I. Williams. Computing with infinite networks. In NIPS, 1997.
  • 40
    • Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49-67, 2006.
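The group-lasso penalty this reference introduces has a closed-form proximal operator: shrink each group's coefficient vector toward zero by its ℓ2 norm, zeroing the group entirely when the norm falls below the threshold. An illustrative sketch; the name `group_soft_threshold` is hypothetical.

```python
import numpy as np

def group_soft_threshold(v, lam):
    # Proximal operator of lam * ||v||_2 applied to one group:
    # scale the whole group toward zero, and set it exactly to zero
    # when its norm is at most lam.
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v
```

This all-or-nothing behavior at the group level is what makes the penalty select whole groups of variables rather than individual coefficients.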
  • 42
    • Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.