Volume, Issue, 2018, Pages

SMASH: One-shot model architecture search through hypernetworks

Author keywords

[No Author keywords available]

Indexed keywords

DEEP NEURAL NETWORKS;

EID: 85083953913     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 413

References (45)
  • 1
    • D. Arpit, Y. Zhou, B.U. Kota, and V. Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In ICML, 2016.
  • 3
    • B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In ICLR, 2017.
  • 4
    • J. Bergstra and Y. Bengio. Random search for hyperparameter optimization. In JMLR, 2012.
  • 6
    • T. Chen, I. Goodfellow, and J. Shlens. Net2Net: Accelerating learning via knowledge transfer. In ICLR, 2016.
  • 8
    • A. Coates, H. Lee, and A.Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011.
  • 11
    • X. Gastaldi. Shake-shake regularization of 3-branch residual networks. ICLR 2017 Workshop, 2017.
  • 13
    • K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • 16
    • S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • 20
    • G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. In ICLR, 2017.
  • 21
    • L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: Bandit-based configuration evaluation for hyperparameter optimization. In ICLR, 2017.
  • 22
    • I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017.
  • 23
    • J.F. Miller and P. Thomson. Cartesian genetic programming. In EuroGP, 2000.
  • 28
    • T. Salimans and D.P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.
  • 30
    • S. Saxena and J. Verbeek. Convolutional neural fabrics. In NIPS, 2016.
  • 31
    • J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4:131–139, 1992.
  • 32
    • S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. In NIPS, 2016.
  • 33
    • J. Snoek, H. Larochelle, and R.P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
  • 37
    • K.O. Stanley, D.B. D’Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15(2):185–212, 2009.
  • 38
    • M. Suganuma, S. Shirakawa, and T. Nagao. A genetic programming approach to designing convolutional neural network architectures. In GECCO, 2017.
  • 40
    • D. Wierstra, F.J. Gomez, and J. Schmidhuber. Modeling systems with internal state using evolino. In GECCO, 2005.
  • 42
    • J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
  • 44
    • B. Zoph and Q. Le. Neural architecture search with reinforcement learning. In ICLR, 2017.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.