Volume, Issue, 2019, Pages

DARTS: Differentiable architecture search

Author keywords

[No Author keywords available]

Indexed keywords

GRADIENT METHODS; MODELING LANGUAGES

EID: 85083950041     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 2283
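
For context on the indexed record: DARTS relaxes the discrete choice of operation on each edge of a cell into a softmax-weighted mixture of candidate operations, so the architecture parameters can be optimized by gradient descent alongside the network weights and the strongest operation is kept at the end. The snippet below is only a minimal sketch of that relaxation, not the authors' implementation; the toy candidate operations and the names CANDIDATE_OPS and mixed_op are assumptions made for illustration.

```python
import numpy as np

# Toy stand-ins for the candidate operations on one edge of a cell
# (the paper's search space uses convolutions, pooling, identity, and zero).
CANDIDATE_OPS = {
    "identity": lambda x: x,
    "scale":    lambda x: 2.0 * x,
    "zero":     lambda x: np.zeros_like(x),
}

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mixed_op(x, alpha):
    """Continuous relaxation of the categorical op choice:
    output = sum over ops of softmax(alpha)_o * o(x)."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, CANDIDATE_OPS.values()))

# alpha holds one real-valued logit per candidate op; because mixed_op is
# differentiable in alpha, these architecture parameters can be trained by
# gradient descent (alternating with the network weights in the paper's
# bilevel formulation), and the final discrete architecture keeps the argmax op.
alpha = np.zeros(len(CANDIDATE_OPS))               # one logit per candidate op
x = np.ones((2, 3))
print(mixed_op(x, alpha))                          # uniform mixture at initialization
print(list(CANDIDATE_OPS)[int(np.argmax(alpha))])  # discretization step
```
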

References (46)
  • 3. Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. ICLR, 2017.
  • 4. Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. ICLR Workshop, 2018.
  • 6. Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: One-shot model architecture search through hypernetworks. ICLR, 2018.
  • 7. Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. AAAI, 2018.
  • 11. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp. 1126-1135, 2017.
  • 12. Luca Franceschi, Paolo Frasconi, Saverio Salzo, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. ICML, 2018.
  • 13. Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, pp. 1019-1027, 2016.
  • 17. Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. ICLR, 2017.
  • 19. Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric Xing. Neural architecture search with Bayesian optimisation and optimal transport. NIPS, 2018.
  • 23. Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. ICLR, 2018b.
  • 25. Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In ICML, pp. 2952-2960, 2016.
  • 26. Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, pp. 2113-2122, 2015.
  • 27. Gábor Melis, Chris Dyer, and Phil Blunsom. On the state of the art of evaluation in neural language models. ICLR, 2018.
  • 28. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. ICLR, 2018.
  • 32. Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In ICML, 2016.
  • 34. Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. ICML, 2018b.
  • 37. Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, pp. 4053-4061, 2016.
  • 38. Richard Shin, Charles Packer, and Dawn Song. Differentiable neural network architecture search. In ICLR Workshop, 2018.
  • 41. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. ICLR, 2018.
  • 45. Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. ICLR, 2017.
  • 46. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CVPR, 2018.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.