Volume , Issue , 2016, Pages 4141-4149

Bayesian optimization with Robust Bayesian neural networks

Author keywords

[No Author keywords available]

Indexed keywords

DEEP LEARNING; DEEP NEURAL NETWORKS; FUNCTION EVALUATION; LEARNING ALGORITHMS; LEARNING SYSTEMS; NEURAL NETWORKS; REINFORCEMENT LEARNING; SCALABILITY; STOCHASTIC SYSTEMS;

EID: 85015791874     PISSN: 10495258     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 445

References (32)
  • 1
    • J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Proc. of NIPS'12, 2012.
  • 3
    • F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION'11, 2011.
  • 7
    • E. Brochu, V. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. CoRR, 2010.
  • 9
    • D. Jones, M. Schonlau, and W. Welch. Efficient global optimization of expensive black box functions. JGO, 1998.
  • 10
    • N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proc. of ICML'10, 2010.
  • 12
    • A. Graves. Practical variational inference for neural networks. In Proc. of ICML'11, 2011.
  • 14
    • J. M. Hernández-Lobato and R. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proc. of ICML'15, 2015.
  • 16
    • D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Proc. of NIPS'15, 2015.
  • 20
    • Y. Ma, T. Chen, and E. B. Fox. A complete recipe for stochastic gradient MCMC. In Proc. of NIPS'15, 2015.
  • 21
    • C. Li, C. Chen, D. E. Carlson, and L. Carin. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In Proc. of AAAI'16, 2016.
  • 22
    • C. Chen, D. E. Carlson, Z. Gan, C. Li, and L. Carin. Bridging the gap between stochastic gradient MCMC and stochastic optimization. In Proc. of AISTATS, 2016.
  • 27
    • M. Feurer, T. Springenberg, and F. Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In Proc. of AAAI'15, 2015.
  • 31
    • K. Swersky, J. Snoek, and R. Adams. Freeze-thaw Bayesian optimization. CoRR, 2014.
  • 32
    • A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast Bayesian optimization of machine learning hyperparameters on large datasets. CoRR, 2016.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.