2016, Pages 271-279

f-GAN: Training generative neural samplers using variational divergence minimization

Author keywords

[No Author keywords available]

Indexed keywords

PROBABILITY DISTRIBUTIONS

EID: 85018914753     PISSN: 1049-5258     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 1813

References (35)
  • 1. S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. JRSS (B), pages 131-142, 1966.
  • 6. Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pages 2933-2941, 2014.
  • 7. G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, pages 258-267, 2015.
  • 9. T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. JASA, 102(477):359-378, 2007.
  • 14. M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, pages 297-304, 2010.
  • 19. H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
  • 20. Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
  • 21. F. Liese and I. Vajda. On divergences and informations in statistics and information theory. IEEE Transactions on Information Theory, 52(10):4394-4412, 2006.
  • 22. D. J. C. MacKay. Bayesian neural networks and density networks. Nucl. Instrum. Meth. A, 354(1):73-80, 1995.
  • 25. X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, 2010.
  • 26. F. Nielsen and R. Nock. On the chi-square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1):10-13, 2014.
  • 28. M. D. Reid and R. C. Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12(Mar):731-817, 2011.
  • 29. D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278-1286, 2014.
  • 31. J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using non-equilibrium thermodynamics. In ICML, pages 2256-2265, 2015.
  • 34. B. Uria, I. Murray, and H. Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In NIPS, pages 2175-2183, 2013.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.