Volume , Issue , 2018, Pages

An online learning approach to generative adversarial networks

Author keywords

[No Author keywords available]

Indexed keywords

NETWORK ARCHITECTURE; NETWORK LAYERS;

EID: 85083950788     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 38

References (37)
  • 1
    • Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training; in review for ICLR, 2017.
  • 6
    • Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. Near-optimal no-regret algorithms for zero-sum games. Games and Economic Behavior, 92:327–348, 2015.
  • 7
    • Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1-2):79–103, 1999.
  • 8
    • Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.
  • 13
    • Elad Hazan and Tomer Koren. The computational power of optimization in online learning. In Proc. STOC, pp. 128–141. ACM, 2016.
  • 14
    • Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
  • 16
    • Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
  • 19
    • Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
  • 21
    • Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. In ICML, pp. 1718–1727, 2015.
  • 25
    • Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
  • 28
    • Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.