1. Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In NIPS 2016 Workshop on Adversarial Training, 2016. In review for ICLR 2017.
3. Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). arXiv preprint arXiv:1703.00573, 2017.
5. Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2016.
7. Yoav Freund and Robert E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1-2):79–103, 1999.
8. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.
10. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
11. Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723–773, 2012.
12. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
13. Elad Hazan and Tomer Koren. The computational power of optimization in online learning. In Proc. STOC, pp. 128–141. ACM, 2016.
14. Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
15. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. arXiv preprint arXiv:1706.08500, 2017.
16. Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
19. Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
21. Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. In ICML, pp. 1718–1727, 2015.
22. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.
25. Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.
28. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.
34. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.
37. Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Schölkopf. AdaGAN: Boosting generative models. arXiv preprint arXiv:1701.02386, 2017.