[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In OSDI, 2016.
[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017.
[4] Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.
[6] Andrew Brock, Theodore Lim, J.M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. In ICLR, 2017.
[7] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
[8] Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron Courville. Modulating early visual processing by language. In NIPS, 2017.
[9] Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[10] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In ICLR, 2017.
[11] William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. In ICLR, 2018.
[12] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[15] Google. Cloud TPUs. https://cloud.google.com/tpu/, 2018.
[16] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In NIPS, 2017.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, 2017.
[19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[20] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2018.
[21] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
[25] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.
[27] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In ICML, 2018.
[29] Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In ICLR, 2018.
[30] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
[31] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
[32] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
[33] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017.
[34] Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? In ICML, 2018.
[35] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI, 2018.
[37] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[38] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115:211-252, 2015.
[39] Tim Salimans and Diederik Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.
[40] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
[42] Andrew Saxe, James McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR, 2014.
[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[44] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. In ICLR, 2017.
[45] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15:1929-1958, 2014.
[46] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
[47] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
[49] Dustin Tran, Rajesh Ranganath, and David M. Blei. Hierarchical implicit models and likelihood-free variational inference. In NIPS, 2017.
[51] Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysis of decoder-based generative models. In ICLR, 2017.
[52] Yasin Yazıcı, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The unusual effectiveness of averaging in GAN training. arXiv preprint arXiv:1806.04498, 2018.