[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[3] J. Bruna, P. Sprechmann, and Y. LeCun. Super-resolution with deep convolutional sufficient statistics. In ICLR, 2016.
[4] S. Choi, Q. Zhou, and V. Koltun. Robust reconstruction of indoor scenes. In CVPR, 2015.
[5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
[6] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[7] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
[8] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs, tables and cars with convolutional networks. PAMI, 2016.
[9] C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
[10] J. Flynn, I. Neulander, J. Philbin, and N. Snavely. Deep stereo: Learning to predict new views from the world's imagery. In CVPR, 2016.
[11] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[12] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[13] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
[14] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
[15] A. Guzmán-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In NIPS, 2012.
[16] P. Isola, J. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[17] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[19] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
[20] O. Letze. Photorealism: 50 Years of Hyperrealistic Painting. Hatje Cantz, 2013.
[21] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[22] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.
[25] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
[27] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In NIPS, 2016.
[28] A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR, 2017.
[29] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.
[30] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 2016. http://distill.pub/2016/deconv-checkerboard
[31] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. P. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.
[32] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[34] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. IJCV, 40(1), 2000.
[35] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[36] S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In NIPS, 2016.
[37] S. E. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[38] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[39] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
[41] Q. Shan, R. Adams, B. Curless, Y. Furukawa, and S. M. Seitz. The visual Turing test for scene reconstruction. In 3DV, 2013.
[42] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
[43] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[44] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[45] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Multi-view 3D models from single images with a convolutional network. In ECCV, 2016.
[47] X. Wang and A. Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.
[48] T. Xue, J. Wu, K. L. Bouman, and B. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016.
[49] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2Image: Conditional image generation from visual attributes. In ECCV, 2016.
[50] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
[51] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In ECCV, 2016.