2. Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., and Bengio, S. Generating sentences from a continuous space. Conference on Computational Natural Language Learning, 2016.
3. Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Empirical Methods in Natural Language Processing, 2014.
4. Eck, D. and Schmidhuber, J. A first look at music composition using LSTM recurrent neural networks. IDSIA Technical Report, 2002.
5. Erhan, D., Bengio, Y., Courville, A., Manzagol, P., Vincent, P., and Bengio, S. Why does unsupervised pretraining help deep learning? Journal of Machine Learning Research, 11:625-660, 2010.
7. Gómez-Bombarelli, R., Duvenaud, D., Hernández-Lobato, J. M., Aguilera-Iparraguirre, J., Hirzel, T., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules. arXiv:1610.02415, 2016.
9. Gupta, P., Banchs, R. E., and Rosso, P. Squeezing bottlenecks: Exploring the limits of autoencoder semantic representation capabilities. Neurocomputing, 175:1001-1008, 2016.
10. Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A. Early visual concept learning with unsupervised deep learning. arXiv:1606.05579, 2016.
13. Karpathy, A. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015. URL karpathy.github.io.
15. Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R. S., Torralba, A., Urtasun, R., and Fidler, S. Skip-thought vectors. Advances in Neural Information Processing Systems, 2015.
17. Mikolov, T., Karafiat, M., Burget, L., Cernocky, J., and Khudanpur, S. Recurrent neural network based language model. Interspeech, 2010.
19. Mueller, J., Reshef, D. N., Du, G., and Jaakkola, T. Learning optimal interventions. Artificial Intelligence and Statistics, 2017.
20. Nguyen, A., Yosinski, J., and Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. Computer Vision and Pattern Recognition, 2015.
21. Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., and Clune, J. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in Neural Information Processing Systems, 2016.
22. Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. ICLR Workshop Proceedings, 2014.
25. Zaefferer, M., Stork, J., Friese, M., Fischbach, A., Naujoks, B., and Bartz-Beielstein, T. Efficient global optimization for combinatorial problems. Genetic and Evolutionary Computation Conference, 2014.
27. Dasgupta, S. and Gupta, A. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22:60-65, 2002.
|