Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP.
Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. 2014. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS.
Song Han, Huizi Mao, and William J Dally. 2015a. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR.
Song Han, Jeff Pool, John Tran, and William Dally. 2015b. Learning both weights and connections for efficient neural network. In NIPS.
Forrest N Iandola, Matthew W Moskewicz, Khalid Ashraf, Song Han, William J Dally, and Kurt Keutzer. 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. 2014. Speeding up convolutional neural networks with low rank expansions. In NIPS.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On using very large target vocabulary for neural machine translation. In ACL.
Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal neural machine translation systems for WMT'15. In WMT.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP.
Zhiyun Lu, Vikas Sindhwani, and Tara N Sainath. 2016. Learning compact recurrent neural networks. In ICASSP.
Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domain. In IWSLT.
Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attention-based neural machine translation. In EMNLP.
Kenton Murray and David Chiang. 2015. Auto-sizing neural networks: With applications to n-gram language models. In EMNLP.
Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. 2016. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In ICASSP.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL.
Suraj Srinivas and R Venkatesh Babu. 2015. Data-free parameter pruning for deep neural networks. In BMVC.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.