
Volume , Issue , 2016, Pages 527-533

Learning distributed word representations for bidirectional LSTM recurrent neural network

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTATIONAL LINGUISTICS; NATURAL LANGUAGE PROCESSING SYSTEMS; SPEECH RECOGNITION;

EID: 84994157362     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: 10.18653/v1/n16-1064     Document Type: Conference Paper
Times cited: 35

References (32)
  • 1
    • Samy Bengio and Georg Heigold. 2014. Word embeddings for speech recognition. In INTERSPEECH, pages 1053-1057.
  • 3
    • Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 1-8.
  • 4
    • Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 160-167.
  • 9
    • Michael Gutmann and Aapo Hyvärinen. 2012. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research (JMLR), 13:307-361.
  • 12
    • Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers, 28(2):203-208.
  • 13
    • Mitchell Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics (CL), 19(2):313-330.
  • 14
    • Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In INTERSPEECH, pages 3771-3775.
  • 27
    • Felix Weninger, Johannes Bergmann, and Björn Schuller. 2014. Introducing CURRENNT: The Munich open-source CUDA recurrent neural network toolkit. Journal of Machine Learning Research (JMLR), 16(1):547-551.
  • 28
    • Hai Zhao and Chunyu Kit. 2008a. An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP), volume 1, pages 9-16.
  • 29
    • Hai Zhao and Chunyu Kit. 2008b. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In SIGHAN Workshop on Chinese Language Processing, pages 106-111.
  • 30
    • Hai Zhao and Chunyu Kit. 2011. Integrating unsupervised and supervised word segmentation: The role of goodness measures. Information Sciences, 181(1):163-183.
  • 31
    • Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An improved Chinese word segmentation system with conditional random field. In SIGHAN Workshop on Chinese Language Processing, pages 162-165.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.