2017

Capacity and trainability in recurrent neural networks

Author keywords

[No Author keywords available]

Indexed keywords

ARCHITECTURE; NETWORK ARCHITECTURE

EID: 85088225685     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 80

References (42)
  • 3. Pierre Baldi and Santosh S. Venkatesh. Number of stable points for spin-glasses and neural networks of higher orders. Physical Review Letters, 58(9):913, 1987.
  • 5. Adam S. Charles, Han Lun Yap, and Christopher J. Rozell. Short-term memory capacity in networks via the restricted isometry property. Neural Computation, 26(6):1198-1235, 2014.
  • 8. Thomas M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, (3):326-334, 1965.
  • 9. Thomas Desautels, Andreas Krause, and Joel W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. The Journal of Machine Learning Research, 15(1):3873-3923, 2014.
  • 13. Elizabeth Gardner. The space of interactions in neural network models. Journal of Physics A: Mathematical and General, 21(1):257, 1988.
  • 15. Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):855-868, 2009.
  • 22. Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78-80, 2004.
  • 24. Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/1602.02410.
  • 26. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
  • 27. Pascal Koiran and Eduardo D. Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks. Discrete Applied Mathematics, 86(1):63-79, 1998.
  • 29. Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
  • 31. Valerio Mante, David Sussillo, Krishna V. Shenoy, and William T. Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78-84, 2013.
  • 34. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR, abs/1701.06538, 2017. URL http://arxiv.org/abs/1701.06538.
  • 37. David Sussillo and Omri Barak. Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3):626-649, 2013.
  • 39. Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
  • 40. Olivia L. White, Daniel D. Lee, and Haim Sompolinsky. Short-term memory in orthogonal neural networks. Physical Review Letters, 92(14):148102, 2004.
  • 42. Guo-Bing Zhou, Jianxin Wu, Chen-Lin Zhang, and Zhi-Hua Zhou. Minimal gated unit for recurrent neural networks. International Journal of Automation and Computing, 13(3):226-234, 2016. ISSN 1751-8520. doi: 10.1007/s11633-016-1006-2. URL http://dx.doi.org/10.1007/s11633-016-1006-2.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.