IEEE Transactions on Neural Networks, Volume 9, Issue 1, 1998, Pages 11-26

Fast training of recurrent networks based on the EM algorithm

Author keywords

EM algorithm; Fast; Mean field approximation; Moving targets; Probability model; Recurrent networks

Indexed keywords

APPROXIMATION THEORY; ERROR ANALYSIS; LEARNING ALGORITHMS; LEARNING SYSTEMS; MATHEMATICAL MODELS; PROBABILITY DENSITY FUNCTION; REGRESSION ANALYSIS; TRANSFER FUNCTIONS; VECTORS;

EID: 0031675929     PISSN: 10459227     EISSN: None     Source Type: Journal    
DOI: 10.1109/72.655025     Document Type: Article
Times cited: 29

References (35)
  • 2
    • R. Andrews, J. Diederich, and A. B. Tickle, "A survey and critique of techniques for extracting rules from trained artificial neural networks," Knowledge-Based Systems, to appear. Available FTP: ftp.qut.edu.au//pub/NRC/ps/QUTNRC-95-01-02.ps.Z
  • 3
    • L. E. Baum, T. Petrie, G. Soules, and N. Weiss, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," Ann. Math. Statist., vol. 41, pp. 164-171, 1970.
  • 4
    • Y. Bengio, "Credit assignment through time: Alternative to backpropagation," Advances in Neural Inform. Processing Syst., vol. 6, pp. 75-82, 1994.
  • 5
    • Y. Bengio, P. Frasconi, and P. Simard, "The problem of learning long-term dependencies in recurrent networks," IEEE Trans. Neural Networks, vol. 5, pp. 157-166, 1993.
  • 7
    • W. Byrne, "Alternating minimization and Boltzmann machine learning," IEEE Trans. Neural Networks, vol. 3, 1992.
  • 9
    • S. Das and M. C. Mozer, "A unified gradient descent clustering architecture for finite-state machine induction," Advances in Neural Inform. Processing Syst., vol. 6, 1994.
  • 10
    • A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Roy. Statist. Soc., vol. 39, pp. 1-33, 1977.
  • 15
    • M. Jordan and R. A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Neural Computa., vol. 6, pp. 181-214, 1994.
  • 17
    • K. J. Lang, A. H. Waibel, and G. E. Hinton, "A time-delay neural-network architecture for isolated word recognition," Neural Networks, vol. 3, pp. 23-44, 1990.
  • 18
    • E. Levin, N. Tishby, and S. A. Solla, "A statistical approach to learning and generalization in layered neural networks," Proc. IEEE, vol. 78, no. 10, pp. 1568-1574, 1990.
  • 19
    • S. Ma, C. Ji, and J. Farmer, "An efficient EM-based training algorithm for feedforward neural networks," Neural Networks, to be published.
  • 20
    • D. J. MacKay, "A practical Bayesian framework for backpropagation networks," Neural Computa., vol. 4, pp. 448-472, 1992.
  • 21
    • K. S. Narendra, "Adaptive control of dynamical systems using neural networks," in Handbook of Intelligent Control, D. A. White and D. A. Sofge, Eds. New York: Van Nostrand Reinhold, 1992.
  • 22
    • C. Omlin and C. L. Giles, "Extraction of rules from discrete-time recurrent neural networks," Neural Networks, vol. 9, no. 1, p. 41, 1996.
  • 23
    • J. B. Pollack, "The induction of dynamical recognizers," Machine Learning, vol. 7, no. 2-3, 1991.
  • 26
    • D. Saad and E. Marom, "Learning by choice of internal representations: An energy minimization approach," Complex Syst., vol. 4, pp. 107-118, 1990.
  • 28
    • J. W. Shavlik, "Combining symbolic and neural learning," Machine Learning, vol. 14, no. 3, 1994.
  • 29
    • R. S. Shadafan and M. Niranjan, "A dynamic neural network architecture by sequential partitioning of the input space," Neural Computa., vol. 6, pp. 1203-1222, 1994.
  • 31
    • P. Tino and J. Sajda, "Learning and extracting initial Mealy machines with a modular neural network model," Neural Computa., vol. 7, no. 4, 1995.
  • 32
    • P. Werbos, "Beyond regression," Ph.D. dissertation, Harvard Univ., Cambridge, MA, 1974.
  • 33
    • R. J. Williams and J. Peng, "An efficient gradient-based algorithm for online training of recurrent network trajectories," Neural Computa., vol. 2, pp. 491-501, 1990.
  • 34
    • R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Computa., vol. 1, pp. 270-280, 1989.
  • 35
    • J. Zhang, "The mean-field theory in EM procedures for Markov random fields," IEEE Trans. Signal Processing, vol. 40, pp. 2570-2583, 1992.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.