Volume 23, Issue 10, 2012, Pages 1649-1658

Symbolic representation of recurrent neural network dynamics

Author keywords

Finite state machines; hidden layer representation; penalty function; recurrent neural networks

Indexed keywords

BACKPROPAGATION LEARNING; COMPUTATIONAL EXPERIMENT; ERROR BACK-PROPAGATION; FEED-FORWARD NETWORK; HIDDEN LAYERS; PENALTY FUNCTION; SYMBOLIC REPRESENTATION; TEMPORAL SEQUENCES;

EID: 84876436918     PISSN: 2162-237X     EISSN: 2162-2388     Source Type: Journal
DOI: 10.1109/TNNLS.2012.2210242     Document Type: Article
Times cited: 10

References (24)
  • 1. J. Elman, "Finding structure in time," Cognit. Sci., vol. 14, no. 2, pp. 179-211, 1990.
  • 2. H. Jacobsson, "Rule extraction from recurrent neural networks: A taxonomy and review," Neural Comput., vol. 17, no. 6, pp. 1223-1263, 2005.
  • 3. P. Tino, C. Schittenkopf, and G. Dorffner, "Financial volatility trading using recurrent neural networks," IEEE Trans. Neural Netw., vol. 12, no. 4, pp. 865-874, Jul. 2001.
  • 4. J. Chen and N. Chaudhari, "Segmented-memory recurrent neural networks," IEEE Trans. Neural Netw., vol. 20, no. 8, pp. 1267-1280, Aug. 2009.
  • 5. M. Jordan, "Attractor dynamics and parallelism in a connectionist sequential machine," in Proc. 8th Annu. Conf. Cognit. Sci. Soc., 1986, pp. 531-546.
  • 6. J. Jung and J. Reggia, "Evolutionary design of neural network architectures using a descriptive encoding language," IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 676-688, Dec. 2006.
  • 7. T. Lin, B. Horne, and C. Giles, "How embedded memory in recurrent neural network architectures helps learning long-term temporal dependencies," Neural Netw., vol. 11, no. 5, pp. 861-868, 1998.
  • 8. T. Huynh and J. Reggia, "Guiding hidden layer representations for improved rule extraction from neural networks," IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 264-275, Feb. 2011.
  • 10. A. Tickle, R. Andrews, M. Golea, and J. Diederich, "The truth will come to light: Directions and challenges in extracting the knowledge embedded within trained artificial neural networks," IEEE Trans. Neural Netw., vol. 9, no. 6, pp. 1057-1068, Nov. 1998.
  • 11. P. Tino and M. Koteles, "Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences," IEEE Trans. Neural Netw., vol. 10, no. 2, pp. 284-302, Mar. 1999.
  • 12. Z. Zeng, R. Goodman, and P. Smyth, "Learning finite state machines with self-clustering recurrent networks," Neural Comput., vol. 5, no. 6, pp. 976-990, 1993.
  • 13. S. Kremer, R. Neco, and M. Forcada, "Constrained second-order recurrent networks for finite-state automata induction," in Proc. 8th Int. Conf. Artif. Neural Netw., vol. 98, 1998, pp. 529-534.
  • 14. S. Das and M. Mozer, "Dynamic on-line clustering and state extraction: An approach to symbolic learning," Neural Netw., vol. 11, no. 1, pp. 53-64, 1998.
  • 19. M. Gori, M. Maggini, E. Martinelli, and G. Soda, "Inductive inference from noisy examples using the hybrid finite state filter," IEEE Trans. Neural Netw., vol. 9, no. 3, pp. 571-575, May 1998.
  • 20. S. Frank, "Learn more by training less: Systematicity in sentence processing by recurrent networks," Connect. Sci., vol. 18, no. 3, pp. 287-302, 2006.
  • 22. S. Frank and H. Jacobsson, "Sentence-processing in echo state networks: A qualitative analysis by finite state machine extraction," Connect. Sci., vol. 22, no. 2, pp. 135-155, 2010.
  • 23. F. Van der Velde, G. T. van der V. van der Kleij, and M. de Kamps, "Lack of combinatorial productivity in language processing with simple recurrent networks," Connect. Sci., vol. 16, no. 1, pp. 21-46, 2004.
  • 24. M. Tomita, "Dynamic construction of finite-state automata from examples using hill-climbing," in Proc. 4th Annu. Cognit. Sci. Conf., 1982, pp. 105-108.


* This information was extracted by KISTI from Elsevier's SCOPUS database.