Neural Networks, Volume 9, Issue 7, 1996, Pages 1213-1222

Merging back-propagation and Hebbian learning rules for robust classifications

Author keywords

Classifier networks; Error back propagation; Generalization; Hebbian learning; Hidden neuron saturation; Hybrid learning; Mapping sensitivity; Robustness

Indexed keywords

BACKPROPAGATION; COMPUTATIONAL METHODS; COMPUTER SIMULATION; CONVERGENCE OF NUMERICAL METHODS; LEARNING ALGORITHMS; OPTIMIZATION; PATTERN RECOGNITION;

EID: 0030273576     PISSN: 08936080     EISSN: None     Source Type: Journal    
DOI: 10.1016/0893-6080(96)00042-1     Document Type: Article
Times cited : (34)
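This record carries no abstract, but the title and author keywords (hybrid learning, Hebbian learning, hidden neuron saturation) point to merging an error-gradient update with a Hebbian one. As an illustration only (the function, variable names, and blending factor `alpha` are assumptions, not the authors' formulation), a single tanh neuron trained with such a blended rule exhibits the saturation effect the keywords mention:

```python
import numpy as np

def train(x, target, alpha, lr=0.1, steps=500):
    """Train one tanh neuron on a single pattern.

    The update blends an error-gradient (back-propagation / LMS) term with
    a Hebbian term; `alpha` weights the Hebbian contribution. This is an
    illustrative sketch, not the formulation from the paper.
    """
    w = np.zeros_like(x)
    for _ in range(steps):
        y = np.tanh(w @ x)
        grad = -(target - y) * (1 - y ** 2) * x  # gradient of 0.5*(t - y)^2
        hebb = y * x                             # Hebbian term: post * pre
        w = w - lr * grad + alpha * hebb
    return np.tanh(w @ x)

x = np.array([1.0, 0.5, -0.5])
y_bp = train(x, target=0.9, alpha=0.0)      # pure error back-propagation
y_hybrid = train(x, target=0.9, alpha=0.01)  # gradient + Hebbian blend
```

With `alpha=0` the rule reduces to plain gradient descent and the activation settles near the target; the Hebbian term keeps reinforcing the post-pre correlation, so the blended rule drives the neuron's net input up and its activation toward saturation (|y| approaching 1), which is the "hidden neuron saturation" behavior the keywords name.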

References (25)
  • 1. Baum, E., & Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1, 151-160.
  • 2. Bishop, C. M. (1993). Curvature-driven smoothing: A learning algorithm for feedforward networks. IEEE Transactions on Neural Networks, 4, 882-884.
  • 3. Drucker, H., & Le Cun, Y. (1992). Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3, 991-997.
  • 4. Fukushima, K. (1988). Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Networks, 1, 119-130.
  • 6. Hanson, S. J., & Pratt, L. Y. (1989). Comparing biases for minimal network construction with back-propagation. In D. Touretzky (Ed.), Advances in neural information processing systems 1 (pp. 177-185). San Mateo, CA: Morgan Kaufmann.
  • 7. Hartman, E. J., Keeler, J. D., & Kowalski, J. M. (1990). Layered neural networks with Gaussian hidden units as universal approximations. Neural Computation, 2, 210-215.
  • 10. Karnin, E. D. (1990). A simple procedure for pruning back-propagation trained neural networks. IEEE Transactions on Neural Networks, 1, 239-242.
  • 11. Koh, S. H., Lee, S. Y., Jang, J. S., & Shin, S. Y. (1990). Merging Hebbian learning rule and least-mean-square error algorithm for two layer neural networks. Proceedings of the International Joint Conference on Neural Networks (Vol. I, pp. 647-650). Washington, DC, USA.
  • 12. Krogh, A., & Hertz, J. A. (1992). A simple weight decay can improve generalization. In D. Touretzky (Ed.), Advances in neural information processing systems 4 (pp. 950-957). San Mateo, CA: Morgan Kaufmann.
  • 15. Lee, S., & Kil, R. M. (1991). A Gaussian potential function network with hierarchically self-organizing learning. Neural Networks, 4, 207-224.
  • 16. Lee, S. Y., & Jeong, D. G. (1994). Error minimization, generalization, and hardware implementability of supervised learning. Proceedings of the World Congress on Neural Networks (Vol. III, pp. 325-330). San Diego, CA.
  • 17. Mozer, M. C., & Smolensky, P. (1989). Skeletonization: A technique for trimming the fat from a network via relevance assessment. In D. Touretzky (Ed.), Advances in neural information processing systems 1 (pp. 107-115). San Mateo, CA: Morgan Kaufmann.
  • 18. Nowlan, S. J., & Hinton, G. E. (1992). Simplifying neural networks by soft weight sharing. Neural Computation, 4, 473-493.
  • 19. Oh, S. H., & Lee, Y. (1995). Sensitivity analysis of single hidden-layer neural networks with threshold functions. IEEE Transactions on Neural Networks, 6, 1005-1007.
  • 21. Sietsma, J., & Dow, R. J. F. (1991). Creating artificial neural networks that generalize. Neural Networks, 4, 67-79.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.