Volume 45, Issue 4, 2001, Pages 295-328

Sensitivity analysis for selective learning by feedforward neural networks

Author keywords

Decision Boundaries; Dynamic Pattern Selection; Feedforward Neural Networks; Pattern Informativeness; Sensitivity Analysis

Indexed keywords

LEARNING SYSTEMS; PERTURBATION TECHNIQUES; SENSITIVITY ANALYSIS

EID: 0035270526     PISSN: 0169-2968     EISSN: None     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 7

References (63)
  • 1
    • Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning Internal Representation by Error Propagation, in: Parallel Distributed Processing (D.E. Rumelhart, J.L. McClelland, Eds.), Vol. 1, MIT Press, Cambridge, Mass., 1986.
  • 2
    • Denoeux, T., Lengellé, R.: Initializing Back Propagation Networks using Prototypes, Neural Networks, 6(3), 1993, 351-363.
  • 3
    • Wessels, L.F.A., Barnard, E.: Avoiding False Local Minima by Proper Initialization of Connections, IEEE Transactions on Neural Networks, 3(6), 1992, 899-905.
  • 4
    • Magoulas, G.D., Vrahatis, M.N., Androulakis, G.S.: Effective Backpropagation Training with Variable Stepsize, Neural Networks, 10(1), 1997, 69-82.
  • 5
    • Salomon, R., Van Hemmen, J.L.: Accelerating Backpropagation through Dynamic Self-Adaptation, Neural Networks, 9(4), 1996, 589-601.
  • 6
    • Weir, M.K.: A Method for Self-Determination of Adaptive Learning Rates in Back Propagation, Neural Networks, 1990, 371-379.
  • 7
    • Yu, X.-H., Chen, G.-A.: Efficient Backpropagation Learning using Optimal Learning Rate and Momentum, Neural Networks, 10(3), 1997, 517-527.
  • 9
    • Engelbrecht, A.P., Cloete, I.: A Sensitivity Analysis Algorithm for Pruning Feedforward Neural Networks, IEEE International Conference on Neural Networks, Washington, Vol. 2, 1996, 1274-1277.
  • 11
    • Hassibi, B., Stork, D.G., Wolff, G.: Optimal Brain Surgeon: Extensions and Performance Comparisons, in: Advances in Neural Information Processing Systems (J.D. Cowan, G. Tesauro, J. Alspector, Eds.), Vol. 6, 1994, 263-270.
  • 13
    • Zurada, J.M., Malinowski, A., Usui, S.: Perturbation Method for Deleting Redundant Inputs of Perceptron Networks, Neurocomputing, 14, 1997, 177-193.
  • 14
    • Hirose, Y., Yamashita, K., Hijiya, S.: Back-Propagation Algorithm which Varies the Number of Hidden Units, Neural Networks, 4, 1991, 61-66.
  • 16
    • Lim, S.-F., Ho, S.-B.: Dynamic Creation of Hidden Units with Selective Pruning in Backpropagation, World Congress on Neural Networks, Vol. 3, 1994, 492-497.
  • 17
    • Battiti, R.: First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method, Neural Computation, 4, 1992, 141-166.
  • 19
    • Møller, M.F.: A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning, Neural Networks, 6, 1993, 525-533.
  • 21
    • Tang, Z., Koehler, G.J.: Deterministic Global Optimal FNN Training Algorithm, Neural Networks, 7(2), 1994, 301-311.
  • 22
    • Van den Bergh, F., Engelbrecht, A.P.: Cooperative Learning in Neural Networks using Particle Swarm Optimizers, South African Computer Journal, 26, 2000, 84-90.
  • 23
    • Engelbrecht, A.P., Cloete, I., Geldenhuys, J., Zurada, J.M.: Automatic Scaling using Gamma Learning for Feedforward Neural Networks, International Workshop on Artificial Neural Networks, Torremolinos, Spain, in: "From Natural to Artificial Neural Computing," Lecture Notes in Computer Science (J. Mira, F. Sandoval, Eds.), Vol. 930, 1995, 374-381.
  • 26
    • Abu-Mostafa, Y.S.: The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning, Neural Computation, 1, 1989, 312-317.
  • 27
    • Cohn, D., Tesauro, G.: Can Neural Networks do Better than the Vapnik-Chervonenkis Bounds?, Advances in Neural Information Processing Systems (R. Lippmann, J. Moody, D.S. Touretzky, Eds.), Vol. 3, 1991, 911-917.
  • 28
    • Hole, A.: Vapnik-Chervonenkis Generalization Bounds for Real Valued Neural Networks, Neural Computation, 8, 1996, 1277-1299.
  • 29
    • Opper, M.: Learning and Generalization in a Two-Layer Neural Network: The Role of the Vapnik-Chervonenkis Dimension, Physical Review Letters, 72(13), 1994, 2133-2166.
  • 30
    • Gu, H., Takahashi, H.: Estimating Learning Curves of Concept Learning, Neural Networks, 10(6), 1997, 1089-1102.
  • 32
    • Zhang, B.-T.: Accelerated Learning by Active Example Selection, International Journal of Neural Systems, 5(1), 1994, 67-75.
  • 34
    • Lange, R., Männer, R.: Quantifying a Critical Training Set Size for Generalization and Overfitting using Teacher Neural Networks, International Conference on Artificial Neural Networks, Vol. 1, 1994, 497-500.
  • 36
    • Cohn, D., Atlas, L., Ladner, R.: Improving Generalization with Active Learning, Machine Learning, 15, 1994, 201-221.
  • 37
    • Hunt, S.D., Deller, J.R., Jr.: Selective Training of Feedforward Artificial Neural Networks using Matrix Perturbation Theory, Neural Networks, 8(6), 1995, 931-944.
  • 38
    • Cohn, D.A.: Neural Network Exploration using Optimal Experiment Design, AI Memo No. 1491, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 1994.
  • 40
    • Barnard, E.: Performance and Generalization of the Classification Figure of Merit Criterion Function, IEEE Transactions on Neural Networks, 2(2), 1991, 322-325.
  • 41
    • Hampshire, J.B., Waibel, A.H.: A Novel Objective Function for Improved Phoneme Recognition using Time-Delay Neural Networks, IEEE Transactions on Neural Networks, 1(2), 1990, 216-228.
  • 42
    • Ohnishi, N., Okamoto, A., Sugie, N.: Selective Presentation of Learning Samples for Efficient Learning in Multi-Layer Perceptron, International Joint Conference on Neural Networks, Vol. 1, 1990, 688-691.
  • 44
    • Slade, P., Gedeon, T.D.: Bimodal Distribution Removal, International Workshop on Artificial Neural Networks, in: "New Trends in Neural Computation," Lecture Notes in Computer Science (J. Mira, J. Cabestany, A. Prieto, Eds.), Springer-Verlag, Berlin, 1993, 249-254.
  • 45
    • Gedeon, T.D., Wong, P.M., Harris, D.: Balancing Bias and Variance: Network Topology and Pattern Set Reduction Techniques, International Workshop on Artificial Neural Networks, in: "From Natural to Artificial Neural Computing," Lecture Notes in Computer Science (J. Mira, F. Sandoval, Eds.), Vol. 930, 1995, 551-558.
  • 51
    • Fukumizu, K.: Active Learning in Multilayer Perceptrons, Advances in Neural Information Processing Systems (D.S. Touretzky, M.C. Mozer, M.E. Hasselmo, Eds.), Vol. 8, 1996.
  • 54
    • Baum, E.B.: Neural Net Algorithms that Learn in Polynomial Time from Examples and Queries, IEEE Transactions on Neural Networks, 2(1), 1991, 5-19.
  • 58
    • Engelbrecht, A.P.: Sensitivity Analysis for Decision Boundaries, Neural Processing Letters, 10(3), 1999, 253-266.
  • 61
    • University of California, Irvine, Machine Learning Repository, http://www.ics.uci.edu/mlearn/MLRepository.html


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.