Neural Computation, Volume 8, Issue 3, 1996, Pages 643-674

The Effects of Adding Noise during Backpropagation Training on a Generalization Performance

Author keywords: [No author keywords available]
Indexed keywords: [not listed]


EID: 2342565172     PISSN: 0899-7667     EISSN: None     Source Type: Journal
DOI: 10.1162/neco.1996.8.3.643     Document Type: Article
Times cited: 475

References (38)
  • 2. Bishop, C. M. 1995. Training with noise is equivalent to Tikhonov regularization. Neural Comp. 7, 108-116.
  • 3. Bottou, L. 1991. Stochastic gradient learning in neural networks. NEURONIMES'91, EC2, Nanterre, France, pp. 687-606.
  • 4. Chauvin, Y. 1989. A back-propagation algorithm with optimal use of hidden units. In Advances in Neural Information Processing Systems 1, D. S. Touretzky, ed., pp. 519-526. Morgan Kaufmann, San Mateo, CA.
  • 5. Clay, R., and Sequin, C. 1992. Fault tolerance training improves generalization and robustness. Proc. Int. Joint Conf. Neural Networks, IEEE Neural Council, Baltimore, I-769-774.
  • 6. Drucker, H., and Le Cun, Y. 1992. Improving generalisation performance using double back-propagation. IEEE Trans. Neural Networks 3, 991-997.
  • 7. Geman, S., and Hwang, C. 1986. Diffusion for global optimization. SIAM J. Control Optim. 25, 1031-1043.
  • 11. Hanson, S. J. 1990. A stochastic version of the delta rule. Physica D 42, 265-272.
  • 16. Kendall, G. D., and Hall, T. J. 1993. Optimal network construction by minimum description length. Neural Comp. 5, 210-212.
  • 18. Kirkpatrick, S., Gelatt, C., and Vecchi, M. 1983. Optimization by simulated annealing. Science 220, 671-680.
  • 19. Krogh, A., and Hertz, J. A. 1992. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems 4 (NIPS 91), J. E. Moody et al., eds., pp. 950-957. Morgan Kaufmann, San Mateo, CA.
  • 20. Kushner, H. 1987. Asymptotic global behavior for stochastic approximation and diffusions with slowly decreasing noise effects: Global minimization via Monte Carlo. SIAM J. Appl. Math. 47, 169-185.
  • 21. MacKay, D. J. C. 1992. Bayesian interpolation. Neural Comp. 4, 415-447.
  • 22. Matsuoka, K. 1992. Noise injection into inputs in back-propagation learning. IEEE Trans. Syst. Man Cybern. 22, 436-440.
  • 23. Murray, A. F., and Edwards, P. J. 1993. Synaptic weight noise during multilayer perceptron training: Fault tolerance and training improvements. IEEE Trans. Neural Networks 4, 722-725.
  • 24. Murray, A. F., and Edwards, P. J. 1994. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Trans. Neural Networks 5, 792-802.
  • 26. Nowlan, S. J., and Hinton, G. E. 1992. Simplifying neural networks by soft weight-sharing. Neural Comp. 4, 473-493.
  • 28. Poggio, T., and Girosi, F. 1990. Networks for approximation and learning. Proc. IEEE 78, 1481-1497.
  • 29. Reed, R., Oh, S., and Marks, R. J., II. 1992. Regularization using jittered training data. Proc. Int. Joint Conf. Neural Networks, IEEE Neural Council, Baltimore, III-147.
  • 31. Rögnvaldsson, T. 1994. On Langevin updating in multilayer perceptrons. Neural Comp. 6, 916-926.
  • 32. Rumelhart, D. E., Hinton, G. E., and Williams, R. J. 1986. Learning representations by back-propagating errors. Nature (London) 323, 533-536.
  • 33. Seung, H., Sompolinsky, H., and Tishby, N. 1992. Statistical mechanics of learning from examples. Phys. Rev. A 45, 6056-6091.
  • 37. Weigend, A., Rumelhart, D., and Huberman, B. 1991. Generalization by weight-elimination with application to forecasting. In Advances in Neural Information Processing Systems 3 (NIPS 90), R. P. Lippmann, J. E. Moody, and D. S. Touretzky, eds., pp. 875-882. Morgan Kaufmann, San Mateo, CA.
  • 38. White, H. 1989. Learning in artificial neural networks: A statistical perspective. Neural Comp. 1, 425-464.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.