IEEE Transactions on Neural Networks, Volume 8, Issue 3, 1997, Pages 623-629

On convergence properties of pocket algorithm

Muselli, Marco (a)

(a) CNR (Italy)

Author keywords

Convergence theorems; Neural networks; Optimal learning; Perceptron algorithm; Pocket algorithm; Threshold neuron

Indexed keywords

Convergence of numerical methods; Iterative methods; Optimization; Theorem proving; Vectors

EID: 0031145145     PISSN: 1045-9227     EISSN: None     Source Type: Journal
DOI: 10.1109/72.572101     Document Type: Article
Times cited: 29
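
For readers of this record, the method named in the title and author keywords, the pocket algorithm introduced by Gallant (reference 23 below), can be summarized briefly: run ordinary perceptron updates on a threshold neuron while keeping a separate "pocket" copy of the best weight vector found so far, so that a usable solution is retained even when the training set is not linearly separable. The following is a minimal illustrative sketch in Python, not code from the paper; the names are chosen here for illustration, and it scores candidate weights by training accuracy, which is closer to Gallant's "pocket with ratchet" variant than to the pure run-length test usually studied in convergence analyses.

    # Minimal sketch of a pocket-style perceptron (illustrative, not from the paper).
    import numpy as np

    def pocket_train(X, y, epochs=100, seed=0):
        """Train a threshold neuron on labels y in {-1, +1} and return the
        'pocket' weights, i.e., the weights that classified the most
        training samples correctly during the run."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        Xb = np.hstack([X, np.ones((n, 1))])          # append a bias input
        w = np.zeros(d + 1)                           # current perceptron weights
        pocket_w = w.copy()                           # best weights seen so far
        pocket_correct = int(np.sum(np.sign(Xb @ pocket_w) == y))

        for _ in range(epochs):
            for i in rng.permutation(n):              # visit samples in random order
                if y[i] * (Xb[i] @ w) <= 0:           # misclassified: perceptron update
                    w = w + y[i] * Xb[i]
                    correct = int(np.sum(np.sign(Xb @ w) == y))
                    if correct > pocket_correct:      # ratchet: keep the better weights
                        pocket_correct = correct
                        pocket_w = w.copy()
        return pocket_w, pocket_correct

On a linearly separable training set this behaves like the ordinary perceptron; the question addressed by the paper is whether the pocket weights converge to an optimal configuration as training proceeds.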

References (28)
  • 1. R. P. Lippmann, "Review of neural networks for speech recognition," Neural Computa., vol. 1, pp. 1-38, 1989.
  • 3. I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla, "Structural risk minimization for character recognition," in Advances in Neural Information Processing Systems 4, J. E. Moody, S. J. Hanson, and R. P. Lippmann, Eds. San Mateo, CA: Morgan Kaufmann, 1992, pp. 471-479.
  • 4. A. Lapedes and R. Farber, "How neural nets work," in Neural Information Processing Systems, D. Z. Anderson, Ed. New York: Amer. Inst. Physics, 1987, pp. 442-456.
  • 5. G. Cybenko, "Approximation by superpositions of a sigmoidal function," Math. Contr., Signals, Syst., vol. 2, pp. 303-314, 1989.
  • 6. K. Funahashi, "On the approximate realization of continuous mappings by neural networks," Neural Networks, vol. 2, pp. 183-192, 1989.
  • 7. K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, pp. 359-366, 1989.
  • 8. V. N. Vapnik and A. Y. Chervonenkis, "On the uniform convergence of relative frequencies of events to their probabilities," Theory Probability Applicat., vol. 16, pp. 264-280, 1971.
  • 10. V. Vapnik, "Principles of risk minimization for learning theory," in Advances in Neural Information Processing Systems 4, J. E. Moody, S. J. Hanson, and R. P. Lippmann, Eds. San Mateo, CA: Morgan Kaufmann, 1992, pp. 831-838.
  • 11. V. Vapnik and L. Bottou, "Local algorithms for pattern recognition and dependencies estimation," Neural Computa., vol. 5, pp. 893-909, 1993.
  • 12. J. S. Judd, "Learning in networks is hard," in Proc. 1st Int. Conf. Neural Networks, San Diego, CA, 1987, pp. 685-692.
  • 13. A. Blum and R. L. Rivest, "Training a 3-node neural network is NP-complete," Neural Networks, vol. 5, pp. 117-127, 1992.
  • 14. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, 1986, pp. 318-362.
  • 20. Y.-C. Ho and R. L. Kashyap, "An algorithm for linear inequalities and its applications," IEEE Trans. Electron. Computers, vol. EC-14, pp. 683-688, 1965.
  • 21. L. G. Khachiyan, "A polynomial algorithm in linear programming," Sov. Math. Doklady, vol. 20, pp. 191-194, 1979.
  • 22. A. J. Mansfield, "Comparison of perceptron training by linear programming and by the perceptron convergence procedure," in Proc. Int. Joint Conf. Neural Networks, Seattle, WA, 1991, pp. II-25-II-30.
  • 23. S. I. Gallant, "Perceptron-based learning algorithms," IEEE Trans. Neural Networks, vol. 1, pp. 179-191, 1990.
  • 25. M. Mézard and J.-P. Nadal, "Learning in feedforward layered networks: The tiling algorithm," J. Phys. A, vol. 22, pp. 2191-2203, 1989.
  • 26. M. Frean, "The upstart algorithm: A method for constructing and training feedforward neural networks," Neural Computa., vol. 2, pp. 198-209, 1990.
  • 27. M. Muselli, "On sequential construction of binary neural networks," IEEE Trans. Neural Networks, vol. 6, pp. 678-690, 1995.
  • 28. M. Muselli, "Simple expressions for success run distributions in Bernoulli trials," to appear in Statist. Probability Lett., 1996.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.