2. Chakraborty D., Pal N.R. A novel training scheme for multilayered perceptrons to realize proper generalization and incremental learning. IEEE Transactions on Neural Networks 2003, 14:1-14.
3. Fine T.L., Mukherjee S. Parameter convergence and learning curves for neural networks. Neural Computation 1999, 11:747-769.
4. Finnoff W. Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. Neural Computation 1994, 6:285-295.
5. Heskes T., Wiegerinck W. A theoretical comparison of batch-mode, on-line, cyclic, and almost-cyclic learning. IEEE Transactions on Neural Networks 1996, 7:919-925.
8. Li Z.X., Wu W., Tian Y.L. Convergence of an online gradient method for feedforward neural networks with stochastic inputs. Journal of Computational and Applied Mathematics 2004, 163:165-176.
9. Liang Y.C., Feng D.P., Lee H.P., Lim S.P., Lee K.H. Successive approximation training algorithm for feedforward neural networks. Neurocomputing 2002, 42:311-322.
10. Mangasarian O.L., Solodov M.V. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optimization Methods and Software 1994, 4:117-134.
11. Nakama T. Theoretical analysis of batch and on-line training for gradient descent learning in neural networks. Neurocomputing 2009, 73:151-159.
12. Parker D.B. Learning-logic, invention report. Stanford University, Stanford, Calif., 1982.
13. Rumelhart D.E., Hinton G.E., Williams R.J. Learning representations by back-propagating errors. Nature 1986, 323:533-536.
15. Tadic V., Stankovic S. Learning in neural networks by normalized stochastic gradient algorithm: local convergence. In Proceedings of the 5th seminar on neural network applications in electronic engineering, 2000.
16. Sanger T.D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 1989, 2:459-473.
17. Werbos P.J. Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University, Cambridge, MA, 1974.
18. Wilson D.R., Martinez T.R. The general inefficiency of batch training for gradient descent learning. Neural Networks 2003, 16:1429-1451.
19. Wu W., Xu Y.S. Deterministic convergence of an on-line gradient method for neural networks. Journal of Computational and Applied Mathematics 2002, 144:335-347.
20. Wu W., Feng G.R., Li Z.X., Xu Y.S. Deterministic convergence of an online gradient method for BP neural networks. IEEE Transactions on Neural Networks 2005, 16:533-540.
21. Wu W., Feng G.R., Li X. Training multilayer perceptrons via minimization of sum of ridge functions. Advances in Computational Mathematics 2002, 17:331-347.
22. Wu W., Shao Z.Q. Convergence of online gradient methods for continuous perceptrons with linearly separable training patterns. Applied Mathematics Letters 2003, 16:999-1002.
23. Wu W., Shao H.M., Qu D. Strong convergence of gradient methods for BP networks training. In Proceedings of the 2005 international conference on neural networks and brain, 2005, pp. 332-334.
25. Zhang H.S., Wu W., Liu F., Yao M.C. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Transactions on Neural Networks 2009, 20:1050-1054.