1. L. B. Almeida, T. Langlois, J. D. Amaral, and A. Plakhov. Parameter adaptation in stochastic optimization. In D. Saad, editor, On-line Learning in Neural Networks, chapter 6, pages 111-134. Cambridge University Press, 1999.
2. R. Battiti. First and second order methods for learning: Between steepest descent and Newton's method. Neural Computation, 4(2):141-166, 1992.
3. E. M. L. Beale. A derivation of conjugate gradients. In F. A. Lootsma, editor, Numerical Methods for Nonlinear Optimization, pages 39-43. Academic Press, London, 1972.
4. F. Biegler-König and F. Bärmann. A learning algorithm for multilayered neural networks based on linear least-squares problems. Neural Networks, 6:127-131, 1993.
5. W. L. Buntine and A. S. Weigend. Computing second derivatives in feed-forward networks: A review. IEEE Transactions on Neural Networks, 5(3):480-488, 1994.
6. E. Castillo, J. M. Gutiérrez, and A. Hadi. Sensitivity analysis in discrete Bayesian networks. IEEE Transactions on Systems, Man and Cybernetics, 26(7):412-423, 1997.
7. E. Castillo, A. Cobo, J. M. Gutiérrez, and R. E. Pruneda. Working with differential, functional and difference equations using functional networks. Applied Mathematical Modelling, 23(2):89-107, 1999.
8. E. Castillo, A. Cobo, J. M. Gutiérrez, and R. E. Pruneda. Functional networks, a new neural network based methodology. Computer-Aided Civil and Infrastructure Engineering, 15(2):90-106, 2000.
9. E. Castillo, A. Conejo, P. Pedregal, R. Garcia, and N. Alguacil. Building and Solving Mathematical Programming Models in Engineering and Science. John Wiley & Sons Inc., New York, 2001.
10. E. Castillo, O. Fontenla-Romero, A. Alonso Betanzos, and B. Guijarro-Berdiñas. A global optimum approach for one-layer neural networks. Neural Computation, 14(6):1429-1449, 2002.
11. E. Castillo, A. S. Hadi, A. Conejo, and A. Fernández-Canteli. A general method for local sensitivity analysis with application to regression models and other optimization problems. Technometrics, 46(4):430-445, 2004.
12. E. Castillo, C. Castillo, A. Conejo, R. Minguez, and D. Ortigosa. A perturbation approach to sensitivity analysis in nonlinear programming. Journal of Optimization Theory and Applications, 128(1):49-74, 2006.
13. A. Chella, A. Gentile, F. Sorbello, and A. Tarantino. Supervised learning for feed-forward neural networks: A new minimax approach for fast convergence. Proceedings of the IEEE International Conference on Neural Networks, 1:605-609, 1993.
15. R. Collobert, Y. Bengio, and S. Bengio. Scaling large learning problems with hard parallel mixtures. International Journal of Pattern Recognition and Artificial Intelligence, 17(3):349-365, 2003.
17. G. P. Drago and S. Ridella. Statistically controlled activation weight initialization (SCAWI). IEEE Transactions on Neural Networks, 3:899-905, 1992.
18. R. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. Computer Journal, 7:149-154, 1964.
19. O. Fontenla-Romero, D. Erdogmus, J. C. Principe, A. Alonso-Betanzos, and E. Castillo. Linear least-squares based methods for neural networks learning. Lecture Notes in Computer Science, 2714:84-91, 2003.
20. M. T. Hagan and M. Menhaj. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6):989-993, 1994.
21. M. T. Hagan, H. B. Demuth, and M. H. Beale. Neural Network Design. PWS Publishing, Boston, MA, 1996.
24. D. R. Hush and J. M. Salas. Improving the learning rate of back-propagation with the gradient reuse algorithm. Proceedings of the IEEE Conference on Neural Networks, 1:441-447, 1988.
26. R. A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4):295-308, 1988.
27. Y. LeCun, I. Kanter, and S. A. Solla. Second order properties of error surfaces: Learning time and generalization. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Neural Information Processing Systems, volume 3, pages 918-924, San Mateo, CA, 1991. Morgan Kaufmann.
28. Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In G. B. Orr and K.-R. Müller, editors, Neural Networks: Tricks of the Trade, number 1524 in LNCS. Springer-Verlag, 1998.
29. K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164-168, 1944.
30. E. Ley. On the peculiar distribution of the U.S. stock indexes' first digits. The American Statistician, 50(4):311-314, 1996.
33. M. F. Moller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525-533, 1993.
34. D. Nguyen and B. Widrow. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. Proceedings of the International Joint Conference on Neural Networks, 3:21-26, 1990.
35. G. B. Orr and T. K. Leen. Using curvature information for fast stochastic search. In M. I. Jordan, M. C. Mozer, and T. Petsche, editors, Neural Information Processing Systems, volume 9, pages 606-612, Cambridge, 1996. MIT Press.
36. D. B. Parker. Optimal algorithms for adaptive networks: Second order back propagation, second order direct propagation, and second order Hebbian learning. Proceedings of the IEEE Conference on Neural Networks, 2:593-600, 1987.
37. S. Pethel, C. Bowden, and M. Scalora. Characterization of optical instabilities and chaos using MLP training algorithms. SPIE Chaos Opt, 2039:129-140, 1993.
38. M. J. D. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming, 12:241-254, 1977.
39. S. Ridella, S. Rovetta, and R. Zunino. Circular backpropagation networks for classification. IEEE Transactions on Neural Networks, 8(1):84-97, January 1997.
40. A. K. Rigler, J. M. Irvine, and T. P. Vogl. Rescaling of variables in back propagation learning. Neural Networks, 4:225-229, 1991.
41. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
42. N. N. Schraudolph. Fast curvature matrix-vector products for second order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
43. A. Sperduti and A. Starita. Speed up learning and network optimization with extended back propagation. Neural Networks, 6:365-383, 1993.
44. J. A. K. Suykens and J. Vandewalle, editors. Nonlinear Modeling: Advanced Black-Box Techniques. Kluwer Academic Publishers, Boston, 1998.
45. T. Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3:561-573, 1990.
46. T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59:257-263, 1988.
47. M. K. Weir. A method for self-determination of adaptive learning rates in back propagation. Neural Networks, 4:371-379, 1991.
48. B. M. Wilamowski, S. Iplikci, O. Kaynak, and M. O. Efe. An algorithm for fast convergence in training neural networks. Proceedings of the International Joint Conference on Neural Networks, 2:1778-1782, 2001.
49. J. Y. F. Yam, T. W. S. Chow, and C. T. Leung. A new method in determining the initial weights of feedforward neural networks. Neurocomputing, 16(1):23-32, 1997.