[1] E. B. Baum, “When are k-nearest neighbor and back propagation accurate for feasible sized sets of examples?,” in Neural Networks, Proc. EURASIP Workshop, L. B. Almeida and C. J. Wellekens, Eds., Feb. 1990, pp. 2–25.
[2] E. B. Baum and D. Haussler, “What size net gives valid generalization?,” Neural Computation, vol. 1, pp. 151–160, 1989.
[3] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth, “Learnability and the Vapnik-Chervonenkis dimension,” J. Ass. Comput. Mach., vol. 36, no. 4, pp. 929–965, 1989.
[4] Y. Chauvin, “A back-propagation algorithm with optimal use of hidden units,” in Advances in Neural Information Processing (1) (Denver, 1988), D. S. Touretzky, Ed., 1989, pp. 519–526.
[5] Y. Chauvin, “Dynamic behavior of constrained back-propagation networks,” in Advances in Neural Information Processing (2) (Denver, 1989), D. S. Touretzky, Ed., 1990, pp. 642–649.
[6] Y. Chauvin, “Generalization performance of overtrained backpropagation networks,” in Neural Networks, Proc. EURASIP Workshop, L. B. Almeida and C. J. Wellekens, Eds., Feb. 1990, pp. 46–55.
[7] Y. Le Cun, J. S. Denker, and S. A. Solla, “Optimal brain damage,” in Advances in Neural Information Processing (2) (Denver, 1989), D. S. Touretzky, Ed., 1990, pp. 598–605.
[8] A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant, “A general lower bound on the number of examples needed for learning,” in Proc. 1988 Workshop Computational Learning Theory, 1988.
[9] S. J. Hanson and L. Y. Pratt, “Comparing biases for minimal network construction with back-propagation,” in Advances in Neural Information Processing (1) (Denver, 1988), D. S. Touretzky, Ed., 1989, pp. 177–185.
[10] F. Hergert, W. Finnoff, and H. G. Zimmermann, “A comparison of weight elimination methods for reducing complexity in neural networks,” in Proc. Int. Joint Conf. Neural Networks, vol. III (San Diego, CA), 1992, pp. 980–987.
[11] M. Ishikawa, “A structural learning algorithm with forgetting of link weights,” Tech. Rep. TR-90-7, Electrotechnical Lab., Tsukuba-City, Japan, 1990.
[12] C. Ji, R. R. Snapp, and D. Psaltis, “Generalizing smoothness constraints from discrete samples,” Neural Computation, vol. 2, no. 2, pp. 188–197, 1990.
[13] E. D. Karnin, “A simple procedure for pruning back-propagation trained neural networks,” IEEE Trans. Neural Networks, vol. 1, no. 2, pp. 239–242, 1990.
[14] A. Krogh and J. A. Hertz, “A simple weight decay can improve generalization,” in Advances in Neural Information Processing (4), J. E. Moody, S. J. Hanson, and R. P. Lippmann, Eds., 1992, pp. 951–957.
[15] J. K. Kruschke, “Creating local and distributed bottlenecks in hidden layers of back-propagation networks,” in Proc. 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton, and T. Sejnowski, Eds., 1988, pp. 120–126.
[16] J. K. Kruschke, “Improving generalization in back-propagation networks with distributed bottlenecks,” in Proc. Int. Joint Conf. Neural Networks, vol. I (Washington, DC), 1989, pp. 443–447.
[17] S. Y. Kung and Y. H. Hu, “A Frobenius approximation reduction method (FARM) for determining optimal number of hidden units,” in Proc. Int. Joint Conf. Neural Networks, vol. II (Seattle), 1991, pp. 163–168.
[18] E. Levin, N. Tishby, and S. A. Solla, “A statistical approach to learning and generalization in layered neural networks,” Proc. IEEE, vol. 78, no. 10, pp. 1568–1574, Oct. 1990.
[19] M. C. Mozer and P. Smolensky, “Skeletonization: A technique for trimming the fat from a network via relevance assessment,” in Advances in Neural Information Processing (1) (Denver, 1988), D. S. Touretzky, Ed., 1989, pp. 107–115.
[20] S. J. Nowlan and G. E. Hinton, “Simplifying neural networks by soft weight-sharing,” Neural Computation, vol. 4, no. 4, pp. 473–493, 1992.
[21] O. M. Omidvar and C. L. Wilson, “Optimization of neural network topology and information content using Boltzmann methods,” in Proc. Int. Joint Conf. Neural Networks, vol. IV (Baltimore), 1992, pp. 594–599.
[22] D. C. Plaut, S. J. Nowlan, and G. E. Hinton, “Experiments on learning by back propagation,” Tech. Rep. CMU-CS-86-126, Carnegie-Mellon Univ., 1986.
[23] S. Ramachandran and L. Y. Pratt, “Information measure based skeletonisation,” in Advances in Neural Information Processing (4), J. E. Moody, S. J. Hanson, and R. P. Lippmann, Eds., 1992, pp. 1080–1087.
[24] A. Sankar and R. J. Mammone, “Optimal pruning of neural tree networks for improved generalization,” in Proc. Int. Joint Conf. Neural Networks, vol. II (Seattle), 1991, pp. 219–224.
[25] D. B. Schwartz, V. K. Samalan, S. A. Solla, and J. S. Denker, “Exhaustive learning,” Neural Computation, vol. 2, no. 3, pp. 374–385, 1990.
[26] B. E. Segee and M. J. Carter, “Fault tolerance of pruned multilayer networks,” in Proc. Int. Joint Conf. Neural Networks, vol. II (Seattle), 1991, pp. 447–452.
[27] J. Sietsma and R. J. F. Dow, “Creating artificial neural networks that generalize,” Neural Networks, vol. 4, no. 1, pp. 67–69, 1991.
[29] W. E. Simon and J. R. Carter, “Learning to identify letters with REM equations,” in Proc. Int. Joint Conf. Neural Networks, vol. I (Washington, DC), 1990, pp. 727–730.
[30] W. E. Simon and J. R. Carter, “Removing and adding network connections with recursive error minimization (REM) equations,” in Applications of Artificial Neural Networks, S. K. Rogers, Ed., 1990, pp. 600–606.
[31] N. Tishby, E. Levin, and S. A. Solla, “Consistent inference of probabilities in layered networks: Predictions and generalization,” in Proc. Int. Joint Conf. Neural Networks, 1989, p. 403.
[32] L. G. Valiant, “A theory of the learnable,” Commun. Ass. Comput. Mach., vol. 27, no. 11, pp. 1134–1142, 1984.
[33] J. H. Wang, T. F. Krile, and J. F. Walkup, “Reduction of interconnection weights in higher order associative memory networks,” in Proc. Int. Joint Conf. Neural Networks, vol. II (Seattle), 1991, pp. 177–182.
[34] A. S. Weigend, D. E. Rumelhart, and B. A. Huberman, “Back-propagation, weight-elimination and time series prediction,” in Proc. 1990 Connectionist Models Summer School, D. Touretzky, J. Elman, T. Sejnowski, and G. Hinton, Eds., 1990, pp. 105–116.
[35] A. S. Weigend, D. E. Rumelhart, and B. A. Huberman, “Generalization by weight-elimination applied to currency exchange rate prediction,” in Proc. Int. Joint Conf. Neural Networks, vol. I (Seattle), 1991, pp. 837–841.
[36] A. S. Weigend, D. E. Rumelhart, and B. A. Huberman, “Generalization by weight-elimination with application to forecasting,” in Advances in Neural Information Processing (3), R. Lippmann, J. Moody, and D. Touretzky, Eds., 1991, pp. 875–882.
[37] D. Whitley and C. Bogart, “The evolution of connectivity: Pruning neural networks using genetic algorithms,” in Proc. Int. Joint Conf. Neural Networks, vol. I (Washington, DC), 1990, p. 134.