1. N. K. Bose and P. Liang, Neural Network Fundamentals with Graphs, Algorithms, and Applications. New Delhi, India: Tata McGraw-Hill, 1998.
4. B. Widrow and M. A. Lehr, "30 years of adaptive neural networks: Perceptron, madaline, and backpropagation," Proc. IEEE, vol. 78, pp. 1415-1442, 1990.
5. A. R. Gallant and H. White, "There exists a neural network that does not make avoidable mistakes," in Proc. Second Int. Joint Conf. Neural Networks, vol. I, 1988, pp. 593-606.
6. G. Cybenko, "Approximation by superposition of a sigmoidal function," Math. Contr., Signal, and Syst., vol. 5, pp. 233-243, 1989.
7. K. Funahashi, "On the approximate realization of continuous mappings by neural networks," Neural Networks, vol. 2, pp. 183-192, 1989.
8. K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, pp. 359-366, 1989.
9. K. Hornik, "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks," Neural Networks, vol. 3, pp. 551-560, 1990.
10. M. Stinchcombe and H. White, "Approximating and learning unknown mappings using multilayer feedforward networks with bounded weights," in Proc. IJCNN, vol. III, 1990, pp. 7-16.
11. K. Hornik, "Approximation capabilities of multilayer feedforward networks," Neural Networks, vol. 4, pp. 251-257, 1991.
12. A. R. Barron, "Universal approximation bounds for superposition of a sigmoidal function," IEEE Trans. Inform. Theory, vol. 39, pp. 930-945, 1993.
14. M. B. Stinchcombe, "Neural network approximation of continuous functionals and continuous functions on compactifications," Neural Networks, vol. 12, pp. 467-477, 1999.
15. J. L. Castro, C. J. Mantas, and J. M. Benítez, "Neural networks with a continuous squashing function in the output are universal approximators," Neural Networks, vol. 13, pp. 561-563, 2000.
17. B. DasGupta and G. Schnitger, "The power of approximating: A comparison of activation functions," in Advances in Neural Information Processing Systems, C. L. Giles, S. J. Hanson, and J. D. Cowan, Eds. San Mateo, CA: Morgan Kaufmann, 1993, vol. 5, pp. 615-622.
18. K. Hornik, M. Stinchcombe, H. White, and P. Auer, "Degree of approximation results for feedforward networks approximating unknown mappings and their derivatives," NeuroCOLT, Tech. Rep. NC-TR-95-004, 1995.
19. H. Mhaskar, "Neural networks for optimal approximation of smooth and analytic functions," Neural Comput., vol. 8, pp. 164-177, 1996.
20. M. Schmitt, "Lower bounds on the complexity of approximating continuous functions by sigmoidal neural networks," 2000. [Online]. Available: http://www.neurocolt.com
21. T. Poggio and F. Girosi, "Networks and the Best Approximation Property," Artificial Intelligence Laboratory, Mass. Inst. Technol., MA, Tech. Rep. AI Memo no. 1164, 1989.
22. P. Chandra and Y. Singh, "Bounded weight sigmoidal FFANN's have the best approximation property," IEEE Trans. Neural Networks, 2003, submitted for publication.
23. P. Chandra, "Closure property of function sets represented by feedforward sigmoidal networks," Int. J. Neural Systems, 2003, submitted for publication.
24. G. Bolt, "Fault Tolerant Multi-Layer Perceptron Networks," Advanced Computer Architecture Group, Dept. Computer Sci., Univ. York, U.K., Tech. Rep. YCS 180, 1992.
25. Y. Singh and P. Chandra, "A class +1 sigmoidal activation functions for FFANNs," J. Economic Dyn. Contr., vol. 28, pp. 183-187, 2003.
26. J. Jost, Postmodern Analysis. Berlin, Germany: Springer-Verlag, 1998.
30. M. J. Carter, "The illusion of fault tolerance of neural nets for pattern recognition and signal processing," in Proc. Tech. Session on Fault-Tolerant Integrated Systems, Durham, NH, 1988.
32. D. S. Phatak and I. Koren, "Complete and partial fault tolerance of feed-forward neural nets," IEEE Trans. Neural Networks, vol. 6, pp. 446-456, 1995.
33. B. E. Segee and M. J. Carter, "Comparative fault-tolerance of parallel distributed processing networks," IEEE Trans. Comput., vol. 43, pp. 1323-1329, 1994.
34. J. I. Minnix, "Fault tolerance of backpropagation neural networks trained with noisy data," in IJCNN'91, vol. 1, 1991, pp. 703-708.
35. K. Matsuoka, "Noise injection into inputs in backpropagation learning," IEEE Trans. Syst., Man, Cybern., vol. 22, pp. 436-440, 1992.
36. L. Holmstrom and P. Koistinen, "Using additive noise in backpropagation training," IEEE Trans. Neural Networks, vol. 3, pp. 24-28, 1992.
37. A. F. Murray and P. J. Edwards, "Synaptic weight noise during MLP training: Enhanced MLP performance and fault tolerance resulting from synaptic noise during training," IEEE Trans. Neural Networks, vol. 5, pp. 792-802, 1994.
38. P. Chandra and Y. Singh, "Fault tolerance of feedforward artificial neural networks - A framework of study," in IJCNN'03, 2003, pp. 489-494.
39. P. Chandra, "Regularization and feedforward artificial neural network training with noise," in IJCNN'03, 2003, pp. 2366-2371.
40. G. Z. An, "The effect of adding noise during backpropagation training on a generalization performance," Neural Comput., vol. 8, pp. 643-674, 1996.
41. K. Jim, C. L. Giles, and B. G. Horne, "An analysis of noise in recurrent neural networks: Convergence and generalization," IEEE Trans. Neural Networks, vol. 7, pp. 1424-1439, 1996.
42. C. H. Sequin and R. D. Clay, "Fault tolerance in artificial neural networks," in IJCNN'90, vol. 1, 1990, pp. 703-708.
44. N. C. Hammadi and H. Ito, "A learning algorithm for fault tolerant feedforward networks," IEICE Trans. Inform. Syst., vol. E80-D, pp. 21-27, 1997.
45. C.-T. Chin, K. Mehrotra, C. K. Mohan, and S. Ranka, "Training techniques to obtain fault tolerant neural networks," in Proc. Int. Symp. Fault Tolerant Computing, Austin, TX, 1994, pp. 360-369.
46. N. C. Hammadi and H. Ito, "On the activation function and fault tolerance in feedforward artificial neural networks," IEICE Trans. Inform. Syst., vol. E81-D, pp. 66-72, 1998.
47. M. D. Emmerson and R. I. Damper, "Determining and improving the fault tolerance of multilayer perceptrons in a pattern recognition application," IEEE Trans. Neural Networks, vol. 4, pp. 788-793, 1993.