1. Allen, D. M. (1974). The relationship between variable selection and prediction. Technometrics, 16, 125-127.
2. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., & Sorensen, D. (1999). LAPACK users' guide (3rd ed.). Philadelphia: Society for Industrial and Applied Mathematics.
3. Bo, L., Wang, L., & Jiao, L. (2006). Feature scaling for kernel Fisher discriminant analysis using leave-one-out cross validation. Neural Computation, 18(4), 961-978.
4. Boser, B. E., Guyon, I. M., & Vapnik, V. (1992). A training algorithm for optimal margin classifiers. In D. Haussler (Ed.), Proceedings of the fifth annual ACM workshop on computational learning theory (pp. 144-152), Pittsburgh, PA, July 1992.
6. Brown, M. P. S., Grundy, W. N., Lin, D., Cristianini, N., Sugnet, C. W., Furey, T. S., Ares, M. Jr., & Haussler, D. (2000). Knowledge-based analysis of microarray gene expression data by using support vector machines. Proceedings of the National Academy of Sciences, 97(1), 262-267.
7. Cawley, G. C., Janacek, G. J., & Talbot, N. L. C. (2007). Generalised kernel machines. In Proceedings of the IEEE/INNS international joint conference on neural networks (IJCNN-07), Orlando, FL, USA, 12-17 August 2007.
8. Cawley, G. C., & Talbot, N. L. C. (2003). Efficient leave-one-out cross-validation of kernel Fisher discriminant classifiers. Pattern Recognition, 36(11), 2585-2592.
9. Cawley, G. C., & Talbot, N. L. C. (2004a). Efficient model selection for kernel logistic regression. In Proceedings of the 17th international conference on pattern recognition (ICPR-2004) (Vol. 2, pp. 439-442), Cambridge, United Kingdom, 23-26 August 2004.
10. Cawley, G. C., & Talbot, N. L. C. (2004b). Fast leave-one-out cross-validation of sparse least-squares support vector machines. Neural Networks, 17(10), 1467-1475.
11. Cawley, G. C., & Talbot, N. L. C. (2007). Preventing over-fitting in model selection via Bayesian regularization of the hyper-parameters. Journal of Machine Learning Research, 8, 841-861.
12. Cawley, G. C., Talbot, N. L. C., Janacek, G. J., & Peck, M. W. (2006). Parametric accelerated life survival analysis using sparse Bayesian kernel learning methods. IEEE Transactions on Neural Networks, 17(2), 471-481.
14. Chapelle, O. (2007). Training a support vector machine in the primal. Neural Computation, 19(5), 1155-1178.
15. Chapelle, O., Vapnik, V., Bousquet, O., & Mukherjee, S. (2002). Choosing multiple parameters for support vector machines. Machine Learning, 46(1), 131-159.
17. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20, 273-297.
18. Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4(1), 1-58.
19. Golub, G. H., & Van Loan, C. F. (1996). Matrix computations (3rd ed.). Baltimore: The Johns Hopkins University Press.
22. Lachenbruch, P. A., & Mickey, M. R. (1968). Estimation of error rates in discriminant analysis. Technometrics, 10(1), 1-11.
24. Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, A, 209, 415-446.
25. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., & Müller, K.-R. (1999). Fisher discriminant analysis with kernels. In Neural networks for signal processing (Vol. IX, pp. 41-48). New York: IEEE Press.
26. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A. J., & Müller, K.-R. (2000). Invariant feature extraction and classification in feature spaces. In S. A. Solla, T. K. Leen, & K.-R. Müller (Eds.), Advances in neural information processing systems (Vol. 12, pp. 526-532). Cambridge: MIT Press.
27. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., & Müller, K.-R. (2003). Constructing descriptive and discriminative features: Rayleigh coefficients in kernel feature spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5), 623-628.
29. Müller, K.-R., Mika, S., Rätsch, G., Tsuda, K., & Schölkopf, B. (2001). An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2), 181-201.
30. Nabney, I. T. (1999). Efficient training of RBF networks for classification. In Proceedings of the ninth international conference on artificial neural networks (Vol. 1, pp. 210-215), Edinburgh, United Kingdom, 7-10 September 1999.
31. Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308-313.
32. Opper, M., & Winther, O. (2000). Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11), 2665-2684.
34. Saunders, C., Gammerman, A., & Vovk, V. (1998). Ridge regression in dual variables. In J. Shavlik (Ed.), Proceedings of the fifteenth international conference on machine learning (ICML-1998). San Mateo: Morgan Kaufmann.
36. Schölkopf, B., Smola, A. J., & Müller, K. (1997). Kernel principal component analysis. In W. Gerstner, A. Germond, M. Hasler, & J.-D. Nicoud (Eds.), Lecture notes in computer science: Vol. 1327. Proceedings of the international conference on artificial neural networks (ICANN-1997) (pp. 583-588). Berlin: Springer.
37. Schölkopf, B., Herbrich, R., & Smola, A. J. (2002). A generalized representer theorem. In Proceedings of the fourteenth international conference on computational learning theory (pp. 416-426), Amsterdam, The Netherlands, 16-19 July 2002.
38. Seaks, T. (1972). SYMINV: An algorithm for the inversion of a positive definite matrix by the Cholesky decomposition. Econometrica, 40(5), 961-962.
40. Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, B, 36(1), 111-147.
41. Sundararajan, S., & Keerthi, S. S. (2001). Predictive approaches for choosing hyperparameters in Gaussian processes. Neural Computation, 13(5), 1103-1118.
42. Suykens, J. A. K., De Brabanter, J., Lukas, L., & Vandewalle, J. (2002a). Weighted least squares support vector machines: Robustness and sparse approximation. Neurocomputing, 48(1-4), 85-105.
43. Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., & Vandewalle, J. (2002b). Least squares support vector machines. Singapore: World Scientific.
46. Vapnik, V., & Chapelle, O. (2000). Bounds on error expectation for SVM. In A. J. Smola, P. L. Bartlett, B. Schölkopf, & D. Schuurmans (Eds.), Advances in large margin classifiers (pp. 261-280). Cambridge: MIT Press.
49. Williams, P. M. (1991). A Marquardt algorithm for choosing the step-size in backpropagation learning with conjugate gradients. Cognitive Science Research Paper CSRP-229, University of Sussex, Brighton, UK, February 1991.