
Volume 71, Issue 2-3, 2008, Pages 243-264

Efficient approximate leave-one-out cross-validation for kernel logistic regression

Author keywords

Kernel logistic regression; Model selection

Indexed keywords

ALGORITHMS; LEAST SQUARES APPROXIMATIONS; PATTERN RECOGNITION; PROBABILITY DISTRIBUTIONS

EID: 43049121679     PISSN: 0885-6125     EISSN: 1573-0565     Source Type: Journal
DOI: 10.1007/s10994-008-5055-9     Document Type: Article
Times cited : (78)

References (51)
  • 1. Allen, D. M. (1974). The relationship between variable selection and prediction. Technometrics, 16, 125-127.
  • 3. Bo, L., Wang, L., & Jiao, L. (2006). Feature scaling for kernel Fisher discriminant analysis using leave-one-out cross validation. Neural Computation, 18(4), 961-978.
  • 8. Cawley, G. C., & Talbot, N. L. C. (2003). Efficient leave-one-out cross-validation of kernel Fisher discriminant classifiers. Pattern Recognition, 36(11), 2585-2592.
  • 10. Cawley, G. C., & Talbot, N. L. C. (2004b). Fast leave-one-out cross-validation of sparse least-squares support vector machines. Neural Networks, 17(10), 1467-1475.
  • 11. Cawley, G. C., & Talbot, N. L. C. (2007). Preventing over-fitting in model selection via Bayesian regularization of the hyper-parameters. Journal of Machine Learning Research, 8, 841-861.
  • 12. Cawley, G. C., Talbot, N. L. C., Janacek, G. J., & Peck, M. W. (2006). Parametric accelerated life survival analysis using sparse Bayesian kernel learning methods. IEEE Transactions on Neural Networks, 17(2), 471-481.
  • 14. Chapelle, O. (2007). Training a support vector machine in the primal. Neural Computation, 19(5), 1155-1178.
  • 15. Chapelle, O., Vapnik, V., Bousquet, O., & Mukherjee, S. (2002). Choosing multiple parameters for support vector machines. Machine Learning, 46(1), 131-159.
  • 17. Cortes, C., & Vapnik, V. (1995). Support vector networks. Machine Learning, 20, 273-297.
  • 18. Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4(1), 1-58.
  • 19. Golub, G. H., & Van Loan, C. F. (1996). Matrix computations (3rd ed.). Baltimore: The Johns Hopkins University Press.
  • 22. Lachenbruch, P. A., & Mickey, M. R. (1968). Estimation of error rates in discriminant analysis. Technometrics, 10(1), 1-11.
  • 24. Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, A, 209, 415-446.
  • 30. Nabney, I. T. (1999). Efficient training of RBF networks for classification. In Proceedings of the ninth international conference on artificial neural networks (Vol. 1, pp. 210-215), Edinburgh, United Kingdom, 7-10 September 1999.
  • 31. Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. Computer Journal, 7, 308-313.
  • 32. Opper, M., & Winther, O. (2000). Gaussian processes for classification: mean-field algorithms. Neural Computation, 12(11), 2665-2684.
  • 38. Seaks, T. (1972). SYMINV: an algorithm for the inversion of a positive definite matrix by the Cholesky decomposition. Econometrica, 40(5), 961-962.
  • 40. Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, B, 36(1), 111-147.
  • 41. Sundararajan, S., & Keerthi, S. S. (2001). Predictive approaches for choosing hyperparameters in Gaussian processes. Neural Computation, 13(5), 1103-1118.
  • 42. Suykens, J. A. K., De Brabanter, J., Lukas, L., & Vandewalle, J. (2002a). Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing, 48(1-4), 85-105.
  • 46. Vapnik, V., & Chapelle, O. (2000). Bounds on error expectation for SVM. In A. J. Smola, P. L. Bartlett, B. Schölkopf, & D. Schuurmans (Eds.), Advances in large margin classifiers (pp. 261-280).
  • 49. Williams, P. M. (1991). A Marquardt algorithm for choosing the step-size in backpropagation learning with conjugate gradients. Cognitive Science Research Paper CSRP-229, University of Sussex, Brighton, UK, February 1991.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.