Computational Statistics & Data Analysis, Volume 52, Issue 10, 2008, Pages 4658-4672

Developing a feature weight self-adjustment mechanism for a K-means clustering algorithm

Author keywords

[No Author keywords available]

Indexed keywords

MATHEMATICAL MODELS; SET THEORY;

EID: 44349127297     PISSN: 0167-9473     EISSN: None     Source Type: Journal
DOI: 10.1016/j.csda.2008.03.002     Document Type: Article
Times cited: 136
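
A minimal illustrative sketch of the topic named in the title: a feature-weighted K-means in which the per-feature weights are re-adjusted at every iteration from the current partition. The weight-update rule, the function weighted_kmeans, and the toy data below are assumptions made for illustration only; they are not the article's actual self-adjustment mechanism.

# Illustrative feature-weighted K-means (Python/NumPy). The weight update used
# here (between-cluster versus total dispersion per feature) is an assumed,
# generic rule, not the formula from the article.
import numpy as np

def weighted_kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, size=k, replace=False)]  # random initial centers
    w = np.full(d, 1.0 / d)                            # equal feature weights to start

    for _ in range(n_iter):
        # Assignment step: weighted squared Euclidean distance to each center.
        diff = X[:, None, :] - centers[None, :, :]     # shape (n, k, d)
        labels = (w * diff ** 2).sum(axis=2).argmin(axis=1)

        # Update step: centers become the means of their clusters.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)

        # Weight self-adjustment (assumed rule): features whose within-cluster
        # spread is small relative to their total spread receive larger weights.
        within = np.zeros(d)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                within += ((members - centers[j]) ** 2).sum(axis=0)
        total = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
        w = np.maximum((total - within) / np.maximum(total, 1e-12), 1e-12)
        w /= w.sum()                                   # keep weights summing to 1

    return labels, centers, w

if __name__ == "__main__":
    # Toy data: feature 0 separates two clusters, feature 1 is pure noise,
    # so the learned weight of feature 1 should end up comparatively small.
    rng = np.random.default_rng(1)
    X = np.vstack([
        np.column_stack([rng.normal(0, 0.3, 100), rng.normal(0, 2.0, 100)]),
        np.column_stack([rng.normal(5, 0.3, 100), rng.normal(0, 2.0, 100)]),
    ])
    labels, centers, w = weighted_kmeans(X, k=2)
    print("learned feature weights:", np.round(w, 3))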

References (45)
  • 2
    • Blum, A.L., Rivest, R.L., 1992. Training a 3-node neural network is NP-complete. Neural Netw. 5, 117-127.
  • 3
    • Brusco, M.J., Cradit, J.D., 2001. A variable-selection heuristic for K-means clustering. Psychometrika 66, 249-270.
  • 4
    • Chan, E.Y., Ching, W.K., Ng, M.K., Huang, J.Z., 2004. An optimization algorithm for clustering using weighted dissimilarity measures. Pattern Recognit. 37, 943-952.
  • 5
    • Dash, M., Choi, K., Scheuermann, P., Liu, H., 2002. Feature selection for clustering - a filter solution. In: Proceedings of the 2002 IEEE International Conference on Data Mining, Maebashi, Japan, pp. 115-122.
  • 6
    • De Soete, G., 1986. Optimal variable weighting for ultrametric and additive tree clustering. Quality & Quantity 20, 169-180.
  • 7
    • De Soete, G., 1988. OVWTRE: A program for optimal variable weighting for ultrametric and additive tree fitting. J. Classif. 5, 101-104.
  • 8
    • DeSarbo, W.S., Carroll, J.D., Clark, L.A., Green, P.E., 1984. Synthesized clustering: A method for amalgamating clustering bases with differential weighting of variables. Psychometrika 49, 57-78.
  • 9
    • Devaney, M., Ram, A., 1997. Efficient feature selection in conceptual clustering. In: Proceedings of the Fourteenth International Conference on Machine Learning, Nashville, pp. 92-97.
  • 11
    • Dy, J.G., Brodley, C.E., 2000. Feature subset selection and order identification for unsupervised learning. In: Proceedings of the Seventeenth International Conference on Machine Learning, Stanford, pp. 247-254.
  • 12
    • Fayyad, U., Reina, C., Bradley, P.S., 1998. Initialization of iterative refinement clustering algorithms. In: Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining, New York, pp. 194-198.
  • 13
    • Forgy, E.W., 1965. Cluster analyses of multivariate data: Efficiency versus interpretability of classifications. Biometrics 21, 768-769.
  • 16
    • Gnanadesikan, R., Kettenring, J.R., Tsao, S.L., 1995. Weighting and selection of variables for cluster analysis. J. Classif. 12, 113-136.
  • 17
    • Hand, D.J., Krzanowski, W.J., 2005. Optimising K-means clustering results with standard software packages. Comput. Statist. Data Anal. 49, 969-973.
  • 18
    • Hansen, P., Mladenović, N., 2001. J-Means: A new local search heuristic for minimum sum of squares clustering. Pattern Recognit. 34, 405-413.
  • 20
    • He, Y., Pan, W., Lin, J., 2006. Cluster analysis using multivariate normal mixture models to detect differential gene expression with microarray data. Comput. Statist. Data Anal. 51, 641-658.
  • 23
    • Hubert, L., Arabie, P., 1985. Comparing partitions. J. Classif. 2, 193-218.
  • 24
    • Jing, L., Ng, M.K., Huang, J.Z., 2007. An entropy weighting k-means algorithm for subspace clustering of high-dimensional sparse data. IEEE Trans. Knowl. Data Eng. 19, 1026-1041.
  • 25
    • Kim, Y., Street, W.N., Menczer, F., 2000. Feature selection in unsupervised learning via evolutionary search. In: Proceedings of the 6th International Conference on Knowledge Discovery and Data Mining, Boston, pp. 365-369.
  • 27
    • Li, C., Yu, J., 2006. A novel fuzzy c-means clustering algorithm. In: Proceedings of the First International Conference on Rough Set and Knowledge Technology, Chongqing, China, pp. 510-515.
  • 28
    • Liu, H., Yu, L., 2005. Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 17, 491-502.
  • 29
    • MacQueen, J., 1967. Some methods for classification and analysis of multivariate observations. In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281-297.
  • 30
    • Makarenkov, V., Legendre, P., 2001. Optimal variable weighting for ultrametric and additive trees and K-means partitioning methods and software. J. Classif. 18, 245-271.
  • 31
    • Maulik, U., Bandyopadhyay, S., 2002. Performance evaluation of some clustering algorithms and validity indices. IEEE Trans. Pattern Anal. Mach. Intell. 24, 301-312.
  • 34
    • Modha, D.S., Spangler, W.S., 2003. Feature weighting in K-means clustering. Mach. Learn. 52, 217-237.
  • 35
    • Newman, D.J., Hettich, S., Blake, C.L., Merz, C.J., 1998. UCI Repository of Machine Learning Databases. URL: http://www.ics.uci.edu/~mlearn/MLSummary.html
  • 37
    • Pelleg, D., Moore, A.W., 2000. X-means: Extending K-means with efficient estimation of the number of clusters. In: Proceedings of the 17th International Conference on Machine Learning, pp. 727-734.
  • 39
    • Raftery, A.E., Dean, N., 2006. Variable selection for model-based clustering. J. Am. Stat. Assoc. 101, 168-178.
  • 42
    • Wettschereck, D., Aha, D.W., Mohri, T., 1997. A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artif. Intell. Rev. 11, 1-5.
  • 45
    • Zhang, B., Hsu, M., Dayal, U., 1999. K-harmonic means - a data clustering algorithm. Technical Report HPL-1999-124, Hewlett Packard Laboratories, Oct. 29, 1999.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.