Volume 60, Issue 2, 2009, Pages 282-291

A tabu search algorithm for the training of neural networks

Author keywords

Genetic algorithms; Heuristics; Neural networks; Simulated annealing; Supervised training; Tabu search

Indexed keywords

GENETIC ALGORITHMS; NEURAL NETWORKS; SIMULATED ANNEALING; TABU SEARCH;

EID: 58449100709     PISSN: 0160-5682     EISSN: 1476-9360     Source Type: Journal
DOI: 10.1057/palgrave.jors.2602535     Document Type: Article
Times cited: 24

References (29)
  • 1
    • 0026835298
    • Optimization for training neural nets
    • Barnard E (1992). Optimization for training neural nets. IEEE Trans Neural Networks 3: 232-240.
    • (1992) IEEE Trans Neural Networks , vol.3 , pp. 232-240
    • Barnard, E.1
  • 2
    • 0001024110
    • First- and second-order methods for learning between steepest descent and Newton's method
    • Battiti R (1992). First- and second-order methods for learning between steepest descent and Newton's method. Neural Comput 4:141-166.
    • (1992) Neural Comput , vol.4 , pp. 141-166
    • Battiti, R.1
  • 3
    • 0029373724
    • Training neural nets with the reactive tabu search
    • Battiti R and Tecchiolli G (1995). Training neural nets with the reactive tabu search. IEEE Trans Neural Networks 6: 1185-1200.
    • (1995) IEEE Trans Neural Networks , vol.6 , pp. 1185-1200
    • Battiti, R.1    Tecchiolli, G.2
  • 4
    • 0032205828
    • Evolutionary learning of modular neural networks with genetic programming
    • Cho SB and Shimohara K (1998). Evolutionary learning of modular neural networks with genetic programming. Appl Intell 9: 191-200.
    • (1998) Appl Intell , vol.9 , pp. 191-200
    • Cho, S.B.1    Shimohara, K.2
  • 5
    • 0025449198
    • Simulated annealing: A tool for operational research
    • Eglese RW (1990). Simulated annealing: a tool for operational research. Eur J Opl Res 46: 271-281.
    • (1990) Eur J Opl Res , vol.46 , pp. 271-281
    • Eglese, R.W.1
  • 7
    • 0032144731
    • A modified back propagation method to avoid false local minima
    • Fukuoka Y, Matsuki H, Minamitani H and Ishida A (1998). A modified back propagation method to avoid false local minima. Neural Networks 11: 1059-1072.
    • (1998) Neural Networks , vol.11 , pp. 1059-1072
    • Fukuoka, Y.1    Matsuki, H.2    Minamitani, H.3    Ishida, A.4
  • 8
    • 0022865373
    • Future paths for integer programming and links to artificial intelligence
    • Glover F (1986). Future paths for integer programming and links to artificial intelligence. Comput Opns Res 13: 533-549.
    • (1986) Comput Opns Res , vol.13 , pp. 533-549
    • Glover, F.1
  • 9
    • 0000411214
    • Tabu search-Part I
    • Glover F (1989). Tabu search-Part I. ORSA J Comput 1: 190-206.
    • (1989) ORSA J Comput , vol.1 , pp. 190-206
    • Glover, F.1
  • 10
    • 0001724713
    • Tabu search-Part II
    • Glover F (1990). Tabu search-Part II. ORSA J Comput 2: 4-32.
    • (1990) ORSA J Comput , vol.2 , pp. 4-32
    • Glover, F.1
  • 11
    • 0000195251
    • Comparing backpropagation with a genetic algorithm for neural network training
    • Gupta JND and Sexton RS (1999). Comparing backpropagation with a genetic algorithm for neural network training. Omega 27:679-684.
    • (1999) Omega , vol.27 , pp. 679-684
    • Gupta, J.N.D.1    Sexton, R.S.2
  • 12
    • 0028543366
    • Training feedforward networks with the Marquardt algorithm
    • Hagan MT and Menhaj MB (1994). Training feedforward networks with the Marquardt algorithm. IEEE Trans Neural Networks 5: 989-993.
    • (1994) IEEE Trans Neural Networks , vol.5 , pp. 989-993
    • Hagan, M.T.1    Menhaj, M.B.2
  • 14
    • 0001513581
    • Training a sigmoidal node is hard
    • Hush DR (1999). Training a sigmoidal node is hard. Neural Comput 11: 1249-1260.
    • (1999) Neural Comput , vol.11 , pp. 1249-1260
    • Hush, D.R.1
  • 15
    • 0024137490
    • Increased rates of convergence through learning rate adaptation
    • Jacobs RA (1988). Increased rates of convergence through learning rate adaptation. Neural Networks 1: 295-308.
    • (1988) Neural Networks , vol.1 , pp. 295-308
    • Jacobs, R.A.1
  • 16
    • 0030233259
    • A scatter-search-based learning algorithm for neural network training
    • Kelly JP, Rangaswamy B and Xu J (1996). A scatter-search-based learning algorithm for neural network training. J Heuristics 2:129-146.
    • (1996) J Heuristics , vol.2 , pp. 129-146
    • Kelly, J.P.1    Rangaswamy, B.2    Xu, J.3
  • 18
    • 0031996747
    • Manufacturing process modeling and optimization based on multilayer perceptron network
    • Liao TW and Chen LJ (1998). Manufacturing process modeling and optimization based on multilayer perceptron network. J Manuf Sci Eng 120: 109-119.
    • (1998) J Manuf Sci Eng , vol.120 , pp. 109-119
    • Liao, T.W.1    Chen, L.J.2
  • 19
    • 0027205884
    • A scaled conjugate gradient algorithm for fast supervised learning
    • Møller MF (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6: 525-533.
    • (1993) Neural Networks , vol.6 , pp. 525-533
    • Møller, M.F.1
  • 21
    • 0033116077
    • Training Elman and Jordan networks for system identification using genetic algorithms
    • Pham DT and Karaboga D (1999). Training Elman and Jordan networks for system identification using genetic algorithms. Artif Intell Eng 13: 107-117.
    • (1999) Artif Intell Eng , vol.13 , pp. 107-117
    • Pham, D.T.1    Karaboga, D.2
  • 22
    • 0025841422
    • Rescaling of variables in back propagation learning
    • Rigler AK, Irvine JM and Vogl TP (1991). Rescaling of variables in back propagation learning. Neural Networks 4: 225-229.
    • (1991) Neural Networks , vol.4 , pp. 225-229
    • Rigler, A.K.1    Irvine, J.M.2    Vogl, T.P.3
  • 23
    • 0022471098
    • Learning representations by back-propagating errors
    • Rumelhart DE, Hinton GE and Williams RJ (1986). Learning representations by back-propagating errors. Nature 323: 533-536.
    • (1986) Nature , vol.323 , pp. 533-536
    • Rumelhart, D.E.1    Hinton, G.E.2    Williams, R.J.3
  • 24
    • 0034316975
    • Comparative evaluation of genetic algorithm and backpropagation for training neural networks
    • Sexton RS and Gupta JND (2000). Comparative evaluation of genetic algorithm and backpropagation for training neural networks. Inform Sci 129(1-4): 45-59.
    • (2000) Inform Sci , vol.129 , Issue.1-4 , pp. 45-59
    • Sexton, R.S.1    Gupta, J.N.D.2
  • 25
    • 0032051287
    • Global optimization for artificial neural networks: A tabu search application
    • Sexton RS, Alidaee B, Dorsey RE and Johnson JD (1998a). Global optimization for artificial neural networks: A tabu search application. Eur J Opl Res 106: 570-584.
    • (1998) Eur J Opl Res , vol.106 , pp. 570-584
    • Sexton, R.S.1    Alidaee, B.2    Dorsey, R.E.3    Johnson, J.D.4
  • 26
    • 0031999703
    • Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation
    • Sexton RS, Dorsey RE and Johnson JD (1998b). Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation. Decis Supp Syst 22: 171-185.
    • (1998) Decis Supp Syst , vol.22 , pp. 171-185
    • Sexton, R.S.1    Dorsey, R.E.2    Johnson, J.D.3
  • 27
    • 0033130747
    • Optimization of neural networks: A comparative analysis of the genetic algorithm and simulated annealing
    • Sexton RS, Dorsey RE and Johnson JD (1999). Optimization of neural networks: A comparative analysis of the genetic algorithm and simulated annealing. Eur J Opl Res 114: 589-601.
    • (1999) Eur J Opl Res , vol.114 , pp. 589-601
    • Sexton, R.S.1    Dorsey, R.E.2    Johnson, J.D.3
  • 28
    • 0025593679
    • SuperSAB: Fast adaptive back propagation with good scaling properties
    • Tollenaere T (1990). SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks 3: 561-573.
    • (1990) Neural Networks , vol.3 , pp. 561-573
    • Tollenaere, T.1
  • 29
    • 0032122764
    • Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm
    • Treadgold NK and Gedeon TD (1998). Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm. IEEE Trans Neural Networks 9: 662-668.
    • (1998) IEEE Trans Neural Networks , vol.9 , pp. 662-668
    • Treadgold, N.K.1    Gedeon, T.D.2


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.