
The Genetic/Simulated Annealing Algorithm and Its Application

Undergraduate Graduation Design (Thesis): Translated Foreign Reference and Original Text
School: School of Light Industry and Chemical Engineering; Major: Pharmaceutical Engineering (Natural Medicine); Class: Grade 2009, Class 2; Student ID: 3109002300; Student: Huang Xuerun; Supervisor: Wei Guanfeng; June 2013

Genetic/Simulated Annealing Algorithm and Its Application

Guangming Lv, Xiaomeng Sun, Jian Wang
College of Mechanical and Electronic Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
lgmhit@

Abstract: This paper combines the simulated annealing algorithm with the genetic algorithm and proposes a new algorithm.

The simulated annealing algorithm (SA) is embedded into the genetic algorithm (GA), combining the two into a new global optimization algorithm.

The use of SA reduces the difficulty of parameter selection in GA.

Moreover, the new algorithm can narrow the combinatorial search region and avoid the "premature convergence" problem of the genetic algorithm, thus improving the convergence of the algorithm.

The crossover operator of the genetic operations plays an important role in this algorithm.

Computer simulation shows that the new algorithm has clear advantages over the traditional genetic algorithm and the simulated annealing algorithm.

Keywords: simulated annealing; genetic algorithm; premature convergence; crossover operator

I. Introduction

The genetic algorithm (GA) was first proposed by Professor J. Holland of the University of Michigan, originating from research on the adaptive behavior of natural and artificial systems.

GA is an adaptive global probabilistic search and optimization algorithm, based on Darwin's theory of evolution and Mendel's genetics, that simulates the heredity and evolution of organisms in their natural environment.

For complex optimization problems, GA requires neither explicit modeling nor complicated operations [1].

Compared with traditional search algorithms, GA transforms the solution space of the optimization problem into a genetic space.

It starts from a population of candidate solutions to the problem and, following the principle of "survival of the fittest," evolves generation by generation toward the optimal or a near-optimal solution.

The main characteristics of GA are: it operates not on the parameters themselves but on an encoding of the parameter set; it processes several individuals of the population simultaneously, i.e., it evaluates several solutions in the search space at the same time; it uses only the objective function of the problem and needs no other preconditions or auxiliary information; it is guided not by deterministic rules but by probabilistic transition rules; and it can search a large solution space rapidly.

Through selection and reproduction together with the genetic operators, GA keeps the population evolving so that it eventually converges to the optimal solution.

Selection and reproduction give individuals with larger fitness values a larger reproduction probability, which accelerates the convergence of the algorithm.
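For instance, fitness-proportionate ("roulette-wheel") selection realizes this directly; a minimal sketch (the function and variable names are illustrative, not from the paper):

```python
import random

def roulette_select(population, fitness):
    """Pick one individual; reproduction probability is proportional to fitness."""
    total = sum(fitness)
    r = random.uniform(0.0, total)      # spin the wheel
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= r:                    # the slice containing r wins
            return individual
    return population[-1]               # guard against float rounding

# An individual holding 99% of the total fitness is chosen ~99% of the time
random.seed(0)
picks = [roulette_select(["strong", "weak"], [99.0, 1.0]) for _ in range(1000)]
```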

The crossover operator helps search out better individuals by exchanging genes between parent individuals.
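With a binary-string encoding, a standard single-point crossover can be sketched as follows (the encoding and the cut-point choice are illustrative assumptions, not taken from the paper):

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Exchange the gene segments of two parents after a random cut point."""
    assert len(parent_a) == len(parent_b)
    point = random.randint(1, len(parent_a) - 1)   # cut somewhere inside the string
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

random.seed(0)
c1, c2 = single_point_crossover("11111111", "00000000")
```

Each child inherits a prefix from one parent and a suffix from the other, so no genes are created or lost, only recombined.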

The mutation operation introduces new genetic material into the evolving population and prevents it from being trapped at local extrema.

In recent years, two kinds of global stochastic optimization methods, simulated annealing (SA) and the genetic algorithm, have found extensive application [2].

They perform well on complex optimization problems that are hard to solve with traditional gradient-based methods.

GA solves optimization problems with the idea of biological evolution, using the survival-of-the-fittest strategy; SA derives from statistical physics and was first introduced into combinatorial optimization by Kirkpatrick and other researchers [3].

GA has strong global search ability but suffers from premature convergence, and its search efficiency is low in the later stages of evolution, so the population evolves slowly; SA has strong local search ability and can avoid being trapped in local optima, but it cannot control the search process well, so its efficiency is low [4].

In this paper, we embed the idea of simulated annealing into the genetic algorithm and combine the two effectively, thereby reducing the difficulty of parameter selection in GA, improving its global convergence, and avoiding entrapment in local optima during the search.

II. The Simulated Annealing Algorithm

Simulated annealing (SA) is a heuristic stochastic search algorithm based on the Monte Carlo iteration method.

SA exploits the similarity between the thermal-equilibrium problem in the annealing of solids and stochastic search in optimization problems to find the global optimum or an approximate global optimum [5].

While searching for the optimum, SA accepts not only improving solutions but also, to a certain extent, worsening solutions, according to a stochastic acceptance criterion (the Metropolis criterion).

Moreover, the probability of accepting a worsening solution tends gradually to 0; this makes it possible to escape local extremal regions and find the global optimum, which ensures the convergence of the algorithm [6].
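The Metropolis acceptance probability and its decay with temperature can be illustrated in a few lines (a sketch; the numeric values are chosen only for illustration):

```python
import math

def accept_prob(delta_f, t):
    """Metropolis criterion: improvements (delta_f <= 0) are always accepted;
    a worsening move is accepted with probability exp(-delta_f / t)."""
    return 1.0 if delta_f <= 0 else math.exp(-delta_f / t)

# The same worsening move (delta_f = 1) is almost always accepted when hot,
# and almost never accepted once the system has cooled down
p_hot = accept_prob(1.0, 100.0)    # ~0.99
p_cold = accept_prob(1.0, 0.1)     # ~4.5e-5
```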

The simulated annealing algorithm can be divided into three parts: the solution space, the objective function, and the initial solution.

Its solving procedure is as follows.

Step 1: Randomly generate an initial solution x0 (the starting point of the algorithm's iteration).

Step 2: Initialize the annealing temperature T0 (sufficiently large).

Step 3: At temperature Tk, perform the following operations:
(1) generate a new feasible solution x' (x' is a neighboring solution of x);
(2) compute the difference between the evaluation function of the new solution, f(x'), and that of the old solution, f(x): Δf = f(x') − f(x);
(3) accept the new solution according to the criterion min{1, exp(−Δf/Tk)} > random[0,1], where random[0,1] is a random number in the interval [0,1].

If thermal equilibrium has been reached at temperature Tk, go to Step 4; otherwise return to Step 3.

Step 4: Lower the temperature slowly in a prescribed way, defining the cooling function as Tk+1 = αTk, k ← k + 1, where α ∈ (0, 1) is a constant slightly smaller than 1.

Step 5: If the convergence condition is satisfied, the annealing process ends; otherwise, go to Step 3.
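Steps 1-5 can be collected into a short sketch (a minimal illustration, not the authors' code; the neighborhood move, the per-temperature step count, and the stopping temperature are assumptions):

```python
import math
import random

def simulated_annealing(f, x0, t0=1000.0, alpha=0.9, t_min=1e-3, steps=100):
    """Minimize f by Steps 1-5: Metropolis acceptance with geometric cooling."""
    x, best = x0, x0                       # Step 1: initial solution supplied
    t = t0                                 # Step 2: initial temperature (big enough)
    while t > t_min:                       # Step 5: stop once the system is cold
        for _ in range(steps):             # Step 3: iterate at temperature T_k
            x_new = x + random.uniform(-0.1, 0.1)   # (1) neighboring solution
            delta = f(x_new) - f(x)                 # (2) delta_f = f(x') - f(x)
            # (3) Metropolis rule: min{1, exp(-delta_f/T_k)} > random[0,1]
            if delta <= 0 or math.exp(-delta / t) > random.random():
                x = x_new
                if f(x) < f(best):
                    best = x
        t *= alpha                         # Step 4: T_{k+1} = alpha * T_k
    return best

random.seed(0)
x_star = simulated_annealing(lambda x: x * x, x0=0.9)   # F1(x) = x^2 from Sec. IV
```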

III. The Simulated Annealing/New Genetic Algorithm (SAGA)

Although many studies and applications indicate that the performance of GA is relatively good, the "premature convergence" problem arises in many practical applications.

In the evolving population, the fitness of a small minority of individuals may be much larger than that of the others; after a few generations these individuals take over the whole population, and the evolution process converges prematurely.

Choosing a selection scheme for GA that preserves good individuals while maintaining population diversity is a thorny problem.

Some improved selection schemes are useful for raising the performance of GA.

In the traditional genetic algorithm, competition occurs only among offspring; there is no competition between offspring and parents, so some excellent individuals among the parents may be lost.

Some algorithms copy the best solutions into the next generation to preserve them, but this may lead to premature convergence.

Moreover, GA applies crossover and mutation stochastically, and the individuals produced are not necessarily excellent; this may destroy originally excellent individuals and harm the performance of the algorithm.

GA and SA have been combined in some existing methods.

For example, the SAGA of [7] uses SA to control the selection of the mutation probability (Pm = exp(−θT)); in [8], GA is used to improve the performance of SA, and the whole algorithm is divided into two stages: a GA stage and an SA stage.

First, a group of solutions is produced by GA, and then it is adjusted and optimized by SA.

The algorithm in this paper differs from the methods above.

By using SA, it reduces the difficulty of parameter selection in GA.

Crossover is the main operator of GA, and the optimization process relies mainly on it.

SAGA adopts this idea: it applies crossover to every selected pair of individuals and lets the offspring produced by crossover and mutation compete with their parents.

Offspring are accepted by the Boltzmann mechanism, which helps preserve excellent individuals while avoiding premature convergence.

As the population evolves, the temperature decreases gradually, and the probability of accepting worsening solutions decreases with it.

This exploits the hill-climbing characteristic of SA effectively and improves the convergence rate.

The SAGA of this paper proceeds as follows.

Step 1: Initialize the control parameters: N is the population size, Pm the mutation probability, T the initial annealing temperature, and α the temperature cooling parameter.

Step 2: Randomly generate the initial solution group.

Step 3: Apply the following operations to the current solution group until the next generation is produced.

[1] Evaluate the fitness f(xi) (i = 1, 2, ..., N) of every individual in the group.

[2] Randomly select two individuals xi and xj, apply crossover to generate two new individuals xi' and xj', and compute their fitness values f(xi') and f(xj'); then draw a random number in [0, 1] and accept the new solutions according to the criterion min{1, exp(−Δf/Tk)} > random[0,1], i.e., accept the new individuals.

[3] Apply mutation to the individuals after crossover, and decide whether to accept the mutated solutions by the same rule as in [2].

Step 4: If the convergence condition is satisfied, the evolution process ends; otherwise set Tk+1 = αTk and go to Step 3.
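The four steps can be sketched end to end (an illustrative real-coded version; the arithmetic crossover, the perturbation mutation, and the stopping temperature are assumptions not specified in the paper):

```python
import math
import random

def saga_minimize(f, n=100, pm=0.01, t0=1000.0, alpha=0.9, t_min=1e-3):
    """Steps 1-4: crossover/mutation offspring compete with their parents
    under Boltzmann (Metropolis) acceptance, then the temperature is cooled."""
    pop = [random.uniform(-1.0, 1.0) for _ in range(n)]  # Step 2: initial group
    t = t0
    while t > t_min:                          # Step 4: repeat until converged
        for _ in range(n // 2):               # Step 3: produce the next generation
            i, j = random.sample(range(n), 2)         # [2] pick two parents
            a = random.random()                       # arithmetic (blend) crossover
            c1 = a * pop[i] + (1.0 - a) * pop[j]
            c2 = (1.0 - a) * pop[i] + a * pop[j]
            for k, child in ((i, c1), (j, c2)):
                if random.random() < pm:              # [3] mutation with prob. pm
                    child += random.uniform(-0.1, 0.1)
                delta = f(child) - f(pop[k])          # Boltzmann acceptance:
                if delta <= 0 or math.exp(-delta / t) > random.random():
                    pop[k] = child                    # child replaces its parent
        t *= alpha                            # T_{k+1} = alpha * T_k
    return min(pop, key=f)

random.seed(2)
best = saga_minimize(lambda x: x * x)         # F1(x) = x^2 from Sec. IV
```

Note how the Metropolis test replaces the usual elitist copy: a worse child can still displace its parent while the temperature is high, which is what preserves diversity early on.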

IV. Computation and Analysis of Examples

To evaluate the performance of the SAGA algorithm in this paper, we compare SAGA with the traditional GA and with other combinations of GA and SA.

We solve the minimization problems of the following two functions:

(1) F1(x) = x², x ∈ [−1, 1];
(2) F2(x) = 1 − |sin(30x)|(1 − x/2), x ∈ [0, 1].

F1(x) is a quadratic function.

Although it has only one minimum point (F1(x) = 0 at x = 0), the late-stage search performance of the algorithm becomes very important as the function value approaches the minimum.

F2(x) is a multimodal function; it has 10 local minimum points in the interval [0, 1].

The global minimum point (x = 0.05179, F2(x) = 0.0260374) is especially important for testing the global convergence of the algorithm.
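Taking F2(x) = 1 − |sin(30x)|(1 − x/2), a quick grid search reproduces the stated global minimum (x = 0.05179, F2 = 0.0260374); the grid resolution below is an arbitrary choice:

```python
import math

def f2(x):
    # F2(x) = 1 - |sin(30x)| * (1 - x/2), a multimodal test function on [0, 1]
    return 1.0 - abs(math.sin(30.0 * x)) * (1.0 - x / 2.0)

# Grid search over [0, 1]: ten local minima, the global one near x = 0.05179
xs = [i / 100000 for i in range(100001)]
x_best = min(xs, key=f2)
# x_best ≈ 0.05179, f2(x_best) ≈ 0.0260374
```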

The control parameters of SAGA are: population size N = 100, mutation probability Pm = 0.01, initial temperature T = 1000, cooling parameter α = 0.9; the control parameters of the traditional GA are: population size N = 100, mutation probability Pm = 0.01, crossover probability Pc = 0.6.

Then the minima of the two functions are solved by SAGA and the traditional GA respectively.

For the unimodal function F1(x), Fig. 1 shows the result of SAGA, which iterated about 180 generations before finding the optimal solution; Fig. 2 shows the result of the traditional GA, which iterated about 490 generations before finding the optimal solution.

Clearly, both algorithms can converge to the optimal solution, but the convergence rate of SAGA is far higher than that of the traditional GA.

The efficiency is also improved to a certain degree compared with methods that embed SA into the loop body of an adaptive GA [9].

Fig. 1. Result for F1(x) obtained by SAGA.  Fig. 2. Result for F1(x) obtained by the traditional GA.

For the multimodal function F2(x), Fig. 3 shows the result of SAGA, which iterated about 200 generations before finding the optimal solution; Fig. 4 shows the result of the traditional GA, which cannot converge to the global optimum even though it finds the optimum at some point during the solving process.

So the advantage of SAGA is obvious.

Fig. 3. Result for F2(x) obtained by SAGA.  Fig. 4. Result for F2(x) obtained by the traditional GA.

Meanwhile, we ran each algorithm several times independently and tested its average solving performance.

The results show that for the unimodal function F1(x), the average numbers of iterations for SAGA and GA to converge to the optimal solution are 180 and 490 respectively.

Thus both algorithms can converge to the optimal solution, but the convergence rate of SAGA is clearly higher than that of GA.

Moreover, the average fitness value of SAGA equals the optimal function value 0, while the average fitness value of GA is 0.012.

For the multimodal function F2(x), SAGA converged to the global optimum every time, while the traditional GA could not.

This is consistent with the single runs described above.

Under the condition of converging to the global optimum, the convergence rate of SAGA is also improved to a certain degree compared with the fuzzy adaptive simulated annealing genetic algorithm (FASAGA) [10].

V. Conclusion

In this paper we introduced the idea of SA into GA, reducing the difficulty of parameter selection for GA, and applied the Boltzmann acceptance mechanism to individuals after crossover and mutation.

From the example analysis we conclude that the algorithm makes full use of the crossover operator, which not only narrows the search range within the feasible region but also avoids entrapment in local optima.

It exploits the hill-climbing characteristic of SA effectively and improves the convergence rate.

References

[1] Shunhuang Wang, Diqian Gu. Intelligence Control System and its Application. Beijing: China Machine Press, 2005.
[2] Xuemei Wang, Yihe Wang. The Combination of Simulated Annealing and Genetic Algorithms. Chinese Journal of Computers, 1997(4): 13-18.
[3] Chiye Lin, Fenghe Wang. Sequential Simulated Annealing for Multimodal Design Optimization. 2003, 26(1): 57-70.
[4] Haoyang Wu, Bingguo Chang, Changchun Zhu, Junhua Liu. Multigroup Parallel Genetic Algorithm based on Simulated Annealing. Journal of Software, 2000(3): 20-25.
[5] Ling Wang. Intelligent Optimization Algorithms and its Application. Tsinghua University Press, 2005.
[6] Kirkpatrick S, Gelatt C D, Vecchi M P. Optimization by simulated annealing. Science, 1983, 220: 671-680.
[7] Atashkari K, Nariman-Zadeh N, Pilechi A, Jamali A, Yao X. Thermodynamic Pareto optimization of turbojet engines using multi-objective genetic algorithms. International Journal of Thermal Sciences, 2005, 44(11): 1061-1071.
[8] Huang H.-C, Pan J.-S, Lu Z.-M, Sun S.-H, Hang H.-M. Vector quantization based on genetic simulated annealing. Signal Processing, 2001, 81(7): 1513-1523.
[9] Hao Zhang, Ran Tao, Zhiyong Li, Hua Du. A Feature Selection Method Based on Adaptive Simulated Annealing Genetic Algorithm. Acta Armamentarii, 2009(1): 15-19.
[10] Yonggang Peng, Xiaoping Luo, Wei Wei. New fuzzy adaptive simulated annealing genetic algorithm. Control and Decision, 2009(6): 15-20.

Simulated Annealing-New Genetic Algorithm and its Application

Guangming Lv, Xiaomeng Sun, Jian Wang
College of Mechanical and Electronic Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
lgmhit@

Abstract: In this paper, a new kind of algorithm is proposed that combines the simulated annealing algorithm and the genetic algorithm. The simulated annealing (SA) method is embedded into the genetic algorithm (GA), combining the two methods into a new global optimization algorithm. The use of SA reduces the difficulty of parameter selection for GA.
Furthermore, the combination can reduce the search area and avoid the premature-convergence problem existing in the genetic algorithm, so as to improve the convergence of the algorithm. The crossover operator of the genetic operations plays an especially important role in this algorithm. Through computer simulation, we can see that this algorithm has advantages over the traditional genetic algorithm and other pre-existing simulated annealing-genetic algorithms.

Keywords: simulated annealing; genetic algorithm; premature convergence; crossover operator

I. INTRODUCTION

The Genetic Algorithm (GA) was first proposed by Professor J. Holland of the University of Michigan, deriving from research on natural systems and artificial adaptive systems. GA is a kind of global probabilistic search and optimization algorithm, based on Darwin's theory of evolution and Mendel's theory of heredity, that simulates the mechanism of life's evolution in the biosphere and realizes the optimization of specific goals in man-made systems. As to complicated optimization problems, there is no need for modeling and complicated operations in GA [1]. Compared with traditional search algorithms, GA converts the solution search space of the optimization problem into a genetic space.
It begins from a population of possible potential solutions of the problem and evolves, generation after generation, toward better and better approximate solutions according to the principle of survival of the fittest.

The main characteristics of GA are: the processing object is not the parameter itself but the individual encoding the parameter set; GA processes several individuals of the population simultaneously, that is, it evaluates several solutions in the search space at the same time; GA makes use only of the target function of the problem and does not need any other precondition or auxiliary information; GA does not adopt deterministic rules but adopts probabilistic transition rules to guide the search; GA can search rapidly in a relatively big solution space.

GA keeps the optimization population evolving and makes it converge to the optimum condition finally through the action of selection/reproduction and the genetic operators. Selection/reproduction gives the individual with a bigger fitness value a bigger reproduction probability, and can accelerate the convergence rate of the algorithm. The crossover operator can help to search out better individuals by exchanging genes between parent individuals. The mutation operation can bring new genetic material to the evolving group and avoid entrapment at local extreme points.

In recent years, two kinds of global random optimization methods, simulated annealing (SA) and the genetic algorithm, have found extensive application [2]. They show excellent solution characteristics on complicated optimization problems that are hard to solve by traditional gradient-based methods.

GA adopts the idea of biological evolution and solves optimization problems by the strategy of survival of the fittest; SA derives from statistical physics and was first introduced into the combinatorial optimization area by Kirkpatrick and other researchers [3].
GA has a strong global search ability but has the problem of premature convergence, and its search efficiency is low in the later period of evolution, so the population evolves slowly; SA has relatively strong local search ability and can avoid entrapment in local optimum solutions when searching, but it cannot manage the search process well and has a low operating efficiency [4]. In this paper, we introduce the idea of simulated annealing into the genetic algorithm and combine them effectively, so as to reduce the difficulty of parameter selection for GA, enhance the global convergence of GA, and avoid entrapment in local optimum solutions when searching.

II. SIMULATED ANNEALING ALGORITHM

Simulated annealing (SA) is a kind of heuristic random search algorithm based on the Monte Carlo iteration method. SA makes use of the similarity between the thermal-balance problem of the solid-matter annealing process and random-search optimization problems to find the global optimum solution or an approximate global optimum solution [5]. In the process of searching for the optimum solution, SA not only accepts improving solutions but also accepts deteriorating solutions to a certain degree, according to a random acceptance criterion (the Metropolis criterion). In addition, the probability of accepting a deteriorating solution tends to 0 gradually, and this makes it possible to jump out of local extreme areas and then find the global optimum solution, which ensures the convergence of the algorithm [6].

The simulated annealing algorithm can be divided into three parts: solution space, target function, and initial solution.
Its solving procedure is as below.

Step one: Generate the initial solution x0 (the starting point of the iteration of the algorithm) randomly.

Step two: Initialize the annealing temperature T0 (make it big enough).

Step three: Execute the following operations under the temperature Tk:
(1) generate a new feasible solution x' (x' is an adjacent solution of x);
(2) calculate the difference between the evaluation functions f(x') and f(x): Δf = f(x') − f(x);
(3) adopt the new solution with the probability criterion min{1, exp(−Δf/Tk)} > random[0,1] (random[0,1] is a random number between 0 and 1). If thermal balance is reached at temperature Tk, turn to step four; otherwise, return to step three.

Step four: Reduce the temperature in a certain way, defining the decrease function as Tk+1 = αTk, k ← k + 1, where α ∈ (0, 1) is a constant slightly smaller than 1.

Step five: If the convergence criterion is satisfied, the annealing procedure is over; else, turn to step three.

III. SIMULATED ANNEALING-NEW GENETIC ALGORITHM (SAGA)

Although many researches and applications indicate that the performance of GA is relatively good, the premature-convergence problem occurs in many practical applications. In the evolving group, the fitness values of a few individuals are much bigger than those of the others; after several generations these individuals take up the whole population and the evolution process converges in advance. It is a troublesome problem for GA to choose a selection method that keeps excellent individuals and maintains the diversity of the population at the same time. Some modified selection methods are available to improve the performance of GA.

As for the traditional genetic algorithm, competition happens only among offspring; there is no competition between offspring and parents, so some excellent individuals among the parents may be lost. Some algorithms put the optimum solutions into the next generation to keep them, but this may lead to premature convergence.
Besides, GA adopts stochastic crossover and mutation, and the individuals after crossover and mutation are not necessarily excellent; this may destroy originally excellent individuals and affect the performance of the algorithm.

GA and SA are combined in some existing methods. For example, SA was adopted in [7] to control the selection of the mutation probability (Pm = exp(−θT)); GA was adopted in [8] to improve the performance of SA, and the whole algorithm was divided into two parts: a GA stage and an SA stage. First it evolves a group of solutions by GA, and then adjusts and optimizes them by SA.

The algorithm in this paper is different from the methods above. It reduces the difficulty of parameter selection for GA by the use of SA. The crossover operation is the main factor of GA, and the optimization process mainly relies on it. SAGA adopts this idea, executes the crossover operation on every selected pair of individuals, and lets the offspring after crossover and mutation compete with their parents. Offspring are accepted by the Boltzmann mechanism, which is beneficial for keeping excellent individuals and can avoid premature convergence at the same time. As the population evolves, the temperature decreases gradually, and the probability of accepting inferior solutions decreases gradually too. It makes use of the hill-climbing characteristic of SA effectively and improves the convergence rate.

The SAGA algorithm in this paper is as below.

Step one: Initialize the control parameters: N is the population size; Pm is the mutation probability; T is the initial annealing temperature; α is the temperature cooling parameter.

Step two: Generate the initial solution group randomly.

Step three: Execute the operations below on the existing solution group until the next generation comes out.

[1] Evaluate the fitness value f(xi) (i = 1, 2, ..., N) of every individual in this group.

[2] Select two individuals xi and xj randomly to execute the crossover operation, generate two new individuals xi' and xj', and calculate the fitness values f(xi') and f(xj') of both; then generate a random number in [0, 1], and accept the new solutions with the probability criterion min{1, exp(−Δf/Tk)} > random[0,1], that is, accept the new individuals.

[3] Execute the mutation operation on the individuals after crossover, and decide whether to accept the solutions after mutation according to the method in [2].

Step four: If the convergence condition is satisfied, the evolution process is over; otherwise, set Tk+1 = αTk and turn to step three.

IV. CALCULATION AND ANALYSIS OF EXAMPLES

To evaluate the performance of the SAGA algorithm in this paper, we contrast SAGA with the general GA algorithm and other methods combining GA and SA. Take the minimum-optimization problems of the two functions below:

(1) F1(x) = x², x ∈ [−1, 1];
(2) F2(x) = 1 − |sin(30x)|(1 − x/2), x ∈ [0, 1].

F1(x) is a quadratic function. Although it has only one minimum point (x = 0, F1(x) = 0), the later search performance of the algorithm becomes very important as the function value approaches the minimum. F2(x) is a multimodal function, and it has ten minimal value points in the interval [0, 1]. The global minimal value point (x = 0.05179, F2(x) = 0.0260374) is especially important for testing the algorithm's global convergence. The control parameters of SAGA are as below: population size N = 100, mutation probability Pm = 0.01, initial temperature T = 1000, cooling parameter α = 0.9; the control parameters of the standard GA algorithm are: population size N = 100, mutation probability Pm = 0.01, crossover probability Pc = 0.6.

Then, solve the two functions' minima by SAGA and the standard GA algorithm respectively.
For the single-hump function F1(x), Fig. 1 indicates the outcome of the SAGA algorithm, which iterated about 180 generations before finding the optimum solution; Fig. 2 indicates the outcome of the general GA algorithm, which iterated about 490 generations before finding the optimum solution. Obviously, both algorithms can converge to the optimum condition, but the convergence rate of SAGA is much bigger than that of the general GA. The efficiency is improved to a certain degree compared with methods that embed SA into the loop body of the adaptive GA algorithm [9].

Fig. 1. Outcome for F1(x) by SAGA.  Fig. 2. Outcome for F1(x) by the general GA.

As to the multiple-hump function F2(x), Fig. 3 indicates the outcome of the SAGA algorithm, which iterated about 200 generations before finding the optimum solution; Fig. 4 indicates the outcome of the general GA algorithm, which cannot converge to the global optimum point though GA can find the optimum point during the solving process. So, the advantage of SAGA is obvious.

Fig. 3. Outcome for F2(x) by SAGA.  Fig. 4. Outcome for F2(x) by the general GA.

At the same time, we executed every algorithm several times independently and tested their average solving performance. The results show: for the single-hump function F1(x), the average iteration numbers of SAGA and GA to converge to the optimum point are 180 and 490 respectively. So, both algorithms can converge to the optimum condition, but the convergence rate of SAGA is obviously bigger than that of GA. Moreover, the average fitness value of SAGA is equal to the optimum function value 0, while the average fitness value of GA is 0.012. For the multiple-hump function F2(x), SAGA converged to the global optimum point every time while GA could not. This is in accordance with the single runs above. Compared with the fuzzy adaptive simulated annealing genetic algorithm (FASAGA), the convergence rate is improved to a certain degree under the condition of converging to the global optimum point [10].

V. CONCLUSION

In this paper we introduce the idea of SA into GA, reduce the difficulty of parameter selection for GA, and use the Boltzmann acceptance strategy on individuals after crossover and mutation. Through specific tests we find that this algorithm makes full use of the crossover operator, which can not only reduce the search range of the feasible region but also avoid entrapment in local optimum solutions. It makes use of the hill-climbing characteristic of SA effectively and improves the convergence rate.

REFERENCES

[1] Shunhuang Wang, Diqian Gu. Intelligence Control System and its Application. Beijing: China Machine Press, 2005.
[2] Xuemei Wang, Yihe Wang. The Combination of Simulated Annealing and Genetic Algorithms. Chinese Journal of Computers, 1997(4): 13-18.
[3] Chiye Lin, Fenghe Wang. Sequential Simulated Annealing for Multimodal Design Optimization. 2003, 26(1): 57-70.
[4] Haoyang Wu, Bingguo Chang, Changchun Zhu, Junhua Liu. Multigroup Parallel Genetic Algorithm based on Simulated Annealing. Journal of Software, 2000(3): 20-25.
[5] Ling Wang. Intelligent Optimization Algorithms and its Application. Tsinghua University Press, 2005.
[6] Kirkpatrick S, Gelatt C D, Vecchi M P. Optimization by simulated annealing. Science, 1983, 220: 671-680.
[7] Atashkari K, Nariman-Zadeh N, Pilechi A, Jamali A, Yao X. Thermodynamic Pareto optimization of turbojet engines using multi-objective genetic algorithms. International Journal of Thermal Sciences, 2005, 44(11): 1061-1071.
[8] Huang H.-C, Pan J.-S, Lu Z.-M, Sun S.-H, Hang H.-M. Vector quantization based on genetic simulated annealing. Signal Processing, 2001, 81(7): 1513-1523.
[9] Hao Zhang, Ran Tao, Zhiyong Li, Hua Du. A Feature Selection Method Based on Adaptive Simulated Annealing Genetic Algorithm. Acta Armamentarii, 2009(1): 15-19.
[10] Yonggang Peng, Xiaoping Luo, Wei Wei. New fuzzy adaptive simulated annealing genetic algorithm. Control and Decision, 2009(6): 15-20.
