
Graduation Project Foreign Literature Translation

Graduation Project (Thesis) Foreign Literature Translation

Title: A new constructing auxiliary function method for global optimization
School:
Major:
Student ID:
Student Name:
Advisor:
Date: February 14, 2014

A new constructing auxiliary function method for global optimization

Jiang-She Zhang, Yong-Jun Wang
Mathematical and Computer Modelling 47 (2008) 1396–1410. doi:10.1016/j.mcm.2007.08.007

Nonlinear function optimization problems that possess many local minimizers in their search spaces are widespread in applications such as engineering design, molecular biology, and neural network training. Although existing traditional methods such as the steepest descent method, Newton's method, quasi-Newton methods, trust region methods, and the conjugate gradient method converge rapidly and can find high-precision solutions for continuously differentiable functions, they rely heavily on the initial point, and the quality of the final solution as a global one is hard to guarantee. The difficulty inherent in global optimization hinders the further development of many disciplines. Therefore, global optimization generally poses a challenging computational task for researchers.

Generally speaking, the difficulty in designing a global optimization algorithm stems from two issues: one is how to determine that an obtained minimum is a global one (when the global minimum is not known in advance), and the other is how to jump from the obtained minimum to a better one. For the first problem, a stopping rule known as the Bayesian termination condition has been reported. Many recently proposed algorithms aim at the second problem. Generally, these methods can be classified into two main categories, namely (i) deterministic methods and (ii) stochastic methods.
The stochastic methods are based on biology or statistical physics; they escape from local minima by using a probability-based approach. These methods include the genetic algorithm (GA), the simulated annealing method (SA), and the particle swarm optimization method (PSO). Although these methods have their uses, they often converge slowly, and finding a solution with higher precision is time consuming. They are, however, easier to implement and well suited to combinatorial optimization problems. Deterministic methods such as the filled function method, the tunneling method, etc., converge more rapidly and can often find a solution with higher precision. These methods often rely on modifying the objective function into a function with "fewer" or "lower" local minimizers than the original one, and then design algorithms that minimize the modified function so as to escape from the found local minimum to a better one.

Among the referenced deterministic algorithms, the diffusion equation method, the effective energy method, and the integral transform scheme approximate the coarse structure of the original objective function by a set of smoothed functions with "fewer" minimizers. These methods modify the objective function via integration of the original objective function. Such integrations are too expensive to implement, and the final solution of the auxiliary function has to be traced back to a minimum of the original objective function, whereas the traced result may not be the true global minimum of the problem. The terminal repeller unconstrained sub-energy tunneling method and the hybridization of the gradient descent algorithm with the dynamic tunneling method modify the objective function based on the stability theory of dynamic systems for global optimization.
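The probability-based escape mechanism shared by these stochastic methods can be sketched with a minimal simulated annealing loop. This is a generic illustration only; the test function, step size, and cooling schedule below are our own choices, not from the paper:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=10.0, cooling=0.95, iters=2000, seed=0):
    """Minimal SA sketch: accept uphill moves with probability exp(-delta/T),
    so the search can jump out of a local minimum while T is still high."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbour
        fc = f(cand)
        # downhill moves are always accepted; uphill ones with prob. exp(-delta/T)
        if fc - fx <= 0 or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling                          # geometric cooling
    return best_x, best_f

# Illustrative multimodal function (our choice); its global minimizer lies near x = -1.3
f = lambda x: x * x + 10.0 * math.sin(x)
x_best, f_best = simulated_annealing(f, x0=5.0)
```

The acceptance rule is the whole point: early on, almost any move is accepted and the walk explores freely; as T shrinks, the loop degenerates into a greedy local descent.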
These methods have to integrate a dynamic system, and the corresponding computation is time consuming, especially as the dimension of the objective function increases, since their better point is found by searching along each coordinate until termination. The stretching function technique is an auxiliary function method which uses the information obtained in previous searches to stretch the objective function and help the algorithm escape from the local minimum more effectively. This technique has been incorporated into PSO to improve its success rate in finding global minima. However, the resulting hybrid algorithm is built on a stochastic method, which converges slowly and applies more easily to problems of lower dimension. The filled function method is another auxiliary function method; it modifies the objective function into a filled function and then finds better local minima gradually by optimizing the filled functions constructed at the minima obtained so far. The filled function method provides a good idea: using local optimization techniques to solve global optimization problems. If the difficulty of estimating the parameters can be overcome and the designed filled functions can be applied to higher dimensional functions, the filled function approaches in the literature will be promising. The tunneling method modifies the objective function to ensure that the next starting point, with a function value equal to the obtained minimum, lies away from the obtained one, so the probability of finding the global minimum is increased. A sequential conversion method (SCM) transforms the objective function into one which has no local minima or stationary points in the region where the function values are higher than those already obtained, except for the prefixed values.
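The stretching transformation mentioned above can be sketched in one dimension. The two-stage form and the default values gamma1 = 1e4, gamma2 = 1, mu = 1e-10 follow values commonly quoted in the stretching-function literature, and the function `f` and minimizer `x_bar` below are our own illustrative choices:

```python
import math

def sign(t):
    return (t > 0) - (t < 0)

def stretch(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Two-stage function stretching built at a found minimizer x_bar (1-D sketch;
    replace abs(x - x_bar) by a norm in higher dimensions). Points with
    f(x) > f(x_bar) are lifted; the region at or below f(x_bar) is untouched."""
    f_bar = f(x_bar)
    def G(x):   # stage 1: lift the worse region in proportion to distance from x_bar
        return f(x) + gamma1 * abs(x - x_bar) * (sign(f(x) - f_bar) + 1) / 2
    def H(x):   # stage 2: sharpen the lift around x_bar via a tanh denominator
        if f(x) <= f_bar:
            return G(x)          # better region: G(x) equals f(x) here
        return G(x) + gamma2 * (sign(f(x) - f_bar) + 1) / (2 * math.tanh(mu * (G(x) - G(x_bar))))
    return H

# Stretch around a shallow local minimizer of a multimodal function
f = lambda x: x * x + 10.0 * math.sin(x)
x_bar = 3.837          # approximate shallow local minimizer (assumed value)
H = stretch(f, x_bar)
```

Everything at or below f(x_bar) is left unchanged while worse points are lifted, which is the property that lets a restarted search ignore already-explored basins; the "Mexican hat" effect discussed later in the text appears when the parameters of such a transformation are set improperly.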
This method seems promising if the unwanted effect caused by the prefixed point is excluded. No matter whether the stretching function method, an already designed filled function method, or the tunneling algorithm is used, these methods often rely on several key parameters which are difficult to estimate in advance in applications, such as the length of the intervals where the minimizers exist and the lower or upper bounds of the derivative of the objective function. Therefore, an auxiliary function method that is effective in theory may be difficult to implement in practice due to the uncertainty of the parameters. A one-dimensional example is the following function:

f(x) = x^4 - 12x^3 + 47x^2 - 60x + 25.    (3)

Figs. 1 and 2 illustrate that a "Mexican hat" effect appears in the auxiliary function methods (the filled function method and the stretching function method) at one local point x* = 4.60095.

Fig. 1. A filled function (left plot) and a stretching function (right plot) constructed at x* = 4.60095 of the function defined in (3). A "Mexican hat" effect appears in the two plots.

The unwanted effect, namely the introduction of new local minima, is caused by improper parameter settings. The newly introduced local minima increase the complexity of the original problem and hinder the global search of the algorithm. Therefore, an effective and efficient auxiliary function method with easily adjustable parameters is worth investigating. Based on this, in this paper we give a simple two-stage function transformation method which converts the original objective function f(x) into an auxiliary function that descends rapidly and has a high ability to gradually find better solutions in more promising regions. The idea is very similar to that of the filled function method. Specifically, we first find a local minimum of the original objective function.
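As a quick numerical check of the quoted point, Newton's method applied to f' recovers the stationary point near 4.6 (the starting guess of 4.5 is our own choice):

```python
def f(x):
    """The one-dimensional test function (3): f(x) = x^4 - 12x^3 + 47x^2 - 60x + 25."""
    return x**4 - 12*x**3 + 47*x**2 - 60*x + 25

def df(x):    # f'(x)
    return 4*x**3 - 36*x**2 + 94*x - 60

def d2f(x):   # f''(x)
    return 12*x**2 - 72*x + 94

x = 4.5                       # starting guess near the shallow basin
for _ in range(50):
    x -= df(x) / d2f(x)       # Newton iteration on f'
# x converges to the local minimizer x* = 4.60095..., with d2f(x) > 0 confirming a minimum
```

Note that this x* is only a shallow minimizer: numerically f(x*) is about 23.2, while the basin near x = 0.94 reaches roughly 0.94, so an auxiliary function built at x* without the Mexican hat defect should lead the search toward that better basin.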
Then the stretching function technique and an analogue of the filled function method are employed to execute a consecutive two-stage transformation of the objective function. The constructed function is always descending in the region where the original objective function values are higher than the one obtained in the first step, while a stationary point must exist in the better region. Next, we minimize the auxiliary function to find one of its stationary points (a point of f(x) better than the local minimizer obtained before), which then serves as the starting point for the next local optimization. We repeat this procedure until termination. In the new method, the parameters are easy to set; for example, two constants can be prefixed for them, because the properties of the auxiliary function do not depend on varying the parameters, although two parameters are introduced into the auxiliary function. Numerical experiments on a set of standard test problems with dimensions up to 50, and comparisons with other methods, demonstrate that the new algorithm is more efficient.
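The outer loop just described (local minimization, transformation at the minimizer, escape on the auxiliary function, repeat) can be sketched as follows. The crude descent routine, the box limits, and `make_aux` (a toy stand-in with the stated descent/stationary-point property, not the paper's actual two-stage construction) are all our own assumptions:

```python
def local_min(g, x0, step=0.01, iters=5000, h=1e-6, lo=-100.0, hi=100.0):
    """Crude 1-D gradient descent with a numeric derivative; the [lo, hi] box
    keeps escape runs on the auxiliary function from drifting to infinity."""
    x = x0
    for _ in range(iters):
        grad = (g(x + h) - g(x - h)) / (2 * h)
        x = min(max(x - step * grad, lo), hi)
    return x

def make_aux(f, x_star):
    """Toy auxiliary function: where f is worse than f(x_star) it strictly
    descends away from x_star; where f is better, f's own landscape survives,
    so a stationary point exists in the better region."""
    f_star = f(x_star)
    return lambda x: min(f(x) - f_star, 0.0) - (x - x_star) ** 2

def auxiliary_search(f, x0, rounds=20, offsets=(-1.0, 1.0)):
    x_star = local_min(f, x0)
    for _ in range(rounds):
        aux = make_aux(f, x_star)
        # trial escape starts on both sides (real methods pick directions more carefully)
        y = min((local_min(aux, x_star + d) for d in offsets), key=f)
        if f(y) < f(x_star):
            x_star = local_min(f, y)   # refine inside the newly found better basin
        else:
            break                      # no better point found: accept x_star
    return x_star

# On the quartic test function (3), the search first lands in the shallow basin
# near x = 4.601 and the auxiliary step then moves it to the deeper one near x = 0.944.
f = lambda x: x**4 - 12*x**3 + 47*x**2 - 60*x + 25
x_best = auxiliary_search(f, x0=6.0)
```

The design choice mirrored here is the one the paper emphasizes: the auxiliary function has no extra tunable thresholds whose values decide whether escape works, so the same fixed constants are used at every round.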
