UDC 519.6 Vestnik SPbGU. Ser. 10. 2014. Issue 2
K. Makino, M. Berz
RIGOROUS GLOBAL OPTIMIZATION OF SYSTEM PARAMETERS*)
Michigan State University, 48824, East Lansing, USA
In this paper, after reviewing the basics of the method of Taylor models, which enables rigorous computations, we introduce various function range bounding methods utilizing the inherent information associated with Taylor models. Their superb performance is demonstrated using a simple but tricky example. These components allow the construction of rigorous global optimization tools. We explain how to construct such a tool based on the branch-and-bound approach using the example function, illustrating the excellent quality obtained by the method of Taylor models. With this, we proceed to demonstrate the efficiency of the method by applying it to a practical problem: searching for all the parameter operation points yielding desired properties in a lattice of a charged particle storage ring. Bibliogr. 14. Il. 3. Tabl. 2.
Keywords: rigorous computation, Taylor model, function range bound, rigorous global optimization, parameter optimization.
Introduction. There are numerous situations in engineering and science where parameter optimization is required, and consequently parameter optimization has been one of the important fields of numerical computations. A tremendous amount of effort has been made to further the methods and algorithms, and to expand the types of applications that can be treated. Many different kinds of optimization tools have been used in the scientific and engineering communities to assist various tasks of design parameter optimization to obtain desired properties, one example being the design and
Makino Kyoko, doctor of philosophy (Ph. D.), professor; e-mail: [email protected]
Berz Martin, doctor of philosophy (Ph. D.), professor; e-mail: [email protected]
*) The work was supported by the US Department of Energy.
operation of particle accelerators. Still, the reality is that all too often the knowledge and skill of experts in manual tuning lead more directly to the desired optimization results than modern numerical optimization methods do.
One typical drawback of numerical optimization methods comes from the fact that the process gets caught by a local optimum and remains confined nearby. The process is often even sensitive to the starting values of the parameters. To compensate for this difficulty, there has been a large amount of activity in the field of numerical optimization on searching for the global optimum, for example utilizing genetic algorithms, as summarized in [1]. However, those methods based on point evaluations cannot avoid the risk of overlooking critically important points. Thus, ideally there should exist economical numerical tools that determine the global optimum, together with all the parameter values yielding that optimum, without overlooking anything. Such tools, if available, would enable non-experts to proceed efficiently with designs and analysis with confidence, taking some of the mystery out of the hands of experts.
Numerical methods assuring confidence involve the treatment of entire sets instead of mere point evaluations. The method of interval arithmetic is a long known method to support such rigorous computations (see for example [2-4] and a large number of other excellent books). All operations are carried out on intervals instead of numbers, and furthermore, floating point inaccuracies are accounted for by rounding lower bounds down and upper bounds up. The basic operations of interval arithmetic are listed in table 1. While providing rigorous estimates, the method suffers from some practical difficulties such as the dependency problem, leading to overestimation to the extent that in some cases, the estimates may be rigorous but practically useless.
Table 1. Basics of interval arithmetic
[L1, U1] + [L2, U2] = [L1 + L2, U1 + U2],
[L1, U1] − [L2, U2] = [L1 − U2, U1 − L2],
[L1, U1] · [L2, U2] = [min{L1L2, L1U2, U1L2, U1U2}, max{L1L2, L1U2, U1L2, U1U2}]
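For illustration only (this sketch is not part of the paper's tooling), the operations of table 1 can be written in a few lines of Python; a real implementation would additionally round lower bounds down and upper bounds up to account for floating point inaccuracies, as described above.

class Interval:
    """Closed interval [lo, hi]; outward rounding is omitted in this sketch."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)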
We have proposed the method of Taylor models, which combines Taylor expansions with remainder error bounds to support rigorous computations. In this paper we review the basics of the method and discuss its application to rigorous global optimization using some examples. Thanks to the richer information the method carries automatically, and despite its more complicated structure compared to conventional rigorous numerical methods like interval arithmetic, we will observe that the method offers an economical means to treat various problems, including rigorous global optimization problems.
The Method of Taylor models. We list the definition of the Taylor model and the basic arithmetic such as addition and multiplication in table 2 and refer to [5, 6] for more details. Using these, intrinsic functions for Taylor models can be defined by performing various manipulations. To maintain sharp estimates, however, a certain care has to be taken in how to define the Taylor models corresponding to each intrinsic function. Refer to [5, 6] for the details on definitions of standard intrinsic functions to achieve computed remainder bounds of sufficient sharpness. Since obtaining the integral of P with respect to a variable x_i is straightforward, it is straightforward to obtain an integral of a Taylor
Table 2. Definition of an n-th order Taylor model T and the basic arithmetic
f(x) ∈ T = (P, I) = P(x − x_0) + I for all x ∈ D,
where f : D ⊂ R^v → R is (n + 1) times continuously partially differentiable, P is the n-th order Taylor polynomial of f around x_0, x_0 ∈ D, and I is a remainder bound interval,
T_1 + T_2 = (P_1 + P_2, I_1 + I_2),
T_1 · T_2 = (P_{1·2}, I_{1·2}),
where P_{1·2} is the part of the polynomial P_1 · P_2 up to order n, and
I_{1·2} = B(P_e) + B(P_1) · I_2 + B(P_2) · I_1 + I_1 · I_2,
where P_e is the part of the polynomial P_1 · P_2 of orders (n + 1) to 2n, and B(P) is an enclosure bound of P over D
model. Thus we have an antiderivation in the Taylor model arithmetic, and it enables Taylor model applications such as rigorous ODE solvers.
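To make the arithmetic of table 2 concrete, the following Python sketch implements it for the univariate case over the reference domain x_0 ∈ [−1, 1]. It is an illustration under simplifying assumptions (naive monomial-by-monomial polynomial bounding, no control of floating point round-off), not the COSY INFINITY implementation; a Taylor model is a pair (list of polynomial coefficients, remainder interval).

def iadd(a, b):                       # interval addition
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):                       # interval multiplication
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def poly_bound(p):
    """Naive enclosure B(P) of sum_k p[k]*x0^k over x0 in [-1, 1]."""
    b = (0.0, 0.0)
    for k, c in enumerate(p):
        if k == 0:
            mono = (1.0, 1.0)
        elif k % 2 == 0:
            mono = (0.0, 1.0)          # even powers of x0 stay in [0, 1]
        else:
            mono = (-1.0, 1.0)         # odd powers of x0 stay in [-1, 1]
        b = iadd(b, imul((c, c), mono))
    return b

def tm_add(t1, t2):
    (p1, i1), (p2, i2) = t1, t2
    m = max(len(p1), len(p2))
    p = [(p1[k] if k < len(p1) else 0.0) + (p2[k] if k < len(p2) else 0.0) for k in range(m)]
    return (p, iadd(i1, i2))

def tm_mul(t1, t2, n):
    """(P1, I1)*(P2, I2) with the polynomial truncated at order n, as in table 2."""
    (p1, i1), (p2, i2) = t1, t2
    full = [0.0] * (len(p1) + len(p2) - 1)
    for a, ca in enumerate(p1):
        for b, cb in enumerate(p2):
            full[a + b] += ca * cb
    p12 = full[:n + 1]                              # orders 0..n stay in the polynomial
    pe = [0.0] * (n + 1) + full[n + 1:]             # orders n+1..2n go into the remainder
    i12 = iadd(iadd(poly_bound(pe), imul(poly_bound(p1), i2)),
               iadd(imul(poly_bound(p2), i1), imul(i1, i2)))
    return (p12, i12)

For example, the Taylor model of x = 0.5 + 0.5 · x_0 used below is simply ([0.5, 0.5], (0.0, 0.0)), and repeated calls to tm_mul reproduce, up to round-off, the fifth order polynomial of equation (2) further down.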
Based on the definitions of n-th order Taylor models and the arithmetic, the method has the following properties. It provides enclosures of any function given by a finite computer code list by an n-th order Taylor polynomial and a remainder bound with a sharpness that scales with order (n + 1) of the width of the domain D. It alleviates the dependency problem in the calculation [7], and it scales favorably to higher dimensional problems.
The method has been implemented in the code COSY INFINITY [8, 9]. The Taylor model implementation is based on that of the Differential Algebras [10] in the code, hence all the advantageous features of the Differential Algebras package, such as the sparsity support and the efficient coefficient addressing scheme [11], are inherited by the Taylor model implementation, making it a realistic device to study practical problems. Another advantageous feature of the implementation is that the Taylor coefficients adhere to the set of floating point numbers. This has practical benefits, starting from the smooth connection between the Differential Algebras and the Taylor models, and the applicability of some powerful algorithms such as Differential Algebra fixed point solvers and others [10]; however, it requires careful handling of errors associated with floating point numbers to maintain mathematical rigor and correctness. For details of the method and the implementation, refer to, for example, [5, 6].
Function Range Bounding. Naturally, Taylor models can be used for range bounding of functions. Even a crude method of evaluating a bound of P by applying interval arithmetic to all the monomials and then summing them up together with the remainder bound, which we call "naive" Taylor model bounding, provides good function range bounds compared to conventional range bounding methods like interval arithmetic, as we will see shortly. But more sophisticated Taylor model based algorithms are possible, such as the Linear Dominated Bounder (LDB) [5, 12, 13] and the Quadratic Fast Bounder (QFB) [12, 13]. We will review some function range bounding methods using a one dimensional function, which is simple enough that some of the estimates can be confirmed even by hand calculations.
We study a function, originally proposed by Ramon Moore for the illustration of the points we want to make*), which is given as
f(x) = 1 + x^5 − x^4   (1)
*) Moore R. E. Private communication.
in [0,1], whose profile is shown in the left top picture in fig. 1. As one can hand calculate
Figure 1. Range bounding of the function f(x) = 1 + x^5 − x^4 in [0,1] in subdivided domains. Left top: the function in [0, 1]. Left, second to bottom: using the Taylor model method in 16 subdomains by first order naive Taylor model bounding, by fifth order naive Taylor model bounding, and by LDB on the fifth order Taylor models. Right, from top to bottom: using interval arithmetic in 16, 128, 512, 1024 subdomains.
easily, the function is bounded from above by 1, and from below by the minimum that happens at x = 4/5 = 0.8, where the value of the function and hence the minimum is
1 + (4/5)^5 − (4/5)^4 = 1 − 4^4/5^5 = 0.91808. Even though both the mathematical expression and the profile of the function in [0,1] seem to be exceedingly simple, conventional function range bounding methods on computers find it rather difficult to perform the task near the minimum, which is the reason for Moore's interest in it. Because the precise answer is trivially known, the problem serves as an excellent benchmark test for rigorous computation methods.
Conducting interval arithmetic on the function in [0,1],
f([0,1]) = 1 + [0,1]^5 − [0,1]^4 = 1 + [0,1] − [0,1] = 1 + [0,1] + [−1,0] = [0, 2],
we obtain the function range bound [0,2], which certainly encloses the precise bound [0.91808, 1] but with a large overestimation of around 24.4 times the exact range. Dividing the domain of interest into smaller subdomains decreases the overestimation, and the achieved sharpness of the range bound can be seen in the right pictures in fig. 1, from the top to the bottom, for 16, 128, 512, and 1024 equally divided subdomains. For an easier visual comparison, all the picture frames in fig. 1 are fixed to cover the range [0.90, 1.02]. The function range bound estimates with 16 subdomains show unacceptably large overestimations, where the largest bound estimate happens at the right end subdomain, providing the bound [0.724196, 1.227524], whose width is still 6.2 times wider than the precise bound on the entire domain. When the number of subdomains reaches 1024, at which point individual subdomains can no longer be distinguished in the picture, the local function range bounds become reasonably sharp; at the right end subdomain, the range bound is [0.995126, 1.003901] with the width 8.8 · 10^−3, and around x = 0.8, the range bound is [0.916077, 0.920083] with the width 4.0 · 10^−3. Quite a lot of effort has to be invested for the interval method to tackle this problem.
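These numbers can be reproduced, up to the missing directed rounding, with a small Python sketch that evaluates (1) by interval arithmetic on each subdomain; since both powers are monotone on [0, 1], their interval images are exact, and the overestimation stems entirely from the dependency between the two occurrences of x.

def f_interval(a, b):
    # 1 + [a,b]^5 - [a,b]^4 in interval arithmetic; the powers are exact on [0, 1],
    # but the subtraction ignores that they involve the same x (dependency problem)
    return 1.0 + a**5 - b**4, 1.0 + b**5 - a**4

for n in (16, 128, 512, 1024):
    bounds = [f_interval(i / n, (i + 1) / n) for i in range(n)]
    widest = max(hi - lo for lo, hi in bounds)
    print(n, "subdomains, widest local bound width:", widest)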
To observe the performance of Taylor models for this problem, let us start with the arithmetic step by step. We first represent the variable x in [0,1] by a Taylor model as
x ∈ 0.5 + 0.5 · x_0 + [0, 0],   x_0 ∈ [−1, 1].
Then we carry out the fifth order Taylor model arithmetic on the function, which can be performed by hand with moderate effort:
f_TM5 = 1 + (0.5 + 0.5 · x_0 + [0, 0])^5 − (0.5 + 0.5 · x_0 + [0, 0])^4 =
= 1 + 0.5^5 · (1 + 5x_0 + 10x_0^2 + 10x_0^3 + 5x_0^4 + x_0^5 + [0, 0]) −
− 0.5^4 · (1 + 4x_0 + 6x_0^2 + 4x_0^3 + x_0^4 + [0, 0]) =      (2)
= 1 + 0.5^5 · (−1 − 3x_0 − 2x_0^2 + 2x_0^3 + 3x_0^4 + x_0^5) + [0, 0] =
= 1 − 0.5^5 − 3 · 0.5^5 x_0 − 2 · 0.5^5 x_0^2 + 2 · 0.5^5 x_0^3 + 3 · 0.5^5 x_0^4 + 0.5^5 x_0^5 + [0, 0].
Since the original function (1) is a fifth order polynomial, the most accurate Taylor model representation of the function is achieved by a fifth order Taylor model, resulting in a [0, 0] remainder bound. When the Taylor model arithmetic is conducted on computers, however, a tiny nonzero remainder bound will result due to errors associated with the floating point number representation on computers. If lower order Taylor models are used, the polynomial is truncated at the order used, and the higher order polynomial contributions are lumped together under the Taylor model remainder bound.
Based on (2), the fifth order Taylor model representing the function in [0,1], the simplest way to obtain a function range bound is to conduct interval arithmetic on each monomial in the polynomial part of f_TM5 and then add them together with the remainder error bound, which we call "naive Taylor model bounding". Utilizing x_0 ∈ [−1, 1], x_0^2 ∈ [0, 1], x_0^3 ∈ [−1, 1], x_0^4 ∈ [0, 1], and x_0^5 ∈ [−1, 1], while recognizing that the even power contributions cannot be negative,
f_TM5 ∈ 1 + 0.5^5 · (−1 − 3 · [−1, 1] − 2 · [0, 1] + 2 · [−1, 1] + 3 · [0, 1] + [−1, 1]) + [0, 0] ⊆
⊆ 1 + 0.5^5 · [−9, 8] = [0.71875, 1.25],
which is much sharper than the bound [0, 2] obtained by interval arithmetic, but still around 6.5 times wider than the precise bound.
Contrary to interval arithmetic, a division of the domain brings a rapid improvement in accuracy with Taylor models. We show the case with 16 equally subdivided domains in the left pictures in fig. 1. The function range bounds via naive fifth order Taylor model bounding are shown in the left third picture, where the accuracy reaches the level of the 1024 subdivided interval case everywhere throughout the entire domain. As a comparison, we show the first order naive Taylor model bounding in the left second picture, and those bounds are already as sharp as in the 128 subdivided interval case.
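The behavior shown in the left third picture can be reproduced along the following lines. The sketch, an illustration only, re-expands the polynomial (1) around the center of each subdomain, so the remainder is [0, 0] apart from floating point round-off, which is ignored here, and then applies the naive monomial-by-monomial bounding described above.

F = [1.0, 0.0, 0.0, 0.0, -1.0, 1.0]           # coefficients of f(x) = 1 + x^5 - x^4

def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def recenter(coeffs, c, h):
    """Coefficients in x0 of sum_k coeffs[k]*(c + h*x0)^k, i.e. the local Taylor polynomial."""
    out = [0.0] * len(coeffs)
    basis = [1.0]                             # (c + h*x0)^k, starting with k = 0
    for fk in coeffs:
        for j, bj in enumerate(basis):
            out[j] += fk * bj
        basis = polymul(basis, [c, h])
    return out

def naive_bound(p):
    """Monomial-by-monomial bound of the polynomial over x0 in [-1, 1]."""
    lo = hi = p[0]
    for k, c in enumerate(p[1:], start=1):
        m = (0.0, 1.0) if k % 2 == 0 else (-1.0, 1.0)
        lo += min(c * m[0], c * m[1])
        hi += max(c * m[0], c * m[1])
    return lo, hi

for i in range(16):
    a, b = i / 16.0, (i + 1) / 16.0
    p = recenter(F, 0.5 * (a + b), 0.5 * (b - a))
    print("[%.4f, %.4f]:" % (a, b), naive_bound(p))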
By definition, Taylor models carry the information on the Taylor expansion to order n, and this fact can be efficiently utilized to craft sophisticated schemes for function range bounding. The behavior of a function is characterized primarily by its linear part, where the accuracy of the linear representation increases as the domain of interest becomes smaller, except when there is a local extremum, in which case the quadratic part becomes the leading representative of the function. Since Taylor models carry the linear and quadratic terms explicitly as coefficients of P, no further effort is needed to obtain them. This is a significant advantage of the Taylor model method compared to other rigorous methods like the interval method, which do not have any automated mechanism to obtain such information.
The idea leads to several Taylor model based range bounders, utilizing first the linear part, second the quadratic part [5], and even the full Taylor polynomial up to the n-th order. Among them, the Linear Dominated Bounder (LDB) [5, 12, 13] and the Quadratic Fast Bounder (QFB) [12, 13] are practically economical while providing excellent range bounds. Both bounders are applicable to multivariate functions, and both can be used for multi-dimensional pruning to eliminate the parts of the domain that do not contribute to range bounding. For LDB, the result of pruning can be fed back to re-evaluate the linear part in the remaining domain, resulting in an iterative refinement of bounds. Furthermore, the low end point in the domain can be used to provide a cutoff value for pruning, allowing the scheme to obtain an ultimately accurate bound if the function is monotonic.
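The following univariate Python sketch conveys the LDB idea only; the actual multivariate LDB algorithm with domain re-expansion is described in [5, 12, 13]. Here the Taylor model is assumed to be given by polynomial coefficients p (p[k] multiplies x^k), a remainder interval [rlo, rhi], and the current domain [a, b]; no directed rounding is performed.

def ipow(a, b, k):
    """Enclosure of [a, b]^k for k >= 2."""
    vals = (a ** k, b ** k)
    lo = 0.0 if (k % 2 == 0 and a <= 0.0 <= b) else min(vals)
    return lo, max(vals)

def nonlinear_bound(p, a, b):
    """Naive enclosure of the order >= 2 part of the polynomial over [a, b]."""
    lo = hi = 0.0
    for k, c in enumerate(p):
        if k >= 2 and c != 0.0:
            mlo, mhi = ipow(a, b, k)
            lo += min(c * mlo, c * mhi)
            hi += max(c * mlo, c * mhi)
    return lo, hi

def ldb_like_lower_bound(p, rlo, rhi, a, b, iterations=10):
    """Lower bound of P(x) + [rlo, rhi] over [a, b] with linear-part-driven pruning."""
    c0 = p[0]
    c1 = p[1] if len(p) > 1 else 0.0
    lower = None
    for _ in range(iterations):
        nlo, nhi = nonlinear_bound(p, a, b)
        slack = (nhi - nlo) + (rhi - rlo)        # width of the non-linear and remainder information
        if c1 > 0.0:                             # the linear part pushes the minimum to the left end
            lower = c0 + c1 * a + nlo + rlo      # rigorous lower bound on the kept domain
            # any x with c0 + c1*x + nlo + rlo above the upper bound of f(a) cannot improve the minimum
            b = min(b, a + slack / c1)           # prune the right part and iterate on the rest
        elif c1 < 0.0:                           # mirror image: minimum pushed to the right end
            lower = c0 + c1 * b + nlo + rlo
            a = max(a, b + slack / c1)
        else:                                    # no linear part: fall back to the naive bound
            return c0 + nlo + rlo, (a, b)
    return lower, (a, b)

On the pruned domain the nonlinear enclosure shrinks, so repeating the step tightens both the lower bound and the remaining candidate region, which is the iterative refinement mentioned above.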
While a general quadratic bounding tool to range bound multivariate functions, which we call the Quadratic Dominated Bounder (QDB) [5], is computationally expensive in higher dimensions, a special purpose quadratic bounder limited to only positive definite cases, the Quadratic Fast Bounder (QFB) [12, 13], is possible and leads to a very economical tool. The situation when the LDB does not work well in a local domain is a case having an isolated interior minimizer, which is the case when the local quadratic part of the function is positive definite. Thus LDB and QFB complement each other excellently. See [5, 12, 13] for details on the algorithms of those bounders. To provide a qualitative demonstration of those sophisticated Taylor model bounding methods, the left bottom picture shows the function range bounds obtained using the LDB bounder on fifth order Taylor models, where the bounds are optimally sharp within the picture resolution.
Before concluding this section, we remark that on the entire domain, without subdivision, neither the LDB bounder nor the QFB bounder helps to improve the function range bound. As for QFB, f_TM5 is not positive definite there, thus QFB is simply not applicable until subdomains are considered. As for LDB, f_TM5 is not dominated by its linear part there, having larger contributions from the nonlinear polynomial part.
Rigorous Global Optimization. When these efficient tools for range bounding are combined, they lead to an efficient general purpose rigorous global optimization tool. The key to success is to combine all the economically available information about the objective function and the resulting tools in a smart way. For a given multi-dimensional box representing part of the search domain, we apply a branch-and-bound approach that proceeds as follows [12, 13].
Bound the function from below over the box, and if the lower bound is above the cutoff value, eliminate the box from the task. The bounding tools are to be used in a hierarchical way, and even when the box cannot be eliminated, pruning of the box may happen when LDB or QFB is applied. If the box is not eliminated, bisect it and keep the pieces in the task, unless the box size falls below the pre-specified discretization limit.
The cutoff value is to be updated as efficiently as possible. When working on a box, the function value at the center point of the box, which is easy to obtain, can be used for a possible update of the cutoff value in the form of a mid point test. Any other point in the search domain can also be used to provide a possible update of the cutoff value. For example, some information obtained while using QFB might provide a good candidate point, and any other way is beneficial as long as it is economical. One caution, however, is that an upper bound of a rigorous estimate of the function evaluation has to be used for the cutoff value update. The current implementation of the Taylor model based rigorous global optimization package, called COSY-GO [12, 13], uses a gradient method based on the linear and quadratic parts of a local Taylor model, and a quadratic minimizer when the quadratic part of a local Taylor model is positive definite, besides the mid point tests.
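A schematic Python sketch of this branch-and-bound loop is given below. It is not COSY-GO; it assumes two user-supplied routines, lower_bound(box) returning a rigorous lower bound of the objective over a box, and upper_at_midpoint(box) returning a rigorous upper bound of the objective value at the box center; the names and the structure are illustrative only, and the LDB/QFB pruning steps and gradient-based cutoff updates of COSY-GO are not shown.

def global_minimize(lower_bound, upper_at_midpoint, start_box, size_limit=1e-6):
    cutoff = upper_at_midpoint(start_box)            # initial cutoff from a mid point test
    work, retained = [start_box], []
    while work:
        box = work.pop()
        if lower_bound(box) > cutoff:                # the box cannot contain the global minimum
            continue
        cutoff = min(cutoff, upper_at_midpoint(box)) # mid point test: try to improve the cutoff
        widths = [hi - lo for lo, hi in box]
        k = widths.index(max(widths))                # widest direction of the box
        if widths[k] <= size_limit:                  # below the discretization limit: keep the box
            retained.append(box)
            continue
        lo, hi = box[k]
        mid = 0.5 * (lo + hi)                        # bisect along the widest direction
        for piece in ((lo, mid), (mid, hi)):
            child = list(box)
            child[k] = piece
            work.append(child)
    return cutoff, retained                          # upper bound of the minimum, remaining boxes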
We continue to work on the previous example function (1) to illustrate the mechanism of the Taylor model based rigorous global optimizer COSY-GO. As all the underlying algorithms are applicable to multivariate functions, of course the optimizer works for multi-dimensional cases as well.
Upon the first bisection of the original domain [0,1], the right subdomain provides an improvement to the cutoff value. The first improvement is brought by the mid point estimate, then a quick minimum search based on gradient methods using the linear and quadratic parts of the Taylor model improves it further, as shown in the right top picture in fig. 2. Since the true minimum lies in the right subdomain, it will be subdivided and/or pruned to localize the area that holds the minimum. The function range bounding, obtained by the naive fifth order Taylor model bounding, still yields a big overestimation, as shown in the picture. Since the function behavior is not dominated by the linear part, the LDB bounding does not provide any improvement here. On the other hand, the quadratic part of the Taylor model representation of the function is now positive definite, so the QFB bounding does yield an improvement of the lower bound, and, more interestingly, it also narrows the domain of interest by excluding the area that cannot contain the minimum, which is called "pruning". The resulting smaller subdomain [0.5876, 1] produced by the QFB pruning and the improvement of the lower bound are shown in the picture. One may wonder why no improvement of the upper bound is shown; it is because QFB is meant to bound only from below when the function's quadratic part is positive definite.
In the left subdomain, the function can easily be bounded sharply enough even by mere
Figure 2. Branch-and-bound processes to find the minimum of function (1) rigorously
The subdomains and the function range bounds are shown by boxes, and the cutoff values including all the renewals are shown by dots. Left top: using Taylor models. Left bottom: using only interval arithmetic. Right: the Taylor model method situations after the first (top) and the second (bottom) bisections.
interval arithmetic, allowing one to conclude that it lies above the improved cutoff value and hence to discard it from further consideration. Even if the treatment of subdomains begins with the left one instead of the right one, due to the benign behavior of the function in the left subdomain, the candidate area that may include the minimizer can be localized quickly to the right end of the left subdomain, which is near the center of the entire domain. Then, as soon as the work starts in the right subdomain, the small candidate area remaining in the left subdomain is assured to yield a bound of the function that is above the improved cutoff value renewed in the right subdomain.
The next step is to bisect the remaining QFB pruned subdomain in the right half into the left piece [0.5876, 0.7938] and the right piece [0.7938, 1]. Since the right piece contains the true minimum, further improvements of the cutoff value are possible using the local quadratic polynomial and its minimizer, as shown in the right bottom picture. As one sees in the picture, the function behavior is now quite linearly dominated in both the left and the right pieces, so the LDB bounding provides very sharp function range bounds, resulting in the immediate removal of the left piece from further consideration. In the right piece, pruning of the domain of interest happens both by the LDB and the QFB schemes in an avalanche-like fashion until the area is localized to a size smaller than the pre-specified discretization demand, which in this example is 10^−6. It is worth noting that those pruning actions happen on a much finer scale than the picture resolution.
The obtained minimum is guaranteed to be enclosed in
[0.9180799999999953, 0.9180800000000021]
with an accuracy of around 5 · 10^−15, which is only one order of magnitude larger than the representation error of floating point numbers near 1. The minimizer is localized to reside in [0.79999992846, 0.80000007154], with a width of around 1.43 · 10^−7. In fact, the achieved localization is narrower than the pre-specified discretization demand of 10^−6, and this is realized thanks to the QFB pruning. The entirety of the branch-and-bound processes using the Taylor model method is shown in the left top picture in fig. 2, where the function range bound covering the entire original domain [0,1] is shown as well; in this particular example it is redundant, since the domain contains the minimum and thus subdivisions of the domain are necessary anyway. However, an optimization problem can be given with multiple initial domains, in which case bound estimates on all the initial domains are useful to allow possible removals of some initial domains at the very beginning. To conclude, the above result for the minimum and the minimizer is obtained in eight subdomain steps, including the very first domain covering [0,1].
For the sake of comparison, branch-and-bound processes based on interval arithmetic are conducted without using Taylor models, and they are shown schematically in the left bottom picture. The same size 10^−6 is demanded for the pre-specified discretization limit. The task required 13 767 subdomain steps, achieving the guaranteed minimum enclosure [0.91807804, 0.91808001] with the accuracy 2 · 10^−6, and localizing the minimizer in [0.798766, 0.801238] with the width 2.47 · 10^−3, by far inferior to the Taylor model result. While this appears to be striking, one could have expected this performance difference from the previous studies shown in fig. 1.
An Example of Parameter Optimization. The discussed rigorous global optimization method can be used to find all possible values of system parameters that yield desired properties. We use a triple bend achromat (TBA) structure in the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory, considering the strengths of three quadrupole magnets, k_qF, k_qD and k_qFA, as the system parameters [14]. The linear lattice description of the TBA and the linear transfer map depending on k_qF, k_qD and k_qFA were provided by W. Wan at LBNL *). A heuristic approach of scanning a wide range of the parameter space to analyze the system properties and globally find operation values satisfying certain conditions was reported in [14]. This is simple and easy to conduct technically, but in practice the computational cost of providing satisfactory solutions becomes high, since a fine discretization is needed so that important regions are not missed. Furthermore, as the dimensionality of the parameter space increases, the approach merely provides very rough ideas about the properties, not to mention the prohibitively increased computational cost.
The Taylor model based rigorous global optimization is used to search for all parameter values that yield the tune values ν_x = 0.63, ν_y = 0.53, by defining the objective function as
f(k_qF, k_qD, k_qFA) = (tr_x − [tr_x(ν_x = 0.63)])^2 + (tr_y − [tr_y(ν_y = 0.53)])^2
on the parameter space
(k_qF, k_qD, k_qFA) ∈ [−10, 10]^3.
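As a hypothetical illustration, not the COSY-GO input of [14], and assuming that tr_x and tr_y denote the traces of the horizontal and vertical 2×2 blocks of the linear transfer map, related to the tunes by the standard relation tr = 2 cos(2πν) for a stable lattice, the objective could be coded as follows; in the rigorous computation, trace_x and trace_y would themselves be Taylor models in (k_qF, k_qD, k_qFA).

import math

TR_X_TARGET = 2.0 * math.cos(2.0 * math.pi * 0.63)   # trace corresponding to nu_x = 0.63
TR_Y_TARGET = 2.0 * math.cos(2.0 * math.pi * 0.53)   # trace corresponding to nu_y = 0.53

def objective(trace_x, trace_y):
    # squared distance of the computed traces from the traces of the desired tunes
    return (trace_x - TR_X_TARGET) ** 2 + (trace_y - TR_Y_TARGET) ** 2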
The parameter regions yielding the specified tunes are shown in fig. 3, covering a quite large range.
To evaluate the performance, the entire parameter space is scanned to compute the tunes. The discretization size matching the consumed CPU time to that of the
*) Wan W. Private communication.
Figure 3. Three dimensional parameter optimization to yield desired x and y tunes, ν_x = 0.63 and ν_y = 0.53, for the ALS-TBA by the Taylor model based rigorous global optimization
Taylor model based rigorous global optimization turned out to be a very coarse 0.1 in each parameter dimension, totaling 201^3 ≈ 8.12 · 10^6 scanning points. Among all the scanned parameter values, none were found that yield the desired tune values, as could be expected, since hitting the desired values exactly by scanning is very difficult if not impossible. In this example case, the parameter values that yield the tune values nearest to the desired ones are (k_qF, k_qD, k_qFA) = (1.7, −1.1, 1.4), providing the tune values ν_x = 0.6292, ν_y = 0.5417.
References
1. Poklonskiy A. Evolutionary Optimization Methods for Beam Physics: PhD thesis. Michigan, USA: Michigan State University, East Lansing, 2009 // URL: http://bt.pa.msu.edu/pub.
2. Moore R. E. Interval Analysis. Englewood Cliffs, New Jersey: Prentice-Hall, 1966. 145 p.
3. Moore R. E. Methods and Applications of Interval Analysis. Philadelphia: SIAM, 1979. 201 p.
4. Alefeld G., Herzberger J. Introduction to Interval Computations. New York; London: Academic Press, 1983. 352 p.
5. Makino K. Rigorous Analysis of Nonlinear Motion in Particle Accelerators: PhD thesis. Michigan, USA: Michigan State University, East Lansing, 1998 (see also MSUCL-1093, URL: http://bt.pa.msu.edu/pub).
6. Makino K., Berz M. Taylor models and other validated functional inclusion methods // Intern. Journal of Pure and Applied Mathematics. 2003. Vol. 6, N 3. P. 239-316.
7. Makino K., Berz M. Efficient control of the dependency problem based on Taylor model methods // Reliable Computing. 1999. Vol. 5, N 1. P. 3-12.
8. Berz M., Makino K. COSY INFINITY Version 9.1 programmer's manual: technical report MSUHEP-101214. Michigan, USA: Department of Physics and Astronomy, Michigan State University, East Lansing, 2011 (see also URL: http://cosyinfinity.org).
9. Makino K., Berz M. COSY INFINITY version 9 // Nuclear Instruments and Methods. 2006. Vol. 558. P. 346-350.
10. Berz M. Modern Map Methods in Particle Beam Physics. San Diego: Academic Press, 1999 (also available at URL: http://bt.pa.msu.edu/pub).
11. Berz M. Forward algorithms for high orders and many variables // Automatic Differentiation of Algorithms: Theory, Implementation and Application. SIAM. 1991. P. 147-156.
12. Berz M., Makino K., Kim Y.-K. Long-term stability of the Tevatron by validated global optimization // Nuclear Instruments and Methods. 2006. Vol. 558. P. 1-10.
13. Makino K., Berz M. Range bounding for global optimization with Taylor models // Transactions on Computers. 2005. Vol. 4, N 11. P. 1611-1618.
14. Robin D., Wan W., Sannibale F. Global analysis of all linear stable settings of a storage ring lattice // Phys. Review ST-AB. 2008. Vol. 11. P. 024002.
The article was recommended for publication by Prof. D. A. Ovsyannikov. The article was received by the editorial office on December 19, 2013.