Online access to the journal: http://mathizv.isu.ru
Series «Mathematics»
2018. Vol. 26. P. 91-104
UDC 519.6 MSC 41A25, 65D99
DOI https://doi.org/10.26516/1997-7670.2018.26.91
Some Modifications of Newton's Method for Solving Systems of Equations
V. A. Srochko
Irkutsk State University, Irkutsk, Russian Federation
Abstract. The problem of numerically solving a system of nonlinear equations is considered. Two modifications of Newton's method connected with the idea of parametrization are developed and analyzed. The choice of the parameters is aimed at ensuring the monotonicity of the iteration process with respect to some residual.
The first modification uses the Chebyshev residual of the system. In order to find a direction of descent, we propose to solve a subsystem of the Newtonian linear system containing only the equations that correspond to the values of the functions at the current point which are maximal in modulus. Generally speaking, this reduces the computational complexity of the modification in comparison with Newton's method. Furthermore, the method's applicability is extended: the subsystem can have a solution when the complete system is inconsistent. The formula for the parameter is derived from the condition of minimum of a parabolic approximation of the residual along the direction of descent.
The second modification is connected with the Euclidean residual of the system and uses the Lipschitz constant of the Jacobi matrix. An upper estimate for this residual in the form of a strongly convex function is obtained. As a result, a new modification is constructed which, unlike Newton's method, provides a nonlocal reduction of the Euclidean residual on each iteration. Global convergence with respect to the residual, for any initial approximation, at the rate of a geometric progression is proved.
Keywords: nonlinear system of equations, Newton's method with parameter, modifications.
Introduction
The problem of numerically solving systems of nonlinear equations retains its theoretical and applied relevance with respect to improving both the efficiency and the diversity of the methods used for its solution.
No doubt, the main approach to solving systems of nonlinear equations is the classical Newton's method (NM), which has attracted the attention of specialists in computational mathematics for many decades. At present, one can find a large number of NM modifications that improve various characteristics of the iteration process (complexity of realization, domain and rate of convergence, monotonicity, etc.) [2; 5-7; 9-12].
In the present paper, the author constructs two modifications of NM with alternative characteristics. The approach is based on NM with a parameter, which is quite natural from the viewpoint of optimization methods. The choice of the parameter is aimed at ensuring the monotonicity of the iteration process with respect to some residual.
The first modification ($M_1$) uses the Chebyshev residual of the system (the maximum of moduli). To find a direction of descent with respect to this residual, we propose to solve a subsystem of the Newtonian linear system containing only the equations that correspond to the values of the functions at the current point which are maximal in modulus. The normal solution of this subsystem, in which the number of equations is, in general, smaller than the dimension of the initial system, is taken as the basis. Generally speaking, this reduces the computational complexity of modification $M_1$ in comparison with NM. Furthermore, modification $M_1$ can work (the subsystem has a solution) when the NM iteration cannot be realized (the full system is inconsistent). A drawback of the modification is the absence of a guaranteed reduction of the residual for the proposed value of the parameter, which is obtained from the condition of minimum of a parabolic approximation.
The second modification ($M_2$) is connected with the Euclidean residual of the system, and it uses the Lipschitz constant of the Jacobi matrix, which appears in the conditions of the theorem on convergence of NM. An upper estimate for this residual in the form of a strongly convex function (a majorant) is obtained. Along the Newtonian direction of descent this majorant is in turn bounded by a convex parabola, whose minimization leads to an explicit formula for the parameter. As a result, we obtain a modification $M_2$ which, unlike NM, provides a nonlocal reduction of the Euclidean residual on each iteration. Global convergence of $M_2$ with respect to the residual (for any initial approximation) at the rate of a geometric progression with ratio $\frac{1}{2}$ is proved. Note that the Lipschitz constant appearing in $M_2$ can be computed explicitly for quadratic systems.
1. Newton's method and the corresponding relations
Consider the following system of equations
$$f_i(x_1, \ldots, x_n) = 0, \quad i = \overline{1,n} \quad (1.1)$$
under the assumption that $f_i : \mathbb{R}^n \to \mathbb{R}$ are continuously differentiable functions with gradients $\nabla f_i(\cdot)$.
Setting $x = (x_1, \ldots, x_n)$, $F = (f_1, \ldots, f_n)$, we pass to the vector form
$$F(x) = 0. \quad (1.2)$$
Let $F'(x)$ be the Jacobi matrix of the vector function $F(x)$ with rows $\nabla f_i(x)$, $i = \overline{1,n}$.
The standard formula of Newton's method applied to equation (1.2) has the form
$$x^{k+1} = x^k - (F'(x^k))^{-1} F(x^k), \quad k = 0, 1, \ldots \quad (1.3)$$
Within the framework of system (1.1) this formula is realized as follows:
$$x^{k+1} = x^k + p^k, \quad k = 0, 1, \ldots,$$
where the vector $p^k$ is a solution of the linear system
$$(\nabla f_i(x^k), x) = -f_i(x^k), \quad i = \overline{1,n}. \quad (1.4)$$
The iterative procedure
$$x^{k+1} = x^k + \alpha_k p^k, \quad k = 0, 1, \ldots$$
with a parameter $\alpha_k > 0$, which can be obtained by an explicit formula or as a result of a one-dimensional search aimed at reducing a residual $\varphi(x)$ of system (1.1) on the set of points $x^k(\alpha) = x^k + \alpha p^k$, $\alpha > 0$, is a natural modification of Newton's method. A sufficient condition for such a reduction is the extremal property: the vector $p^k$ is a direction of descent of the function $\varphi(x)$ at the point $x^k$.
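As an illustration of this parametrized scheme, a minimal Python sketch of such an iteration is given below; the function names and the halving rule for the one-dimensional search are our own illustrative choices, not taken from the paper.

```python
# A minimal sketch of Newton's method with a step parameter (hypothetical
# names; the halving search is one possible realization of the 1-D search).
import numpy as np

def parametrized_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k + alpha_k * p_k, where p_k solves the
    Newtonian system J(x_k) p = -F(x_k), cf. (1.4)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        p = np.linalg.solve(J(x), -fx)   # Newtonian direction p_k
        alpha = 1.0                      # halve alpha until the residual decreases
        while alpha > 1e-12 and np.linalg.norm(F(x + alpha * p)) >= np.linalg.norm(fx):
            alpha *= 0.5
        x = x + alpha * p
    return x
```

Since $p^k$ is a direction of descent of the residual, the halving loop terminates for a sufficiently small $\alpha$.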
Consider now the residual function in the Euclidean form
$$\varphi_1(x) = \sum_{i=1}^{n} f_i^2(x).$$
The vector $p^k$ is a direction of descent of the function $\varphi_1$ at a point $x^k$ with $\varphi_1(x^k) > 0$. Indeed, the derivative with respect to the direction $p^k$ is negative:
$$(\nabla \varphi_1(x^k), p^k) = -2\varphi_1(x^k) < 0.$$
Next, consider the conditions of the theorem on convergence of NM in the form (1.3) [1; 8]:
1) the vector function $F(x)$ is continuously differentiable in the domain $S_\delta = \{x : \|x - x^*\| < \delta\}$, $\delta > 0$, where $x^*$ is the solution of equation (1.2);
2) for all $x \in S_\delta$ there exists an inverse matrix $(F'(x))^{-1}$ and, moreover, $\|(F'(x))^{-1}\| \le a_1$, $a_1 > 0$;
3) for all $x, y \in S_\delta$
$$\|F(x) - F(y) - F'(y)(x - y)\| \le a_2 \|x - y\|^2, \quad a_2 > 0;$$
4) $x^0 \in S_\varepsilon$, $\varepsilon = \min\{\delta, \frac{1}{a}\}$, $a = a_1 a_2$.
Under these conditions, the quadratic convergence of NM is ensured by the inequality
$$\|x^{k+1} - x^*\| \le a \|x^k - x^*\|^2.$$
Let us pay attention to condition 3), which has a nonstandard character: under the norm sign stands the increment of the function minus its linear part. The Lipschitz condition for the Jacobi matrix (with coordinated matrix and vector norms) is preferable:
$$\|F'(x) - F'(y)\| \le L \|x - y\|, \quad x, y \in S_\delta,$$
from which condition 3) follows with $a_2 = \frac{1}{2}L$ [3; 5].
It is precisely the Lipschitz condition that is taken as the basis for constructing modification $M_2$. In this connection, let us consider system (1.1) with quadratic functions
$$f_i(x) = \frac{1}{2}(x, A_i x) + (b^i, x) + c_i, \quad i = \overline{1,n},$$
where $A_i \in \mathbb{R}^{n \times n}$ is a symmetric matrix, $b^i \in \mathbb{R}^n$, $c_i \in \mathbb{R}$.
Consider the Lipschitz condition for the gradient $\nabla f_i$ in the Euclidean vector norm $\|\cdot\|_2$:
$$\|\nabla f_i(x) - \nabla f_i(y)\|_2 \le \|A_i\|_2 \|x - y\|_2, \quad x, y \in \mathbb{R}^n. \quad (1.5)$$
Here $\|A_i\|_2$ is the spectral matrix norm. By the symmetry of $A_i$ it coincides with the spectral radius: $\|A_i\|_2 = \rho(A_i)$. Recall the well-known property $\rho(A_i) \le \|A_i\|$, where $\|A_i\|$ is any norm of the matrix $A_i$.
Let us verify the Lipschitz condition for the matrix $F'(x)$ on $\mathbb{R}^n$, using the Frobenius matrix norm $\|\cdot\|_F$ and the Euclidean vector norm coordinated with it:
$$\|F'(x) - F'(y)\|_F \le L \|x - y\|_2, \quad x, y \in \mathbb{R}^n. \quad (1.6)$$
Taking into account the structure of the matrix $[F'(x) - F'(y)]$ and inequality (1.5), we obtain
$$\|F'(x) - F'(y)\|_F^2 = \sum_{i=1}^{n} \|\nabla f_i(x) - \nabla f_i(y)\|_2^2 \le \Big(\sum_{i=1}^{n} \rho^2(A_i)\Big) \|x - y\|_2^2.$$
Whence we come to the Lipschitz condition
$$\|F'(x) - F'(y)\|_F \le L_2 \|x - y\|_2, \quad x, y \in \mathbb{R}^n$$
with the constant
$$L_2 = \Big(\sum_{i=1}^{n} \rho^2(A_i)\Big)^{1/2}.$$
Note further that the Frobenius matrix norm $\|A_i\|_F$ is also admissible in inequality (1.5), which leads to the Lipschitz condition with the constant
$$L_F = \Big(\sum_{i=1}^{n} \|A_i\|_F^2\Big)^{1/2}.$$
In this case, $L_2 \le L_F$.
Therefore, in the case of quadratic systems, the Jacobi matrix satisfies the Lipschitz condition on $\mathbb{R}^n$ with the constants $L_2$, $L_F$, which are expressed via the norms of the matrices of second derivatives $A_i$, $i = \overline{1,n}$.
In the general case, it is necessary to estimate the matrices of second derivatives $\nabla^2 f_i(x)$, $i = \overline{1,n}$, in some domain $S_\delta$:
$$\|\nabla^2 f_i(x)\|_F \le l_i, \quad x \in S_\delta.$$
As a result, we arrive at the Lipschitz condition of the form (1.6) on $S_\delta$ with the constant
$$L = \Big(\sum_{i=1}^{n} l_i^2\Big)^{1/2}.$$
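For a quadratic system, the constants $L_2$ and $L_F$ can be computed directly from the matrices $A_i$. A short Python sketch of these formulas (the function name is ours):

```python
# Lipschitz constants of the Jacobi matrix for a quadratic system
# f_i(x) = 1/2 (x, A_i x) + (b_i, x) + c_i, per the formulas above.
import numpy as np

def lipschitz_constants(A_list):
    """A_list: list of symmetric n x n matrices A_i. Returns (L2, LF)."""
    # spectral radius rho(A_i) = ||A_i||_2 for a symmetric matrix A_i
    rho = [np.max(np.abs(np.linalg.eigvalsh(A))) for A in A_list]
    L2 = np.sqrt(sum(r ** 2 for r in rho))
    LF = np.sqrt(sum(np.linalg.norm(A, 'fro') ** 2 for A in A_list))
    return L2, LF   # L2 <= LF always holds
```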
2. Chebyshev residual. The first modification
Let us define the following residual function of system (1.1) at a point $x$:
$$\varphi_2(x) = \max_{1 \le i \le n} |f_i(x)|.$$
Let us identify the set of indices of the active (the most deviating from zero) functions at this point:
$$I(x) = \{i = 1, \ldots, n : |f_i(x)| = \varphi_2(x)\}.$$
Consider the issue of differentiability of the function $\varphi_2(x)$ with respect to directions.
Suppose that at some point $y \in \mathbb{R}^n$ each function $f_i(x)$, $i \in I(y)$, is different from zero: $f_i(y) \ne 0$. Then the function $|f_i(x)|$ is continuously differentiable at the point $y$ with the gradient
$$\nabla |f_i(y)| = \nabla f_i(y)\, \operatorname{sign} f_i(y).$$
Let us now proceed to the function $\varphi_2(x)$. By the known result for the maximum function [4], we conclude that $\varphi_2(x)$ is differentiable at each point $y$ with $\varphi_2(y) > 0$ with respect to any direction $q \in \mathbb{R}^n$, $q \ne 0$, with the derivative
$$\frac{\partial \varphi_2(y)}{\partial q} = \max_{i \in I(y)} (\nabla f_i(y), q)\, \operatorname{sign} f_i(y).$$
Now let us define the vector $q(y)$ as a solution of the linear system
$$(\nabla f_i(y), x) = -f_i(y), \quad i \in I(y).$$
The corresponding derivative with respect to the direction $q(y)$ is
$$\frac{\partial \varphi_2(y)}{\partial q(y)} = \max_{i \in I(y)} (-|f_i(y)|) = \max_{i \in I(y)} [-\varphi_2(y)] = -\varphi_2(y) < 0.$$
Therefore, the vector $q(y)$ gives a direction of descent for the residual function $\varphi_2(x)$ at any point $y$ with $\varphi_2(y) > 0$.
Next, it is possible to organize some procedure of local descent along the direction $q(y)$ in order to reduce the residual $\varphi_2$. However, following [6], one can find an explicit formula for an acceptable step along $q(y)$ within the framework of the following scheme.
Let us construct a parabolic approximation of the function
$$s(\alpha) = \varphi_2(y + \alpha q(y)), \quad \alpha \ge 0$$
according to the rule
$$s(\alpha) \approx \varphi_2(y) + \frac{\partial \varphi_2(y)}{\partial q(y)}\,\alpha + c\alpha^2 = \varphi_2(y)(1 - \alpha) + c\alpha^2. \quad (2.1)$$
The coefficient $c$ is found from the interpolation condition at $\alpha = 1$:
$$s(1) = c \;\Rightarrow\; c = \varphi_2(y + q(y)).$$
The step $\alpha(y)$ is obtained from the condition of minimum of the approximation:
$$\varphi_2(y + q(y))\alpha^2 - \varphi_2(y)\alpha \to \min, \quad \alpha \ge 0.$$
As a result, we obtain the desired expression for the step:
$$\alpha(y) = \frac{\varphi_2(y)}{2\varphi_2(y + q(y))}.$$
Let us give the iteration description of the proposed modification $M_1$.
Let $k = 0, 1, \ldots$, $x^k \in \mathbb{R}^n$. Identify the indices of the active functions at the point $x^k$:
$$I_k = \{i = 1, \ldots, n : |f_i(x^k)| = \varphi_2(x^k)\},$$
and obtain a solution $q^k$ of the linear system
$$(\nabla f_i(x^k), x) = -f_i(x^k), \quad i \in I_k. \quad (2.2)$$
Now compute the step
$$\beta_k = \frac{\varphi_2(x^k)}{2\varphi_2(x^k + q^k)}$$
and construct the next approximation
$$x^{k+1} = x^k + \beta_k q^k.$$
Remark 1. The linear system (2.2) is a fragment of the linear system of NM corresponding to the active functions. It is advisable to find the normal solution of system (2.2) in the form of a linear combination of the gradients of the active functions:
$$q^k = \sum_{i \in I_k} \gamma_i \nabla f_i(x^k).$$
This leads to a linear system (with respect to the coefficients $\gamma_i$) whose dimension equals the number of the functions that deviate most from zero.
Remark 2. The choice of the step $\beta_k$ does not, generally speaking, guarantee a reduction of the residual $\varphi_2$ in the transition $x^k \to x^{k+1}$, owing to the approximate character of relation (2.1). Nevertheless, an explicit expression for the step parameter, based on a definite approximation of the residual function, is a desirable feature when constructing methods for solving systems of equations [2; 6].
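For illustration, one iteration of $M_1$ might be organized as follows. This is a hedged Python sketch under the scheme above (the function names and the active-set tolerance are our assumptions); the normal solution of (2.2) is obtained via least squares, which for an underdetermined subsystem returns exactly the minimum-norm solution mentioned in Remark 1.

```python
# One iteration of modification M1 (illustrative sketch).
import numpy as np

def m1_step(F, J, x, tol_active=1e-12):
    """x: current point; F(x) returns the vector of f_i(x);
    J(x) returns the Jacobi matrix with rows grad f_i(x)."""
    fx = F(x)
    phi2 = np.max(np.abs(fx))                  # Chebyshev residual phi_2(x^k)
    active = np.abs(fx) >= phi2 - tol_active   # index set I_k of active functions
    G = J(x)[active]                           # gradients of the active functions
    # normal (minimum-norm) solution of subsystem (2.2)
    q = np.linalg.lstsq(G, -fx[active], rcond=None)[0]
    # step beta_k from the parabolic model, assuming phi_2(x^k + q^k) > 0
    beta = phi2 / (2.0 * np.max(np.abs(F(x + q))))
    return x + beta * q
```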
3. Euclidean residual. The second modification
Consider system (1.1) in its vector form (1.2) and define the residual function in the Euclidean norm:
$$\varphi(x) = (F(x), F(x))^{1/2} = \|F(x)\|.$$
Let us find an upper functional estimate for the residual $\varphi(x)$ (a majorant function) under the assumption that the Jacobi matrix $F'(x)$ satisfies the Lipschitz condition on $\mathbb{R}^n$ with a constant $L$ (the matrix norm is coordinated with the Euclidean norm):
$$\|F'(x) - F'(y)\| \le L \|x - y\|, \quad x, y \in \mathbb{R}^n.$$
It is known that this condition implies the estimate
$$\|F(x) - F(y) - F'(y)(x - y)\| \le \frac{1}{2}L\|x - y\|^2.$$
Putting here $y = x^k$, we obtain
$$\|F(x) - F(x^k) - F'(x^k)(x - x^k)\| \le \frac{1}{2}L\|x - x^k\|^2.$$
Next, let us use the obvious inequality for the difference of norms
$$\|a\| - \|b\| \le \|a - b\|.$$
Taking into account the previous estimate, we obtain
$$\|F(x)\| - \|F(x^k) + F'(x^k)(x - x^k)\| \le \|F(x) - F(x^k) - F'(x^k)(x - x^k)\| \le \frac{1}{2}L\|x - x^k\|^2.$$
As a result, we obtain the upper estimate for the residual
$$\varphi(x) = \|F(x)\| \le r_k(x), \quad x \in \mathbb{R}^n,$$
with the majorant
$$r_k(x) = \|F(x^k) + F'(x^k)(x - x^k)\| + \frac{1}{2}L\|x - x^k\|^2.$$
Note that $\varphi(x^k) = r_k(x^k)$. Furthermore, the following important property holds: the function $r_k(x)$ is strongly convex on $\mathbb{R}^n$ with the constant $\frac{1}{2}L$. This means that for any $x^1, x^2 \in \mathbb{R}^n$ and $\alpha \in [0,1]$ the following inequality is satisfied:
$$r_k(\alpha x^1 + (1 - \alpha)x^2) \le \alpha r_k(x^1) + (1 - \alpha) r_k(x^2) - \frac{1}{2}L\alpha(1 - \alpha)\|x^1 - x^2\|^2.$$
Now let us give the iteration description of the second modification ($M_2$).
Let us define the set
$$D = \{x \in \mathbb{R}^n : \varphi(x) \ne 0, \ \det F'(x) \ne 0\}$$
of nonsingular points that are not solutions of system (1.2).
Let $k = 0, 1, \ldots$, $x^k \in D$. Let us find the auxiliary point $y^k$ according to Newton's method, solving the linear system
$$F(x^k) + F'(x^k)(x - x^k) = 0.$$
This is the point of minimum of the first addend in the expression for $r_k(x)$. Note that
$$y^k \ne x^k, \quad r_k(x^k) = \varphi(x^k), \quad r_k(y^k) = \frac{1}{2}L\|y^k - x^k\|^2.$$
Now form the convex combination $x^k(\alpha) = (1 - \alpha)x^k + \alpha y^k$, $\alpha \in [0,1]$. By the strong convexity of the function $r_k(x)$ we have
$$r_k(x^k(\alpha)) \le (1 - \alpha) r_k(x^k) + \alpha r_k(y^k) - \frac{1}{2}L\alpha(1 - \alpha)\|y^k - x^k\|^2.$$
After obvious transformations in the right-hand side, we obtain an estimate quadratic with respect to $\alpha$:
$$r_k(x^k(\alpha)) \le \varphi(x^k) - \varphi(x^k)\alpha + \frac{1}{2}L\|y^k - x^k\|^2\alpha^2. \quad (3.1)$$
Now we solve the problem of minimizing the convex parabola
$$s_k(\alpha) = \frac{1}{2}L\|y^k - x^k\|^2\alpha^2 - \varphi(x^k)\alpha \to \min, \quad \alpha \in [0,1].$$
As a result, we obtain the following expression for the step:
$$\alpha_k = \min\Big\{1, \ \frac{\varphi(x^k)}{L\|y^k - x^k\|^2}\Big\}.$$
Let us now form the next approximation:
$$x^{k+1} = x^k + \alpha_k (y^k - x^k).$$
Note the following important characteristic of the iteration.
Lemma 1. The property of nonlocal improvement with respect to the residual holds: $\varphi(x^{k+1}) < \varphi(x^k)$.
Proof. According to the estimate obtained above, $\varphi(x^{k+1}) \le r_k(x^{k+1})$. Next, by (3.1) with $\alpha = \alpha_k$ we obtain
$$r_k(x^{k+1}) \le \varphi(x^k) + s_k(\alpha_k).$$
Since $s_k(0) = 0$ and $\frac{ds_k}{d\alpha}\big|_{\alpha=0} = -\varphi(x^k) < 0$, we have $s_k(\alpha_k) < 0$. Consequently, $r_k(x^{k+1}) < \varphi(x^k)$.
□
Remark 3. The complexity of realization of the modification obtained coincides with that of NM. The improvement consists in the monotonicity with respect to the residual, which is not guaranteed by NM.
Remark 4. According to the iteration formula of NM, $y^k = x^k + p^k$, i.e. modification $M_2$ is represented in the form $x^{k+1} = x^k + \alpha_k p^k$, $\alpha_k \in [0,1]$. This is NM with a parameter. If $\alpha_k = 1$, then we obtain an NM iteration with the property of improvement with respect to the residual. If $\alpha_k < 1$, then the approximation $x^{k+1}$ lies on the segment $[x^k, y^k]$.
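A single iteration of $M_2$ is easy to express in code. The following Python sketch assumes that the Lipschitz constant $L$ is known (the function names are illustrative):

```python
# One iteration of modification M2 (illustrative sketch).
import numpy as np

def m2_step(F, J, x, L):
    """x in D is the current point; L is the Lipschitz constant of F'."""
    fx = F(x)
    phi = np.linalg.norm(fx)           # Euclidean residual phi(x^k)
    d = np.linalg.solve(J(x), -fx)     # d = y^k - x^k, the Newtonian direction
    alpha = min(1.0, phi / (L * np.dot(d, d)))   # step alpha_k
    return x + alpha * d
```

By Lemma 1, each such step strictly reduces $\varphi$, whatever the starting point in $D$.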
4. Estimation of the reduction of the residual. Convergence of $M_2$
Let us study the convergence of modification $M_2$ with respect to the residual under the condition $x^k \in D$, $k = 0, 1, \ldots$ Consider the quadratic estimate (3.1) with $\alpha = \alpha_k$:
$$\varphi(x^{k+1}) \le r_k(x^{k+1}) \le (1 - \alpha_k)\varphi(x^k) + \frac{1}{2}L\|y^k - x^k\|^2\alpha_k^2. \quad (4.1)$$
Introduce the notation
$$\gamma_k = \frac{\varphi(x^k)}{L\|y^k - x^k\|^2}.$$
Hence $\alpha_k = \min\{1, \gamma_k\}$.
Consider the first case, when $\gamma_k < 1$, so that $\alpha_k = \gamma_k$. From (4.1) we obtain
$$\varphi(x^{k+1}) \le (1 - \gamma_k)\varphi(x^k) + \frac{1}{2}\gamma_k \varphi(x^k) = \Big(1 - \frac{1}{2}\gamma_k\Big)\varphi(x^k).$$
Consider the second case: $\gamma_k \ge 1$, i.e. $\varphi(x^k) \ge L\|y^k - x^k\|^2$. Then $\alpha_k = 1$, and inequality (4.1) takes the form
$$\varphi(x^{k+1}) \le \frac{1}{2}L\|y^k - x^k\|^2 \le \frac{1}{2}\varphi(x^k).$$
Combining these two cases, we obtain an estimate of the reduction of the residual on the iteration:
$$\varphi(x^{k+1}) \le \begin{cases} \big(1 - \frac{1}{2}\gamma_k\big)\varphi(x^k), & \gamma_k < 1, \\ \frac{1}{2}\varphi(x^k), & \gamma_k \ge 1. \end{cases}$$
The sequence $\{\varphi(x^k)\}$ is monotonically decreasing and bounded below, hence it converges, i.e.
$$\varphi(x^{k+1}) - \varphi(x^k) \to 0, \quad k \to \infty.$$
Suppose that the case $\gamma_k < 1$ occurs an infinite number of times, i.e. there exists a sequence of indices $k_j$, $j = 1, 2, \ldots$ such that $\gamma_{k_j} < 1$. Then
$$\varphi(x^{k_j+1}) \le \Big(1 - \frac{1}{2}\gamma_{k_j}\Big)\varphi(x^{k_j}), \quad j = 1, 2, \ldots$$
Whence we have
$$\varphi(x^{k_j+1}) - \varphi(x^{k_j}) \le -\frac{1}{2}\gamma_{k_j}\varphi(x^{k_j}).$$
As $j \to \infty$, the difference in the left-hand side converges to zero. Consequently,
$$\gamma_{k_j}\varphi(x^{k_j}) = \frac{\varphi^2(x^{k_j})}{L\|y^{k_j} - x^{k_j}\|^2} \to 0, \quad j \to \infty. \quad (4.2)$$
According to the definition of the point $y^{k_j}$ we have
$$y^{k_j} - x^{k_j} = -[F'(x^{k_j})]^{-1} F(x^{k_j}).$$
Suppose that the inverse matrix is bounded in norm on the domain $D$:
$$\|[F'(x)]^{-1}\| \le c, \quad x \in D.$$
Hence
$$\|y^{k_j} - x^{k_j}\| \le \|[F'(x^{k_j})]^{-1}\| \cdot \|F(x^{k_j})\| \le c\,\varphi(x^{k_j}).$$
Furthermore, from (4.2) we obtain the lower bound
$$\gamma_{k_j}\varphi(x^{k_j}) \ge \frac{\varphi^2(x^{k_j})}{L c^2 \varphi^2(x^{k_j})} = \frac{1}{L c^2} > 0.$$
The latter contradicts the convergence
$$\gamma_{k_j}\varphi(x^{k_j}) \to 0, \quad j \to \infty.$$
Consequently, the assumption that the case $\gamma_k < 1$ is realized infinitely often is wrong, i.e. this inequality is fulfilled only a finite number of times in the course of the iterations.
Therefore, it is possible to find an index $k_0$ such that for $k \ge k_0$ the condition $\gamma_k \ge 1$ is satisfied, i.e.
$$\varphi(x^{k+1}) \le \frac{1}{2}\varphi(x^k), \quad k = k_0, k_0 + 1, \ldots$$
Therefore, we have proved convergence with respect to the residual: $\varphi(x^k) \to 0$, $k \to \infty$. The rate of convergence is that of a geometric progression with ratio $\frac{1}{2}$. The domain of convergence is the set $D$.
In the case $\gamma_k < 1$, the reduction of the residual is characterized by the inequality
$$\varphi(x^{k+1}) \le \Big(1 - \frac{1}{2}\gamma_k\Big)\varphi(x^k)$$
with the multiplier $\big(1 - \frac{1}{2}\gamma_k\big) \in \big(\frac{1}{2}, 1\big)$.
Conclusion
The present paper has described the techniques of constructing two new modifications of Newton's classical method, which are connected with parametrization of its iteration formula.
The first modification uses the Chebyshev residual and on each iteration requires solving only a subsystem of the Newtonian linear system, which improves the characteristics of the method.
The second modification uses the Lipschitz constant of the Jacobi matrix and provides a nonlocal reduction of the Euclidean residual on each iteration. Global convergence of the iteration process with respect to the residual at the rate of a geometric progression has been proved.
References
1. Bakhvalov N.S., Zhidkov N.P., Kobelkov G.M. Chislennye metody [Numerical methods]. Moscow, Laboratory of Basic Knowledge Publ., 2002, 632 p. (in Russian)
2. Budko D.A., Cordero A., Torregrosa J.R. New family of iterative methods based on the Ermakov-Kalitkin scheme for solving nonlinear systems of equations. Comput. Math. Math. Phys., 2015, vol. 55, no. 12, pp. 1986-1998. https://doi.org/10.1134/S0965542515120040
3. Vasilyev F. P. Metody optimizatsii [Optimization Methods]. Moscow, Faktorial Press, 2002, 824 p. (in Russian)
4. Demyanov V. F., Malozemov V. N. Vvedenie v minimaks [Introduction to minimax]. Moscow, Science Publ., 1972, 368 p.(in Russian)
5. Dennis J., Schnabel R. Numerical methods for unconditional optimization and solution of nonlinear equations. Moscow, Mir Publ., 1988. 440 p.
6. Ermakov V.V., Kalitkin N.N. The optimal step and regularization of Newton's method. Comput. Math. Math. Phys., 1981, vol. 21, no. 2, pp. 491-497. https://doi.org/10.1016/0041-5553(81)90022-7
7. Ortega J., Rheinboldt W. Iterative methods for solving nonlinear systems of equations with many variables. Moscow, Mir Publ., 1975. 558 p.
8. Srochko V.A. Chislennye metody [Numerical methods]. Saint Petersburg, Lan Publ., 2010, 208 p. (in Russian)
9. Cordero A., Torregrosa J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput., 2007, vol. 190, pp. 686-698. https://doi.org/10.1016/j.amc.2007.01.062
10. Nesterov Yu. Modified Gauss-Newton scheme with worst case guarantees for global performance. Optimization Methods and Software. 2007, vol. 22, no. 3, pp. 469-483. https://doi.org/10.1080/08927020600643812
11. Petkovic M., Neta B., Petkovic L., Dzunic J. Multipoint methods for solving nonlinear equations. New York, Academic Press, 2012.
12. Spedicato E., Huang Z. Numerical Experience with Newton-like Methods for Nonlinear Algebraic Systems. Computing, 1997, vol. 58, pp. 69-89. https://doi.org/10.1007/BF02684472
Vladimir Srochko, Doctor of Sciences (Physics and Mathematics), Professor, Institute of Mathematics, Economics and Informatics, Irkutsk State University, 1, K. Marx st., Irkutsk, 664003, Russian Federation, tel.: (3952)521241 (e-mail: [email protected])
Received 10.10.18