Computational Technologies
Vol. 4, No. 3, 1999
A COMPUTATIONAL TRIGONOMETRY, AND RELATED CONTRIBUTIONS BY RUSSIANS KANTOROVICH, KREIN, KAPORIN
K. E. GUSTAFSON University of Colorado, Boulder, CO, USA e-mail: gustafs@euclid.colorado.edu
A general trigonometry for matrices and operators, created chiefly by the author of this paper, has proven useful in the analysis of computational methods and in mathematical physics. The aim of this paper is a brief survey of this little-known theory and a demonstration of recent results and future directions of development. Particular attention is given to the contributions of the Russian mathematicians L. Kantorovich, M. Krein, and I. Kaporin.
1. Abstract operator semigroups
Approximately 30 years ago (1967) this author was working in the abstract theory of operator semigroups (Hille–Yosida theory). Given $A$ an infinitesimal generator of a contraction semigroup $W_t$ on a Banach space $X$, and $B$ a bounded multiplicative perturbation, the following result was obtained. Recall that an operator $T$ is strongly accretive if there exists some $m > 0$ such that $\mathrm{Re}\,\langle Tx, x\rangle \ge m\|x\|^2$ for all $x$ in the domain of $T$. Recall also that the infinitesimal generators are always dissipative, $\mathrm{Re}\,\langle Ax, x\rangle \le 0$, i.e., $-A$ is accretive. Here $\langle x, y\rangle$ denotes any semi-inner product on the Banach space $X$.
Theorem 1. If $A$ is a generator and $B$ is bounded and strongly accretive, then $BA$ is a generator iff $BA$ is dissipative. In particular, for bounded strongly accretive $A$ and $B$ on a Hilbert space, $BA$ is accretive when

$$\sin\phi(B) \le \cos\phi(A), \qquad (1)$$

where

$$\sin\phi(B) = \inf_{\epsilon > 0} \|\epsilon B - I\|, \qquad (2)$$

and

$$\cos\phi(A) = \inf_{x \ne 0} \frac{\mathrm{Re}\,\langle Ax, x\rangle}{\|Ax\|\,\|x\|}. \qquad (3)$$
The new entities $\sin\phi(B)$ and $\cos\phi(A)$ ushered in a new operator trigonometry. Originally these were just called $\sin B$ and $\cos A$, but the above notation is better because it emphasizes the operator angle $\phi(A)$: the maximum angle through which an operator $A$ turns vectors.
© K.E. Gustafson, 1999.
Recently, in the two books [1, 2], I have summarized what has been discovered (up to 1996) for the operator trigonometry. However, to continue the development of this paper it is useful to collect a few key results of the early operator trigonometry which we will need in the following. Two important early results were the Minmax Theorem and the Euler Equation, both given below.
Theorem 2 (1968). For any strongly accretive bounded operator $A$ on a Hilbert space $X$, one has

$$\sup_{\|x\| = 1}\; \inf_{-\infty < \epsilon < \infty} \|(\epsilon A - I)x\|^2 \;=\; \inf_{-\infty < \epsilon < \infty}\; \sup_{\|x\| = 1} \|(\epsilon A - I)x\|^2. \qquad (4)$$
This Minmax Theorem establishes that (2) deserves to be called $\sin\phi(B)$: for strongly accretive $B$ one has the relation, essential for a trigonometry, $\sin^2\phi(B) + \cos^2\phi(B) = 1$.
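To make these definitions concrete, here is a minimal numerical sketch (Python with NumPy/SciPy; the random test matrix and all names are my own illustrative choices, not part of the original development). It evaluates (2) by one-dimensional minimization, (3) by minimization over the unit sphere, and checks the identity of Theorem 2:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
B = B @ B.T + 5 * np.eye(n)          # strongly positive definite, hence strongly accretive

# sin phi(B) = inf_{eps>0} ||eps*B - I||, definition (2); a 1-D minimization in eps
sin_phi = minimize_scalar(lambda e: np.linalg.norm(e * B - np.eye(n), 2),
                          bounds=(1e-6, 1.0), method='bounded').fun

# cos phi(B) = inf_{x != 0} <Bx,x>/(||Bx|| ||x||), definition (3); multistart minimization
def mu(x):
    x = x / np.linalg.norm(x)
    return (x @ B @ x) / np.linalg.norm(B @ x)

cos_phi = min(minimize(mu, rng.standard_normal(n)).fun for _ in range(20))

print(sin_phi**2 + cos_phi**2)       # ~= 1.0, the minmax identity of Theorem 2
```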
Theorem 3 (1969). For any strongly accretive bounded operator $A$ on a Hilbert space $X$, the antieigenvalue functional

$$\mu = \frac{\mathrm{Re}\,\langle Ax, x\rangle}{\|Ax\|\,\|x\|} \qquad (5)$$

has Euler equation

$$2\|Ax\|^2\|x\|^2 (\mathrm{Re}\,A)x \;-\; \|x\|^2\,\mathrm{Re}\,\langle Ax, x\rangle\, A^*Ax \;-\; \|Ax\|^2\,\mathrm{Re}\,\langle Ax, x\rangle\, x = 0. \qquad (6)$$
When A is selfadjoint or normal, the Euler equation is satisfied not only by the first antieigenvectors of A, but also by all eigenvectors of A.
The term antieigenvalue was introduced in [3]. The Euler equation for the antieigenvalue functional may be considered a significant extension of the Rayleigh–Ritz variational theory of eigenvectors of selfadjoint operators $A$. Not only maximal stretchings but also maximal turnings are now included. The maximal stretchings occur at the eigenvectors of selfadjoint $A$, at which $\mu = 1$. The maximal turning angle $\phi(A)$ occurs at the first antieigenvectors, for which $\mu = \cos\phi(A)$.
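The last statement of Theorem 3 is easy to test numerically. The following sketch (again illustrative; the test spectrum is hypothetical) builds a selfadjoint $A$ with known eigenbasis and verifies that an eigenvector satisfies (6) to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthonormal eigenbasis
lam = np.array([1.0, 2.0, 3.0, 4.0])
A = Q @ np.diag(lam) @ Q.T                         # selfadjoint positive definite

x = Q[:, 1]                                        # any eigenvector of A
nAx2, nx2, rq = np.linalg.norm(A @ x)**2, np.linalg.norm(x)**2, x @ A @ x

# left-hand side of the Euler equation (6); here Re A = A and A*A = A^2
lhs = 2 * nAx2 * nx2 * (A @ x) - nx2 * rq * (A.T @ A @ x) - nAx2 * rq * x
print(np.linalg.norm(lhs))                         # ~1e-14: eigenvectors satisfy (6)
```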
Specializing further to $A$ a strongly positive selfadjoint operator, I determined in the early period 1966–1969 that

$$\cos\phi(A) = \frac{2\sqrt{mM}}{m + M}, \qquad \sin\phi(A) = \frac{M - m}{M + m}, \qquad (7)$$

where $m$ and $M$ are the lower and upper spectral bounds of $A$. Specializing further to $A$ a symmetric positive definite (SPD) $n \times n$ matrix, the expressions (7) become

$$\cos\phi(A) = \frac{2\sqrt{\lambda_1\lambda_n}}{\lambda_1 + \lambda_n}, \qquad \sin\phi(A) = \frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1}. \qquad (8)$$

These entities are attained at the first antieigenvectors

$$x_{\pm} = \pm\left(\frac{\lambda_n}{\lambda_1 + \lambda_n}\right)^{1/2} x_1 + \left(\frac{\lambda_1}{\lambda_1 + \lambda_n}\right)^{1/2} x_n. \qquad (9)$$

Here $\lambda_1$ and $\lambda_n$ denote the smallest and largest eigenvalues of $A$, $x_1$ and $x_n$ corresponding eigenvectors, and all of $x_1$, $x_n$, and the pair $x_{\pm}$ have been normalized to norm 1.
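As an illustration (a numerical sketch with my own choice of test spectrum), one may construct an SPD matrix with known eigenpairs, form the first antieigenvectors (9), and confirm that they attain the value (8) of the functional (5):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 9.0])     # lambda_1 = 1, lambda_n = 9
A = Q @ np.diag(lam) @ Q.T

l1, ln = lam[0], lam[-1]
x1, xn = Q[:, 0], Q[:, -1]                         # extreme eigenvectors
x_pm = [s * np.sqrt(ln / (l1 + ln)) * x1 + np.sqrt(l1 / (l1 + ln)) * xn
        for s in (+1.0, -1.0)]                     # the pair (9)

mu = lambda x: (x @ A @ x) / (np.linalg.norm(A @ x) * np.linalg.norm(x))
print([mu(x) for x in x_pm])                       # both equal cos phi(A)
print(2 * np.sqrt(l1 * ln) / (l1 + ln))            # formula (8): 0.6
```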
The criterion (1), although not necessary, is surprisingly sharp. For example, for $M_A = M_B = 1$ and $m_A = 1/2$, $BA$ is accretive (i.e., $\mathrm{Re}\,BA$ is positive) whenever $m_B > 0.03$.
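The quoted threshold is immediate from (1), (2), and (7); the following lines (an illustrative computation, not from the original development) recover it:

```python
import numpy as np

# Example of the text: M_A = M_B = 1, m_A = 1/2.
cos_phi_A = 2 * np.sqrt(0.5 * 1.0) / (0.5 + 1.0)     # formula (7): ~0.9428
# For M_B = 1, sin phi(B) = (1 - m_B)/(1 + m_B); criterion (1) then gives
m_B = (1 - cos_phi_A) / (1 + cos_phi_A)
print(m_B)                                           # ~0.0294, the ~0.03 of the text
```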
2. Krein’s deviation
M. G. Krein in [20] introduced a quantity, which he called the deviation of $A$, $\mathrm{dev}\,(A)$, which is equivalent to my operator angle $\phi(A)$. As I did, Krein used the real angle: I had also defined an imaginary angle and a total angle, but these are less useful than the real angle. Through his function $\mathrm{am}(A) = \min_{|\xi| = 1} \mathrm{dev}\,(\xi A)$, Krein treated bounded operators $A$ on a complex Hilbert space; I had treated arbitrary $A$ in a Banach space, but the most important case is $A$ bounded on a complex Hilbert space. Krein's work appeared slightly later than mine, but I assume it to be independent. Like me, Krein was motivated by a semigroups question, but it was a different question.
The principal motivation for Krein’s introduction of the deviation dev (A) was to bound the spectrum of an operator in a sector. This came about in treating an initial value problem
$$\frac{dx}{dt} = A(t)x(t), \qquad x(0) = x_0, \qquad (10)$$

where $A(t)$ is a time-dependent infinitesimal generator found as the derivative of a function $F(t)$ of strongly bounded variation. Then the integral

$$V(t) = \int_0^t \exp\{dF(\tau)\} \qquad (11)$$

solves the initial value problem with solution $x(t) = V(t)x_0$. Since $V(t)$ is in fact a multiplicative integral due to the multiplicative property of the exponentials, written formally

$$V = \prod_0^t \exp\, dF, \qquad (12)$$

one then has formally

$$\mathrm{dev}\,(V) = \mathrm{dev} \prod_0^t \exp\, dF. \qquad (13)$$

Since $\sigma(V)$ lies in a sector bounded by $\mathrm{dev}\,(V)$, this guarantees, for example, that the negative real axis is not in $\sigma(V)$, a useful property when integrating systems of ordinary differential equations.
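For readers who wish to see the multiplicative integral (11), (12) in action, here is a small sketch (my own illustration; the generator $A(t)$ is a hypothetical example). It forms the ordered product of exponentials and compares $V(t)x_0$ against a standard ODE integration of (10):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = lambda t: np.array([[0.0, 1.0 + t],
                        [-1.0 - t, -0.5]])        # hypothetical generator A(t)
t_final, steps = 1.0, 2000
x0 = np.array([1.0, 0.0])

dt, V = t_final / steps, np.eye(2)
for k in range(steps):
    V = expm(A((k + 0.5) * dt) * dt) @ V          # ordered product of exponentials (12)

ref = solve_ivp(lambda t, x: A(t) @ x, (0.0, t_final), x0, rtol=1e-10, atol=1e-12)
print(V @ x0)                                     # ~= the integrated solution below
print(ref.y[:, -1])
```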
To my knowledge, Krein did not develop his $\mathrm{dev}\,(A)$ theory further. Nor did he introduce any antieigenvalue notion or theory. However, very nice convexity techniques were used by Mirman [22] to treat the notion of higher antieigenvalues $\mu_n(A)$, which I had also introduced in [3]. Beyond Krein and Mirman, I do not know of any further work on the antieigenvalue theory by the Russian school.
It should be mentioned that the German matrix specialist H. Wielandt also introduced a notion essentially equivalent to my operator angle $\phi(A)$, and at about the same time. He called his turning angle the matrix singular angle. This idea appeared only in his lecture notes and not in the open literature, and I was not aware of it until 1994, when an editor (H. Schneider) asked me to write a review [4] of it. More information about Wielandt's motivation (spectral inequalities of Weyl–Lidskii [21] type) may be found in the review [4].
3. Kantorovich’s bound
Being invited to speak at the 1990 conference [5], I decided to take that opportunity to examine possible connections of the operator trigonometry to computational linear algebra. I had suspected such connections 30 years ago; I even mentioned them in one of the 1968 papers. Very quickly the following result was obtained.
Theorem 4 (1990). In quadratic steepest descent solution of the symmetric positive definite linear matrix system $Ax = b$, the fundamental Kantorovich error bound [17]

$$E_A(x_{k+1}) \le \left(1 - \frac{4\lambda_1\lambda_n}{(\lambda_1 + \lambda_n)^2}\right) E_A(x_k) \qquad (14)$$

is in fact trigonometric:

$$E_A(x_{k+1}) \le \sin^2\phi(A)\, E_A(x_k). \qquad (15)$$
Here $E_A$ denotes the energy error inner product $E_A(x) = \langle (x - x_*),\, A(x - x_*)\rangle$, where $x_*$ is the true solution of the system. Steepest descent is asymptotically a very slowly converging scheme, but its analysis opens the way to that of better gradient schemes such as Conjugate Gradient, GMRES, and others. In hindsight, I had independently derived my own version of the Kantorovich theory 30 years ago, when I obtained the expressions (7) from convexity arguments on norms. In any case, the conclusion (15) from (14) was immediate, given the Minmax Theorem from the operator trigonometry. But until I put the two theories together, no one had seen the natural geometry (15) of the longstanding 1948 Kantorovich bound (14) [17]. This result first appeared in the Proceedings [5], which were published in spite of the war breaking out in Yugoslavia only a few months after the June 1990 conference in Dubrovnik.
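A short experiment exhibits the geometry (15). In the following sketch (a randomly chosen SPD test matrix; for generic starting errors the steepest descent reduction ratio typically settles at the worst-case value), the observed ratio of successive energy errors approaches $\sin^2\phi(A)$:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
lam = np.linspace(1.0, 10.0, 8)
A = Q @ np.diag(lam) @ Q.T                        # SPD test matrix
b = rng.standard_normal(8)
x_true = np.linalg.solve(A, b)

E = lambda x: (x - x_true) @ A @ (x - x_true)     # energy error E_A

x, e_prev = np.zeros(8), None
for k in range(60):
    r = b - A @ x
    x = x + (r @ r) / (r @ A @ r) * r             # steepest descent, exact line search
    if e_prev is not None and k >= 55:
        print(E(x) / e_prev)                      # ratio of successive energy errors
    e_prev = E(x)

print(((lam[-1] - lam[0]) / (lam[-1] + lam[0]))**2)   # sin^2 phi(A) ~ 0.669
```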
There followed three papers [6-8], occasioned respectively by the First Conference on the Numerical Range and Numerical Radius, the special LAA volume for Chandler Davis, and the special LAA volume for John Maybee. My interest in connecting the operator trigonometry to computational linear algebra grew, and when invited to speak at the 1996 conference [9], I decided I should also look at possible connections of the operator trigonometry to general iterative operator splitting schemes in computational linear algebra. Very basic to such schemes is the Richardson method $x_{k+1} = x_k + \alpha(b - Ax_k)$, with iteration matrix $G_\alpha = I - \alpha A$, where one chooses the parameter $\alpha$ to produce an optimal convergence rate.
Theorem 5 (1996). In Richardson iterative solution of $Ax = b$ for strictly accretive $A$, the optimal parameter $\alpha$ is

$$\alpha = \epsilon_m = \frac{\mathrm{Re}\,\langle Ax_{\pm}, x_{\pm}\rangle}{\|Ax_{\pm}\|^2}, \qquad (16)$$

where $\epsilon_m$ is the minimizing parameter for $\sin\phi(A)$ in (2) and where $x_{\pm}$ are $A$'s first antieigenvectors (9). The optimal convergence rate of the Richardson scheme is $\sin\phi(A)$.
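Theorem 5 is also easy to see numerically for SPD $A$, where $\epsilon_m = 2/(\lambda_1 + \lambda_n)$. In the following sketch (illustrative, with a hypothetical test spectrum), the value (16) computed at the antieigenvector $x_+$ of (9) reproduces $\epsilon_m$, and the spectral norm of $I - \alpha A$ at that $\alpha$ is $\sin\phi(A)$:

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
lam = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 9.0])
A = Q @ np.diag(lam) @ Q.T
l1, ln = lam[0], lam[-1]

x_p = np.sqrt(ln / (l1 + ln)) * Q[:, 0] + np.sqrt(l1 / (l1 + ln)) * Q[:, -1]
alpha = (x_p @ A @ x_p) / np.linalg.norm(A @ x_p)**2   # formula (16)

print(alpha, 2 / (l1 + ln))                            # optimal Richardson parameter
print(np.linalg.norm(np.eye(6) - alpha * A, 2),        # convergence rate ||G_alpha||
      (ln - l1) / (ln + l1))                           # = sin phi(A) = 0.8
```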
The operator trigonometry-computational linear algebra connection has recently been extended to Preconditioned Conjugate Gradient, Jacobi, Gauss–Seidel, SOR, SSOR, Uzawa, AMLI, ADI, Multigrid, Domain Decomposition, and related iterative solution methods for $Ax = b$. More details may be found in [10-12] and work in progress. To date the computational trigonometry has provided a new geometrical convergence theory for each of these schemes. Within multilevel methods a fundamental connection has been established between the trigonometric antieigenvalues of this paper and the strengthened C.B.S. constants of the domain decomposition methods. Superlinear convergence of conjugate gradient methods is seen in terms of higher antieigenvector behavior. It is hoped that in the future the computational trigonometric viewpoint may bring about new algorithms and improved convergence theory for already good algorithms such as GMRES and BICGSTAB.
4. Kaporin’s condition number
Preconditioned conjugate gradient methods for the general solution of the matrix equation $Ax = b$ are becoming more important, especially as we model larger physical problems in three space dimensions. These methods have the merits of ease of implementation and, for very large problems, lower memory requirements. For their use as linear solvers of the discretization of partial differential equations in three dimensions, they appear to have a distinct time advantage over direct solvers as well ($O(n^{1.3})$ versus $O(n^{2.3})$). A central question for such iterative PCG methods is to estimate the number of iterations $k$ required to reduce the relative error to a prescribed $\epsilon$. Previously, such estimates were usually given in terms of the standard condition number $\kappa(A) = \lambda_{\max}/\lambda_{\min}$, the ratio of the largest and smallest eigenvalues of $A$.
I. E. Kaporin [18, 19] introduced another condition number $\beta(A)$ for this purpose, based on the ratio of the trace and determinant of the matrix $A$. This estimate has in principle the disadvantage of requiring knowledge of all eigenvalues, but has on the other hand the advantage of demonstrating the superlinear convergence often experienced in PCG applications. Kaporin's condition number for an $n \times n$ matrix $A$ with $n$ positive eigenvalues is defined to be

$$\beta(A) = \frac{\left(\frac{1}{n}\,\mathrm{trace}\,A\right)^n}{\det A}. \qquad (17)$$
The connection between $\beta$ of (17) and $\mu$ from (5) and (8) is immediate for $n = 2$: $\beta^{-1/2} = \mu$. Thus it is useful to view the antieigenvalue $\mu$ as a condition number and to compare its potential uses to those of the conventional condition number $\kappa$ and Kaporin's condition number $\beta$. One may easily verify their general relation

$$1 \le \beta(A)^{1/n} \le \kappa(A) \le \left[\kappa(A)^{1/2} + \kappa(A)^{-1/2}\right]^2 = 4\mu(A)^{-2} \le 4\beta(A). \qquad (18)$$
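For a hypothetical spectrum, the chain (18) may be checked in a few lines (an illustrative sketch; any positive eigenvalue list will do):

```python
import numpy as np

lam = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # spectrum of a hypothetical SPD A
n = lam.size
kappa = lam[-1] / lam[0]
beta = lam.mean()**n / lam.prod()                  # Kaporin's number (17)
mu = 2 * np.sqrt(lam[0] * lam[-1]) / (lam[0] + lam[-1])   # first antieigenvalue (8)

chain = [1.0, beta**(1.0 / n), kappa,
         (kappa**0.5 + kappa**-0.5)**2, 4.0 / mu**2, 4.0 * beta]
print(chain)
print(all(a <= b + 1e-12 for a, b in zip(chain, chain[1:])))   # the relation (18)
```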
As I pointed out in [9], the preconditioning strategies according to $\kappa$, $\beta$, and $\mu$ are equivalent in practice but different in philosophy. In seeking

$$\kappa(BA) \approx 1 \qquad (19)$$

we are seeking to "undilate" $A$ with $B$. In seeking

$$\mu(BA) \approx 1 \qquad (20)$$

we are seeking to "untwist" $A$ with $B$. In seeking

$$\beta(BA) \approx 1 \qquad (21)$$

we are trying both to undilate and to untwist $A$ at the same time. Perhaps it is better to say that with $\beta$ we are trying to "unmoment" $A$ with $B$.
In principle the use of $\mu(A)$ rather than $\beta(A)$ could have two advantages. First, in a number of matrix iterative computations, one may extract estimates of the largest and smallest eigenvalues much more easily than estimates of all eigenvalues. Second, $\mu(A)$ provides a clear geometrical picture, whereas $\beta(A)$ was developed out of log norm estimates and BFGS update strategies of quasi-Newton iterative schemes. In any case, $\mu(A)$ now brings to $\beta(A)$ a previously absent geometrical theory.
5. An extended operator trigonometry

Originally the theory discussed here was stated in a very general context, $A$ an unbounded operator in a Banach space, and the immediately apparent vagaries about real, imaginary, and complex operator angles were taken care of simply by defining real, imaginary, and total cosines. Because the original semigroup theory motivation dealt with accretive and dissipative operators, I emphasized the real operator trigonometry. Also, one may easily rotate any desired half-plane theory to the real theory.

However, in computational linear algebra, important applications and hence interest have recently been turning to the case of $A$ a general, nonsymmetric, perhaps sparse, perhaps very large, matrix, often $n \times n$, invertible, and perhaps with only real entries. Having these applications in the back of my mind, for the last couple of years I have thought about the most natural way to extend the operator trigonometry to arbitrary matrices, and my opinion now is that one should use polar form. There are two strong contributing reasons for this. First, I have never been motivated in the operator trigonometry to think of uniformly turning operators $A$, e.g., those which rotate all vectors by a fixed angle. What interests us in the operator trigonometry is the relative turning of vectors, just as in the classical eigenvalue theory we are interested in the relative stretching of vectors. Thus polar form $A = U|A|$ efficiently removes the "uniform turning," e.g., in $U$, and we already have the operator angle theory for $|A|$. Second, for invertible operators $A$, polar form is better than singular value decomposition for our purposes of an extended operator trigonometry, because we can show that the essential Minmax Theorem extends without incident.

Therefore let us change the key definition (2) to

$$\sin\phi(A) = \inf_{\epsilon > 0} \|\epsilon A - U\|. \qquad (22)$$

Of course in the $A$ symmetric positive definite case, $U$ is just the identity. Then, considering first for example $A$ to be an arbitrary $n \times n$ nonsingular matrix with singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$, we obtain from (22) and the second expression in (8) that

$$\sin\phi(A) = \frac{\sigma_1 - \sigma_n}{\sigma_1 + \sigma_n}. \qquad (23)$$

One may check that the key minmax identity (4) in its essential form $\sin^2\phi(A) + \cos^2\phi(A) = 1$ is then satisfied if one modifies definition (3) to

$$\cos\phi(A) = \inf_{x \ne 0} \frac{\langle |A|x, x\rangle}{\|\,|A|x\,\|\,\|x\|}. \qquad (24)$$

Then $\cos\phi(A)$ is given as in (8) with $\lambda_1$ and $\lambda_n$ replaced by $\sigma_n$ and $\sigma_1$, respectively. Let us therefore formalize this result.

Theorem 6 (1998). Let $A$ be an arbitrary invertible operator on a complex Hilbert space $X$. From $A$'s polar form $A = U|A|$ define the angle $\phi(A)$ and $\sin\phi(A)$ according to (22), and $\cos\phi(A)$ according to (24). Then

$$\sin^2\phi(A) + \cos^2\phi(A) = 1 \qquad (25)$$

and a full operator trigonometry of relative turning angles obtains for $A$ from that of $|A|$.
Proof. When $A$ is invertible, the partial isometry $U$ in $A$'s polar form is unitary and $|A|$ is strongly positive selfadjoint. Looking now at the right-hand side of (4), we may write

$$\|(\epsilon A - U)x\|^2 = \|U(\epsilon|A| - I)x\|^2 = \|(\epsilon|A| - I)x\|^2, \qquad (26)$$

from which

$$\min_{-\infty < \epsilon < \infty}\; \max_{\|x\| = 1} \|(\epsilon A - U)x\|^2 = \min_{\epsilon > 0} \|\epsilon|A| - I\|^2. \qquad (27)$$

Thus, in view of definition (22) and the known second expression in (7) for strongly positive selfadjoint operators, we obtain from (27) that $\sin^2\phi(A) = \sin^2\phi(|A|)$, and hence

$$\sin\phi(A) = \sin\phi(|A|) = \frac{\|A\| - \|A^{-1}\|^{-1}}{\|A\| + \|A^{-1}\|^{-1}}. \qquad (28)$$

Recall that $\|A\| = M = \|\,|A|\,\|$ and that $\|A^{-1}\|^{-1} = m = \|\,|A|^{-1}\,\|^{-1}$, and note that we prefer the norm expressions in (28) to the $m$ and $M$ from $|A|$ because we speak of $A$ in the Theorem, even though we may need to use $|A|$ to evaluate the expressions in (28). Continuing, by the definition of $\cos\phi(A) = \cos\phi(|A|)$ in (24), the left-hand side of (4) becomes equal to the right-hand side of (4), namely

$$1 - \cos^2\phi(A) = 1 - \cos^2\phi(|A|) = \sin^2\phi(A). \qquad (29)$$

Here we have made use of the Minmax Theorem applied to the strongly accretive operator $|A|$, and we have also used the argument of [8, (3.10)] to arrive at the expression $1 - \cos^2\phi(A)$. Thus we have shown (25). All spectral details known previously for the strongly positive selfadjoint operator trigonometry now transfer via $|A|$ to $A$.
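To illustrate Theorem 6 (a numerical sketch; the matrix is a random nonsymmetric invertible example, and scipy.linalg.polar furnishes $A = U|A|$), one may compare definition (22), the singular value expression (23), and the identity (25):

```python
import numpy as np
from scipy.linalg import polar
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 5)) + 3 * np.eye(5)    # generic nonsymmetric, invertible
U, H = polar(A)                                    # A = U H with H = |A|

s = np.linalg.svd(A, compute_uv=False)
sin_phi = (s[0] - s[-1]) / (s[0] + s[-1])          # formula (23)
sin_def = minimize_scalar(lambda e: np.linalg.norm(e * A - U, 2),
                          bounds=(1e-6, 2.0), method='bounded').fun  # definition (22)
cos_phi = 2 * np.sqrt(s[0] * s[-1]) / (s[0] + s[-1])  # (8) with sigma_n, sigma_1

print(sin_phi, sin_def)                            # agree
print(sin_phi**2 + cos_phi**2)                     # identity (25): 1.0
```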
Let us next consider the second principal theoretical result of the early operator trigonometry, namely the Euler equation (6) for the antieigenvalue functional (5). That is, let us insert $A = U|A|$, for $A$ a strongly accretive (hence invertible) bounded operator, into the Euler equation (6), from which we arrive at the expression in terms of the $U$ and $|A|$ of $A$'s polar form,

$$(U|A| + |A|U^*)x \;-\; \frac{\langle (U|A| + |A|U^*)x, x\rangle}{2\,\|\,|A|x\,\|^2}\,|A|^2x \;-\; \frac{\langle (U|A| + |A|U^*)x, x\rangle}{2\,\|x\|^2}\,x = 0. \qquad (30)$$
Apparently our approach to a general operator trigonometry based upon polar form has not produced an appetizing Euler equation (30).
However, when we remember that the Euler equation (6) was originally derived from the (real) antieigenfunctional (5), due to our earlier interest in accretive semigroup generators, whereas the antieigenfunctional of interest now is (24), as motivated by the needed identity (25), we see that the appropriate Euler equation for the extended operator trigonometry is in fact already before us, as follows.
Theorem 7 (1999). For A = U|A| an arbitrary invertible operator on a complex Hilbert space X, and normalizing solutions x to ||x|| = 1 for convenience, the appropriate Euler equation in this extended operator trigonometry is
$$\frac{|A|^2x}{\langle |A|^2x, x\rangle} - \frac{2|A|x}{\langle |A|x, x\rangle} + x = 0. \qquad (31)$$
Proof. Starting from the functional (24), because |A| is strongly accretive, the derivation of the Euler equation of that functional is the same as the original derivation of (6) from (5). See
e.g. [1, 2] or [7]. Then the simplification to (31) in the selfadjoint case follows, e.g. as already observed in [7].
The Euler equation in the extended operator trigonometry of this paper is actually simpler than the original one, because we have given up the emphasis on general accretive operators, in going to polar form.
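As a check (a sketch with $H$ standing in for $|A|$, the eigendata chosen arbitrarily), the first antieigenvector (9) of $|A|$ indeed annihilates the left side of (31):

```python
import numpy as np

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lam = np.array([1.0, 2.0, 3.0, 5.0, 7.0])
H = Q @ np.diag(lam) @ Q.T                         # H plays the role of |A|

l1, ln = lam[0], lam[-1]
x = np.sqrt(ln / (l1 + ln)) * Q[:, 0] + np.sqrt(l1 / (l1 + ln)) * Q[:, -1]

res = (H @ H @ x) / (x @ H @ H @ x) - 2 * (H @ x) / (x @ H @ x) + x
print(np.linalg.norm(res))                         # ~1e-15: x_+ satisfies (31)
```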
6. Future directions and remarks
6.1. Iterative methods
It is expected that the extended operator trigonometry of Section 5 will open large portions of the analysis of iterative computational methods for nonsymmetric matrices A to the same new trigonometric geometrical and convergence rate investigations available to date essentially only for symmetric matrices A, e.g. as in [9-12]. Further investigation of such computational trigonometry will be pursued elsewhere.
6.2. Alternate extensions
We are not asserting that our approach in this paper is the only way in which one may obtain an extended operator trigonometry. Indeed, see the earlier discussion in [8, Remarks 4.1 and 4.2], where we point out some discrepancies between the variational formulation and the Euler formulation, such discrepancies having their origin in our original interest in and emphasis on real accretivity of operators. Also, there could be situations in which the singular value decomposition could yield an advantageous theory. Even within our approach, there may in the future be a need in certain applications also to take into account certain phase angles within $U$, e.g., two-dimensional internal plane rotations as they occur in the Jacobi scheme for eigenvalue calculations, or situations in which the action of $U$ is that of a permutation or change of basis.
However, the theory we have presented here is very general. It applies to operators $A$ as well as matrices $A$. For computational linear algebra purposes all spectral information for the operator trigonometry of an $n \times n$ matrix $A$ may be obtained from its extreme singular values $\sigma_1$ and $\sigma_n$. This dependence on estimates of singular values may be a cause of some computational complaint, because it requires in principle approximately solving the eigenvalue problem for $A^*A$, and the latter also requires a (possibly large scale) multiplication by $A^*$. On the other hand, most practitioners in the computational linear algebra community, when presented with a large nonsymmetric matrix $A$, immediately think subconsciously about what its condition number $\kappa = \sigma_1/\sigma_n$ may be, because in principle that ratio is one's best initial feeling as to the sensitivity of $Ax = b$ to errors and machine roundoff. Thus some consideration of $A$'s conditioning, e.g., its singular value magnitudes, is unavoidable.
6.3. Condition number angle
An overpreoccupation with the condition number $\kappa = \sigma_1/\sigma_n$ by the computational community is perhaps the reason that the true geometrical meaning of the Kantorovich bound [17] was not seen between 1948 and 1990 [5]. There was indeed an angle defined, the so-called Kantorovich–Wielandt condition number angle $\theta$, defined by $\cot(\theta(A)/2) = \kappa$. I showed in [1, 2] that one may connect my antieigenvalue operator angle $\phi(A)$ to the condition number angle $\theta(A)$, in the case of $A$ symmetric positive definite, as follows: $\cos\phi(A^2) = \sin\theta(A)$. Here let us go further, by use of the extended operator trigonometry of Section 5, to state the following extension of that result to all nonsingular matrices.
Theorem 8 (1998). For any nonsingular $n \times n$ matrix $A$, the condition number angle $\theta(A)$ and the operator angle $\phi(A^*A)$ are related by

$$\cos\phi(A^*A) = \sin\theta(A). \qquad (32)$$

The condition number angle $\theta(A)$ is determined by the first antieigenvectors of $A^*A$; its sine, $\sin\theta(A)$, is the first antieigenvalue of $A^*A$.
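Numerically (another sketch on a random nonsingular test matrix), the relation (32) reads as follows, $\theta(A)$ being obtained from $\cot(\theta/2) = \kappa$:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4)) + 2 * np.eye(4)    # generic nonsingular
s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]

theta = 2 * np.arctan(1.0 / kappa)                 # cot(theta/2) = kappa
cos_phi_AtA = 2 * s[0] * s[-1] / (s[0]**2 + s[-1]**2)  # (8) applied to A*A

print(np.sin(theta), cos_phi_AtA)                  # equal, per (32)
```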
6.4. Invertibility
We have taken $A$ invertible here to mean that $A^{-1}$ is also in $B(X)$, but it should be mentioned that much of the extended theory above holds as well for those instances in which $A$ is just 1-1. There are three cases here (e.g., see [13]) for Hilbert space operators, corresponding to whether 0 is in the continuous or residual spectrum of $A$: $A$ is 1-1 with dense range, the range of $A$ is not dense but $A^{-1}$ is bounded, or the range of $A$ is not dense and $A$ is only 1-1. The point is that $U$ is an isometry when $A$ is 1-1, and then it may be checked that (26) is still true. However, $|A|^{-1}$ may be bounded or unbounded.
The situation of $A$ bounded but not 1-1, e.g., a semidefinite matrix, is also of interest in the computational linear algebra applications. That extension of the theory of this paper will be pursued elsewhere. In that case $U$ is a partial isometry and the null space of $A$ agrees with that of $|A|$.
When $A$ is an unbounded operator in a Hilbert space, an early result in the operator trigonometry (see [1, 2]) showed that $\cos\phi(A) = 0$. Because $\phi(A) = \phi(A^{-1})$ when $A$ has an inverse, to date those instances in which $A$ or $A^{-1}$ is unbounded have not been interesting for computational algorithms.
6.5. Equivalent reformulation
Because $|A| = U^*A$, we may initially define the operator angle $\phi(A)$ from

$$\cos\phi(A) = \inf_{x \ne 0} \frac{\langle U^*Ax, x\rangle}{\|Ax\|\,\|x\|} \qquad (33)$$
and arrive at the same full extended trigonometry of Section 5.
6.6. Higher antieigenvectors
Originally, 30 years ago, I defined higher antieigenvalues in terms of infima of the functional (5) restricted to subspaces orthogonal to the preceding antieigenvectors. In the 1990s my computer experiments with the iterative linear solvers for $Ax = b$ indicated that it was better to define higher antieigenvalues combinatorially, e.g., replacing $\lambda_n$ and $\lambda_1$ in (9) by $\lambda_j$ and $\lambda_i$, $1 \le i < j \le n$, and then defining the higher antieigenvectors by replacing the corresponding
eigenvector components accordingly. One thereby “nests inward” through a sequence of decreasing critical turning angles. See for example the discussions in [7, Remark 6.2] and [8, Section 5]. Let us observe here that these two philosophies are not necessarily so different. Consider
any two vectors x+ = cx1 + dx2 and x- = —cx1 + dx2 where c and d are nonzero possibly complex coefficients and where x1 and x2 are linearly independent. Then x+ and x- have the same span as x1 and x2. Thus minimizing the functional (5) on the subspace orthogonal to the span of the first antieigenvector pair is the same as finding the next critical turning angle of A in the subspace orthogonal to the span of the first and last eigenvector of A, at least in the cases (e.g., A symmetric or normal) where A’s antieigenvectors may be expressed in terms of A’s extreme eigenvectors as in (9).
Pursuing this analysis a bit further, now also assume $x_1$ and $x_2 = x_n$ to have norm one. Then (using (9) and (8), respectively, in the second and third lines therein) one obtains the interesting expression

$$\langle x_+, x_-\rangle = -|c|^2 + |d|^2 + 2i\,\mathrm{Im}(c\bar{d}\langle x_1, x_n\rangle) = -\frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1} + i\,\frac{2\lambda_1^{1/2}\lambda_n^{1/2}}{\lambda_1 + \lambda_n}\,\mathrm{Im}\langle x_1, x_n\rangle = -\sin\phi(A) + i\cos\phi(A)\,\mathrm{Im}\langle x_1, x_n\rangle. \qquad (34)$$
In particular, for $A$ symmetric positive definite we have from (34) that $\langle x_+, x_-\rangle = -\sin\phi(A)$, i.e., the angle $\theta_{\pm}$ between the antieigenvector pair (9) is always $\phi(A) + \pi/2$. This fact has not heretofore been mentioned. The angles $\theta_{\pm}$ of higher antieigenvector pairs defined combinatorially as described above will enjoy a similar relation in terms of the higher operator angles $\phi_k$.
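For instance (a two-dimensional sketch suffices, since only the extreme eigenpairs enter):

```python
import numpy as np

l1, ln = 1.0, 9.0                                  # extreme eigenvalues of an SPD A
c, d = np.sqrt(ln / (l1 + ln)), np.sqrt(l1 / (l1 + ln))
x1, xn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

x_p, x_m = c * x1 + d * xn, -c * x1 + d * xn       # the pair (9)
print(x_p @ x_m, -(ln - l1) / (ln + l1))           # <x_+, x_-> = -sin phi(A) = -0.8
```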
Although the antieigenvectors $x_{\pm}$ in (9) occur in pairs, their linear span does not form an antieigenspace. However, if $\lambda_1$ or $\lambda_n$ has multiplicity greater than one, then the respective $x_1$ or $x_n$ in (9) may be taken arbitrarily of norm one from the respective eigenspace. This fact is easily verified but has not been pointed out heretofore. In this sense the first antieigenvectors take on the multiplicities of the lowest and highest eigenspaces. The same statement is valid for the higher antieigenvectors.
6.7. Other applications
The computational trigonometry presented in this paper also has interesting recent applications to control theory [14], wavelets [15], and quantum probability [16].
References

[1] Gustafson K. Lectures on Computational Fluid Dynamics, Mathematical Physics, and Linear Algebra. Kaigai Publications, Tokyo, 1996; World Scientific, Singapore, 1997.
[2] Gustafson K., Rao D. Numerical Range: The Field of Values of Linear Operators and Matrices. Springer-Verlag, New York, 1997.
[3] Gustafson K. Antieigenvalue inequalities in operator theory. In: "Inequalities III", Los Angeles, 1969. Ed. O. Shisha. Academic Press, 1972, 115-119.
[4] Gustafson K. Commentary on Topics in the Analytic Theory of Matrices. In: "Collected Works of Helmut Wielandt", 2. Eds. B. Huppert, H. Schneider. De Gruyter, Berlin, 1996, 356-367.
[5] Gustafson K. Antieigenvalues in analysis. In: Proc. Fourth Intern. Workshop in Analysis and its Applications, Dubrovnik, 1990. Eds. C. Stanojevic, O. Hadzic. Novi Sad, Yugoslavia, 1991, 57-69.
[6] Gustafson K. Operator trigonometry. Linear and Multilinear Algebra, 37, 1994, 139-159.
[7] Gustafson K. Antieigenvalues. Linear Algebra Appl., 208/209, 1994, 437-454.
[8] Gustafson K. Matrix trigonometry. Ibid., 217, 1995, 117-140.
[9] Gustafson K. Trigonometric interpretation of iterative methods. In: Proc. Conf. Algebraic Multilevel Iteration Methods with Appl., Nijmegen, Netherlands, 1996. Eds. O. Axelsson, B. Polman. 23-29.
[10] Gustafson K. Operator trigonometry of iterative methods. Num. Lin. Alg. with Appls., 4, 1997, 333-347.
[11] Gustafson K. Domain decomposition, operator trigonometry, Robin condition. Contemporary Mathematics, 218, 1998, 455-460.
[12] Gustafson K. Operator trigonometry of the model problem. Num. Lin. Alg. with Appls., 5, 1999 (to appear).
[13] Gustafson K. Operator spectral states. Computers Math. Appls., 34, 1997, 467-508.
[14] Gustafson K. Operator trigonometry of linear systems. In: Proc. IFAC Symp. on Large Linear Systems, Patras, Greece, 1998. Eds. N. Koussoulas, P. Groumpos. 950-955.
[15] Gustafson K. Operator trigonometry of wavelet frames. In: Proc. IMACS Conf. on Iterative Methods, Jackson, Wyoming, 1997. Eds. J. Wang, M. Allen, B. Chen, T. Mathew. IMACS Publications, New Brunswick, 1998, 161-166.
[16] Gustafson K. The geometry of quantum probabilities. In: "On Quanta, Mind, and Matter. Hans Primas in Context". Eds. A. Amann, H. Atmanspacher, U. Mueller-Herold. Kluwer, Dordrecht, 1999, 151-164.
[17] Kantorovich L. Functional analysis and applied mathematics. Uspehi Mat. Nauk, 3, No. 6, 1948, 89-185.
[18] Kaporin I. E. An alternative approach to estimation of the conjugate gradient iteration number. In: "Numerical Methods and Software". Ed. Yu. A. Kuznetsov. Acad. Sci. USSR, Department of Computational Mathematics, 1990, 53-72 (in Russian).
[19] Kaporin I. E. New convergence results and preconditioning strategies for the conjugate gradient method. Num. Lin. Alg. with Appls., 1, 1994, 179-210.
[20] Krein M. G. Angular localization of the spectrum of a multiplicative integral in a Hilbert space. Func. Anal. Appl., 3, 1969, 89-90.
[21] Lidskii V. B. The proper values of the sum and product of symmetric matrices. Doklady Akad. Nauk SSSR, 75, 1950, 769-772.
[22] Mirman B. Antieigenvalues: method of estimation and calculation. Linear Algebra Appl., 49, 1983, 247-255.
Received for publication March 2, 1999