Вычислительные технологии (Computational Technologies)
Vol. 13, No. 6, 2008
Towards a combination of interval and ellipsoid uncertainty*
V. KREINOVICH
Department of Computer Science, University of Texas at El Paso, USA
e-mail: [email protected]
A. Neumaier
Universität Wien, Austria
e-mail: [email protected]
G. Xiang
Philips Healthcare Informatics Business Line RIS, El Paso, USA
e-mail: [email protected]
In many real-life situations, we do not know the probability distribution of measurement errors; we only know upper bounds on these errors. In such situations, we can only conclude that the actual (unknown) values belong to some interval. Based on this interval uncertainty, we want to find the range of possible values of a desired function of the uncertain variables. In general, computing this range is a difficult problem, but in the linear approximation, valid for small errors, there exists a linear-time algorithm for computing the range. In other situations, we know an ellipsoid that contains the desired values. In this case, we also have a linear-time algorithm for computing the range of a linear function. Sometimes we have a combination of interval and ellipsoid uncertainty; then the desired values belong to the intersection of a box and an ellipsoid. In general, estimating the range over this intersection leads to narrower bounds on the desired function. In this paper, we provide two algorithms for estimating the range of a linear function over such an intersection: a simpler $O(n \log(n))$ algorithm and a more sophisticated linear-time algorithm. Both algorithms can be extended to the $l^p$-case, in which instead of an ellipsoid we have a set defined by an $l^p$-norm.
1. Formulation of the problem
Interval uncertainty: brief reminder. Measurements are never 100% accurate; hence, the measurement result $\tilde x_i$ is, in general, different from the actual (unknown) value $x_i$ of the corresponding quantity. The traditional engineering approach to processing measurement uncertainty assumes that we know the probability distribution of the measurement errors $\Delta x_i := \tilde x_i - x_i$.
In many practical situations, however, we do not know these probability distributions. In particular, in many real-life situations, we only know the upper bound $\Delta_i$ on the (absolute value of the) measurement error: $|\Delta x_i| \le \Delta_i$. In such situations, the only information that we get about the actual (unknown) value $x_i$ after the measurement is that $x_i$ belongs to the interval $\mathbf{x}_i = [\tilde x_i - \Delta_i, \tilde x_i + \Delta_i]$; see, e.g., [1].
*This work was supported in part by NSF grants EAR-0225670, DMS-050532645, and HRD-0734825 and by Texas Department of Transportation grant No. 0-5453.
© Institute of Computational Technologies, Siberian Branch of the Russian Academy of Sciences, 2008.
Data processing under interval uncertainty: brief reminder. In addition to the values of the measured quantities $x_1, \dots, x_n$, we often need to know the values of other quantities which are related to $x_i$ by a known dependence $y = f(x_1, \dots, x_n)$. When we know $x_i$ with interval uncertainty, i.e., when we know that $x_i \in \mathbf{x}_i$, then the only conclusion about $y$ is that $y$ belongs to the range $\{f(x_1, \dots, x_n) \mid x_1 \in \mathbf{x}_1, \dots, x_n \in \mathbf{x}_n\}$ of the function $f(x_1, \dots, x_n)$ over the box $\mathbf{x}_1 \times \dots \times \mathbf{x}_n$.
Data processing: linear approximation. In general, computing this range is NP-hard, even for quadratic functions $f$; see, e.g., [2]. However, in many practical situations, the measurement errors are small, thus the intervals $\mathbf{x}_i$ are narrow, and so, on the box $\mathbf{x}_1 \times \dots \times \mathbf{x}_n$, we can safely replace the original function $f(x_1, \dots, x_n)$ by the first two terms in its Taylor formula:
$$f(x_1, \dots, x_n) \approx \tilde y + \sum_{i=1}^{n} c_i\, \Delta x_i, \quad \text{where } \tilde y := f(\tilde x_1, \dots, \tilde x_n) \text{ and } c_i := \frac{\partial f}{\partial x_i}(\tilde x_1, \dots, \tilde x_n),$$
$i = 1, \dots, n$.
For such linear functions, the range is equal to $[\tilde y - \Delta, \tilde y + \Delta]$, where $\Delta = \sum_{i=1}^{n} |c_i|\, \Delta_i$.
The maximum value $\Delta$ of the difference $f - \tilde y = \sum_{i=1}^{n} c_i\, \Delta x_i$ is attained when $\Delta x_i = \Delta_i$ for $c_i > 0$ and $\Delta x_i = -\Delta_i$ for $c_i < 0$; correspondingly, the smallest value $-\Delta$ is attained when $\Delta x_i = -\Delta_i$ for $c_i > 0$ and $\Delta x_i = \Delta_i$ for $c_i < 0$.
Once we know the derivatives $c_i$ and the bounds $\Delta_i$, $i = 1, \dots, n$, the value $\Delta$ describing the desired range can be computed in linear time $O(n)$.
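As an illustration, the linear-time computation of $\Delta$ can be sketched as follows (a minimal sketch; the function name and the numbers are ours, not from the paper):

```python
def interval_range(y0, c, delta):
    """Range of y0 + sum(c_i * dx_i) over |dx_i| <= delta_i (linear approximation).

    Delta = sum |c_i| * Delta_i, computed in one O(n) pass."""
    big_delta = sum(abs(ci) * di for ci, di in zip(c, delta))
    return (y0 - big_delta, y0 + big_delta)

# hypothetical example: y = f(x1, x2, x3) with derivatives (1, 2, -1)
lo, hi = interval_range(5.0, [1.0, 2.0, -1.0], [0.1, 0.2, 0.3])  # Delta ≈ 0.8
```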
Comment. To get a guaranteed enclosure for $y$, we must add to this linear range an interval $[-\delta, \delta]$ which bounds the second- and higher-order terms in the Taylor expansion; this is, in effect, what is known in interval computations as the centered form; see, e.g., [3-5]. Asymptotically, $\delta = O(\Delta^2)$, so we get an asymptotically exact enclosure for the range in linear time.
Ellipsoid uncertainty: a brief reminder. In some cases, the information about the values $\Delta x_1, \dots, \Delta x_n$ comes not as a bound on the values $\Delta x_i$ themselves, but rather as a bound $z \le z_0$ on some quantity $z = g(\Delta x_1, \dots, \Delta x_n)$ which depends on $\Delta x_i$.
When the measurement errors are small, we can expand the function $g$ into a Taylor series and keep only the lowest-order terms in this expansion. In particular, if we keep quadratic terms, we get a quadratic zone $g(\Delta x_1, \dots, \Delta x_n) \le z_0$. If this zone is a bounded set, then it describes an ellipsoid. In this case, the only information about the tuple $\Delta x = (\Delta x_1, \dots, \Delta x_n)$ is that it belongs to this ellipsoid.
Another situation in which we get such ellipsoid uncertainty is when the measurement errors are independent normally distributed random variables, with mean 0 and standard deviations $\sigma_i$. In this case, the probability density is described by the known formula
$$\rho(\Delta x) = \mathrm{const} \cdot \exp\left(-\sum_{i=1}^{n} \frac{\Delta x_i^2}{2\sigma_i^2}\right).$$
This probability density $\rho(\Delta x)$ is everywhere positive; thus, in principle, an arbitrary tuple $\Delta x$ is possible. In practical statistics, however, tuples with very low probability density $\rho(\Delta x)$ are considered impossible.
For example, in the 1-dimensional case, we have the "three sigma" rule: values for which $|\Delta x_1| > 3\sigma_1$ are considered to be almost impossible. In the multi-dimensional case, it is natural to choose some threshold $t > 0$ and consider as possible only the tuples for which $\rho(\Delta x) \ge t$. This condition is equivalent to $\ln(\rho(\Delta x)) \ge \ln(t)$. For the Gaussian distribution, this inequality takes the form
$$\sum_{i=1}^{n} \frac{\Delta x_i^2}{\sigma_i^2} \le r^2$$
for some appropriate value $r$, i.e., the form of an ellipsoid. The sum is $\chi^2(n)$-distributed, with expectation $n$ and standard deviation $\sqrt{2n}$, so here, $r^2 = n + O(\sqrt{n})$ is a natural choice. In this paper, we will consider ellipsoids of this type.
Comment. If the measurement errors are small but not independent, then we also have an ellipsoid, but with a general positive definite quadratic form in the left-hand side of the inequality.
Ellipsoids are also known to be the optimal approximation sets for different problems with respect to several reasonable optimality criteria; see, e.g., [6, 7].
Ellipsoid error estimates are actively used in different applications; see, e.g., [8-15].
Data processing under ellipsoid uncertainty: linear approximation. The range of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ over an ellipsoid can be easily computed by using, e.g., the Lagrange multiplier method. First, one can easily check that the maximum of a linear function is attained at the border of the ellipsoid, i.e., when $\sum_{i=1}^{n} \frac{\Delta x_i^2}{\sigma_i^2} = r^2$. Maximizing the linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ under this constraint is equivalent to solving the unconstrained optimization problem
$$\sum_{i=1}^{n} c_i\, \Delta x_i + \lambda \sum_{i=1}^{n} \frac{\Delta x_i^2}{\sigma_i^2} \to \max,$$
where $\lambda$ is the Lagrange multiplier. For every $i = 1, \dots, n$, differentiating with respect to $\Delta x_i$ and equating the derivative to 0, we conclude that the maximum value $\Delta$ of the linear function is attained when $\Delta x_i = \alpha\, c_i \sigma_i^2$ for $\alpha = -\frac{1}{2\lambda}$.
Here, the parameter $\alpha$ is determined by the condition that $\sum_{i=1}^{n} \frac{\Delta x_i^2}{\sigma_i^2} = r^2$, i.e., that $\alpha^2 \sum_{i=1}^{n} c_i^2 \sigma_i^2 = r^2$ and $\alpha = r \Big/ \sqrt{\sum_{i=1}^{n} c_i^2 \sigma_i^2}$. The smallest possible value $-\Delta$ of this function is attained when $\Delta x_i = -\alpha\, c_i \sigma_i^2$ for all $i = 1, \dots, n$.
The corresponding value $\Delta$ is equal to $\Delta = r \sqrt{\sum_{i=1}^{n} c_i^2 \sigma_i^2}$. This value can also be computed in linear time.
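In code, the ellipsoid case is a one-liner; the following sketch (our naming, illustrative values) also cross-checks the closed form against a direct evaluation at the optimizing tuple $\Delta x_i = \alpha\, c_i \sigma_i^2$:

```python
import math

def ellipsoid_delta(c, sigma, r):
    """Delta = r * sqrt(sum c_i^2 sigma_i^2): half-width of the range of
    sum c_i dx_i over the ellipsoid sum dx_i^2 / sigma_i^2 <= r^2."""
    return r * math.sqrt(sum((ci * si) ** 2 for ci, si in zip(c, sigma)))

# cross-check with the optimizing tuple dx_i = alpha * c_i * sigma_i^2
c, sigma, r = [3.0, 4.0], [1.0, 1.0], 2.0
alpha = r / math.sqrt(sum((ci * si) ** 2 for ci, si in zip(c, sigma)))
dx = [alpha * ci * si ** 2 for ci, si in zip(c, sigma)]
assert abs(sum(ci * dxi for ci, dxi in zip(c, dx)) - ellipsoid_delta(c, sigma, r)) < 1e-9
```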
Need for combining interval and ellipsoid uncertainty. In some practical cases, we have a combination of interval and ellipsoid uncertainty. For example, in the statistical case, we may have an ellipsoid bound and also the three sigma bound $|\Delta x_i| \le 3\sigma_i$ on each measurement error.
In this case, the actual tuple $(\Delta x_1, \dots, \Delta x_n)$ belongs to the intersection of the box $\mathbf{x}_1 \times \dots \times \mathbf{x}_n$ and the ellipsoid.
In general, the smaller the set over which we estimate the range of a given function, the narrower the resulting range. It is therefore desirable to be able to estimate the range of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ over such an intersection.
What we do in this paper: main result. In this paper, we provide two algorithms for estimating the range of a linear function over such an intersection: a simpler $O(n \log(n))$ algorithm and a (somewhat more complex) linear-time algorithm.
From ellipsoids to generalized ellipsoids. We have mentioned that ellipsoids correspond to normal distributions. In many practical cases, the distribution of the measurement errors is different from normal; see, e.g., [1, 16, 17]. In many such cases, we have a distribution of the type
$$\rho(\Delta x) = \mathrm{const} \cdot \exp\left(-\sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p}\right)$$
for some value $p \ne 2$ [16]. For this distribution, the condition $\rho(\Delta x) \ge t$ takes the form $\sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p} \le r^p$ for some value $r$.
The corresponding $l^p$-methods have been successfully used in data processing; see, e.g., [18, 19].
It is therefore reasonable to consider such generalized ellipsoids as well. For a generalized ellipsoid, the Lagrange approach to maximizing a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ leads to
$$\sum_{i=1}^{n} c_i\, \Delta x_i + \lambda \sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p} \to \max,$$
$$c_i + \lambda p \cdot \mathrm{sign}(\Delta x_i)\, \frac{|\Delta x_i|^{p-1}}{\sigma_i^p} = 0, \quad i = 1, \dots, n,$$
and hence, for $p > 1$, to
$$\Delta x_i = \alpha \cdot \mathrm{sign}(c_i)\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)}, \quad i = 1, \dots, n,$$
for some constant $\alpha$. Here, the parameter $\alpha$ is determined by the condition that $\sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p} = r^p$, i.e., that $\alpha^p \sum_{i=1}^{n} |c_i|^{p/(p-1)} \sigma_i^{p/(p-1)} = r^p$ and $\alpha = r \Big/ \sqrt[p]{\sum_{i=1}^{n} |c_i|^{p/(p-1)} \sigma_i^{p/(p-1)}}$. The smallest possible value $-\Delta$ of this function is attained when $\Delta x_i = -\alpha \cdot \mathrm{sign}(c_i)\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)}$. The corresponding value $\Delta$ is equal to
$$\Delta = r \left( \sum_{i=1}^{n} |c_i|^{p/(p-1)} \sigma_i^{p/(p-1)} \right)^{(p-1)/p}.$$
This value can also be computed in linear time.
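The same closed form is easy to evaluate for general $p > 1$ (a sketch with our own naming; for $p = 2$ it reduces to the ordinary ellipsoid value $r\sqrt{\sum c_i^2 \sigma_i^2}$):

```python
def lp_delta(c, sigma, r, p):
    """Delta = r * (sum |c_i|^q sigma_i^q)^(1/q) with q = p/(p-1):
    half-width of the range of sum c_i dx_i over sum |dx_i|^p / sigma_i^p <= r^p."""
    q = p / (p - 1)  # conjugate exponent; requires p > 1
    return r * sum((abs(ci) * si) ** q for ci, si in zip(c, sigma)) ** (1.0 / q)
```

Note that $(|c_i|\sigma_i)^q = |c_i|^{p/(p-1)} \sigma_i^{p/(p-1)}$ and $1/q = (p-1)/p$, so this is exactly the formula above.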
Need for combining interval and generalized ellipsoid uncertainty. Similarly to the case $p = 2$, it is desirable to estimate the range of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ over an intersection of a box and a generalized ellipsoid. In this paper, we will consider this problem for $p > 1$.
2. Analysis of the problem: general form of the optimal tuple
In the general case, we want to find the maximum and the minimum of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ over the intersection of a generalized ellipsoid and a box. In order to describe an algorithm for computing this maximum and minimum, let us first describe the general properties of the tuples $\Delta x$ for which the maximum and minimum are attained.
Definition 1. By a generalized ellipsoid $E$, we mean the set of all tuples $\Delta x = (\Delta x_1, \dots, \Delta x_n)$ which satisfy the inequality
$$\sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p} \le r^p,$$
where $p$, $r$, and $\sigma_i$ are positive real numbers.
We want to find the maximum and the minimum of a linear function on the intersection $I = E \cap B$ of a generalized ellipsoid and a box $B = [-\Delta_1, \Delta_1] \times \dots \times [-\Delta_n, \Delta_n]$.
Without loss of generality, we can assume that all the coefficients $c_i$, $i = 1, \dots, n$, of the linear function are non-negative. Indeed, if $c_i < 0$ for some $i$, then we can simply replace the original variable $\Delta x_i$ with a new variable $\Delta x_i' := -\Delta x_i$. After this replacement, the expressions for the ellipsoid $E$ and for the box $B$ remain the same, but the corresponding coefficient $c_i$ becomes positive.
Under this assumption, one can easily see that the maximum of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ with $c_i > 0$ is attained when $\Delta x_i \ge 0$ for all $i$. We then get the following result.
Proposition 1. The maximum of a linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$ with $c_i > 0$ over the intersection $E \cap B$ of a box $B = [-\Delta_1, \Delta_1] \times \dots \times [-\Delta_n, \Delta_n]$ and a generalized ellipsoid $E = \left\{ \Delta x : \sum_{i=1}^{n} \frac{|\Delta x_i|^p}{\sigma_i^p} \le r^p \right\}$ is attained, for a certain value $\alpha$, at a tuple
$$\Delta x_i = \min(\Delta_i, \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}), \quad i = 1, \dots, n.$$
Observation. This expression has an interesting relation to the corresponding expressions for the box and for the generalized ellipsoid. Indeed, let us recall that for the box $B$, the maximum is attained for $\Delta x_i = \Delta_i$, $i = 1, \dots, n$. For the generalized ellipsoid $E$, the maximum is attained when, for a certain value $\alpha_E$, we have $\Delta x_i = \alpha_E\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, $i = 1, \dots, n$. According to Proposition 1, the optimizing tuple for the intersection $E \cap B$ is a component-wise minimum of two tuples:
— the tuple with components $\Delta_i$, $i = 1, \dots, n$, which maximizes the linear function on the box $B$, and
— the tuple with components $\alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, $i = 1, \dots, n$, which is similar to the tuple that maximizes the linear function on the generalized ellipsoid $E$.
It should be mentioned that the second tuple is not exactly the one that maximizes the linear function over $E$, since, in general, the value $\alpha$ (corresponding to the maximum over the intersection $E \cap B$) is different from the value $\alpha_E$ corresponding to the maximum over $E$.
Comment. For general (not necessarily non-negative) coefficients $c_i$, we get $\Delta x_i = \mathrm{sign}(c_i) \min(\Delta_i, \alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)})$, $i = 1, \dots, n$.
Proof. Let $\Delta x = (\Delta x_1, \dots, \Delta x_n)$ be an optimal (maximizing) tuple.
If there are indices $i$ and $j$ for which $1 \le i, j \le n$, $\Delta x_i < \Delta_i$, and $\Delta x_j < \Delta_j$, then, for sufficiently small real numbers $\varepsilon_i$ and $\varepsilon_j$, we can replace $\Delta x_i$ with $\Delta x_i + \varepsilon_i$ and $\Delta x_j$ with $\Delta x_j + \varepsilon_j$, and still stay within the intervals $[0, \Delta_i]$ and $[0, \Delta_j]$, i.e., within the box $B$. Let us select the changes $\varepsilon_i$ and $\varepsilon_j$ in such a way that the sum
$$s := \frac{(\Delta x_i)^p}{\sigma_i^p} + \frac{(\Delta x_j)^p}{\sigma_j^p}$$
remains unchanged; then we will stay within the generalized ellipsoid as well. For small $\varepsilon_i$ and $\varepsilon_j$, we have
$$\frac{(\Delta x_i + \varepsilon_i)^p}{\sigma_i^p} + \frac{(\Delta x_j + \varepsilon_j)^p}{\sigma_j^p} = \frac{(\Delta x_i)^p}{\sigma_i^p} + \frac{(\Delta x_j)^p}{\sigma_j^p} + \frac{p\, \varepsilon_i\, \Delta x_i^{p-1}}{\sigma_i^p} + \frac{p\, \varepsilon_j\, \Delta x_j^{p-1}}{\sigma_j^p} + o(\varepsilon_i).$$
Thus, to make sure that $s$ does not change, we must select $\varepsilon_j$ for which
$$\frac{p\, \varepsilon_j\, \Delta x_j^{p-1}}{\sigma_j^p} = -\frac{p\, \varepsilon_i\, \Delta x_i^{p-1}}{\sigma_i^p} + o(\varepsilon_i),$$
i.e.,
$$\varepsilon_j = -\varepsilon_i\, \frac{\Delta x_i^{p-1}}{\Delta x_j^{p-1}} \cdot \frac{\sigma_j^p}{\sigma_i^p} + o(\varepsilon_i).$$
The resulting change in the maximized linear function is equal to $c_i \varepsilon_i + c_j \varepsilon_j$. Substituting the expression for $\varepsilon_j$ in terms of $\varepsilon_i$, we conclude that this change is equal to
$$\varepsilon_i \left( c_i - c_j\, \frac{\Delta x_i^{p-1}}{\Delta x_j^{p-1}} \cdot \frac{\sigma_j^p}{\sigma_i^p} \right) + o(\varepsilon_i).$$
If the coefficient at $\varepsilon_i$ were positive, then we could take a small positive $\varepsilon_i$ and further increase the value of the linear function, which contradicts our selection of the tuple $\Delta x$ for which the maximum is attained. Similarly, if the coefficient at $\varepsilon_i$ were negative, then we could take a small negative $\varepsilon_i$ and further increase the value of the linear function. Thus, this coefficient cannot be positive and cannot be negative; hence it must be equal to 0. So,
$$c_i - c_j\, \frac{\Delta x_i^{p-1}}{\Delta x_j^{p-1}} \cdot \frac{\sigma_j^p}{\sigma_i^p} = 0,$$
or, equivalently,
$$\frac{\Delta x_i^{p-1}}{c_i\, \sigma_i^p} = \frac{\Delta x_j^{p-1}}{c_j\, \sigma_j^p}.$$
This equality holds for every two indices $i$ and $j$ for which $1 \le i, j \le n$, $\Delta x_i < \Delta_i$, and $\Delta x_j < \Delta_j$. Thus, for all the indices $i = 1, \dots, n$ for which $\Delta x_i < \Delta_i$, the above ratio has the same value. Let us denote this common ratio by $r_0$; then, we conclude that for all such indices $i$, we have $\frac{\Delta x_i^{p-1}}{c_i\, \sigma_i^p} = r_0$ and hence, that
$$\Delta x_i = \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)},$$
where we denoted $\alpha := r_0^{1/(p-1)}$.
If $\Delta x_i < \Delta_i$ and $\Delta x_j = \Delta_j$, then we can similarly change $\Delta x_i$ and $\Delta x_j$, but only the changes for which $\varepsilon_j \le 0$ will keep us inside the box. Since the sign of $\varepsilon_j$ is opposite to the sign of $\varepsilon_i$, we thus conclude that we can only take $\varepsilon_i \ge 0$. Thus, the coefficient at $\varepsilon_i$ in the expression for the change of the (linear) objective function cannot be positive, because then we would be able to further increase this objective function. So, this coefficient must be non-positive, i.e.,
$$c_i - c_j\, \frac{\Delta x_i^{p-1}}{\Delta x_j^{p-1}} \cdot \frac{\sigma_j^p}{\sigma_i^p} \le 0,$$
or, equivalently,
$$\frac{\Delta x_j^{p-1}}{c_j\, \sigma_j^p} \le \frac{\Delta x_i^{p-1}}{c_i\, \sigma_i^p}.$$
Since $\Delta x_i < \Delta_i$ for the index $i$, we have $\frac{\Delta x_i^{p-1}}{c_i\, \sigma_i^p} = r_0$. Thus, we conclude that $\frac{\Delta x_j^{p-1}}{c_j\, \sigma_j^p} \le r_0$, i.e., $\Delta x_j = \Delta_j \le \alpha\, c_j^{1/(p-1)} \sigma_j^{p/(p-1)}$. Hence:
— when $\Delta x_j < \Delta_j$, we get $\Delta x_j = \alpha\, c_j^{1/(p-1)} \sigma_j^{p/(p-1)}$;
— when $\Delta x_j = \Delta_j$, we get $\Delta x_j = \Delta_j \le \alpha\, c_j^{1/(p-1)} \sigma_j^{p/(p-1)}$.
To complete the proof of our proposition, let us consider two cases. If $\Delta_i \le \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, then we cannot have $\Delta x_i < \Delta_i$: in that case we would have $\Delta x_i = \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$ and thus $\Delta_i > \Delta x_i = \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, which contradicts our assumption. Thus, the only remaining case here is $\Delta x_i = \Delta_i$.
On the other hand, if $\Delta_i > \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, then we cannot have $\Delta x_i = \Delta_i$: otherwise, we would have $\Delta_i \le \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, which also contradicts our assumption. Thus, in this case, we must have $\Delta x_i < \Delta_i$, and we already know that in this case, $\Delta x_i = \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$. So:
— if $\Delta_i \le \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, then $\Delta x_i = \Delta_i$;
— if $\Delta_i > \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$, then $\Delta x_i = \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}$.
In both cases, we have
$$\Delta x_i = \min(\Delta_i, \alpha\, c_i^{1/(p-1)} \sigma_i^{p/(p-1)}), \quad i = 1, \dots, n.$$
The proposition is proven. □
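As a sanity check of Proposition 1, one can compare the component-wise-min formula with a brute-force grid search on a small instance (here $p = 2$, $n = 2$; all numbers are made up for illustration, and for these particular numbers the first coordinate is the one clipped at its box bound):

```python
# maximize c1*dx1 + c2*dx2 over the intersection of the box |dx_i| <= delta_i
# and the ellipse dx1^2/sigma1^2 + dx2^2/sigma2^2 <= r^2
c, delta, sigma, r = [1.0, 2.0], [0.5, 0.6], [1.0, 0.5], 1.0

best, best_x = float("-inf"), None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x1 = -delta[0] + 2 * delta[0] * i / steps
        x2 = -delta[1] + 2 * delta[1] * j / steps
        if (x1 / sigma[0]) ** 2 + (x2 / sigma[1]) ** 2 <= r ** 2:
            v = c[0] * x1 + c[1] * x2
            if v > best:
                best, best_x = v, (x1, x2)

# Proposition 1 predicts dx_i = min(delta_i, alpha * c_i * sigma_i^2) for some alpha;
# for these numbers the first coordinate is clipped at delta_1, so alpha follows
# from the active ellipse constraint:
alpha = ((r ** 2 - (delta[0] / sigma[0]) ** 2) / (c[1] ** 2 * sigma[1] ** 2)) ** 0.5
predicted = (min(delta[0], alpha * c[0] * sigma[0] ** 2),
             min(delta[1], alpha * c[1] * sigma[1] ** 2))
```

The grid maximizer agrees with the predicted tuple up to the grid resolution.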
3. Analysis of the problem: how to find $\alpha$
According to our result, once we know the value of the parameter $\alpha$, we will be able to find all the values $\Delta x_i$, $i = 1, \dots, n$, of the optimal tuple, and thus, find the largest possible value $\Delta$ of the desired linear function $\sum_{i=1}^{n} c_i\, \Delta x_i$.
For each $i = 1, \dots, n$, writing $z_i := \frac{\Delta_i}{|c_i|^{1/(p-1)} \sigma_i^{p/(p-1)}}$, the dependence of $|\Delta x_i|$ on $\alpha$ can be described as follows:
— If $\alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)} \le \Delta_i$, i.e., if $\alpha \le z_i$, then we take $|\Delta x_i| = \alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)}$.
— On the other hand, if $\alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)} > \Delta_i$, i.e., if $\alpha > z_i$, then we take $|\Delta x_i| = \Delta_i$.
So, if we sort the indices by the value $z_i$ into a sequence $z_1 \le z_2 \le \dots \le z_n$, then the maximizing tuple has the form
$$\Delta x = (\mathrm{sign}(c_1) \Delta_1, \dots, \mathrm{sign}(c_t) \Delta_t,\ \alpha\, \mathrm{sign}(c_{t+1}) |c_{t+1}|^{1/(p-1)} \sigma_{t+1}^{p/(p-1)}, \dots, \alpha\, \mathrm{sign}(c_n) |c_n|^{1/(p-1)} \sigma_n^{p/(p-1)})$$
for some threshold value $t$ for which $z_t \le \alpha \le z_{t+1}$.
How do we find this threshold value $t$? In principle, it is possible that the optimal solution is attained when $\Delta x_i = \pm\Delta_i$ for all $i$. In this case, the generalized ellipsoid contains the whole box. In all other cases, the value $\alpha$ must be determined by the condition that the optimal tuple is on the surface of the generalized ellipsoid, i.e., that
$$\sum_{i=1}^{t} \frac{\Delta_i^p}{\sigma_i^p} + \alpha^p \sum_{j=t+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)} = r^p,$$
or, equivalently,
$$\sum_{i=1}^{n} \frac{\left(\min(\Delta_i, \alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)})\right)^p}{\sigma_i^p} = r^p.$$
The left-hand side of this equality is an increasing function of $\alpha$. Thus, to find the proper value of $t$, it is sufficient to check the values $\alpha = z_1, \dots, z_n$. If for some $k = 1, \dots, n$, we get
$$\sum_{i=1}^{k} \frac{\Delta_i^p}{\sigma_i^p} + z_k^p \sum_{j=k+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)} > r^p,$$
this means that we need to decrease $\alpha$, i.e., that we should have fewer values $\Delta x_i = \pm\Delta_i$; in other words, this means that $t < k$.
On the other hand, if for some $k = 1, \dots, n$, we get
$$\sum_{i=1}^{k} \frac{\Delta_i^p}{\sigma_i^p} + z_k^p \sum_{j=k+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)} \le r^p,$$
this means that $t \ge k$.
So, we can find the desired threshold $t$ as the largest index $k$ for which, for $\alpha = z_k$, the left-hand side of the above equality is still less than or equal to $r^p$; due to monotonicity with respect to $\alpha$, this value $t$ can be found by bisection.
Once we find this threshold value $t$, we can then find $\alpha$ from the equation
$$\sum_{i=1}^{t} \frac{\Delta_i^p}{\sigma_i^p} + \alpha^p \sum_{j=t+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)} = r^p,$$
i.e., $\alpha^p = \frac{r^p - E^-}{E^+}$, where $E^- := \sum_{i=1}^{t} \frac{\Delta_i^p}{\sigma_i^p}$ and $E^+ := \sum_{j=t+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$. After that, we can uniquely determine the optimal tuple $\Delta x = (\Delta x_1, \dots, \Delta x_n)$ and thus the desired maximal value $\Delta = \sum_{i=1}^{t} |c_i|\, \Delta_i + \alpha \sum_{j=t+1}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$.
So, we arrive at the following algorithms for computing $\Delta$.
4. A simpler O(n log(n)) algorithm
Algorithm. First, we check whether the generalized ellipsoid contains the box, i.e., whether $\sum_{i=1}^{n} \frac{\Delta_i^p}{\sigma_i^p} \le r^p$. If this is the case, then the desired maximum is equal to $\sum_{i=1}^{n} |c_i|\, \Delta_i$. If this is not the case, then we apply our algorithm.
In this algorithm, we first sort the indices $i = 1, \dots, n$ in increasing order of the value $z_i$.
After this sorting, we apply the following iterative algorithm. At each iteration of this algorithm, we have two numbers:
— the number $i^-$ such that for all indices $i \le i^-$, we already know that for the optimal tuple $\Delta x$, we have $|\Delta x_i| = \Delta_i$;
— the number $i^+$ such that for all indices $j \ge i^+$, we already know that for the optimal tuple $\Delta x$, we have $|\Delta x_j| < \Delta_j$.
In the beginning, $i^- = 0$ and $i^+ = n + 1$. At each iteration, we also update the values of two auxiliary quantities $E^- := \sum_{i=1}^{i^-} \frac{\Delta_i^p}{\sigma_i^p}$ and $E^+ := \sum_{j=i^+}^{n} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$. In principle, on each iteration, we could compute these sums "from scratch"; however, to speed up computations, on each iteration, we update these auxiliary values in a way that is faster than re-computing the corresponding sums.
Initially, since $i^- = 0$ and $i^+ = n + 1$, we take $E^- = E^+ = 0$. At each iteration, we do the following:
— first, we compute the midpoint $m = (i^- + i^+)/2$ (rounded to an integer);
— we compute $e^- := \sum_{i=i^-+1}^{m} \frac{\Delta_i^p}{\sigma_i^p}$ and $e^+ := \sum_{j=m+1}^{i^+-1} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$;
— if $E^- + e^- + z_m^p\, (E^+ + e^+) > r^p$, then we replace $i^+$ with $m + 1$ and $E^+$ with $E^+ + e^+$;
— if $E^- + e^- + z_m^p\, (E^+ + e^+) \le r^p$, then we replace $i^-$ with $m$ and $E^-$ with $E^- + e^-$.
At each iteration, the set of undecided indices is divided in half. Iterations continue until all indices are decided, after which we compute $\alpha$ from the condition that $E^- + \alpha^p E^+ = r^p$, i.e., as $\alpha^p := \frac{r^p - E^-}{E^+}$. Once we know $\alpha$, we compute the maximizing tuple $|\Delta x_i| = \min(\Delta_i, \alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)})$ and then, the desired maximum $\sum_{i=1}^{n} |c_i| \cdot |\Delta x_i|$.
Computational complexity of the above algorithm. Sorting requires time O(n log(n)); see, e.g., [20].
After this, at each iteration, all the operations with indices from $i^-$ to $i^+$ require time $T$ linear in the number of such indices: $T \le C \cdot (i^+ - i^-)$ for some constant $C$. We start with the set of indices of full size $n$; on the next iteration, we have a set of size $n/2$, then $n/4$, etc. Thus, after sorting, the overall computation time is $\le C(n + n/2 + n/4 + \dots) \le C \cdot 2n$, i.e., linear in $n$. So, the overall computation time is indeed $O(n \log(n)) + O(n) = O(n \log(n))$.
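Putting Sections 3 and 4 together, the whole computation can be sketched in a few dozen lines. This is our own sketch, not the authors' code: it uses a linear scan over the sorted $z_i$ instead of bisection (still $O(n \log n)$ overall, since sorting dominates), and it drops indices with $c_i = 0$, where the optimum puts $\Delta x_i = 0$:

```python
def max_over_intersection(c, delta, sigma, r, p=2.0):
    """Maximum of sum c_i*dx_i over {|dx_i| <= delta_i} intersected with
    the generalized ellipsoid {sum |dx_i|^p / sigma_i^p <= r^p}, p > 1."""
    # drop zero coefficients: the optimum puts dx_i = 0 there
    keep = [i for i in range(len(c)) if c[i] != 0]
    c = [abs(c[i]) for i in keep]      # signs are absorbed into the dx_i
    delta = [delta[i] for i in keep]
    sigma = [sigma[i] for i in keep]
    n, rp = len(c), r ** p
    # case 1: the generalized ellipsoid contains the whole box
    if sum((delta[i] / sigma[i]) ** p for i in range(n)) <= rp:
        return sum(c[i] * delta[i] for i in range(n))
    q = p / (p - 1)
    z = [delta[i] / (c[i] ** (1 / (p - 1)) * sigma[i] ** q) for i in range(n)]
    w = [(c[i] * sigma[i]) ** q for i in range(n)]   # terms of E+
    order = sorted(range(n), key=z.__getitem__)      # O(n log n)
    e_minus, e_plus = 0.0, sum(w)
    # threshold t = largest k with E-_k + z_k^p * E+_k <= r^p (monotone in k)
    for i in order:
        box_term = (delta[i] / sigma[i]) ** p
        if e_minus + box_term + z[i] ** p * (e_plus - w[i]) > rp:
            break
        e_minus += box_term
        e_plus -= w[i]
    alpha = ((rp - e_minus) / e_plus) ** (1 / p)
    # optimal |dx_i| = min(delta_i, alpha * c_i^{1/(p-1)} * sigma_i^{p/(p-1)})
    return sum(ci * min(di, alpha * ci ** (1 / (p - 1)) * si ** q)
               for ci, di, si in zip(c, delta, sigma))
```

With a huge box, the result coincides with the pure-ellipsoid value $r\sqrt{\sum c_i^2\sigma_i^2}$, and with a huge ellipsoid, with the pure-box value $\sum |c_i|\,\Delta_i$.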
Comment. This algorithm works in an even more general case, when there exists a function $\rho_0(x)$ such that for every $i = 1, \dots, n$, the probability density function $\rho_i(\Delta x_i)$ of the $i$-th measurement error has the form $\rho_i(\Delta x_i) = \rho_0\!\left(\frac{|\Delta x_i|}{\sigma_i}\right)$ for some $\sigma_i$, and the measurement errors are independent, i.e., $\rho(\Delta x) = \rho_1(\Delta x_1) \cdot \ldots \cdot \rho_n(\Delta x_n)$. In this case, similar arguments lead to a generalized ellipsoid of the type $\sum_{i=1}^{n} \psi\!\left(\frac{|\Delta x_i|}{\sigma_i}\right) \le r_0$, where $\psi(x) := -\ln(\rho_0(x))$. The above algorithm can be extended to the case of strictly convex smooth functions $\psi(x)$ for which this function, its derivative, and the corresponding inverse functions can be computed in polynomial time. This class includes the functions $\psi(x) = |x|^p$ with $p > 1$ as particular cases.
5. Linear-time algorithm
Main idea behind the linear time algorithm. Our second algorithm is similar to the above O(n log(n)) algorithm. In that algorithm, the only non-linear-time part was sorting. To avoid sorting, in the second algorithm, we use the known fact that we can compute the median of a set of n elements in linear time (see, e.g., [20]). (Our use of median is similar to the one from [21, 22].)
Our linear time algorithm is only efficient for large n. It is worth mentioning that while asymptotically, the linear time algorithm for computing the median is faster than sorting, this median computing algorithm is still rather complex — so, for small n, sorting is faster than computing the median.
This is the reason why in this paper, we present two different algorithms — both algorithms are practically useful:
— for large n, the linear time algorithm is faster;
— however, for small n, the O(n log(n)) algorithm is faster.
Let us now describe the linear time algorithm.
Algorithm. First, we check whether the generalized ellipsoid contains the box, i.e., whether $\sum_{i=1}^{n} \frac{\Delta_i^p}{\sigma_i^p} \le r^p$. If this is the case, then the desired maximum is equal to $\sum_{i=1}^{n} |c_i|\, \Delta_i$. If this is not the case, then we perform the following iterations.
At each iteration, we have three sets:
— the set $I^-$ of all the indices $i$ from 1 to $n$ for which we already know that for the optimal tuple $\Delta x$, we have $|\Delta x_i| = \Delta_i$;
— the set $I^+$ of all the indices $j$ from 1 to $n$ for which we already know that for the optimal tuple $\Delta x$, we have $|\Delta x_j| < \Delta_j$;
— the set $I = \{1, \dots, n\} - I^- - I^+$ of the indices $i$ for which we are still undecided.
In the beginning, $I^- = I^+ = \emptyset$ and $I = \{1, \dots, n\}$. At each iteration, we also update the values of two auxiliary quantities $E^- := \sum_{i \in I^-} \frac{\Delta_i^p}{\sigma_i^p}$ and $E^+ := \sum_{j \in I^+} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$.
In principle, we could compute these values by computing the corresponding sums, but to speed up computations, on each iteration, we update these auxiliary values in a way that is faster than re-computing the sums.
Initially, since $I^- = I^+ = \emptyset$, we take $E^- = E^+ = 0$.
At each iteration, we do the following:
— first, we compute the median $m$ of the set $I$ (median in terms of sorting by $z_i$);
— then, by analyzing the elements of the undecided set $I$ one by one, we divide them into two subsets $P^- = \{i : z_i \le z_m\}$ and $P^+ = \{j : z_j > z_m\}$;
— we compute $e^- := \sum_{i \in P^-} \frac{\Delta_i^p}{\sigma_i^p}$ and $e^+ := \sum_{j \in P^+} |c_j|^{p/(p-1)} \sigma_j^{p/(p-1)}$;
— if $E^- + e^- + z_m^p\, (E^+ + e^+) > r^p$, then we replace $I^+$ with $I^+ \cup P^+$, $I$ with $P^-$, and $E^+$ with $E^+ + e^+$;
— if $E^- + e^- + z_m^p\, (E^+ + e^+) \le r^p$, then we replace $I^-$ with $I^- \cup P^-$, $I$ with $P^+$, and $E^-$ with $E^- + e^-$.
At each iteration, the set of undecided indices is divided in half. Iterations continue until all indices are decided, after which we compute $\alpha$ from the condition that $E^- + \alpha^p E^+ = r^p$, i.e., as $\alpha^p := \frac{r^p - E^-}{E^+}$. Once we know $\alpha$, we compute the maximizing tuple $|\Delta x_i| = \min(\Delta_i, \alpha\, |c_i|^{1/(p-1)} \sigma_i^{p/(p-1)})$, $i = 1, \dots, n$, and then, the desired maximum $\sum_{i=1}^{n} |c_i| \cdot |\Delta x_i|$.
Computational complexity of the above algorithm. Let us show that this algorithm indeed requires linear time. Indeed, at each iteration, computing the median requires linear time, and all other operations with $I$ require time $T$ linear in the number of elements $|I|$ of $I$: $T \le C|I|$ for some constant $C$. We start with the set $I$ of size $n$; on the next iteration, we have a set of size $n/2$, then $n/4$, etc. Thus, the overall computation time is $\le C(n + n/2 + n/4 + \dots) \le C \cdot 2n$, i.e., linear in $n$.
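The pruning scheme of this section can be sketched as follows. This is our own sketch, with two caveats: Python's `statistics.median_low` sorts internally, so the sketch does not actually achieve the $O(n)$ bound (a true linear-time selection routine, e.g., the median-of-medians method from [20], would be needed for that); and we additionally decide the median index itself on every iteration, which guarantees termination even with tied $z_i$:

```python
import statistics

def max_intersection_prune(c, delta, sigma, r, p=2.0):
    """Median-pruning computation of max sum c_i*dx_i over box ∩ generalized ellipsoid."""
    keep = [i for i in range(len(c)) if c[i] != 0]   # dx_i = 0 is optimal when c_i = 0
    c = [abs(c[i]) for i in keep]
    delta = [delta[i] for i in keep]
    sigma = [sigma[i] for i in keep]
    n, rp, q = len(c), r ** p, p / (p - 1)
    if sum((delta[i] / sigma[i]) ** p for i in range(n)) <= rp:
        return sum(c[i] * delta[i] for i in range(n))  # ellipsoid contains the box
    z = [delta[i] / (c[i] ** (1 / (p - 1)) * sigma[i] ** q) for i in range(n)]
    w = [(c[i] * sigma[i]) ** q for i in range(n)]      # terms of E+
    undecided, e_minus, e_plus = list(range(n)), 0.0, 0.0
    while undecided:
        zm = statistics.median_low(z[i] for i in undecided)
        m = next(i for i in undecided if z[i] == zm)
        p_minus = [i for i in undecided if z[i] <= zm]  # includes m
        p_plus = [j for j in undecided if z[j] > zm]
        em = sum((delta[i] / sigma[i]) ** p for i in p_minus)
        ep = sum(w[j] for j in p_plus)
        if e_minus + em + zm ** p * (e_plus + ep) > rp:
            # threshold lies below m: m and all larger-z indices stay strictly inside
            e_plus += ep + w[m]
            undecided = [i for i in p_minus if i != m]
        else:
            # threshold lies at m or above: all indices with z_i <= z_m are clipped
            e_minus += em
            undecided = p_plus
    alpha = ((rp - e_minus) / e_plus) ** (1 / p)
    return sum(ci * min(di, alpha * ci ** (1 / (p - 1)) * si ** q)
               for ci, di, si in zip(c, delta, sigma))
```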
Acknowledgments
The authors are thankful to the participants of the Second International Workshop on Reliable Engineering Computing (Savannah, Georgia, February 22-24, 2006) and the International Conference on Finite Element Methods in Engineering and Science FEMTEC'2006 (El Paso, Texas, December 11-15, 2006) for valuable discussions.
We are also thankful to the anonymous referees for important editorial suggestions.
References
[1] Rabinovich S. Measurement Errors and Uncertainties: Theory and Practice. N.Y.: Springer-Verlag, 2005.
[2] Kreinovich V., Lakeyev A., Rohn J., Kahl P. Computational complexity and feasibility of data processing and interval computations. Dordrecht: Kluwer, 1998.
[3] Jaulin L., Kieffer M., Didrit O., Walter E. Applied Interval Analysis. L.: Springer-Verlag, 2001.
[4] Neumaier A. Interval Methods for Systems of Equations. Cambridge Univ. Press, 1990.
[5] Neumaier A. Introduction to Numerical Analysis. Cambridge Univ. Press, 2001.
[6] Finkelstein A., Kosheleva O., Kreinovich V. Astrogeometry, error estimation, and other applications of set-valued analysis // ACM SIGNUM Newsletter. 1996. Vol. 31, N 4. P. 3-25.
[7] Li S., Ogura Y., Kreinovich V. Limit Theorems and Applications of Set Valued and Fuzzy Valued Random Variables. Dordrecht: Kluwer Academic Publishers, 2002.
[8] Belforte G., Bona B. An improved parameter identification algorithm for signal with unknown-but-bounded errors // Proc. of the 7th IFAC Symp. on Identification and Parameter Estimation. York, U.K., 1985.
[9] Chernousko F.L. Estimation of the Phase Space of Dynamic Systems. M.: Nauka, 1988 (in Russian).
[10] Chernousko F.L. State Estimation for Dynamic Systems. Boca Raton, FL: CRC Press, 1994.
[11] Filippov A.F. Ellipsoidal estimates for a solution of a system of differential equations // Interval Computations. 1992. Vol. 2, N 2(4). P. 6-17.
[12] Fogel E., Huang Y.F. On the value of information in system identification. Bounded noise case // Automatica. 1982. Vol. 18, N 2. P. 229-238.
[13] Norton J.P. Identification and application of bounded parameter models // Proc. of the 7th IFAC Symp. on Identification and Parameter Estimation. York, U.K., 1985.
[14] Schweppe F.C. Recursive state estimation: unknown but bounded errors and system inputs // IEEE Transactions on Automatic Control. 1968. Vol. 13. P. 22.
[15] Schweppe F.C. Uncertain Dynamic Systems. Englewood Cliffs, NJ: Prentice Hall, 1973.
[16] Novitskii P.V., Zograph I.A. Estimating the Measurement Errors. Leningrad: Energoat-omizdat, 1991 (in Russian).
[17] Orlov A.I. How often are the observations normal? // Industrial Laboratory. 1991. Vol. 57, N 7. P. 770-772.
[18] Doser D.I., Crain K.D., Baker M.R. et al. Estimating uncertainties for geophysical tomography // Reliable Computing. 1998. Vol. 4, N 3. P. 241-268.
[19] Tarantola A. Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation. Amsterdam: Elsevier, 1987.
[20] Cormen Th.H., Leiserson C.E., Rivest R.L., Stein C. Introduction to Algorithms. MIT Press, Cambridge, MA, 2001.
[21] Broek P. van der, Noppen J. Fuzzy weighted average: alternative approach // Proc. of the 25th Intern. Conf. of the North American Fuzzy Information Processing Society NAFIPS'2006. Montreal, Quebec, Canada, June 3-6, 2006.
[22] Hansen P., Aragao M.V.P. de, Ribeiro C.C. Hyperbolic 0-1 programming and optimization in information retrieval // Math. Programming. 1991. Vol. 52. P. 255-263.
Received for publication 18 June 2007