which is singular. The result is unchanged when R is added to H P_0 H^T. In this case, then, the filter observational update fails because the matrix H P_0 H^T + R is not invertible.
Sneak Preview of Alternative Implementations. Figure 6.1 illustrates how the standard Kalman filter and some of the alternative implementation methods perform on the variably ill-conditioned problem of Example 6.2 (implemented as MATLAB m-file shootout.m on the accompanying diskette) as the conditioning parameter δ → 0. All solution methods were implemented in the same precision (64-bit floating point) in MATLAB. The labels on the curves in this plot correspond to the names of the corresponding m-file implementations on the accompanying diskette. These are also the names of the authors of the corresponding methods, the details of which will be presented further on.
For this particular example, the accuracies of the methods labeled ``Carlson'' and ``Bierman'' appear to degrade more gracefully than the others as δ → ε, the machine precision limit. The Carlson and Bierman solutions still maintain about 9 digits (≈30 bits) of accuracy at δ ≈ √ε, when the other methods have essentially no bits of accuracy in the computed solution.
This one example, by itself, does not prove the general superiority of the Carlson and Bierman solutions for the observational updates of the Riccati equation. The full implementation will require a compatible method for performing the temporal update, as well. (However, the observational update had been the principal source of difficulty with the conventional implementation.)
Fig. 6.1 Degradation of Riccati equation observational updates with problem conditioning.
206 IMPLEMENTATION METHODS
6.2.3 Terminology of Numerical Error Analysis
We first need to define some general terms used in characterizing the influence of roundoff errors on the accuracy of the numerical solution to a given computational problem.
Robustness and Numerical Stability. These terms are used to describe
qualitative properties of arithmetic problem-solving methods. Robustness refers to
the relative insensitivity of the solution to errors of some sort. Numerical stability
refers to robustness against roundoff errors.
Precision versus Numerical Stability. Relative roundoff errors can be reduced by using more precision (i.e., more bits in the mantissa of the data format), but the accuracy of the result is also influenced by the accuracy of the initial parameters used and the procedural details of the implementation method. Mathematically equivalent implementation methods can have very different numerical stabilities at the same precision.
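The point about mathematically equivalent methods differing in stability can be illustrated with a classic example outside the text (a Python sketch, not from the book): two algebraically identical formulas for a sample variance, evaluated in the same 64-bit precision.

```python
# Two algebraically equivalent formulas for the sample variance,
# evaluated in the same 64-bit precision.  The one-pass formula
# E[x^2] - (E[x])^2 suffers catastrophic cancellation when the mean
# is large relative to the spread; the two-pass formula is stable.

def var_one_pass(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum(x * x for x in xs) / n - mean * mean

def var_two_pass(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

data = [1e9, 1e9 + 1.0, 1e9 + 2.0]   # true variance = 2/3
print(var_one_pass(data))   # cancellation destroys the result
print(var_two_pass(data))   # ~0.6667, correct to full precision
```

A similar cancellation risk motivates the alternative covariance-update forms discussed later in this chapter.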
Numerical Stability Comparisons. Numerical stability comparisons can be
slippery. Robustness and stability of solution methods are matters of degree, but
implementation methods cannot always be totally ordered according to these
attributes. Some methods are considered more robust than others, but their relative
robustness can also depend upon intrinsic properties of the problem being solved.
Ill-Conditioned and Well-Conditioned Problems. In the analysis of numerical problem-solving methods, the qualitative term ``conditioning'' is used to describe the sensitivity of the error in the output (solution) to variations in the input data (problem). This sensitivity generally depends on the input data and the solution method.
A problem is called well-conditioned if the solution is not ``badly'' sensitive to the input data and ill-conditioned if the sensitivity is ``bad.'' The definition of what is bad generally depends on the uncertainties of the input data and the numerical precision being used in the implementation. One might, for example, describe a matrix A as being ``ill-conditioned with respect to inversion'' if A is ``close'' to being singular. The definition of ``close'' in this example could mean within the uncertainties in the values of the elements of A or within machine precision.
EXAMPLE 6.3: Condition Number of a Matrix  The sensitivity of the solution x of the linear problem Ax = b to uncertainties in the input data (A and b) and roundoff errors is characterized by the condition number of A, which can be defined as the ratio

    cond(A) = [ max_x ||Ax|| / ||x|| ] / [ min_x ||Ax|| / ||x|| ]        (6.3)
if A is nonsingular and as ∞ if A is singular. It also equals the ratio of the largest and smallest characteristic values of A. Note that the condition number will always be ≥ 1 because max ≥ min. As a general rule in matrix inversion, condition numbers close to 1 are a good omen, and increasingly larger values are cause for increasing concern over the validity of the results.
The relative error in the computed solution x̂ of the equation Ax = b is defined as the ratio ||x̂ − x|| / ||x|| of the magnitude of the error to the magnitude of x.
As a rule of thumb, the maximum relative error in the computed solution is bounded above by c_A ε_roundoff cond(A), where ε_roundoff is the unit roundoff error in computer arithmetic (defined in Section 6.2.1) and the positive constant c_A depends on the dimension of A. The problem of computing x, given A and b, is considered ill-conditioned if adding 1 to the condition number of A in computer arithmetic has no effect. That is, the logical expression 1 + cond(A) = cond(A) evaluates to true.
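The rule-of-thumb test just described can be tried directly in a few lines (a Python sketch; single precision is emulated with the standard struct module, since Python's own floats are 64-bit):

```python
import struct

def f32(x):
    """Round a 64-bit float to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Ill-conditioning test: adding 1 to the condition number has no effect.
c = 3.40282e38                    # cond(A) from the example below
print(1.0 + c == c)               # True: ill-conditioned even in 64-bit

c1 = 2.0e7                        # near the single-precision threshold
print(f32(f32(1.0) + c1) == c1)   # True in 32-bit arithmetic
print(1.0 + c1 == c1)             # False in 64-bit arithmetic
```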
Consider an example with the coefficient matrix

        [ 1  L  0 ]
    A = [ 0  1  L ],
        [ 0  0  1 ]

where

    L = 2^64 = 18,446,744,073,709,551,616,

which is such that computing L^2 would cause overflow in ANSI standard single-precision arithmetic.
The condition number of A will then be

    cond(A) ≈ 3.40282 × 10^38.

This is about 31 orders of magnitude beyond where the rule-of-thumb test for ill-conditioning would fail in this precision (≈2 × 10^7). One would then consider A extremely ill-conditioned for inversion (which it is) even though its determinant equals 1.
Programming note: For the general linear equation problem Ax = b, it is not necessary to invert A explicitly in the process of solving for x, and numerical stability is generally improved if matrix inversion is avoided. The MATLAB matrix divide (using x = A\b) does this.
6.2.4 Ill-Conditioned Kalman Filtering Problems
For Kalman filtering problems, the solution of the associated Riccati equation should equal the covariance matrix of actual estimation uncertainty, which should be
optimal with respect to all quadratic loss functions. The computation of the Kalman (optimal) gain depends on it. If this does not happen, the problem is considered ill-conditioned. Factors that contribute to such ill-conditioning include the following:
1. Large uncertainties in the values of the matrix parameters F, Q, H, or R. Such modeling errors are not accounted for in the derivation of the Kalman filter.
2. Large ranges of the actual values of these matrix parameters, the measurements, or the state variables, all of which can result from poor choices of scaling or dimensional units.
3. Ill-conditioning of the intermediate result R* = H P H^T + R for inversion in the Kalman gain formula.
4. Ill-conditioned theoretical solutions of the matrix Riccati equation, without considering numerical solution errors. With numerical errors, the solution may become indefinite, which can destabilize the filter estimation error.
5. Large matrix dimensions. The number of arithmetic operations grows as the
square or cube of matrix dimensions, and each operation can introduce
roundoff errors.
6. Poor machine precision, which makes the relative roundoff errors larger.
Some of these factors are unavoidable in many applications. Keep in mind that they do not necessarily make the Kalman filtering problem hopeless. However, they are cause for concern, and for considering alternative implementation methods.
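Factor 2 in the list above (poor scaling or units) can be made concrete with a small sketch (Python, with hypothetical numbers not from the text). For a diagonal covariance, the condition number is simply the ratio of the largest to the smallest variance, so a poor choice of units inflates it directly:

```python
# Condition number of a diagonal (covariance) matrix: the ratio of its
# largest to smallest diagonal entry.
def cond_diag(variances):
    return max(variances) / min(variances)

# State vector: [bearing (rad), range].  A 1-m range sigma expressed in
# millimeters versus kilometers changes the conditioning by 12 orders
# of magnitude, with no change in the underlying problem.
p_mm = [1e-12, 1e6]    # rad^2, mm^2  -> cond ~ 1e18
p_km = [1e-12, 1e-6]   # rad^2, km^2  -> cond ~ 1e6
print(cond_diag(p_mm))
print(cond_diag(p_km))
```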
6.3 EFFECTS OF ROUNDOFF ERRORS ON KALMAN FILTERS
Quantifying the Effects of Roundoff Errors on Kalman Filtering.
Although there was early experimental evidence of divergence due to roundoff errors, it has been difficult to obtain general principles describing how it is related to characteristics of the implementation. There are some general (but somewhat weak) principles relating roundoff errors to characteristics of the computer on which the filter is implemented and to properties of the filter parameters. These include the results of Verhaegen and Van Dooren [232] on the numerical analysis of various implementation methods in Kalman filtering. These results provide upper bounds on the propagation of roundoff errors as functions of the norms and singular values of key matrix variables. They show that some implementations have better bounds than others. In particular, they show that certain ``symmetrization'' procedures are provably beneficial and that the so-called square-root filter implementations have generally better error propagation bounds than the conventional Kalman filter equations.
Let us examine the ways that roundoff errors propagate in the computation of the Kalman filter variables and how they influence the accuracy of results in the Kalman filter. Finally, we provide some examples that demonstrate common failure modes.
6.3.1 Roundoff Error Propagation in Kalman Filters
Heuristic Analysis. We begin with a heuristic look at roundoff error propagation, from the viewpoint of the data flow in the Kalman filter, to show how roundoff errors in the Riccati equation solution are not controlled by feedback like roundoff errors in the estimate. Consider the matrix-level data flow diagram of the Kalman filter that is shown in Figure 6.2. This figure shows the data flow at the level of vectors and
Fig. 6.2 Kalman filter data flow.
matrices, with the operations of matrix addition, multiplication, and inversion. Matrix transposition need not be considered a data operation in this context, because it can be implemented by index changes in subsequent operations. This data flow diagram is fairly representative of the straightforward Kalman filter algorithm, the way it was originally presented by Kalman, and as it might be implemented in MATLAB by a moderately conscientious programmer. That is, the diagram shows how partial results (including the Kalman gain K̄) might be saved and reused. Note that the internal data flow can be separated into two semi-independent loops within the dashed boxes. The variable propagated around one loop is the state estimate. The variable propagated around the other loop is the covariance matrix of estimation uncertainty. (The diagram also shows some of the loop ``shortcuts'' resulting from reuse of partial results, but the basic data flows are still loops.)
Feedback in the Estimation Loop. The uppermost of these loops, labeled EST. LOOP, is essentially a feedback error correction loop with gain (K̄) computed in the other loop (labeled GAIN LOOP). The difference between the expected value Hx̂ of the observation z (based on the current estimate x̂ of the state vector) and the observed value is used in correcting the estimate x̂. Errors in x̂ will be corrected by this loop, so long as the gain is correct. This applies to errors in x̂ introduced by roundoff as well as those due to noise and a priori estimation errors. Therefore, roundoff errors in the estimation loop are compensated by the feedback mechanism, so long as the loop gain is correct. That gain is computed in the other loop.
No Feedback in the Gain Loop. This is the loop in which the Riccati equation is solved for the covariance matrix of estimation uncertainty (P), and the Kalman gain is computed as an intermediate result. It is not stabilized by feedback, the way that the estimation loop is stabilized. There is no external reference for correcting the ``estimate'' of P. Consequently, there is no way of detecting and correcting the effects of roundoff errors. They propagate and accumulate unchecked. This loop also includes many more roundoff operations than the estimation loop, as evidenced by the greater number of matrix multiplications in the loop. The computations involved in evaluating the filter gains are, therefore, more suspect as sources of roundoff error propagation in this ``conventional'' implementation of the Kalman filter. It has been shown by Potter [209] that the gain loop, by itself, is not unstable. However, even bounded errors in the computed value of P may momentarily destabilize the estimation loop.
EXAMPLE 6.4 An illustration of the effects that negative characteristic values
of the computed covariance matrix P can have on the estimation errors is shown
below:
Roundoff errors can cause the computed value of P to have a negative characteristic value. The Riccati equation is stable, and the problem will eventually rectify itself. However, the effect on the actual estimation error can be a more serious problem.
Because P is a factor in the Kalman gain K̄, a negative characteristic value of P can cause the gain in the prediction error feedback loop to have the wrong sign. In this transient condition, the estimation loop is momentarily destabilized. In this illustration, the estimate x̂ converges toward the true value x until the gain changes sign. Then the error diverges momentarily. The gain computations may eventually recover with the correct sign, but the accumulated error due to divergence is not accounted for in the gain computations. The gain is not as big as it should be, and convergence is slower than it should be.
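The destabilizing effect of a wrong-sign gain can be sketched in the scalar case (Python, hypothetical numbers; not the book's example). With H = 1 the a posteriori error is multiplied by (1 − K̄) at each update, so a negative computed variance P makes that factor exceed 1:

```python
# Scalar measurement update with H = 1: computed gain K = P/(P + R),
# and the a posteriori estimation error is scaled by (1 - K).
# A valid P > 0 gives 0 < K < 1, so the error shrinks; a
# roundoff-corrupted negative P gives K < 0 and the error grows.

R = 1.0

def error_factor(P):
    K = P / (P + R)     # computed Kalman gain
    return 1.0 - K      # multiplier applied to the estimation error

print(error_factor(1.0))    # 0.5 -> error halves: loop stable
print(error_factor(-0.5))   # 2.0 -> error doubles: loop destabilized
```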
6.3.1.1 Numerical Analysis. Because the a priori value of P is the one used in computing the Kalman gain, it suffices to consider just the error propagation of that value. It is convenient, as well, to consider the roundoff error propagation for x̂(−).
A first-order roundoff error propagation model is of the form

    δx_{k+1}(−) = f_1(δx_k(−), δP_k(−)) + Δx_{k+1},      (6.4)
    δP_{k+1}(−) = f_2(δP_k(−)) + ΔP_{k+1}(−),            (6.5)

where the δ terms refer to the accumulated errors and the Δ terms refer to the added roundoff errors on each recursion step. This model ignores higher order terms in the error variables. The forms of the appropriate error propagation functions are given in Table 6.1. Error equations for the Kalman gain are also given, although the errors in K̄_k depend only on the errors in x and P; they are not propagated independently. These error propagation function values are from the paper by Verhaegen and Van Dooren [232]. (Many of these results have also appeared in earlier publications.) These expressions represent the first-order error in the updated a priori variables on the (k + 1)th temporal epoch in terms of the first-order errors in the kth temporal epoch and the errors added in the update process.
Roundoff Error Propagation. Table 6.1 compares two filter implementation types in terms of their first-order error propagation characteristics. One implementation type is called ``conventional.'' That corresponds to the straightforward implementation of the equations as they were originally derived in previous chapters, excluding the ``Joseph-stabilized'' implementation mentioned in Chapter 4. The other type is called ``square root,'' the type of implementation presented in this chapter. A further breakdown of these implementation types will be defined in later sections.
Propagation of Antisymmetry Errors. Note the two terms in Table 6.1 involving the antisymmetry error δP_k(−) − δP_k^T(−) in the covariance matrix P, which tend to confirm in theory what had been discovered in practice. Early computers had very little memory capacity, and programmers had learned to save time and memory by computing only the unique parts of symmetric matrix expressions such as F P F^T, H P H^T, H P H^T + R, or (H P H^T + R)^{−1}. To their surprise and delight, this was also found to improve error propagation. It has also been found to be beneficial in MATLAB implementations to maintain symmetry of P by evaluating the MATLAB expression P = .5*(P + P') on every cycle of the Riccati equation.
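A Python stand-in for that MATLAB symmetrization step (a minimal sketch using plain lists of lists):

```python
# Re-symmetrize a covariance matrix each Riccati cycle, the analog of
# the MATLAB expression P = .5*(P + P').
def symmetrize(P):
    n = len(P)
    return [[0.5 * (P[i][j] + P[j][i]) for j in range(n)] for i in range(n)]

# Roundoff has introduced a small antisymmetric part:
P = [[2.0, 0.30000001],
     [0.29999999, 1.0]]
P = symmetrize(P)
print(P)   # off-diagonal entries are now exactly equal
```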
Added Roundoff Error. The roundoff error (Δ) that is added on each cycle of the Kalman filter is considered in Table 6.2. The tabulated formulas are upper bounds on these random errors.
The important points which these tables demonstrate are the following:
TABLE 6.1 First-Order Error Propagation Models

δx_{k+1}(−), both filter types:
    A_1 δx_k(−) + δP_k(−) A_2 [z − H x_k(−)] + Δx_{k+1}

δK̄_k, both filter types:
    A_1 δP_k(−) A_2

δP_{k+1}(−), conventional implementation:
    A_1 δP_k(−) A_1^T + F [δP_k(−) − δP_k^T(−)] F^T − F [δP_k(−) − δP_k^T(−)] A_1^T + ΔP_{k+1}(−)

δP_{k+1}(−), square-root covariance:
    A_1 δP_k(−) A_1^T + ΔP_{k+1}(−)

Notes: A_1 = F − K̄_k H; A_2 = H^T (H P_k H^T + R)^{−1}.

1. These expressions show the same first-order error propagation in the state update errors for both filter types (covariance and square-root forms). These include terms coupling the errors in the covariance matrix into the state estimate and gain.
2. The error propagation expression for the conventional Kalman filter includes the aforementioned terms proportional to the antisymmetric part of P. One must consider the effects of roundoff errors added in the computation of x, K̄, and P as well as those propagated from the previous temporal epoch. In this case, Verhaegen and Van Dooren have obtained upper bounds on the norms of the added errors Δx, ΔK̄, and ΔP, as shown in Table 6.2. These upper bounds give a crude approximation of the dependence of roundoff error propagation on the characteristics of the unit roundoff error (ε) and the parameters of the Kalman filter model. Here, the bounds on the added state estimation error are similar for the two filter types, but the bounds on the added covariance error ΔP are better for the square-root filter. (The factor is something like the condition number of the matrix R*.) In this case, one cannot relate the difference in performance to such factors as asymmetry of P.

The efficacy of various implementation methods for reducing the effects of roundoff errors has also been studied experimentally for some applications. The paper by Verhaegen and Van Dooren [232] includes results of this type as well as numerical analyses of other implementations (information filters and Chandrasekhar filters). Similar comparisons of square-root filters with conventional Kalman filters (and Joseph-stabilized filters) have been made by Thornton and Bierman [125].
TABLE 6.2 Upper Bounds on Added Roundoff Errors

|Δx_{k+1}(−)|, conventional implementation:
    ε_1 (|A_1| |x_k(−)| + |K̄_k| |z_k|) + |ΔK̄_k| (|H| |x_k(−)| + |z_k|)

|Δx_{k+1}(−)|, square-root covariance:
    ε_4 (|A_1| |x_k(−)| + |K̄_k| |z_k|) + |ΔK̄_k| (|H| |x_k(−)| + |z_k|)

|ΔK̄_k|, conventional implementation:
    ε_2 κ^2(R*) |K̄_k|

|ΔK̄_k|, square-root covariance:
    ε_5 κ(R*) [λ_m^{−1}(R*) |C_{P_{k+1}}| + |K̄_k C_{R*}| |A_3| / λ_1(R*)]

|ΔP_{k+1}(−)|, conventional implementation:
    ε_3 κ^2(R*) |P_{k+1}(−)|

|ΔP_{k+1}(−)|, square-root covariance:
    ε_6 [1 + κ(R*)] (|P_{k+1}(−)| + |A_3| |C_{P_{k+1}}|)

Notes: ε_1, ..., ε_6 are constant multiples of ε, the unit roundoff error; A_1 = F − K̄_k H; A_3 = [K̄_k C_{R*} | C_{P_{k+1}}]; R* = H P_k(−) H^T + R; R* = C_{R*} C_{R*}^T (triangular Cholesky decomposition); P_{k+1}(−) = C_{P_{k+1}} C_{P_{k+1}}^T (triangular Cholesky decomposition); λ_1(R*) ≥ λ_2(R*) ≥ ··· ≥ λ_m(R*) ≥ 0 are the characteristic values of R*; κ(R*) = λ_1(R*)/λ_m(R*) is the condition number of R*.
6.3.2 Examples of Filter Divergence
The following simple examples show how roundoff errors can cause the Kalman filter results to diverge from their expected values.
EXAMPLE 6.5: Roundoff Errors Due to Large a Priori Uncertainty  If users have very little confidence in the a priori estimate for a Kalman filter, they tend to make the initial covariance of estimation uncertainty very large. This has its limitations, however.
Consider the scalar parameter estimation problem (F = I, Q = 0, ℓ = n = 1) in which the initial variance of estimation uncertainty P_0 ≫ R, the variance of measurement uncertainty. Suppose that the measurement sensitivity H = 1 and that P_0 is so much greater than R that, in the floating-point machine precision, the result of adding R to P_0, with roundoff, is P_0. That is, R < εP_0. In that case, the values computed in the Kalman filter calculations will be as shown in the table and plot below:
The rounded value of the calculated variance of estimation uncertainty is zero after the first measurement update, and remains zero thereafter. As a result, the calculated value of the Kalman gain is also zero after the first update. The exact (roundoff-free) value of the Kalman gain is ≈ 1/k, where k is the observation number. After 10 observations,
    Observation                                              Value
    Number       Expression                                  Exact              Rounded
    1            P_0 H^T                                     P_0                P_0
    1            H P_0 H^T                                   P_0                P_0
    1            H P_0 H^T + R                               P_0 + R            P_0
    1            K̄_1 = P_0 H^T (H P_0 H^T + R)^{−1}          P_0/(P_0 + R)      1
    1            P_1 = P_0 − K̄_1 H P_0                       P_0 R/(P_0 + R)    0
    ...
    k            K̄_k = P_{k−1} H^T (H P_{k−1} H^T + R)^{−1}  P_0/(k P_0 + R)    0
    k            P_k = P_{k−1} − K̄_k H P_{k−1}               P_0 R/(k P_0 + R)  0
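The collapse shown in the table can be reproduced in a few lines (a Python sketch with hypothetical numbers: P_0 = 1e20 and R = 1, so that P_0 + R rounds back to P_0 in 64-bit arithmetic):

```python
# Scalar Kalman filter with H = 1, no dynamics, and P0 >> R, so that
# P0 + R rounds to P0.  The computed variance collapses to zero after
# the first update, and the computed gain is zero thereafter, although
# the exact (roundoff-free) gain is ~1/k.
P0, R = 1e20, 1.0
P = P0
for k in range(1, 4):
    S = P + R                      # innovation variance H*P*H' + R
    K = P / S                      # computed gain
    P = P - K * P                  # updated variance
    exact_K = P0 / (k * P0 + R)    # roundoff-free gain, ~1/k
    print(k, K, P, round(exact_K, 3))
```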