## Saturday, May 11, 2013

### LLS: What technique should I use: Part 3

This is part 3 of a series. The first two posts are here and here.

I am going to use the same example as last time.

A      = vander(0:0.05:1,4);
[m,n]  = size(A);
b      = A * ones(n,1) + 0.01 * randn(m,1);
x_true = A\b
x_true =

1.04597
0.97136
0.97724
1.00963

I got a slightly different answer, because the noise term is random. Still the solution is close to what we expect (all ones).
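For readers following along without MATLAB, here is a rough NumPy translation of the setup above. Note that `np.linalg.lstsq` plays the role of backslash here (it is SVD-based rather than QR-based, but it solves the same least-squares problem), and the random seed is my own choice, so the numbers will differ from the post's:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary; the post's noise is unseeded

# Vandermonde design matrix: 21 points on [0, 1], 4 columns (cubic basis)
t = np.linspace(0.0, 1.0, 21)
A = np.vander(t, 4)             # columns are t^3, t^2, t, 1
m, n = A.shape

# Right-hand side: A * ones plus small Gaussian noise
b = A @ np.ones(n) + 0.01 * rng.standard_normal(m)

# Least-squares solution; should be close to all ones
x_true, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_true)
```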

Now I am going to severely restrict the number of significant digits to 2. All matrices and vectors will be rounded off to this level of precision. Thus, the direct solution using normal equations is given by:
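Note that `roundsd` is not a MATLAB built-in; it is a small round-to-significant-digits helper (available on the MATLAB File Exchange). A sketch of the same idea in NumPy, keeping the name for familiarity:

```python
import numpy as np

def roundsd(x, nsig):
    """Round each entry of x to nsig significant digits.
    A sketch of the roundsd helper the post uses; not the File Exchange code."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    # exponent of the leading digit of each nonzero entry
    e = np.floor(np.log10(np.abs(x[nz])))
    factor = 10.0 ** (nsig - 1 - e)
    out[nz] = np.round(x[nz] * factor) / factor
    return out

# entries become 120.0 and 0.012
print(roundsd(np.array([123.456, 0.0123456]), 2))
```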

nsig = 2;
A    = roundsd(A,nsig);
b    = roundsd(b,nsig);
L    = chol(A'*A,'lower');
L    = roundsd(L,nsig);
x_ne = (L*L')\(A'*b)
x_ne =

27.41699
-36.71613
14.55408
0.30160

Whoa! Something bad happened! If I do the same thing (round off all matrices to 2 significant digits) with QR and SVD, I get, respectively:

x_qr =

1.23084
0.69962
1.05513
1.00643

x_svd =

1.38864
0.60425
0.96928
1.02319

The normal equations do terribly, while QR and SVD seem to do reasonably okay. The reason for this is straightforward. The condition number of $A$ was about 100 (which is the same as the condition number of its transpose), which makes the condition number of $A'A$ equal to 10000. Since we expect to lose about 4 digits due to the ill-conditioning (a worst-case estimate), restricting ourselves to 2 significant digits screws everything up.
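The squaring of the condition number is easy to check numerically; in the 2-norm, $\kappa(A'A) = \kappa(A)^2$ holds exactly. A quick NumPy check:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 21)
A = np.vander(t, 4)

c1 = np.linalg.cond(A)        # ~1e2 for this matrix, per the post
c2 = np.linalg.cond(A.T @ A)  # ~1e4: the condition number gets squared
print(c1, c2)
```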

On the other hand, the condition number of R is the same as that of A. The QR implementation has no $R'R$ term; the condition number does not get squared! Hence, we expect to lose at most 2 digits (conservative estimate, usually much less), and QR does not suffer as much.
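A sketch of that QR route in NumPy (variable names are mine): form the thin factorization $A = QR$ and solve the triangular system $Rx = Q'b$; the product $A'A$ is never formed, so nothing gets squared.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, not from the post
t = np.linspace(0.0, 1.0, 21)
A = np.vander(t, 4)
b = A @ np.ones(4) + 0.01 * rng.standard_normal(len(t))

# Thin QR: Q is 21x4 with orthonormal columns, R is 4x4 upper triangular
Q, R = np.linalg.qr(A)
# R is triangular, so this solve is cheap; cond(R) == cond(A)
x_qr = np.linalg.solve(R, Q.T @ b)
print(x_qr)
```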

Between QR and SVD, the QR solution tends to be better (as measured by the distance from the true solution). But we can "regularize" the SVD solution if we want by discarding the components corresponding to the smallest singular values.
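That truncated-SVD regularization can be sketched in NumPy as follows; the threshold (1e-2 times the largest singular value) is a hypothetical choice for illustration, not one from the post:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, not from the post
t = np.linspace(0.0, 1.0, 21)
A = np.vander(t, 4)
b = A @ np.ones(4) + 0.01 * rng.standard_normal(len(t))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Keep only singular values above a (hypothetical) relative threshold
k = int(np.sum(s > 1e-2 * s[0]))
# Pseudoinverse solution restricted to the k dominant singular directions
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
print(k, x_tsvd)
```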