I previously outlined three different numerical approaches to solving the least-squares problem via the normal equations \[A' A x = A' b.\] In this part, I am going to set up the problem and outline the solution methods and code. We will delay a detailed discussion of the susceptibility to round-off until the next part in this series.
I am going to choose the matrix A to be a 21 (m) by 4 (n) Vandermonde matrix,
\[A=\begin{bmatrix}
1 & p_1 & p_1^2 & p_1^3\\
1 & p_2 & p_2^2 & p_2^3\\
\vdots & \vdots & \vdots & \vdots \\
1 & p_{21} & p_{21}^2 & p_{21}^3
\end{bmatrix},\]where \(p_i = 0.05\,(i-1)\) are evenly spaced grid points between 0 and 1. (Octave's vander actually orders the columns from the highest power down to the constant, but since the coefficient vector used below is all ones, the ordering makes no difference here.) This is a discrete polynomial approximation or regression problem, although what it is does not really matter in this case study.
To keep things under control, I am going to make up the vector \(b\) by assuming for a brief moment that \(x\) is a vector of all ones, and adding some white noise. In Octave, I do all this via:
A = vander(0:0.05:1,4);                  % 21x4 Vandermonde matrix on the grid points
[m,n] = size(A);
b = A * ones(n,1) + 0.01 * randn(m,1);   % exact for x = ones(n,1), plus Gaussian noise
Once I have a noisy \(b\), I am going to pretend I don't know what \(x\) I used, and try to regress it from the data. In Octave, this can be done directly with the backslash operator:
x_true = A\b
x_true =
0.92492
1.12503
0.94494
1.00460
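For a rectangular matrix, the backslash operator returns the least-squares solution, so x_true above serves as the reference answer. If you want a quick sanity check that the recovered coefficients are close to the all-ones vector used to build \(b\), something like the following will do (these checks are not part of the solution methods themselves):
norm(x_true - ones(n,1))   % how far the noise pulled the coefficients away from 1
norm(A*x_true - b)         % the least-squares residual left by the fit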
It turns out that if I solve the problem in double precision, then all the methods yield the same (correct) answer, since the condition number of A (approx. 110) is not terribly bad.
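You can check the conditioning yourself; note that forming \(A'A\) squares the condition number, a point that will matter in the next part:
cond(A)       % roughly 110 for this Vandermonde matrix
cond(A'*A)    % equal to cond(A)^2, the price of forming the normal equations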
Remember round-off is purely a manifestation of finite precision, and one can always delay its onset by increasing the precision or word size.
But this hides the underlying disease, which can be fatal if the matrix is poorly conditioned.
Instead of making the matrix A progressively pathological, I will take the other tack: I will limit the number of significant digits with which we operate.
To see the susceptibility of each solution method to round-off, we need some way of controlling the precision with which numbers are represented. The roundsd routine from the MATLAB File Exchange allows us to do just that: we can specify the number of significant digits with which to represent any number, vector, or matrix.
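Roughly speaking, roundsd does something like the following sketch (the File Exchange version is more general and more careful about edge cases; my_roundsd here is only an illustration):
function y = my_roundsd (x, nsig)
  % Minimal sketch: round each element of x to nsig significant digits.
  e = floor (log10 (abs (x)));   % decimal exponent of each element
  e(~isfinite (e)) = 0;          % leave zero entries alone
  f = 10 .^ (nsig - 1 - e);      % scale so nsig digits sit left of the decimal point
  y = round (x .* f) ./ f;       % round, then undo the scaling
endfunction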
Thus, if I set the number of significant digits to 16 (essentially full double precision) and carry out the direct solution, QR, or SVD, I get the same answer:
nsig = 16;
A = roundsd(A,nsig);
b = roundsd(b,nsig);
1. Direct solution via normal equations:
L = chol(A'*A,'lower');     % lower-triangular Cholesky factor of the normal matrix A'*A
L = roundsd(L,nsig);
x_ne = (L*L')\(A'*b)        % solve the normal equations A'*A*x = A'*b
x_ne =
0.92492
1.12503
0.94494
1.00460
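As an aside, since L is lower triangular, the same system can also be solved with two triangular substitutions instead of re-forming L*L' (x_ne2 is just an illustrative name for the result):
x_ne2 = L'\(L\(A'*b));   % forward substitution with L, then back substitution with L'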
2. QR Decomposition: This also gives the same answer (x_qr = x_ne).
[Q, R] = qr(A);          % full QR factorization: Q is 21x21, R is 21x4
Q = roundsd(Q,nsig);
R = roundsd(R,nsig);
x_qr = R\(Q'*b)          % backslash handles the upper-trapezoidal R
x_qr =
0.92492
1.12503
0.94494
1.00460
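Octave also provides the economy-size factorization, qr(A,0), which returns an m-by-n Q and an n-by-n R; for a tall, skinny matrix like this one it yields the same least-squares solution at lower cost (the variable names below are mine):
[Q1, R1] = qr(A, 0);      % economy-size QR: Q1 is 21x4, R1 is 4x4
x_qr_econ = R1\(Q1'*b);   % same least-squares solution as x_qr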
3. SVD: This also gives the same answer (x_svd = x_ne). I have tried to save some multiplications by using only the non-zero singular values.
[U, S, V] = svd(A);     % full SVD: A = U*S*V'
U = roundsd(U,nsig);
S = roundsd(S,nsig);
V = roundsd(V,nsig);
r = rank(A);            % number of non-zero singular values
V = V(:,1:r);           % keep only the leading r singular vectors/values
S = S(1:r,1:r);
U = U(:,1:r);
x_svd = (S*V')\(U'*b)   % solve S*V'*x = U'*b
x_svd =
0.92492
1.12503
0.94494
1.00460
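For what it is worth, the same SVD-based solution can be obtained in a single call with pinv, which builds the pseudo-inverse from the SVD internally (x_pinv is just an illustrative name):
x_pinv = pinv(A)*b;   % pseudo-inverse (SVD-based) least-squares solution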
In the next part, we will analyze the susceptibility of these methods to round-off.