Systems of linear equations \(Ax = b\) in which only the right-hand side \(b\) changes between successive problems occur very frequently in science and engineering. In such cases, it is prudent to factorize or decompose the matrix once as \(A = LU\).

This factorization is often viewed as the matrix encoding of Gauss elimination (GE).

It front-loads the cost of solving a series of N matrix equations \(Ax_1 = b_1, Ax_2 = b_2, \ldots, Ax_N = b_N\), all of which share the same matrix \(A\).

*An imperfect analogy might be a subscription to Amazon Prime, which costs (let's say) $100/year upfront (the factorization) and gives subscribers free shipping on all products. If you buy a lot of stuff from Amazon (a large number of matrix equations N), the upfront cost pays for itself very quickly.*

The only difference is that the deal offered by LU decomposition beats anything on Amazon.

For a square \(m \times m\) matrix \(A\), the computational cost of LU decomposition is \(\mathcal{O}(m^3)\). Once the factorization is available, the matrix equation can be solved for a particular right-hand side using forward and back substitution steps in \(\mathcal{O}(m^2)\) operations.

Thus, the total cost of solving the series of N equations is \(\mathcal{O}(m^3 + N m^2)\). If GE is used to solve all the equations, the corresponding cost is \(\mathcal{O}(N m^3)\), which is much worse.
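As a concrete sketch of the pattern above (assuming NumPy and SciPy are available; the matrix sizes and random data are just for illustration), SciPy's `lu_factor`/`lu_solve` pair separates the \(\mathcal{O}(m^3)\) factorization from the \(\mathcal{O}(m^2)\) per-right-hand-side solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
m, N = 100, 5                                   # matrix size and number of right-hand sides
A = rng.standard_normal((m, m))
bs = [rng.standard_normal(m) for _ in range(N)]

# Pay the O(m^3) cost once: factor A = LU (with partial pivoting).
lu, piv = lu_factor(A)

# Each subsequent solve costs only O(m^2) forward/back substitution.
xs = [lu_solve((lu, piv), b) for b in bs]

# Each solution satisfies the original system A x = b.
for x, b in zip(xs, bs):
    assert np.allclose(A @ x, b)
```

Calling a general-purpose solver such as `np.linalg.solve(A, b)` inside the loop would instead re-factor \(A\) every time, which is the \(\mathcal{O}(N m^3)\) cost of repeated GE.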
