This is an old post by Cleve Moler, but given the amount of attention I've devoted to numerical differentiation, I ought to include a reference to this technique.
It works for analytic functions (those that extend to complex arguments, which rules out functions that are merely infinitely differentiable) and consists of taking a small step along the imaginary axis. An \(\mathcal{O}(h^2)\) approximation to the derivative is given by
\[f'(x_0) \approx \operatorname{Im}(f(x_0 + i h))/h\]
A particularly useful feature of this algorithm is that, as the step size \(h\) decreases, it does not suffer from subtractive round-off error the way most finite-difference schemes do, because no difference of nearly equal function values is ever computed. The step can therefore be made tiny (e.g. \(h = 10^{-20}\)) and the result is accurate to machine precision.
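As a minimal sketch of the technique, here is the complex-step formula in Python (the function and step-size names are my own, not from the original post):

```python
import cmath
import math

def complex_step_derivative(f, x0, h=1e-20):
    """Approximate f'(x0) via the complex-step formula Im(f(x0 + i*h)) / h.

    f must accept complex arguments (i.e. be analytic near x0).
    Because no subtraction of nearby values occurs, h can be taken
    far smaller than any finite-difference scheme would tolerate.
    """
    return f(complex(x0, h)).imag / h

# Example: d/dx sin(x) at x = 1 should equal cos(1).
approx = complex_step_derivative(cmath.sin, 1.0)
exact = math.cos(1.0)
print(approx, exact)
```

Note that `cmath.sin` is used rather than `math.sin`, since the function must be evaluated at a complex argument.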