Wednesday, August 27, 2014

Links

1. Steve Novella exposes weaknesses in Nassim Taleb's argument on GMO crops.

2. Teaching "Inferencing" (Grant Wiggins)

3. How expensive is programmer time relative to computation time? John D. Cook suggests it is about three orders of magnitude.

4. Interesting NYT portrait of James Simons.

Tuesday, August 26, 2014

Symmetric Functions And Discrete Cosine Approximation

Consider a periodic function \(f(x), ~ x \in [0,1)\), that is symmetric about \(x = 1/2\), so that \(f(1/2 + x) = f(1/2 - x)\). For example, consider the function \[f(x) = (1/2-x)^2 + \cos^2(2 \pi x).\]
The form of the function \(f(x)\) does not have to be analytically available. Knowing the function at the \(n+1\) equispaced collocation points \(x_i = i/(n+1)\), \(i = 0, 1, 2, ..., n\), is sufficient. Let us label the function values at these collocation points \(f_i = f(x_i)\).

Due to its periodicity and symmetry, the discrete cosine series (DCS) is an ideal approximating function for such a data series. The DCS family consists of the members \(\{1, \cos(2 \pi x), \cos(4 \pi x), ..., \cos(2 \pi j x), ...\}\), where \(v_j(x) = \cos(2 \pi j x)\) is a typical orthogonal basis function.

The members are orthogonal in the following sense. Let the inner product of two basis functions be defined as \[\langle v_j, v_k\rangle = \frac{2}{n+1} \sum_{i=0}^n v_j(x_i) v_k(x_i).\] Then we have \[
\langle v_j, v_k\rangle = \begin{cases} 0, & \text{ if } j \neq k \\ 1, & \text{ if } j = k > 0 \\ 2, & \text{ if } j = k = 0 \end{cases}.\] This can be verified easily by the following Octave commands:
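(A minimal sketch of such a check; the choice of \(n+1 = 16\) collocation points and \(N = 5\) basis functions here is an assumption, consistent with the numbers reported below.)

n = 15;                    % n+1 = 16 equispaced collocation points (assumed)
N = 5;                     % number of DCS basis functions (assumed)
x = (0:n)'/(n+1);          % collocation points x_i = i/(n+1) on [0,1)
V = cos(2*pi*x*(0:N-1));   % column j holds v_j(x_i) = cos(2*pi*j*x_i)
G = 2/(n+1)*(V'*V)         % discrete orthogonality check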

This snippet yields \[\frac{2}{n+1} V^T V = \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\\ 0 & 0& 0& 1 & 0\\ 0 & 0 & 0& 0& 1\end{bmatrix}\]

The idea then is to approximate the given function in terms of the basis functions, \[f(x) \approx \sum_{j=0}^{N-1} a_j v_j(x),\] where \(N\) is the number of basis functions used.

From a linear algebra perspective we can think of the vector f and matrix V as, \[\mathbf{f} = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{bmatrix}, ~~~ \mathbf{V} = \begin{bmatrix} | & | & ... & | \\  v_0(x_i) & v_1(x_i) & ... & v_{N-1}(x_i) \\  | & | & ... & | \\ \end{bmatrix}. \] The \((n+1) \times N\) matrix V contains the basis vectors evaluated at the collocation points.

We are trying to project f onto the column space of V, \(\mathbf{f} \approx \mathbf{Va}\), where the column vector a specifies the linear combination of the matrix columns. In the usual case, the number of collocation points is greater than the number of DCS modes that we want to use in the approximating function.

Thus, we have to solve the problem in a least-squares sense via the normal equations, \[\mathbf{V^T f} = \mathbf{V^T V a}.\] Due to discrete orthogonality, we have already shown that \[\mathbf{V^T V} = \frac{n+1}{2}  \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\\ 0 & 0& 0& 1 & 0\\ 0 & 0 & 0& 0& 1\end{bmatrix}\]

Therefore, \[\begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\\ 0 & 0& 0& 1 & 0\\ 0 & 0 & 0& 0& 1\end{bmatrix} \mathbf{a} = \frac{2}{n+1} \mathbf{V^T f}.\] In Octave, we can write the following
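(Continuing the sketch above; the variables x, V, n, and N are assumed from the earlier snippet.)

f = (0.5 - x).^2 + cos(2*pi*x).^2;   % sample f at the collocation points
D = diag([2, ones(1, N-1)]);         % diag(2,1,...,1) from the orthogonality relation
a = D \ (2/(n+1)*(V'*f))             % solve for the DCS coefficients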
This yields,

a =

   0.5839844
   0.1026334
   0.5266735
   0.0126556
   0.0078125

and the plot:
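(A sketch of how such a comparison plot might be produced, reusing the variables from the snippets above; the styling of the original plot is not known.)

xs = linspace(0, 1, 200)';             % fine grid for plotting
fs = (0.5 - xs).^2 + cos(2*pi*xs).^2;  % the original function
fa = cos(2*pi*xs*(0:N-1))*a;           % the DCS approximation on the fine grid
plot(xs, fs, '-', xs, fa, '--');       % the two curves nearly overlap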

Monday, August 25, 2014

Visualizing 3D Scalar and Vector Fields in Matlab

Visualizing 2D fields is relatively straightforward. You don't have to bring in the heavy artillery.

It's when you move to fully 3D models and try to visualize scalar S(x,y,z) or vector V(x,y,z) fields that the task of visualization becomes a challenge.

Doug Hull has a series of 9 short and crisp videos explaining how to use intrinsic commands in Matlab to do exactly that. He also has many of the important tools that Matlab makes available listed and explained here (in text format).
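As a flavor of those intrinsic commands, here is a minimal sketch (not taken from the videos; the scalar and vector fields are made up for illustration) that slices a 3D scalar field and overlays a quiver plot of a vector field:

g = linspace(-1, 1, 20);
[x, y, z] = meshgrid(g, g, g);               % a 3D grid
S = exp(-(x.^2 + y.^2 + z.^2));              % a made-up scalar field S(x,y,z)
[u, v, w] = deal(-y, x, 0*z);                % a made-up swirling vector field V(x,y,z)

slice(x, y, z, S, 0, 0, 0);                  % scalar field shown on three orthogonal planes
hold on;
quiver3(x(1:6:end), y(1:6:end), z(1:6:end), ...
        u(1:6:end), v(1:6:end), w(1:6:end)); % subsampled arrows for the vector field
hold off;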