## Tuesday, August 15, 2017

### Diffusion: A Historical Perspective

The paper (pdf) "One and a half century of diffusion: Fick, Einstein, Before and Beyond" by Jean Philibert traces the history of diffusion phenomena.

It starts with Thomas Graham (of dialysis fame), who made perhaps the first systematic observations. These were distilled into a phenomenological law by the German physiologist Adolf Fick in 1855, when Fick was just 26.

Fick observed the analogy between mass diffusion and heat conduction (now considered obvious), and piggy-backed on Fourier's law of conduction (1822). The paper cites the opening lines of Fick's work:
> A few years ago, Graham published an extensive investigation on the diffusion of salts in water, in which he more especially compared the diffusibility of different salts. It appears to me a matter of regret, however, that in such an exceedingly valuable and extensive investigation, the development of a fundamental law, for the operation of diffusion in a single element of space, was neglected, and I have therefore endeavoured to supply this omission.
Next, the paper discusses the contributions of W. C. Roberts-Austen (an assistant to Thomas Graham, and his successor as Master of the Mint) to the quantification of diffusion in solids.

In 1905, Einstein integrated Robert Brown's observations of random zig-zag trajectories with Fick's phenomenological laws, with the crucial insight that it was the mean-squared displacement, and not the mean displacement, that was related to diffusion.

Following Einstein's paper, the experimental work of Perrin helped the world accept the link between the microscopic world (MSD is proportional to diffusivity and time) and the macroscopic world (flux is proportional to the concentration gradient).

It is always interesting to look at the chronological development of (now familiar) ideas. Ideas that are uncontroversial today were once wrestled with intensely. It took generations of scientists to arrive at a comprehensive understanding, and to develop interesting applications based on it.

## Saturday, August 12, 2017

### Exam Question on Fitting Sums of Exponentials to Data

I wrote the question below for our PhD qualifiers. It addresses a problem I have been thinking about for over a decade now, starting from my time as a graduate student: how to fit a sum of decaying exponentials to data?

The question explores a method called the Prony method. Here is the question:

A classical problem in data analysis involves fitting a sum of exponentials to a time series of uniformly sampled observations. Here, let us suppose we are given N observations $(t_i, f_i)$, where $t_i = i \Delta t$ for $i = 0, 1, ..., N-1$.

We want to fit the data to a sum of two exponentials. The model equation is $\hat{f}(t) = a_1 e^{b_1 t} + a_2 e^{b_2 t}.$ The general nonlinear regression problem to determine $\{a_j, b_j\}$ becomes difficult as the number of exponentials in the sum increases. A number of quasi-linear methods have been developed to address this. In this question, we will explore one of these methods, and use it to determine the fitting parameters.

(a) First, generate a synthetic dataset $(t_i, f_i)$ with true $a_1^* = a_2^* = 1.0$, $b_1^* = -2.0$, $b_2^* = -0.2$. Use $t_0 = 0$, $\Delta t = 1$, and $N = 20$. Attach a plot of the synthetic dataset. Use this dataset for the numerical calculations below.
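A minimal NumPy sketch of this step (noise-free data for simplicity; one could add a small amount of noise to make the exercise more realistic):

```python
import numpy as np

# True parameters from part (a)
a_true = np.array([1.0, 1.0])    # a_1^*, a_2^*
b_true = np.array([-2.0, -0.2])  # b_1^*, b_2^*

N, dt = 20, 1.0
t = dt * np.arange(N)                     # t_i = i * dt, i = 0..N-1
f = a_true @ np.exp(np.outer(b_true, t))  # f_i = sum_j a_j exp(b_j t_i)
```

At $t_0 = 0$ the signal is $a_1^* + a_2^* = 2$, and it decays monotonically from there.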

(b) If $b_1$ and $b_2$ are known, then we can determine $a_1$ and $a_2$ by linear least squares. Set $u_1 = e^{b_1 \Delta t}$ and $u_2 = e^{b_2 \Delta t}$. Recognize that $e^{b_i t_j} = e^{b_i j \Delta t} = u_i^j$. Hence, from the model equation, we obtain the linear system:
\begin{align}
f_0 & = a_1 u_1^0 + a_2 u_2^0 \nonumber\\
f_1 & = a_1 u_1^1 + a_2 u_2^1 \nonumber\\
\vdots & = \vdots \nonumber\\
f_{N-1} & = a_1 u_1^{N-1} + a_2 u_2^{N-1}
\end{align}
Write a program to determine $a_1$ and $a_2$, given the data, $b_1$ and $b_2$.
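One possible implementation, using `np.linalg.lstsq` to solve the overdetermined system (the function name `fit_amplitudes` is just for illustration):

```python
import numpy as np

def fit_amplitudes(t, f, b1, b2):
    """Part (b): given decay rates b1, b2, determine a1, a2 by
    linear least squares on the (t, f) data."""
    # Design matrix with columns exp(b1 * t_i) and exp(b2 * t_i)
    A = np.column_stack([np.exp(b1 * t), np.exp(b2 * t)])
    a, *_ = np.linalg.lstsq(A, f, rcond=None)
    return a

# On the noise-free synthetic data from part (a), with the true decay
# rates supplied, this recovers a1 = a2 = 1.
t = np.arange(20.0)
f = np.exp(-2.0 * t) + np.exp(-0.2 * t)
a1, a2 = fit_amplitudes(t, f, -2.0, -0.2)
```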

(c) Consider the polynomial $p(z)$ which has $u_1$ and $u_2$ as its roots: $p(z) = (z-u_1)(z-u_2) = z^2 - d_1 z - d_2$. Express $u_1$ and $u_2$ in terms of $d_1$ and $d_2$.
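(One way to answer: the quadratic formula gives $u = (d_1 \pm \sqrt{d_1^2 + 4 d_2})/2$. A small sketch:)

```python
import numpy as np

def roots_from_d(d1, d2):
    # Roots of p(z) = z^2 - d1*z - d2 = 0 via the quadratic formula
    disc = np.sqrt(d1**2 + 4.0 * d2)
    return (d1 + disc) / 2.0, (d1 - disc) / 2.0
```

For example, if $u_1 = 0.9$ and $u_2 = 0.5$, then $d_1 = u_1 + u_2 = 1.4$ and $d_2 = -u_1 u_2 = -0.45$, and the function returns the two roots.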

(d) Now we seek to take linear combinations of the equations in the linear system above, with the goal of eliminating the $a_j$. For example, consider the first three equations. Multiply the first equation by $d_2$, the second by $d_1$, and the third by $-1$:
\begin{align*}
d_2 f_0 & = a_1 d_2 + a_2 d_2\\
d_1 f_1 & = a_1 u_1 d_1 + a_2 u_2 d_1 \\
-1 f_2 & = -a_1 u_1^2 - a_2 u_2^2.
\end{align*}
Summing these up, we get $-f_2 + d_1 f_1 + d_2 f_0 = -a_1(u_1^2 - d_1 u_1 - d_2) - a_2(u_2^2 - d_1 u_2 - d_2) = 0$, since $p(u_i) = 0$.

We can slide down one row and repeat the process with the next overlapping set of three equations (again multiplying by $d_2$, $d_1$, and $-1$ before summing up). Show that we end up with the following overdetermined linear system:
$\begin{bmatrix} f_{1} & f_0 \\ f_2 & f_1 \\ \vdots & \vdots \\ f_{N-2} & f_{N-3} \\ \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} f_2 \\ f_{3} \\ \vdots \\ f_{N-1} \end{bmatrix}$
Determine $d_1$ and $d_2$, and hence $u_1$ and $u_2$. From this, find the estimated $b_1$ and $b_2$.
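On the noise-free synthetic data, this step can be sketched as follows (using `np.roots` for the quadratic; the roots are real here since the data comes from real decaying exponentials):

```python
import numpy as np

# Synthetic data from part (a)
dt = 1.0
t = dt * np.arange(20)
f = np.exp(-2.0 * t) + np.exp(-0.2 * t)

# Each row of the system reads [f_{i+1}, f_i] [d1, d2]^T = f_{i+2}
A = np.column_stack([f[1:-1], f[:-2]])
d1, d2 = np.linalg.lstsq(A, f[2:], rcond=None)[0]

# Roots of z^2 - d1 z - d2 are u_j = exp(b_j * dt)
u = np.roots([1.0, -d1, -d2]).real  # real for decaying real exponentials
b = np.sort(np.log(u) / dt)         # recovered decay rates
```

With exact data, the least-squares solve reproduces $d_1$ and $d_2$ essentially to machine precision, and the recovered rates match $b_1^* = -2.0$, $b_2^* = -0.2$.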

(e) Once you know $b_1$ and $b_2$, find $a_1$ and $a_2$ by linear least squares, as in part (b).
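Putting parts (d) and (e) together, here is a sketch of the complete recovery on the synthetic data (the helper name `prony_two_term` is mine, not part of the question):

```python
import numpy as np

def prony_two_term(t, f, dt):
    """Two-term Prony fit: recover decay rates (part d), then
    amplitudes by linear least squares (part e)."""
    # Solve [f_{i+1}, f_i] [d1, d2]^T = f_{i+2} for d1, d2
    A = np.column_stack([f[1:-1], f[:-2]])
    d1, d2 = np.linalg.lstsq(A, f[2:], rcond=None)[0]
    # u_j = exp(b_j * dt) are the roots of z^2 - d1 z - d2
    b = np.sort(np.log(np.roots([1.0, -d1, -d2]).real) / dt)
    # Amplitudes from the design matrix with columns exp(b_j * t_i)
    M = np.exp(np.outer(t, b))
    a = np.linalg.lstsq(M, f, rcond=None)[0]
    return a, b

t = np.arange(20.0)
f = np.exp(-2.0 * t) + np.exp(-0.2 * t)
a, b = prony_two_term(t, f, dt=1.0)
```

On this clean dataset the fit recovers $a_1 = a_2 = 1$ and $b = (-2.0, -0.2)$; with noisy data the quasi-linear estimates would only be approximate, and are often used as starting guesses for a full nonlinear regression.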

## Wednesday, August 9, 2017

### Complex Numbers

1. What really are complex numbers?

2. The joy of slightly fishy proofs.

3. This discussion on MathOverflow

## Wednesday, August 2, 2017

### NumPy and Matlab

This post bookmarks two sites that provide handy cheat sheets of numpy equivalents for Matlab/Octave commands.

The ones for linear algebra are particularly handy, because that is one subdomain where Matlab's notation is more natural.

1. Numpy for Matlab users (Mathesaurus)

2. Cheatsheets for Numpy, Matlab, and Julia (quantecon)