Wednesday, February 27, 2013

A Titanic or Tantalizing Possibility?

The Economist has an interesting piece on how the metals titanium and tantalum may become much more affordable.

From the article:
Aluminum was once more costly than gold. Napoleon III, emperor of France, reserved cutlery made from it for his most favoured guests, and the Washington monument, in America’s capital, was capped with it not because the builders were cheapskates but because they wanted to show off. How times change. And in aluminium’s case they changed because, in the late 1880s, Charles Hall and Paul Héroult worked out how to separate the stuff from its oxide using electricity rather than chemical reducing agents. Now, the founders of Metalysis, a small British firm, hope to do much the same with tantalum, titanium and a host of other recherché and expensive metallic elements including neodymium, tungsten and vanadium.
Check it out. 

Monday, February 25, 2013

Improved Finite Difference Formula: Regular Grid

I came across this excellent paper, "Calculation of Weights in Finite Difference Formulas" by Bengt Fornberg, which shows how to derive numerical differentiation formulae quickly.

If you don't have access to the paper, check out this article on Scholarpedia (written by Fornberg himself); you won't miss much.

Here, I will consider a special case of the general problem treated in the paper.

Problem: Say you have a regular grid of \(n+1\) equispaced points \(x_0, x_1, \ldots, x_n\), such that \(x_j = x_0 + j h\). Let \(f_j\) denote the function value at node \(x_j\). We want to find the best approximation to the m-th derivative \(f^{(m)}(x)\) at a point \(x = x_0 + s h\), where \(x_0 \leq x \leq x_n\). Note that \(s\) need not be an integer, so \(x\) need not coincide with a grid point.

In the figure above, for example, \(n = 4\) (5 points) and \(m = 2\) (we are interested in an approximation to the second derivative) at the point \(x\) defined by \(s = 1.5\).


We want to find the set of \(n+1\) coefficients \(c_j\) so that the explicit approximation
\[ f^{(m)}(x_0 + sh) \approx \sum_{j=0}^n c_j  f_j\]
is as good as possible.
Solution: The solution is a one-liner in Mathematica:

CoefficientList[Normal[Series[(x^s*Log[x]^m),{x,1,n}]/h^m],x]
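As a quick sanity check, bind the symbols to concrete values before evaluating the one-liner (the outputs below are what I get by expanding the series by hand). The classic central-difference weights drop out, as do the weights for the five-point example in the figure:

n = 2; m = 2; s = 1;  (* second derivative at the middle node *)
CoefficientList[Normal[Series[(x^s*Log[x]^m), {x, 1, n}]/h^m], x]
(* {1/h^2, -(2/h^2), 1/h^2}: the familiar central-difference weights *)

n = 4; m = 2; s = 3/2;  (* the example in the figure *)
CoefficientList[Normal[Series[(x^s*Log[x]^m), {x, 1, n}]/h^m], x]
(* {7/(24 h^2), 1/(3 h^2), -(7/(4 h^2)), 4/(3 h^2), -(5/(24 h^2))} *)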

Why does this work?

Set \(f(x) = e^{i \omega x}\) in the approximate formula above. Note that \(f^{(m)}(x_0 + sh) = (i \omega)^m e^{i \omega x_0} e^{i \omega s h}\), and \(f_j = e^{i \omega x_0} e^{i \omega j h}\).
\[(i \omega)^m e^{i \omega x_0} e^{i \omega s h} = \sum c_j e^{i \omega x_0} e^{i \omega j h} \]
Using the substitution \(e^{i \omega h} = \zeta\), which implies \( \ln \zeta = i \omega h \), we get
\[ \left( \frac{\ln \zeta}{h} \right)^m \zeta^s = \sum c_j \zeta^j \]
We want the approximation above to be as accurate as possible at small "h", or equivalently near \(\zeta = 1\). Setting the factor of \(h\) aside for a moment, notice that the RHS is a polynomial of degree \(n\) in \(\zeta\): it should match the Taylor expansion of \((\ln \zeta)^m \zeta^s\) about \(\zeta = 1\), truncated at order \(n\). Collecting the coefficients of the powers of \(\zeta\) in that truncated series yields the \(c_j\), which is exactly what CoefficientList does.

We can throw the factor of \(h^m\) back into the denominator post facto to fix things up; that is the division by h^m in the one-liner.
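You can also check the weights numerically. For the figure's example (\(n = 4\), \(m = 2\), \(s = 3/2\)), applying them to \(f(x) = \sin x\) with \(x_0 = 0\):

(* the weights computed above, with the 1/h^2 factored out *)
w = {7/24, 1/3, -7/4, 4/3, -5/24};
(* approximation minus the exact value, since f''(3h/2) = -Sin[3h/2] *)
err[h_] := w.Table[Sin[j h], {j, 0, 4}]/h^2 + Sin[3 h/2];
{err[0.1], err[0.01]}
(* roughly {-1.0*10^-4, -1.0*10^-7}: a thousand-fold drop, consistent with the expected O(h^3) accuracy *)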

Friday, February 15, 2013

Math Links

1. Belated Valentine's Day Equation (via Mike Croucher): Simplify the inequality,

\(9x - 7i > 3(3x - 7u)\)

(Answer: i less than 3 u)

2. SpikedMath cartoon

Monday, February 11, 2013

Beamer Howto: Scale Equations to Fit a Slide

A quick and dirty way to scale equations, especially equations with matrices spelled out, so that they fit on a slide is to use the graphicx package.

In the preamble say:

\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}

And when you need to use it say something like $\scalemath{0.7}{rest of the equation}$
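Here is a minimal self-contained sketch; the matrix equation is just a stand-in for whatever is overflowing your slide:

\documentclass{beamer}
% beamer loads graphicx (for \scalebox) and amsmath (for pmatrix) by default;
% in other document classes, add \usepackage{graphicx} and \usepackage{amsmath}
\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}

\begin{document}
\begin{frame}{A wide equation}
\[
\scalemath{0.7}{
  \begin{pmatrix}
    a_{11} & a_{12} & a_{13} & a_{14} \\
    a_{21} & a_{22} & a_{23} & a_{24}
  \end{pmatrix}
  \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
  =
  \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}
}
\]
\end{frame}
\end{document}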

Thursday, February 7, 2013

Arrays and Linked Lists

I came across this blog post while revising my old lecture notes on data structures for a class I am teaching this semester (applied computational science).

The "provocative" claim is that arrays often do better in practice than linked-lists for insertion and deletion of random elements, so long as the elements are "small".

As you would expect, there is a very lively comment section.

Worth a read.

Sunday, February 3, 2013

Molecular Weight Distributions: The Log-Normal Distribution

A log-normal distribution of molecular weights often arises as a consequence of anionic polymer synthesis. You can see pictures of this asymmetric distribution on the linked Wikipedia page.

Let us review some properties of this distribution. The number distribution of segments with molecular weight \(M\) has two parameters
\[ N(M;m, s) = \frac{1}{M s \sqrt{2 \pi}} \exp \left( {-\frac{(\ln M - m)^2}{2 s^2}} \right). \]
The quantity \(\ln M\) is normally distributed around \(m\) with variance \(s^2\). You can verify that:
\[ \int_0^\infty N(M) dM = 1 \]
\[M_n =  \int_0^\infty M N(M) dM = \exp\left(m + \frac{s^2}{2} \right) \]
\[M_w =  \int_0^\infty M W(M) dM =\exp\left(m + \frac{3 s^2}{2} \right), \]
where I tacitly used the relation for the weight distribution, \(W(M) = M N(M)/M_n\), in the last line. The polydispersity index is \[p = M_w/M_n = \exp(s^2) \]
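These results are straightforward to verify in Mathematica; here is a quick sketch, where nd is just the number distribution \(N(M)\) defined above:

(* normalization and the first two moment relations *)
nd[M_] := Exp[-(Log[M] - m)^2/(2 s^2)]/(M s Sqrt[2 Pi]);
assume = {s > 0, Element[m, Reals]};
Integrate[nd[M], {M, 0, Infinity}, Assumptions -> assume]          (* 1 *)
Mn = Integrate[M nd[M], {M, 0, Infinity}, Assumptions -> assume]   (* E^(m + s^2/2) *)
Mw = Simplify[Integrate[M^2 nd[M], {M, 0, Infinity}, Assumptions -> assume]/Mn]
(* E^(m + (3 s^2)/2), so that p = Mw/Mn = E^(s^2) *)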
It is usual to report the \(p\) and \(M_w\) of an empirically obtained distribution. One can express the parameters of the log-normal distribution in terms of these numbers as:
\[ s^2 = \ln p \]
\[m = \ln M_w - \frac{3}{2} \ln p\]
One could use these to directly express the number distribution in terms of \(M_w\) and \(p\), in the following somewhat ugly form:
\[N(M) = \frac{1}{M \sqrt{2 \pi \ln p}} \exp \left[- \frac{\left( \ln M - \ln \left( M_w\, p^{-3/2} \right) \right)^2}{2 \ln p}\right]\]
Note: The cumulative distribution function is given by:
\[C(M; m,s) = \frac{1}{2} + \frac{1}{2} \text{erf} \left(\frac{\ln M - m}{\sqrt{2}\, s} \right)\]
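This matches Mathematica's built-in log-normal distribution; the following should return True:

FullSimplify[
  CDF[LogNormalDistribution[m, s], M] ==
    1/2 + 1/2 Erf[(Log[M] - m)/(Sqrt[2] s)],
  M > 0 && s > 0 && Element[m, Reals]]
(* True *)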

Friday, February 1, 2013

Interview with Popular Science Writers

Interesting interview with five popular science writers.

Here is a part of the conversation that stood out. I've highlighted the parts of Steven Pinker's (SP) response that I liked, and kept Brian Greene's (BG) response in there because it sets the context.

I've heard the phenomenon Pinker describes below called the "file drawer problem", because all the failures are stowed away.
How has the formal, technical way scientists write journal papers affected popular science writing? 
BG: I was looking back over some quantum mechanics papers from the 1920s and in one article the scientist described an accident in his laboratory when a glass tube exploded, a nickel got tarnished and he heated it to get rid of the tarnish – he went through the whole story himself in the technical article. You don't really see that much these days. I don't know if that is a one-off example, I haven't done an exhaustive study, but have journal articles moved away from telling the story of discovery to just a more cut-and-dried approach?

SP: They have; I think that's been documented. There is scientifically a problem with that, as opposed to narrating what happened. The problem is that since you're under pressure from the journal editor to tell your story leading up to your conclusion without talking about all the blind alleys and accidents, it actually distorts the story itself because it inflates the probability that what you discovered is really significant. If you tried 15 things that didn't work and one thing that did work and didn't talk about the 15 that didn't work, then the statistic that makes it significant is actually mistaken. The statistic has to be computed over all of the experiments you ran, not just the one that happened to work. In the social sciences especially, we're seeing that there's a lot of damage done by the practice of only reporting the successes and telling the story as if it was a straight line to a successful result.
SP: They have; I think that's been documented. There is scientifically a problem with that, as opposed to narrating what happened. The problem is that since you're under pressure from the journal editor to tell your story leading up to your conclusion without talking about all the blind alleys and accidents, it actually distorts the story itself because it inflates the probability that what you discovered is really significant. If you tried 15 things that didn't work and one thing that did work and didn't talk about the 15 that didn't work, then the statistic that makes it significant is actually mistaken. The statistic has to be computed over all of the experiments you ran, not just the one that happened to work. In the social sciences especially, we're seeing that there's a lot of damage done by the practice of only reporting the successes and telling the story as if it was a straight line to a successful result.