Wednesday, February 18, 2015

Birds in a Lorry

Does the weight of an air-tight box containing a bird change when the bird starts flying?

I remember discussing this question with friends and teachers since I first encountered it nearly 25 years ago. It is a classic problem in mechanics, for which "intuition" can be misleading.

The standard answer to this riddle (for example, on this reddit thread) suggests that over long periods, a time-average of the weight of the box+bird does not depend on whether the bird is flying or not, due to Newton's third law. There will be fluctuations of opposite signs around the average weight, depending on the direction in which the bird is flapping its wings.
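
In equation form: treat the box, the air inside, and the bird as a single system of total mass \(M\). Averaging Newton's second law for this system over a wingbeat cycle gives \[\langle F_{scale} \rangle - Mg = \left\langle \frac{dp}{dt} \right\rangle = 0,\] since the system's vertical momentum \(p\) is constant on average in hovering or level flight; the scale therefore reads the full weight \(Mg\) on average.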

Earlier this year, scientists from Stanford provided direct experimental confirmation of this long-standing answer by building an ultra-sensitive experimental platform that helped them measure these tiny transient forces. See "Birds in a lorry riddle finally solved by Stanford University", and videos such as "How a bird can become weightless".

While it is always nice to have concrete physical evidence for a theory, I think the popular media overplayed the insight that this particular experiment offered. Don't get me wrong - the development of a force platform that can measure such tiny forces is a wonderful advance in its own right. But to claim that the "riddle was finally solved" suggests that it was unsolved before this paper came out.

In a certain sense, the experiment "merely" confirmed a rich body of theory, and fairly detailed and compelling computational fluid dynamics calculations. This previous body of work used multiple techniques and experimental inputs, and I think evidence for the standard explanation was robust. As the authors state in their introduction, their primary contribution was the device, not the resolution of the riddle.
... these calculations are based on indirect measurements of variables that need to be differentiated or integrated numerically to calculate force, which introduces numerical error. A non-intrusive, real-time, direct force measurement method (similar to the instrumented tether) does not exist for studying free locomotion in fluids. ... Here, we present an aerodynamic force platform (AFP) that enables such measurements in fluids. 

Monday, February 16, 2015

Fortran90's intrinsic RNG versus Numerical Recipes' RNG

I ran a simple speed test between Fortran90's intrinsic random number generator (random_number) and Numerical Recipes' rand(seed). I have been using the latter for a long time, and was hoping that switching to the native RNG might be more efficient.

I tried to generate 100 million uniformly distributed random numbers using both as outlined in this program. Here is the output:

 Intrinsic RNG Elapsed time    3.11599994    
 NR RNG Elapsed time    1.58000016 

In this test (desktop running Linux/Ubuntu), the NR RNG turned out to be almost twice as fast as Fortran's intrinsic RNG.

Friday, February 13, 2015

Numerical Quadrature Question

For this year's prelim exam, I set the following numerical integration question, which seems very appropriate to repost on Valentine's Day.

The radius of gyration \(R_G\) of an object is the root-mean-square distance of its points from its center-of-mass, \(\mathbf{r}_{cm}\). Consider a heart-shaped domain \(\mathcal{D} \subset \mathbf{R}^{2}\) enclosed by the following curve for \(0 \leq t < 2\pi\):
\[\mathbf{h}(t) = \left(16 \sin^3 t,~ 13 \cos t - 5 \cos 2t - 2 \cos 3t - \cos 4t \right).\]


Assuming \(\mathbf{r} = (x, y)\) is a point in \(\mathbf{R}^2\), numerically estimate the following:

(1) The area of \(\mathcal{D}\), given by,
\[A = \int_{\mathbf{r} \in \mathcal{D}} dx~dy.\]
(2) The center of mass,
\[\mathbf{r}_{cm} = \frac{1}{A} \int_{\mathbf{r} \in \mathcal{D}} \mathbf{r} ~dx~dy.\]
(3) The radius of gyration,
\[R_G^2 = \frac{1}{A} \int_{\mathbf{r} \in \mathcal{D}} (\mathbf{r} - \mathbf{r}_{cm})^2 ~dx~dy,\] where \((\mathbf{r} - \mathbf{r}_{cm})^2\) is the square of the Euclidean distance between \(\mathbf{r}\) and the center of mass.
(4) Comment on the accuracy of your numerical estimates.
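
(Not part of the exam, but the following is a minimal Monte Carlo sketch of one possible approach: discretize the boundary into a polygon, sample points uniformly in a bounding box, and count hits. The variable names are illustrative, and the estimates carry a statistical error that shrinks like \(1/\sqrt{N}\), which is relevant to part (4).)

import numpy as np
from matplotlib.path import Path

# discretize the boundary h(t) into a closed polygon
t = np.linspace(0.0, 2.0*np.pi, 2000)
hx = 16*np.sin(t)**3
hy = 13*np.cos(t) - 5*np.cos(2*t) - 2*np.cos(3*t) - np.cos(4*t)
heart = Path(np.column_stack((hx, hy)))

# sample N points uniformly in a bounding box around the curve
N = 1000000
pts = np.column_stack((np.random.uniform(hx.min(), hx.max(), N),
                       np.random.uniform(hy.min(), hy.max(), N)))
inside = heart.contains_points(pts)

# (1) area = box area * fraction of points that land inside
boxArea = (hx.max() - hx.min())*(hy.max() - hy.min())
A = boxArea*inside.mean()
# (2) center of mass = average position of the interior points
rcm = pts[inside].mean(axis=0)
# (3) radius of gyration = rms distance from the center of mass
RG = np.sqrt(((pts[inside] - rcm)**2).sum(axis=1).mean())

print A, rcm, RG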

Thursday, February 12, 2015

Some Meta-Heuristics Resources

Earlier this semester, one of my colleagues introduced me to the category of algorithms filed under the broad label of "meta-heuristics". There are some excellent resources on the web; here are two that I found really useful:

  • "Essentials of Metaheuristics" by Sean Luke: It is a great introduction targeted at the "motivated-undergrad" level. The author even lets you download the book, if you fill out a simple online form for him. If you prefer, you can order a paperback for about $20.
  • "Clever Algorithms" by Jason Brownlee: It is written in a very accessible tone. You can read it online algorithm-by-algorithm, or buy the PDF ebook for $11.

Thursday, February 5, 2015

Sums and Integrals

I was teaching Gauss elimination to solve linear systems of equations in my undergrad Algorithms class last week, and at one point we had to estimate the asymptotic computational cost of the algorithm. This involved sums such as \[1+2+3+...+n = \sum_{k=1}^n k = \frac{n(n+1)}{2},\] and \[1^2+2^2+3^2+...+n^2 = \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6},\] and so on. Sums of higher powers (up to 10) of natural numbers are archived in many places, including here at MathWorld.

One of the students in my class asked me how to derive these, and I showed a couple of simple tricks that let you calculate the sums for some of the lower powers.
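
For example, one such trick is to telescope the sum of \((k+1)^2 - k^2\): \[\sum_{k=1}^n \left[(k+1)^2 - k^2\right] = (n+1)^2 - 1 = \sum_{k=1}^n (2k+1) = 2\sum_{k=1}^n k + n,\] and solving for the unknown sum recovers \(\sum_{k=1}^n k = n(n+1)/2\). The same idea applied to \((k+1)^3 - k^3\) yields the sum of squares.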

If we care only about the leading polynomial term and its prefactor, as we do when we estimate asymptotic computational costs, there is a much easier method. Under the big-oh notation, one can simply use the approximation, \[\sum_{k=1}^n k^p \approx \int_{0}^{n} x^p dx = \frac{n^{p+1}}{p+1}.\]
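
As a quick sanity check (a small snippet of my own, not from the lecture), one can compare the exact sum against the integral approximation for, say, \(p = 2\) and \(n = 1000\):

# compare the exact sum with the integral approximation for p = 2
n, p = 1000, 2
exact  = sum(k**p for k in range(1, n + 1))    # n(n+1)(2n+1)/6 = 333833500
approx = n**(p + 1)/float(p + 1)               # n^3/3 = 333333333.3...
print exact, approx, (exact - approx)/exact    # relative error ~ 0.0015

The leading term matches, and the relative error decays like \(1/n\).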

Tuesday, February 3, 2015

Computing Weighted Averages With Tiny Weights

Consider a set of \(N\) data-points \(\{ (w_1, R_1), (w_2, R_2), ..., (w_N, R_N) \}\), where \(w_i\) is the weight, and \(R_i\) is the observed data. We can compute a weighted mean by, \[\bar{R} = \frac{\sum\limits_{i=1}^N w_i R_i}{\sum\limits_{i=1}^N w_i}.\]

Suppose that the weights were super-tiny so that instead of the actual weights, you kept track of the logarithm of the weights \(L_i = \log w_i\). If the weights are small and disparate, a naive computer implementation of the formula, \[\bar{R} = \frac{\sum\limits_{i=1}^N e^{L_i} R_i}{\sum\limits_{i=1}^N e^{L_i}},\] would run into trouble.

This can be illustrated by a simple python program:

import numpy as np

# logWeights and the corresponding observations
L = np.array([-1500, -999, -1001, -1002], float)
R = np.array([800, 50, 100, 66], float)

# naive implementation: exponentiate the logWeights directly
sumNumerator   = sum(np.exp(L)*R)
sumDenominator = sum(np.exp(L))

print sumNumerator, sumDenominator

This runs into underflow problems and yields the output:

0.0 0.0

We note that the numerator and denominator can be multiplied by any common number without changing the ratio. If all the data is available, then this observation informs a simple fix. We can rescale the logWeights by subtracting out a common scale factor, \(L_{i}^{'} = L_i - L_s\), since \[\bar{R} = \frac{e^{L_s} \sum\limits_{i=1}^N e^{L_{i}^{'}} R_i}{e^{L_s} \sum\limits_{i=1}^N e^{L_i^{'}}}.\] Continuing the Python example from above, we choose the maximum of the logWeights as the scale factor:

# rescale by the largest logWeight
Lscale = max(L)
Lp = L - Lscale
sumNumerator   = sum(np.exp(Lp)*R)
sumDenominator = sum(np.exp(Lp))

print sumNumerator, sumDenominator

#
# If denominator not too small
#
if sumDenominator > np.finfo(float).eps:
    weightAverage = sumNumerator/sumDenominator
    print "Scaled Weighted Average", weightAverage

yields the output,

66.8194748359 1.1851223516
Scaled Weighted Average 56.3819210274

Suppose the data were not all available at once, but trickled in one data-point after another. The principal complication in this case is that we don't know all the weights a priori (note: if we did, this would be a simpler problem). We could start by setting the scale factor using the first data-point, and then appropriately weight or de-weight the running sums whenever a better (larger) candidate for the scale factor is found.
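
A Python function along these lines does the trick. What follows is a minimal sketch: the function name, the running-sums interface, and the reuse of the arrays L and R from above are my own illustrative choices.

import numpy as np

def updateWeightedSums(Lscale, num, den, Lnew, Rnew):
    # Lscale -- largest logWeight seen so far (the scale factor)
    # num    -- running sum of exp(L_i - Lscale)*R_i
    # den    -- running sum of exp(L_i - Lscale)
    if Lnew > Lscale:
        # a better (larger) scale factor arrived: de-weight the old sums
        c = np.exp(Lscale - Lnew)
        num, den, Lscale = c*num, c*den, Lnew
    w = np.exp(Lnew - Lscale)
    return Lscale, num + w*Rnew, den + w

# seed the scale factor with the first data-point, then stream the rest
Lscale, num, den = L[0], 0.0, 0.0
for Li, Ri in zip(L, R):
    Lscale, num, den = updateWeightedSums(Lscale, num, den, Li, Ri)

print "Streaming Weighted Average", num/den

On the example data above, this reproduces the batch answer of 56.3819210274.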