Wednesday, December 25, 2013

Puzzle: Shortest distance between points on a cubic grid

This is a middle or high-school level puzzle aimed at encouraging algorithmic thinking.

We know that the distance between two points A and B is \[d_{AB} = \sqrt{(x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2},\] where x, y, and z denote the Cartesian co-ordinates.

Suppose the points A and B lie on a cubic grid or lattice. That is, the Cartesian co-ordinates are integers.

What is the shortest distance between the points along the lattice?

In the 2D example below (assuming each small square has sides of length 1), the shortest distances between points A and B with and without the lattice constraint are \(3 \sqrt{2} + 1\) and 5, respectively.


Additional Notes/Hints:

1. In other words, you start from the point A and hop to any of the 8 (2D) or 26 (3D) nearest lattice points. You repeat this process, until you reach B.
2. In 3D, an individual hop may be of length \(1, \sqrt{2}, \sqrt{3}\).
3. The shortest path itself may not be unique, but the shortest length is.
4. It may be useful to rephrase the problem in terms of \(\Delta x = |x_A - x_B|, \Delta y = |y_A - y_B|, \Delta z = |z_A - z_B|\).
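Spoiler of sorts: here is a minimal Octave sketch of one plausible approach, applied to the 2D example above (treated as 3D with the z-coordinate set to zero). It assumes, as is worth proving, that taking the longest diagonal hops first is optimal.

  A = [0 0 0];  B = [3 4 0];               % illustrative coordinates for the example
  del = sort(abs(A - B));                  % sorted deltas |dx|, |dy|, |dz|
  d = del(1)*sqrt(3) ...                   % hops that advance along all three axes
    + (del(2) - del(1))*sqrt(2) ...        % hops that advance along two axes
    + (del(3) - del(2))                    % hops along a single axis
  % prints d = 5.2426, i.e. 3*sqrt(2) + 1, matching the example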


Friday, December 13, 2013

Transition Slides in Beamer

Imagine that you want to produce a series of three transition or thematically linked slides. By that, I mean that most of the information on the slide remains the same, with minor additions, deletions or changes.

As an example consider the following three frames:




The first step, in this case, would be to draw the three figures in a drawing program (like Inkscape), preferably one which supports layers.

I like to draw a border around the image. That way, when I export the image into PNG or PDF format the dimensions of the image are conserved.

Next, we use a few lines of self-explanatory TeX code to put the exported figures on the slides.
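One way to do this in Beamer is with overlays; here is a minimal sketch, assuming the three figures were exported as figure1.pdf, figure2.pdf, and figure3.pdf (placeholder names):

  \documentclass{beamer}
  \begin{document}
  \begin{frame}{Transition slides}
    % each overlay shows one exported figure; because the figures share the
    % same dimensions (thanks to the border trick), they stay aligned
    \only<1>{\includegraphics[width=\textwidth]{figure1}}
    \only<2>{\includegraphics[width=\textwidth]{figure2}}
    \only<3>{\includegraphics[width=\textwidth]{figure3}}
  \end{frame}
  \end{document}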

Thursday, December 5, 2013

RIPE: More Polymer Entanglements

In a previous post I discussed entanglements in polymer melts. I thought I'd spend some time discussing some "practical" matters in which entanglements play a role. Today, let's focus on a very special kind of polymer: DNA.

You have a lot of DNA in your body. In fact, each cell in your body has over 2 meters of DNA packed inside a small bag called the nucleus. The diameter of a typical nucleus is less than 10 microns.

If that hasn’t knocked you off of your seat yet, let me put it in perspective. If all the DNA in your body were set end to end, it would stretch from the Sun to Pluto!

To freaking Pluto, which they say is not even a planet anymore!

I have trouble dealing with headphones in my pocket, and the nucleus manages to pack and use all of that DNA inside it. How the heck does it do that?

The answer turns out to have some parallels in everyday life. How do you deal with a really long water hose or vacuum cleaner cable? You roll it around something! You organize it!

Thus, DNA is not packed randomly. It is organized in a very sophisticated hierarchical manner. This allows the nucleus to sequester a lot of material and information in a very small compartment. Here is a nice video that explains this organization (DNA - nucleosomes - chromatin).

Friday, November 29, 2013

Links

1. Zakaria's commentary on "the Rediscovery of India"

One fact that stuck with me.
Most of India’s wealth is generated from its cities and towns. Urban India accounts for almost 70 percent of the country’s GDP. But almost 70 percent of its people still live in rural India. “As a consequence,” writes Ashutosh Varshney of Brown University, “for politicians, the city has primarily become a site of extraction, and the countryside is predominantly a site of legitimacy and power. The countryside is where the vote is; the city is where the money is.”

2. A python library for interactive visualization called Bokeh (via Nathan Yau)

3. Sal Khan of Khan Academy at Google discussing his new book (with Eric Schmidt) "The One World Schoolhouse".

While I think there is a lot of hype around the idea of online education, I seriously think there are quite a few insights in this talk. For example, I had never questioned why school is arranged according to age, why we have scheduled holiday vacations, or the role of exams and mastery.

Wednesday, November 27, 2013

Wine Crying due to Tension

A beautiful video explaining the phenomenon of "crying wine" (via Jimmy Touma).



The demonstration with pepper and detergent was impressive. I had to impress my daughter with it.

Thursday, November 21, 2013

Zen Pencils: Feynman on Beauty

After Reid Gower's video on Feynman's "Beauty (of a flower)" (part of a trilogy), Zen Pencils offers a comic tribute to the quote.
I have a friend who's an artist and has sometimes taken a view which I don't agree with very well. He'll hold up a flower and say "look how beautiful it is," and I'll agree. Then he says "I as an artist can see how beautiful this is but you as a scientist take this all apart and it becomes a dull thing," and I think that he's kind of nutty. First of all, the beauty that he sees is available to other people and to me too, I believe. Although I may not be quite as refined aesthetically as he is ... I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside, which also have a beauty. I mean it's not just beauty at this dimension, at one centimeter; there's also beauty at smaller dimensions, the inner structure, also the processes. The fact that the colors in the flower evolved in order to attract insects to pollinate it is interesting; it means that insects can see the color. It adds a question: does this aesthetic sense also exist in the lower forms? Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds to the excitement, the mystery and the awe of a flower. It only adds. I don't understand how it subtracts.

Sunday, November 17, 2013

Perspective

A speaker today showed this picture of earth taken by Cassini from Saturn to make the point that perspective is important.


Lots of other beautiful pictures of Saturn on the JPL website.

Thursday, November 14, 2013

Feynman Trilogy

Canadian filmmaker Reid Gower has a Feynman trilogy on Beauty, Honors, and Curiosity.

From the second film,
When I was in High School, one of the first honors I got was to be a member of the Arista, which is a group of kids who got good grades. Everybody wanted to be a member of the Arista. I discovered that what they did in their meetings was to sit around to discuss who else was worthy to join this wonderful group that we are. OK So we sat around trying to decide who would get to be allowed into this Arista. This kind of thing bothers me psychologically for one or another reason. I don’t understand myself.

Tuesday, November 12, 2013

Stretching the truth

Doug Richards presents a brief primer on basic material science and rheology (as it applies to human tissue), on his way to explaining "why" some kinds of stretching work and others don't.


PS: The title of the post, with or without punctuation, conjures very different images.

Friday, November 1, 2013

Links

1. Talk nerdy to me: 50 "intellectual" jokes. Examples:
Two fermions walk into a bar. The first says “I’d like a vodka martini with a twist.” The second says “Dammit, that’s what I wanted!”
Did you hear about the suicidal homeopath? He took 1/50th of the recommended dose.

2. How not to talk to your kids. Reiterates how praising "smarts" can be problematic.
I am smart, the kids’ reasoning goes; I don’t need to put out effort. Expending effort becomes stigmatized—it’s public proof that you can’t cut it on your natural gifts.
I remembered how during my undergrad days at IIT Bombay, the label "maggu" - someone who had to work hard to earn his grade - was  decidedly derogatory.

Thursday, October 31, 2013

Climate Science Resource

I found this resource page on Skeptical Science through the NY Skeptics podcast.

The nice part about the way the page is organized is that it tries to debunk "climate change denial" claims at two levels: a basic level that you could use to engage middle school kids (simplified explanations), and an intermediate level that reads more like a scientific paper, backed by more data, references, and nuance.

You should check it out.

Monday, October 21, 2013

Review: Land of the Seven Rivers

I read Sanjeev Sanyal's "Land of the Seven Rivers: A Brief History of India's Geography" with much interest after I heard a podcast of his talk at LSE.

It presents Indian history (and prehistory) through the lens of geography and climate. The new perspective allows for some fascinating reinterpretations.

For example, Sanyal claims that the Ramayana was oriented along a North-South axis (Ayodhya to Sri Lanka), and that the Mahabharata was aligned along an approximately East-West axis. Remnants of these paths still exist in the form of segments of national highways.

Similarly, he tells the fascinating story of why Gautam Buddha went to Sarnath to propagate his new philosophy (it sits at the intersection of two important commercial routes, which allows for rapid dissemination of goods and ideas). He dwells on the role of "pillars", and how some of them became a canvas on which successive waves of emperors would carve out their names for posterity.

One surprising aspect (which perhaps should not have been that surprising) is how much the climate has changed even in the last few thousand years, and how the response to such massive climate change has shaped the course of history (for example, the end of the Indus-Saraswati valley civilization). As we deal with a climate crisis of our own, it is useful to look at the massive disruption such upheavals cause.

If another reason to study history is to learn how we got where we are, then I think this book shows us how big of a role Nature and fortune (as opposed to politics and monarchs, which are the usual lens through which history is told) played in shaping the course of events.

The book is full of factoids and clever observations of patterns. Throughout the book there is an underlying concept of "continuity" and the idea of "nationhood" extending from 90 million years ago, through the merger of the Indian subcontinent with Asia, the early river valley civilizations, and the last couple of millennia, up to the present day with a discussion of the Indian diaspora.

It is truly a fascinating read, and very highly recommended. 

Saturday, October 12, 2013

Nicer Octave Plots for Presentations/Documents

GNU Octave has a very capable plotting system (which is transitioning away from gnuplot as its backend, even as we "speak"). Quite often, you generate a figure and make it look like you want:

You then want to incorporate it into some other document - say a presentation or a report.

So you say:

print -dpng test0.png

Note that you could also use -depsc2 or -dpdf to generate EPS or PDF formats among a whole host of other possibilities.

You get a figure that looks like:

Clearly imperfect.

There are a number of ways to improve the image.

These include exporting to TikZ/TeX, which gives you exquisite control over all the features, but this can take too much effort.

Or you could spend time fiddling with adjustable parameters to make the plot look just right. If you are trying to work on "the" central plot of a paper, or a PhD thesis, then such care is certainly warranted.

But if you are merely dissatisfied (as I am) with the size and font of the text, there is an easy fix.

print -dpng -FHelvetica:18 test1.png
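Putting the two steps together, a minimal end-to-end sketch (the figure itself is just an illustration):

  x = linspace(0, 2*pi, 200);
  plot(x, sin(x), 'linewidth', 2);
  xlabel('x'); ylabel('sin(x)');
  % export with a larger font so it reads well in a presentation
  print -dpng -FHelvetica:18 test1.png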


Sunday, October 6, 2013

Metropolis-Hastings

As I mentioned previously, one of my colleagues is running an MCMC seminar this semester, which is essentially a journal club that is trying to read and discuss some of the more seminal papers in the field.

The second paper we discussed was Hastings' famous 1970 paper, which generalized the "Metropolis" algorithm and couched it in a more general form (the Metropolis-Hastings algorithm).
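For readers who have never seen it in code, here is a minimal Octave sketch of the recipe; the target density, proposal width, and chain length are made up for illustration, and with this symmetric proposal the Hastings correction reduces to the original Metropolis rule.

  p = @(x) exp(-0.5*x.^2);         % unnormalized target density (illustrative)
  N = 10000;  step = 1.0;          % chain length and proposal width (assumptions)
  x = zeros(N,1);  x(1) = 0;       % arbitrary starting point
  for t = 1:N-1
    xp = x(t) + step*randn();               % propose a random-walk move
    if rand() < min(1, p(xp)/p(x(t)))       % accept with the Metropolis probability
      x(t+1) = xp;
    else
      x(t+1) = x(t);                        % reject: count the old state again
    end
  end
  mean(x), std(x)                  % should come out near 0 and 1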

The story behind this man and his famous paper turns out to be every bit as fascinating as that behind the original paper.

From 1966 to 1971, Hastings was an Associate Professor in the Department of Mathematics at the University of Toronto. During this period, he wrote the famous paper listed above (which generalised the work of N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller (1953), "Equation of state calculations by fast computing machines", J. Chem. Phys. 21, 1087-1091). Hastings explains: 
When I returned to the University of Toronto, after my time at Bell Labs, I focused on Monte Carlo methods and at first on methods of sampling from probability distributions with no particular area of application in mind. [University of Toronto Chemistry professor] John Valleau and his associates consulted me concerning their work. They were using Metropolis's method to estimate the mean energy of a system of particles in a defined potential field. With 6 coordinates per particle, a system of just 100 particles involved a dimension of 600. When I learned how easy it was to generate samples from high dimensional distributions using Markov chains, I realised how important this was for Statistics, and I devoted all my time to this method and its variants which resulted in the 1970 paper.
He seems to have been a very fascinating character.

He wrote only three peer-reviewed papers in his entire life, and supervised only a single PhD student. 

Wednesday, October 2, 2013

Links

1. Matt Walsh writes a funny and compelling post on how not to be an ass when other people's kids get cranky in public (h/t Julia).
I had an older guy complain to me recently about babies that cry during church. He said: “Back when our children were babies, you didn’t have this problem.” Interesting. Apparently babies didn’t cry in the 50′s. The whole “crying baby” thing is a new fad, it would seem. These folks who had kids a long time ago seem to have a rather selective memory when it comes to their own days of parenting young kids. They also tend to dismiss the fact that modern parenting presents unique challenges, some of which didn’t apply several decades ago. I always love the older folks who lecture about how THEIR kids weren’t as “attached to electronics” as kids are nowadays. That’s probably true, but mainly because, well, YOU DIDN’T HAVE ELECTRONICS. You had a toaster and a black and white TV with 2 channels, both of which were pretty easy to regulate. But, sure, congratulations for not letting your kids use things that didn’t exist. On that note, I have a strict “no time machines or hover-boards” policy in my home. It is stringently enforced. I’m thinking of writing a parenting book: “How to Stop Your Child From Becoming Dependent Upon Technology That Isn’t Invented Yet”
2. Putting time into perspective: Like the "pale blue dot", there is something liberating about looking at ourselves from afar. It brings the relative insignificance of our personal (or planetary) struggles into sharp relief.

Saturday, September 28, 2013

Dalio: How the Economic Machine Works

Ray Dalio put together this very nice video to paint the workings of an economy with a broad brush. It presents a compelling lens through which to view the relentless onslaught of "economic" news. For the first time, I think I have a rough idea of what Ben Bernanke is trying to pull off:


Wednesday, September 25, 2013

Links: War Edition

1. OPEN Magazine's "Playing at War: How two madmen brought the world to the brink of a third great war" on the international machinations during the liberation of Bangladesh, should make us skeptical of official government lines. Very skeptical.

2. Of course, we should also be very skeptical of the ongoing war cries to strike against Syria. Here is Mish Shedlock's interesting take.

Friday, September 20, 2013

Pindyck, EconTalk, Climate Change, and Uncertainty

Russ Roberts recently interviewed Robert Pindyck on EconTalk. I found the interview very fascinating, because it avoids the two extremes that people instinctively gravitate towards: (a) impose a big carbon tax, or (b) do nothing.

He frames the issue more narrowly and grapples with the question of "how big should the tax be, given the 'real' uncertainties and timescales involved?" Blurb on EconTalk:
Pindyck argues that while there is little doubt about the existence of human-caused global warming via carbon emissions, there is a great deal of doubt about the size of the effects on temperature and the size of the economic impact of warmer climate. This leads to a dilemma for policy-makers over how to proceed. Pindyck suggests that a tax or some form of carbon emission reduction is a good idea as a precautionary measure, despite the uncertainty.
The interview uses his working paper, "Climate Change Policy: What do the Models Tell Us?" (pdf) whose abstract starts with two words that summarize the paper "Very Little."

A more accessible paper is published at the Cato Institute site (pdf), entitled "Pricing Carbon When We Don’t Know the Right Price". Given the political leanings of the Cato Institute the subtitle is even more provocative: "Despite the unknowns, we should begin to tax carbon."

Sunday, September 15, 2013

Links:

1. The classic Feynman Lectures on Physics are now available online for free.
Caltech and The Feynman Lectures Website are pleased to present this online edition of The Feynman Lectures on Physics. Now, anyone with internet access and a web browser can enjoy reading a high-quality up-to-date copy of Feynman's legendary lectures. This edition has been designed for ease of reading on devices of any size or shape; text, figures and equations can all be zoomed without degradation.
2. Since we are talking physics, here is a placeholder link at Mathworks for teaching physics using Matlab.

Saturday, September 14, 2013

Molecular Weight Distribution Posts

I wanted to put together four posts I did on molecular weight distributions (MWD) just for future reference:
  1. The relationship between size-exclusion chromatography (SEC) and MWD: the first post explored the relationship between the typical SEC plot (w(log M) v/s log M) and the MWD.
  2. The second post looked at a particular example to "particularize" some of the equations above.
  3. I also looked at two common parametric forms for MWD, namely the log-normal distribution, and the generalized exponential or GEX functions.


Friday, September 13, 2013

Markov Chain Monte Carlo Seminar

This semester my colleague Peter Beerli is running a seminar in a journal club format. We are trying to read the most seminal papers in MCMC.

Of course, one has to start at the beginning and read "the" famous paper by Metropolis et al., "Equation of state calculations by fast computing machines" which has been cited more than 25,000 times. It had been a while since I had read the paper (more than a decade ago, I think), and I enjoyed re-reading it after the hiatus.

It was like re-reading one of those books you read as a teenager, and taking completely new things away from it. When I first read it, I was caught up with the mechanics, since they were still very new to me. This time around, I got to marvel at some of their astute observations (and lament their choice of a horrible random number generator).

I also spent some time reading the background of the paper (some links on Peter's webpage). In a relatively recent interview, one of the paper's authors Marshall Rosenbluth had this to say about some of the other co-authors:
Barth: Your collaborators for these papers were Edward Teller and Nick Metropolis and your former wife?
Rosenbluth: Yes. She actually did all the coding, which at that time was a new art for these new machines. You know, no compilers or anything like that.
Barth: And it’s also listing A.H. Teller.
Rosenbluth: That was Teller’s wife, who during the war had been one of these computer [women] - he wanted her to get back into the work, but she never showed up. So she was basically -
Barth: Put on the paper for it?
Rosenbluth: Yes. As was Metropolis, I should say. Metropolis was boss of the computer laboratory. We never had a single scientific discussion with him.
Some of this is recollected in a historical paper (probably behind a paywall).

It is very interesting that none of the coauthors made any subsequent use of the algorithm.

The last bit from that paper:
Obviously, Marshall’s revelations prompted hallway and diner discussions about whether the Metropolis algorithm should be called the Rosenbluth or at least the Rosenbluth-Teller algorithm. One of the organizing committee, Rajan Gupta, whose curiosity got the best of him, privately asked Marshall his opinion about the name of the algorithm. Marshall replied: 
“… life has been good to me. I feel rewarded in knowing that this algorithm will allow scientists to solve problems ranging from fluid flow to social dynamics to elucidating the nature of elementary particles.” The original name will stick. 
Three times during the conference, Marshall walked up to me and said, “It hard to explain how exciting it was to be at Los Alamos during those times. To be able to interact with Teller and von Neumann was very important to me.” Even heroes have their heroes.

Wednesday, September 11, 2013

Robert Krulwich: Commencement Speech at CalTech

Robert Krulwich's (co-host of Radiolab, among other things) commencement speech at CalTech is a call for scientists to engage with the public using stories, metaphors, and narratives.


Link


Friday, September 6, 2013

Links

1. WonkBlog @ the Washington Post: A 10-part series on "Why the Tuition is Too Damn High?"

2. Slate.com: "The MOOC That Roared": Georgia Tech offers a master's in CS for $6,600.

As an academic and father of two kids who I still need to put through college, this is an important subject.

There are a lot of things that MOOCs still have to work out (how do you replace physical wet labs? How do you grade non-multiple choice questions? How do you ensure no cheating is taking place?), but it seems very possible that tuition won't increase at the rate that it has been or may even deflate.

Monday, September 2, 2013

RIPE: Polymers and Entanglements

Let's start with three everyday examples of entanglements.

(i) I love my iPod and listen to it quite a bit. Usually, after I am done listening, I just stick it, along with the headphones, into my pocket. The next time I pull it out, the headphones are all tangled up, and I often wonder what the heck happened to them in my pocket.






(ii) A similar thing happens every time I use my leaf-blower, and don’t stow away the cables carefully. It is a mess!



(iii) As a father of daughters, I am a big fan of short hair. Why? Long hair gets tangled up more easily, and needs more effort combing (de-entangling).

Headphones, cables, and hair – they all have a natural tendency to get all tangled up.

The “longer” these things get, the more trouble they are.

What has this got to do with polymers?

Polymers are long chain-like molecules formed by stringing together units called monomers. As the number of units increases, their “molecular weight” increases, as does their propensity to get tangled up.

Indeed, there is a threshold molecular weight – different for different polymers – called the entanglement molecular weight, beyond which their rheology is strongly influenced by these entanglements. (This opens up a fascinating cascade of research questions that I have spent countless hours pondering!)

Below the entanglement molecular weight, the viscosity (the resistance to flow) of polymer melts increases gently. Double the molecular weight, and you double the viscosity.

Beyond the entanglement molecular weight, this linear increase in viscosity is abruptly disturbed (see a picture here). In this highly entangled regime, the viscosity grows roughly as the 3.4 power of the molecular weight, so a doubling of molecular weight results in nearly a 10-fold increase in viscosity (\(2^{3.4} \approx 10\)).

Take that!

Footnote: Polyethylene in your milk jugs, and polystyrene in Styrofoam cups, have entanglement molecular weights of about 1000 and 15,000 daltons, respectively.

Footnote2: RIPE = Research In Plain English

Friday, August 30, 2013

Zen Pencils: Bill Watterson's advice

Bill Watterson has been my hero for a long time (Calvin and Hobbes is probably the only "complete collection" I possess). 

Here's a beautiful "comic tribute" from Zen Pencils (which is an amazing "comic" blog in its own right), on his famous 1990 speech at Kenyon College.

I think this part of the speech has bits that apply to academia:
We're not really taught how to recreate constructively. We need to do more than find diversions; we need to restore and expand ourselves. Our idea of relaxing is all too often to plop down in front of the television set and let its pandering idiocy liquefy our brains. Shutting off the thought process is not rejuvenating; the mind is like a car battery-it recharges by running. 
You may be surprised to find how quickly daily routine and the demands of "just getting by" absorb your waking hours. You may be surprised [to find how quickly life becomes] matters of habit rather than thought and inquiry. You may be surprised to find how quickly you start to see your life in terms of other people's expectations rather than issues. You may be surprised to find out how quickly reading a good book sounds like a luxury.
The end of the speech is, of course, great!
But having an enviable career is one thing, and being a happy person is another. 
Creating a life that reflects your values and satisfies your soul is a rare achievement. In a culture that relentlessly promotes avarice and excess as the good life, a person happy doing his own work is usually considered an eccentric, if not a subversive. Ambition is only understood if it's to rise to the top of some imaginary ladder of success. Someone who takes an undemanding job because it affords him the time to pursue other interests and activities is considered a flake. A person who abandons a career in order to stay home and raise children is considered not to be living up to his potential-as if a job title and salary are the sole measure of human worth. 
You'll be told in a hundred ways, some subtle and some not, to keep climbing, and never be satisfied with where you are, who you are, and what you're doing. There are a million ways to sell yourself out, and I guarantee you'll hear about them. 
To invent your own life's meaning is not easy, but it's still allowed, and I think you'll be happier for the trouble. 
Reading those turgid philosophers here in these remote stone buildings may not get you a job, but if those books have forced you to ask yourself questions about what makes life truthful, purposeful, meaningful, and redeeming, you have the Swiss Army Knife of mental tools, and it's going to come in handy all the time.

Tuesday, August 27, 2013

Lengthscales Animation

A very nice interactive animation of length scales called "the Scale of the Universe". You can even turn your speakers on, as you travel from the smallest to the largest length scales.

There used to be another video (I cannot seem to locate it right now) which essentially did the same thing, except it was not interactive. The cool part of that video was that it showed quite vividly how most of space is empty at both ends of the lengthscale spectrum (atoms and outer space), and how matter is just like some tiny discrete dust sprinkled on, almost as an afterthought.


Sunday, August 18, 2013

Block Averaging: Estimating Uncertainty (Part 2)

In the last post, we ended with a conundrum.

To summarize:

When we have a timeseries of independent samples, estimating the mean and the uncertainty of that estimate is straightforward. The central-limit theorem tells us everything we need to know:  \(\sigma_{\langle x \rangle} = \sigma_x/\sqrt{N}\).

However, when successive data points are correlated they tell us "less" than  uncorrelated data points. If we ignore this diminished importance of points, and pretend as if they were as good as uncorrelated samples (by lavishing the central-limit theorem on them), we underestimate the true uncertainty in \(\sigma_{\langle x \rangle}\).

In the previous example, we found  \(\langle x \rangle = 4.686 \pm 0.057\).

The problem with this result is not necessarily that the mean is not as close to "5" as we would like, but that the error-bar (standard deviation) gives us a false sense of security.

Note that in this example, we knew the true answer. In a molecular simulation, we don't know the true answer, and it is easier to be seduced by small error-bars.

In short, the real problem with treating correlated data is that the present analysis underestimates the true error-bar.

Now let us see what we can change. In "the real world" we cannot change the fact that data are going to be correlated or that the estimated mean is not as close to the "true" mean as we would like, unless we are prepared to carry out extraordinarily long simulations (or extraordinarily many independent simulations). Thus, we set ourselves a fairly modest task.

We want a more reliable estimate of the uncertainty in \(\langle x \rangle\). Here is where a trick called block averaging can be surprisingly helpful. This one slide from David Kofke's lecture notes on Molecular simulation explains the whole idea.

The idea is to chop the data series into \(n\) chunks or "blocks" of size \(b\) (so that \(n b = N\)), and to repeat this for a range of block sizes. For each block size, we compute the mean of each block, \(\bar{m}_i, ~ i = 1, 2, ..., n\), and then the standard deviation of these block means, \(\sigma_{\bar{m}_b}\). The subscript \(b\) denotes that this standard deviation is for blocks of size \(b\).

The simulation error is estimated as \(\sigma_{\langle x \rangle} = \sigma_{\bar{m}_b}/\sqrt{n-1}\). Luckily for you, I wrote an Octave script which does this calculation for you.
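The linked script is not reproduced here; a minimal sketch of the calculation (the actual script may differ in detail) looks something like:

  % Block averaging: x is a column vector of (possibly correlated) samples.
  % usage: [b, e] = block_average(x); semilogx(b, e, 'o-')
  function [bsize, err] = block_average(x)
    N = length(x);
    bsize = [];  err = [];
    for b = 1:floor(N/4)                      % block sizes to try
      n = floor(N/b);                         % number of complete blocks
      m = mean(reshape(x(1:n*b), b, n), 1);   % the n block means
      bsize(end+1) = b;
      err(end+1)   = std(m)/sqrt(n-1);        % estimated error of the overall mean
    end
  end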

If you plot this quantity versus "b", you get a quantity that gradually increases, and then plateaus out. For the correlated data set from the previous post, we get:

We get a plateau estimate of \(\sigma_{\langle x \rangle} \approx 0.296\), which implies a much more consistent "1 sigma" estimate of \(\langle x \rangle = 4.686 \pm 0.296\).

As you can see, block averaging reflects the true uncertainty in the data, which is larger than that implied by a mindless application of the central-limit theorem.

What happens if we subject our uncorrelated data from the previous post to the block averaging treatment?

See for yourself:

The standard deviation doesn't change with block size.

Wednesday, August 14, 2013

Block Averaging: Estimating Uncertainty (Part 1)

When you periodically sample a property (eg. thermodynamic attributes such as pressure, static properties such as radius of gyration of a polymer molecule, etc.) in "equilibrium" molecular dynamics or Monte Carlo simulations, you end up with a time series of that property.

For specificity, let us call this property \(x(t)\), where \(t = 1, 2, ..., N\) are discrete time steps (it is easy to adapt this discussion to cases where time is not discrete). Usually, the sampling interval is shorter than the characteristic relaxation time of the system, which makes successive measurements of \(x\) "correlated" or "non-independent".

The weather or the stock market on successive days is correlated because the timescale of a weather system or a business news item is often a few days; however the successive tosses of a fair coin or pair of dice can be expected to be uncorrelated.

Uncorrelated data-points are nice and desirable because they save us a lot of trouble. We can not only find the average value \[\langle x \rangle = \frac{x(1) + x(2) + ... + x(N)}{N},\] of the property easily, but also associate an error bar or standard deviation with it.

Thus, if \(\sigma_x\) is the standard deviation of \(x\), then the central-limit theorem tells us that \(\sigma_{\langle x \rangle} = \sigma_x/\sqrt{N}\).

Let us consider a simple example. I generated a time series of N = 1000 samples of a random number with a mean value of "5" using the Octave command x = 5 + 2 * rand(N,1) - 1, which adds a uniform random number between -1 and 1 to 5.

I plot the time-series (or vector in Octave parlance), and its autocorrelation function computed via this script, in the following figure.
As expected, \(\sigma_x = 0.575 \approx 1/\sqrt{3}\), and \(\langle x \rangle = 4.983 \pm 0.018\), where the standard deviation of the mean is 0.575/sqrt(1000). This gives a restrictive "1 sigma" range for the mean, between 4.965 and 5.001, which contains the true mean value of "5", and everything is hunky dory.

We also see the autocorrelation function quickly decay from 1 to zero. This is a hallmark of an uncorrelated process. If successive samples were correlated, then the autocorrelation function would take its own sweet time to fall to zero (as we shall see in an example shortly). The amount of time it takes to fall to zero or "decorrelate" is a measure of how correlated the data is.
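The script linked above isn't reproduced here, but one simple way to estimate the normalized autocorrelation function is sketched below (details may differ from the actual script):

  % Normalized autocorrelation estimate; x is a column vector of samples.
  function c = acf(x, maxlag)
    x = x - mean(x);                 % work with fluctuations about the mean
    c = zeros(maxlag+1, 1);
    for k = 0:maxlag
      c(k+1) = sum(x(1:end-k) .* x(1+k:end)) / sum(x.^2);   % c(1) = 1 at zero lag
    end
  end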

Now let me make up a correlated process. In the following example, I set x(1) = 0; x(t+1) = 0.95 * x(t) + 2 * rand() - 1 (a fresh uniform random number between -1 and 1 at each step), and then shifted all the values by "5". This is an autoregressive model, but we don't have to know anything about that here.
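In code, the series described above can be generated with something like this sketch:

  N = 1000;
  x = zeros(N,1);
  x(1) = 0;
  for t = 1:N-1
    x(t+1) = 0.95*x(t) + (2*rand() - 1);   % uniform noise between -1 and 1
  end
  x = x + 5;                               % shift so that the "true" mean is 5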

The time-series and its autocorrelation function look as follows:

You can "see" the correlation in the data-series visually. The series fluctuates around "5" (not completely obvious) in a qualitatively different manner from the uncorrelated series. This is apparent in the autocorrelation function, which now decays to 0 around \(t = 50\). The red line is the autocorrelation of this particular dataset, and the black line is the average over 100 such datasets.

Here \(\sigma_x = 1.791\), and naively using \(\sigma_{\langle x \rangle} = \sigma_x/\sqrt{N}\), we get \(\langle x \rangle = 4.686 \pm 0.057\). This gives a "1 sigma" range for the mean between 4.629 and 4.743, which does not contain the true mean value of "5". The "2 sigma" or 95% interval is not much better.

Clearly something is amiss here. We'll find out more in the next installment.


Saturday, August 10, 2013

Links

1. Should Humans Eat Meat: Vaclav Smil presents a thumbnail sketch in this Scientific American article.
... is it possible to come up with a comprehensive appraisal in order to contrast the positive effects of meat consumption with the negative consequences of meat production and to answer a simple question: are the benefits (health and otherwise) of eating meat greater than the undesirable cost, multitude of environmental burdens in particular, of producing it?
2. Is Dawkins hurting Atheism: Martin Robbins contends that the movement is maturing, and no longer needs the forceful and erratic professor.
@RichardDawkins is the increasingly erratic comedy creation of a bored Oxford Professor called Richard Dawkins. One of the best science writers of the last few decades, Dawkins has succeeded in crafting an online character that ironically parodies the more militant tendencies in capital-A Atheism, serving as a useful reminder for all of us to be more nuanced and tolerant.

Thursday, August 8, 2013

EconTalk, Pallotta, Charity etc.

I came across Dan Pallotta through an interview he did with Russ Roberts on EconTalk. Here is an interesting TED talk which presents his thesis on charity and the non-profit sector.

Interesting!

Sunday, August 4, 2013

Dangerous Code

Michael Lewis tells the chilling story of Sergey Aleynikov in Vanity Fair:
Then he explained what he knew, or thought he knew: in April 2009, Serge had accepted a job at a new high-frequency-trading shop called Teza Technologies, but had remained at Goldman for the next six weeks, until June 5, during which time he sent himself, through a so-called “subversion repository,” 32 megabytes of source code from Goldman’s high-frequency stock-trading system. The Web site Serge had used (which has the word “subversion” in its name) as well as the location of its server (Germany) McSwain clearly found highly suspicious. He also seemed to think it significant that Serge had used a site not blocked by Goldman Sachs, even after Serge tried to explain to him that Goldman did not block any sites used by its programmers, but merely blocked its employees from porn and social-media sites and suchlike. Finally, the F.B.I. agent wanted him to admit that he had erased his “bash history”—that is, the commands he had typed into his own Goldman computer keyboard. Serge tried to explain why he had done this, but McSwain had no interest in his story. “The way he did it seemed nefarious,” the F.B.I. agent would later testify. [Emphasis mine]
Wikipedia has some additional details on this extremely bizarre case, which I just don't seem to get!

Friday, August 2, 2013

Timescale is Everything!

In my last post, I waxed poetic about my field of rheology in an attempt to show that the boundary between solids and liquids is fuzzier than you might think.

One other thing that rheology teaches you to appreciate is the importance of timescales. To rheologists the "pitch-drop experiment" is exciting; the difference between something that flows and something that doesn't is patience.

In fact, there is a widely used dimensionless number called the Deborah number (essentially the ratio of a material's relaxation time to the timescale of observation), named after the prophetess Deborah, who said "The mountains flowed before the Lord".

Presumably "the Lord" works on much longer timescales.

All this is a long prelude to some amazing videos by the Slow Mo Guys. They film things squishing, popping, dropping or exploding, with a high-speed camera, and set it to music. The result is beauty and a deep appreciation for everyday phenomena that we miss because we sense and process visual data at "faster" time scales.

Here are specific links to a few that I really enjoyed: bubble bursting, underwater bullets, exploding water-melons and rubber bands, droplet collisions, etc.

Heck! Check them all out.


Tuesday, July 30, 2013

RIPE: Rheology

I just came back from a workshop, and one of the things I "re-learnt" was the need to communicate research in plain English (RIPE) to a broad audience. I am going to try to do a pitch on some aspect of my research, hopefully on a monthly basis.

Here is the first installment. It is an attempt to introduce my field of rheology to an audience with a high-school level science background.

You probably know that there are three states of matter: solid, liquid and gas (there are a few more exotic ones too).

Take water for example. At room temperature it is a liquid. You can drink it, splash it, swim in it etc. Put some water in a freezer and cool it below melting temperature, and you get cold solid ice-cubes. You can use them to chill your soda, preserve food in an ice-box, or as cold-compress to heal a bruise. Instead of cooling water, you could heat it until it e-“vapor”-ates into steam, which, as it turns out, is the life-blood of the chemical industry.

Okay. So you know your solids, liquids and gases.

So let me ask you this: Is toothpaste solid or liquid?

It holds its shape like a solid, but you can easily “deform” it like a liquid.

How about shaving gel? Mayonnaise, silly-putty, jam?

It turns out that a wide class of materials that you encounter frequently in the food and cosmetic industry, defy this simple binary classification into solids or liquids. The size of this class of materials – unimaginatively dubbed “complex fluids” - is actually even larger.

For instance, most synthetic polymers or plastics that you see around you are processed in the molten state and are called polymer melts (I single them out because I have been studying them for over a decade now, and often feel that they don’t get the attention they deserve).

Polymer melts are also complex fluids.

The study of how such complex fluids respond to deformation is called rheology.

Let me restate the statement above in plain English.

When you try to deform or strain matter, it gets “stressed out”. Different materials, like people, react to stress differently.

Solids usually resist strain by opposing the forces that try to deform or disfigure them. Solids are like people who really don’t like to be pushed around!

Fluids (liquids and gases), on the other hand, go with the flow. Like Zen monks, they don’t resist. They bend, assimilate the imposed force, dissipate its energy, and return to a state of calm.

Complex fluids are like children with “solid” and “liquid” parents. They inherit traits like elasticity from their “solid” parent, and viscosity from their “liquid” parent. It is therefore no surprise that they are very frequently called “viscoelastic fluids”.

Studying the rheology of viscoelastic fluids like polymer melts, helps us devise better methods of processing them.

Friday, July 26, 2013

Octave/Matlab: Loop through Files in a Directory

Looping through files that match a certain pattern is an easy and routine operation for a BASH shell for example:
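Something along these lines (the file pattern and the processing are placeholders):

  # loop over all files matching a pattern
  for f in data_*.dat
  do
      echo "processing $f"
      # ... do whatever needs to be done with "$f" ...
  done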

One can mimic this feature from within an Octave or Matlab program by using the command "glob". Here's how:
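A sketch (again, the pattern and the per-file processing are placeholders):

  files = glob('data_*.dat');          % cell array of matching file names
  for i = 1:numel(files)
    fname = files{i};
    printf('processing %s\n', fname);
    % ... load(fname), process, etc. ...
  end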


Monday, July 22, 2013

Academic Career Advice

Radhika Nagpal dishes out some very sensible advice that is not dished out as often as it should be.
In 2004 when I came to Harvard as a junior faculty, I wrote it on my desk. This-is-a-7-year-postdoc.
I type it in every day. For all seven+ years I have been at Harvard. No joke. 
It is an incredibly liberating point of view. If I’m not here for tenure, then there are a bunch of things I do not need to do. For example, I don’t need to spend my seventh year travelling doing the tenure talk circuit (I did not do this), or make sure I invite and get to know personally exactly 18 folks who might be my letter writers, or be on organizing committees so everyone important knows me well, or try to get nominated for awards as fast and as young as possible (I just turned 42). Frankly most of this is not possible to actually do!


Friday, July 19, 2013

Orthogonal Polynomials: Mathematica Script

A family of polynomials \[\{ \phi_{0}, \phi_1, ..., \phi_{n} \}\] (the subscript indicates the degree) are orthogonal with respect to a weighting function \(w(x)\), over an interval \([a,b]\), if \[\langle\phi_n, \phi_m\rangle = \int_{a}^{b} w(x) \phi_{n}(x) \phi_{m}(x) dx = \delta_{nm} c(n)\]
Given a \(w(x)\) and an interval \([a, b]\), an orthogonal sequence of polynomials can be found via Gram-Schmidt orthogonalization of the monomials \(\{1, x, x^2, \ldots\}\).
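Spelled out, the Gram-Schmidt recursion (with the inner product defined above) is simply \[\phi_0(x) = 1, \qquad \phi_n(x) = x^n - \sum_{k=0}^{n-1} \frac{\langle x^n, \phi_k \rangle}{\langle \phi_k, \phi_k \rangle}\, \phi_k(x), \quad n = 1, 2, \ldots\]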

This is done by a straightforward Mathematica script (link at the end). A possible use is:

P = OrthoPoly[Exp[-x], 0, 10, 3] // FullSimplify

This spits out the first three members of the family of orthogonal polynomials with \(w(x)=\exp(-x)\), over the interval [0,10].

One can use the zeros of these polynomials to obtain corresponding Gauss quadrature points. For example:

Solve[P[[3]] == 0, x] // N

yields {{x -> 0.180784}, {x -> 0.752396}}.

The code is available here.

Monday, July 15, 2013

GeoGebra Worksheet

GeoGebra makes building JavaScript or HTML5 applets a piece of cake.

I recently wrote a quick program to dynamically "analyze" the effect of additional monthly prepayments on a loan's amortization schedule. The formulae are available on this wikipedia page.
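The underlying calculation is the standard amortization recurrence; here is a rough Octave sketch with made-up numbers (not the actual worksheet):

  P = 200000;          % outstanding principal (illustrative)
  r = 0.05/12;         % monthly interest rate (illustrative)
  pay = 1200;          % regular monthly payment (illustrative)
  extra = 100;         % additional monthly prepayment (illustrative)
  n = 0;
  while P > 0
    P = P - (pay + extra - r*P);   % payment net of interest reduces the principal
    n = n + 1;
  end
  printf('paid off in %d months (the final payment is smaller)\n', n);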

Here is a link to the GeoGebra worksheet on GeoGebraTube. You can download the file, or merely play with it in (either JavaScript or HTML5).


Monday, July 8, 2013

How thick is a human hair?

The quick answer seems to be between 20 to 200 microns.

If you want to find out how thick your hair is, using only a laser pointer and measuring tape, check out these two cool YouTube videos.



They make for very interesting summer afternoon projects for kids!

Wednesday, July 3, 2013

Links

1. Keith Devlin has some interesting thoughts on the use of video games to facilitate math learning. In particular why we should be wary of "Benny's Rules". He also has an interesting response to "Math Wars" that I linked to earlier.

2. Empirical Zeal tries to explain the visually fascinating phenomenon of "gravity defying chain of metal beads". This effect is very well-known in my field of polymer rheology and there are nice YouTube videos highlighting some of these features here, here and here.

3. Happy Birthday from SpikedMath

Monday, July 1, 2013

Podcasts on my iPod

I've been a big fan of listening to podcasts on my iPod Touch for three years now. Here are some of the podcasts I subscribe to:
  • RadioLab: As of this moment, my favorite podcast. Jad and Robert explore scientific ideas under a theme. Powerful narratives, ingenious use of sound effects, and steeped in humor.
  • This American Life: Ira Glass regales us with stories or "acts" under a theme. Similar in spirit to Radiolab, although the topics are more "human interest" than science.
  • EconTalk: Russ Roberts chats up one guest each week on topics loosely related to economics. I love the question and answer format.
  • The Reality Check: A bunch of college friends talking science. Each person leads a story, while the others ask questions, and make snarky quips.
  • Way With Words: Martha and Grant do with language what Tom and Ray Magliozzi do with cars (in CarTalk). Entertaining and educational!
  • Rationally Speaking: Another skeptical take on issues and ideas in a chat-based format.
  • Commonwealth Club and Big Ideas: Talks on interesting topics by smart and/or famous people. Most of the talks are more in-depth than TED talks.
  • NPR StoryCorps: Extraordinary stories of ordinary people archived for posterity.
  • Selected Shorts: Actors read famous short stories. In some ways, my only reliable source of fiction these days.
  • Stuff To Blow Your Mind: Another discussion-based science program.
The links above are to their webpages. Links to the podcasts may be different.

Wednesday, June 26, 2013

Vector Representation in Different Basis

Teacher: We now know how to rotate a vector \(u\) counter-clockwise by an angle \(\theta\) using a rotation matrix \(Q\). In this lesson, we are not going to transform the vector \(u\) - instead we are going to investigate how the matrix representation changes when we move from the standard basis vectors to  some other basis.

Student: Right. Last time around you did remark that in the matrix representation of a vector \(u = (1,1)\) the basis was tacitly assumed. So I guess we have to first talk about a new basis.

Teacher: Yep. Let's again consider a vector \(u = (u_1, u_2)\) and the standard basis vectors \(e_1 = (1,0)\) and \(e_2 = (0,1)\) in 2D. Thus,
\[u = u_1 e_1 + u_2 e_2 = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}.\]
Student: I notice you dropped the subscript "o" from last time, because we are not going to touch the vector per se. I also notice that we are using a general representation \(u = (u_1, u_2)\) instead of something specific.

Teacher: Good observation. We will toggle between a general and a specific \(u\), depending on the situation. Now let's pick two new basis vectors \(E_1\) and \(E_2\). Just to reiterate the lessons from last time, let us generate this new basis by rotating the standard basis by 90 degrees.

Student: Okay. Let me figure this part out. I set  \(\theta = \pi/2\). I can compute \(Q(\pi/2)\) and get:
\[E_1 =Q(\pi/2) e_1 = \begin{bmatrix} 0 & -1\\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} .\]

Similarly, \(E_2 = Q(\pi/2) e_2 = (-1, 0)\). I guess that means I can draw a picture such as:


Teacher: Great. Now let's consider the point \(u = (1,1)\) as before, and ask ourselves what its representation \(U = (U_1, U_2)\) in the new basis looks like.

Student: Hang on. I thought we were not going to do anything to the vector \(u\).

Teacher: We aren't! We are simply looking at the same geometrical object \(u\) with a different lens (basis). It's similar to saying "Texas is to the west" when you are in Florida, and "Texas is to the east" when you are in California. Texas hasn't moved. You have.

Student: Aha! I see what you mean. We are trying to represent the same geometrical object \(u_1 e_1 + u_2 e_2 = U_1 E_1 + U_2 E_2\). 

Teacher: Great. Since we know the relationship between the old and new basis, we should be able to figure out the co-ordinates in the new basis.
\[ u = U_1 E_1 + U_2 E_2 = U_1 Q e_1 + U_2 Q e_2 = Q (U_1 e_1 + U_2 e_2) = Q U.\]
That is, \(u = QU\), or \(U = Q^{-1} u = Q^T u\).

Student: Okay, let me try to keep things straight. \(Q\) codifies the relationship between the old and new axes. You are telling me that the point represented by \(u\) in the old basis is represented by \(Q^T u\) in the new basis.

Teacher: Yes! Why don't you try out the example?

Student: Okay. I already know what \(Q\) is. I can get:
\[U = Q^T u =  \begin{bmatrix} 0 & 1\\ -1 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} .\]

Teacher: Does that look like the right answer?

Student: \(U = 1 E_1 + (-1) E_2 \). That looks about right from this figure!


Teacher: Right. If you want to rotate a vector, you multiply it by \(Q\). If you want to represent the same vector in a different basis, you multiply it by \(Q^T\).

Student: Question. Here the new basis was a simple rotation of the old basis given by \(Q\). What about a general new basis?

Teacher: Good question. We can figure this out using linear algebra for an n-dimensional space. We know what the standard basis for such a space is: \(e_i = [0, \ldots, 0, 1, 0, \ldots, 0]^T\), where the 1 is in the i-th row. Consider an alternative basis packed together as columns in the matrix C:
\[C = (v_1~v_2~...v_n)\]
Student: Each of the \(v_i\) is an n-dimensional vector, and since they form a basis for \(R^n\), they are linearly independent. And \(C\) is really an n-by-n matrix that is invertible.

Teacher: Someone here remembers their linear algebra! Okay, now we can write a point \(u \in R^n\) in terms of the new basis:
\[u = U_1 v_1 + ... + U_n v_n = C \begin{bmatrix} U_1 \\ \vdots \\ U_n \end{bmatrix} = C U\]
And hence, \(U = C^{-1} u\).

Student: I get it! In our case, the basis transformation was simply a rotation, \(C = Q\). Therefore
\(U = C^{-1} u = Q^T u\). That really does tie it all together. 

Teacher: I am so glad that it does. You know what else? Khan Academy has a bunch of nice videos on this topic.

Student: I sure will check them out.
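If you like to double-check such manipulations numerically, here is a quick Octave sketch of the example above:

  theta = pi/2;
  Q = [cos(theta) -sin(theta); sin(theta) cos(theta)];   % rotation that generated the new basis
  u = [1; 1];                                            % representation in the standard basis
  U = Q' * u                                             % representation in the new basis: [1; -1]
  C = Q;                                                 % new basis vectors packed as columns
  C \ u                                                  % the general recipe C^{-1} u gives the same answer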



Monday, June 24, 2013

Links

1. Math Wars: NYT Opinionator piece by Alice Crary and Stephen Wilson making the case for teaching algorithms. (H/T Alexandre Borovik). The comments section, as usual, is very entertaining.

2. How not to shoot a monkey (Empirical Zeal).

3. The footnotes to (2) above reference the excellent Radiolab segment "Escape", which talks about Newton's "moon problem": Why does the moon, unlike an apple, not fall straight towards the earth?

I had heard before that Newton's apple story was probably apocryphal. Apparently, it was perpetuated by Newton himself as a cunning ruse to claim primacy over the concept of gravitation (from rival Robert Hooke). It turns out that the real story was far more interesting, and I wish it received as much air-time as that darned apple. If you don't want to listen to the Radiolab episode, this link explains the idea.

Thursday, June 20, 2013

Exploring GeoGebra

I've known about the existence of GeoGebra for a while now, but this summer I've spent some time taking it out for longer rides. I really like the program - it is very intuitive to use, and there is a lot of help available online.

It does make math (algebra, calculus, geometry) very interactive and fun.

One thing I am very excited about is the ability to export GeoGebra worksheets into dynamic HTML5 webpages with the press of a button. This means you can make all those interactive JavaScript or HTML5 applets *very* easily.

You can already go to GeoGebraTube, and explore many shared worksheets either as Java or HTML5 applets, or download the "ggb" files and open them using GeoGebra installed on your computer.

I could not help but notice that its creator Markus Hohenwarter was at Florida State University (where I work) between 2008–2009. It is unfortunate that I did not get to meet him then.

Friday, June 14, 2013

HFT in decline?

Interesting story entitled "How the Robots Lost: High-Frequency Trading's Rise and Fall" in BusinessWeek on how speed is becoming increasingly commoditized!
For the first time since its inception, high-frequency trading, the bogey machine of the markets, is in retreat. According to estimates from Rosenblatt Securities, as much as two-thirds of all stock trades in the U.S. from 2008 to 2011 were executed by high-frequency firms; today it’s about half. In 2009, high-frequency traders moved about 3.25 billion shares a day. In 2012, it was 1.6 billion a day. Speed traders aren’t just trading fewer shares, they’re making less money on each trade. Average profits have fallen from about a tenth of a penny per share to a twentieth of a penny.

Thursday, June 13, 2013

Rotating Vectors

Teacher: Let's start simple, and consider a vector \(u_o = (1,1)\), with the usual standard basis vectors \(e_1 = (1,0)\) and \(e_2 = (0,1)\) in 2D.

Student: Are we going to stay in 2D throughout?

Teacher: For now, at least. It makes visualization much easier. To begin with, we are going to consider rotation of the vector \(u_o\) by an angle \(\theta\) to the new vector \(u_n\).

Student: I guess the subscripts "o" and "n" stand for "old" and "new". So you mean I should conjure a picture like this?



Teacher: That's excellent. What do we really mean by the vector  \(u_o = (1,1)\) in terms of the bases?

Student: I know. I know. We mean \(u_o = 1 e_1 + 1 e_2\). The "co-ordinates" of a vector tell me what linear combination of the basis I should take.

Teacher: Very impressive! You can also write the vector in "matrix notation" as \[u_o = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.\] Remember that the matrix notation implicitly assumes a certain basis. We will revisit this subtle but confusing point later, when we talk about non-standard bases.

Student: Okay. So as long as I use the same basis or co-ordinate system throughout a problem, I don't have to spend too much time thinking about it?

Teacher: Right. Now let us consider the 2 by 2 rotation matrix \(Q\), which performs the rotation. One can derive it from pure geometrical considerations. This magic matrix looks like:
\[Q = \begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \\
\end{bmatrix}\]
Student: How do I use it?

Teacher: Very simple. To rotate \(u_o\) counter-clockwise by an angle \(\theta\), just multiply:
\[u_n = Q(\theta) u_o.\]
Student: Cool. Let me try. If I set \(\theta = \pi/2\), I get:
\[u_n =\begin{bmatrix} 0 & -1\\ 1 & 0 \\ \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix} .\]



Teacher: Try \(\theta = \pi/4\), and tell me what you see.

Student: I get \(u_n = (0, \sqrt{2})\). Hmmm.
I see. I see. The length of the vector is unchanged. Both \(u_o\) and \(u_n\) are of length \(\sqrt{2}\).

Teacher: Pure rotation, my friend. The matrix \(Q\) is special. A general matrix \(A\) would have been less gentle. In general, a 2 by 2 matrix rotates and stretches the vector on which it acts.

Student: Cool! Is there anything else that is special about \(Q\)?

Teacher: Yes! I am glad you asked. \(Q\) belongs to a special class of matrices called orthogonal matrices. These matrices have the interesting property that the transpose is the inverse.
\[Q^{-1} = Q^T.\]

Student: Does that have any relevance to our discussion?

Teacher: Sure it does! Given \(u_n = Q u_o\), we can use the property of orthogonal matrices to write \(u_o = Q^T u_n\). That is, we can rotate a vector clockwise by the same angle by multiplying by \(Q^T\).

Student: Wait a minute. Shouldn't I have to multiply with \(Q(-\theta)\), instead of \(Q^T\)?

Teacher: A rose by any other name is still a rose!

Student: Aha! They are the same matrix.

Teacher: Excellent. Next time, we'll consider the problem we alluded to earlier: what happens to a vector when we change the basis?
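For the numerically inclined, here is a quick Octave check of the examples above:

  Q = @(theta) [cos(theta) -sin(theta); sin(theta) cos(theta)];
  u_o = [1; 1];
  u_n = Q(pi/2) * u_o              % counter-clockwise rotation by pi/2: [-1; 1]
  norm(Q(pi/4) * u_o)              % pure rotation preserves length: sqrt(2)
  Q(pi/4)' * (Q(pi/4) * u_o)       % Q^T undoes the rotation: back to [1; 1]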


Monday, June 10, 2013

The Heart-warming Story of Yitang Zhang's Proof

It's been over a month and a half since Yitang Zhang emerged from obscurity and "announced a proof that there are infinitely many pairs of consecutive primes with a gap at most 70 million" (wikipedia).

Wired in a nice piece (originally featured on Simons Science News) entitled "Unknown Mathematician Proves Elusive Property of Prime Numbers" retells the story of the announcement.
“Basically, no one knows him,” said Andrew Granville, a number theorist at the Université de Montréal. “Now, suddenly, he has proved one of the great results in the history of number theory.” 
Zhang said he feels no resentment about the relative obscurity of his career thus far. “My mind is very peaceful. I don’t care so much about the money, or the honor,” he said. “I like to be very quiet and keep working by myself.”
From every angle, a feel-good science/math story (like this and this). 

Thursday, June 6, 2013

Nipun Mehta: Commencement Speech

I enjoyed his commencement "Give, Receive and Dance" speech at Harker. Here's a transcript, and here is an amateur video:


A particularly poignant incident he talks about:
Sometime last year, I spontaneously treated a homeless woman to something she really wanted -- ice-cream. We walked into a nearby 7-11, she got her ice-cream and I paid for it. Along the way, though, we had a great 3-minute chat about generosity and as we’re leaving the store, she said something remarkable: "I'd like to buy you something. Can I buy you something?" She empties her pockets and holds up a nickel. The cashier looks on, as we all share a beautiful, awkward, empathy-filled moment of silence. Then, I heard my voice responding, “That’s so kind of you. I would be delighted to receive your offering. What if we pay-it-forward by tipping this kind cashier who has just helped us?” Her face breaks into a huge smile. “Good idea,” she says while dropping the nickel into the tip-jar.

Tuesday, June 4, 2013

Does skin sense temperature?

A long time ago, I remarked how we sense "heat flux" rather than temperature. I discovered two videos which articulate the same idea more eloquently.

From minute-physics:



From veritasium:

I particularly enjoyed the melting ice-cube part.

Saturday, June 1, 2013

Mathy Drawing

Found these two drawing programs via MathMunch

1. Recursive Drawing
Recursive Drawing is an exploration of user interface ideas towards the development of a spatially-oriented programming environment.
2. Silk:  Beautifully done. With its haunting background score, the drawing experience is almost meditative.

Wednesday, May 22, 2013

Ranbaxy and Gupta

Read two negative pieces about India/Indians this past weekend! Long and interesting reads.

1. Rajat Gupta's fall from grace:
The 64-year-old Gupta, who remains free on appeal, has vigorously maintained his innocence. But even as his appeal is heard this week, the fundamental question behind his case remains a mystery. Why would one of the most revered C.E.O.’s of his generation, who retired with a fortune worth some $100 million, show such bad judgment?
2. The unsettling fraud at Ranbaxy: Parts of this story made the hair on my neck stand up!
Thakur left Kumar's office stunned. He returned home that evening to find his 3-year-old son playing on the front lawn. The previous year in India, the boy had developed a serious ear infection. A pediatrician prescribed Ranbaxy's version of amoxiclav, a powerful antibiotic. For three scary days, his son's 102° fever persisted, despite the medicine. Finally, the pediatrician changed the prescription to the brand-name antibiotic made by GlaxoSmithKline (GSK). Within a day, his fever disappeared. Thakur hadn't thought about it much before. Now he took the boy in his arms and resolved not to give his family any more Ranbaxy drugs until he knew the truth.

Monday, May 20, 2013

Distribution of Birthdays and Student Performance

I happened to chance upon the part of Malcolm Gladwell's book "Outliers" where he talks about how the selection system for junior ice-hockey leagues in Canada strongly favors older kids in a particular cohort, and how that effect lingers for a while.

Here's Gladwell describing the idea in an ESPN interview.
It's a beautiful example of a self-fulfilling prophecy. In Canada, the eligibility cutoff for age-class hockey programs is Jan. 1. Canada also takes hockey really seriously, so coaches start streaming the best hockey players into elite programs, where they practice more and play more games and get better coaching, as early as 8 or 9. But who tends to be the "best" player at age 8 or 9? The oldest, of course -- the kids born nearest the cut-off date, who can be as much as almost a year older than kids born at the other end of the cut-off date. When you are 8 years old, 10 or 11 extra months of maturity means a lot.
The data on that site, and many others such as this one, seem to bear it out. Of course, reality may be more complicated, but there does seem to be a germ of truth in Gladwell's simplified narrative.

Here (pdf) is another study looking at reading skills as a function of birthday distribution, where a similar effect is found.

Why should you care?

Suppose, like me, you have a child whose birthday falls in that weird window (due to arbitrary school cut-off ages), where he or she can be either the oldest or the youngest person in the class (depending on your decision to wait an additional year or enroll right away).

I haven't converged on an answer yet, but I know years from now, regardless of what I do, it will all be my fault!