Thursday, August 25, 2016

Linear Algebra Insights

3Blue1Brown has a wonderful new series on the "Essence of Linear Algebra", where he visually explores fundamental topics in linear algebra.


Worth a watch!

Sunday, August 21, 2016

Interesting Things I Learnt Last Week

1. Yoga might not be as old as we thought (Surprisingly Awesome podcast)

Apparently everyone else was in on this one a long time ago.
Goldberg is the author of The Goddess Pose: The Audacious Life of Indra Devi, the Woman Who Helped Bring Yoga to the West. Her book traces the modern Western practice of yoga to a Russian woman named Indra Devi, who was born in 1899 with the birth name Eugenia Peterson. Devi became interested in yoga after reading about it in a book written by an American new-age thinker. She studied the practice in India before introducing it to political leaders in Russia and Shanghai and, in 1947, bringing it to America, where her students included Hollywood celebrities like Greta Garbo and Gloria Swanson.
2. Why the "F" and "J" keys have bumps on them (Quora)

Short Answer: So you can type without looking at the keyboard.

Friday, August 19, 2016

Counting by Fingers

Interesting post on allowing kids to use fingers to count. I never quite understood the opposition in the first place.

Hidden near the end is an interesting finger trick for multiplying any two numbers between 6 and 10.


To multiply 8 by 7, say:

Algorithm:

1. Label the fingers of each hand 6 through 10, and touch the "8" finger of one hand to the "7" finger of the other
2. Count the touching fingers together with all the lower-numbered fingers on both hands; this count forms the tens place (5 here, so 5 * 10 = 50)
3. Multiply the counts of the remaining (higher-numbered) fingers on the two hands (3 * 2 = 6)
4. Sum steps 2 and 3 (50 + 6 = 56).

Can you figure out why this technique works?
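If you'd like to check the trick numerically before working out why, here is a quick Python sketch that runs the recipe for every pair of numbers between 6 and 10:

```python
# Check the finger trick for every pair of numbers between 6 and 10
for a in range(6, 11):
    for b in range(6, 11):
        tens = (a - 5) + (b - 5)        # touching fingers plus the lower-numbered ones
        units = (10 - a) * (10 - b)     # product of the remaining fingers on each hand
        assert tens * 10 + units == a * b
        print(f"{a} x {b} = {tens * 10} + {units} = {tens * 10 + units}")
```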

Tuesday, August 16, 2016

F2PY: Interfacing Fortran and Python

Python is notoriously slow with loops. One of the many ways to speed things up is to incorporate subroutines written in C or Fortran into Python.

F2PY is a program that enables us to do just that: "provide a connection between Python and Fortran languages." It lets us wrap Fortran code (you'll need a compiler like gfortran) for use in Python.

You can thus salvage legacy Fortran 77 code, or write "new" Fortran 90 code. The latter is much easier to incorporate.

Basic Steps:

1. Write a subroutine file in Fortran 90. Use the "intent" attribute to specify the interface. Avoid allocatable arrays.
2. Use f2py to generate a Python interface/signature file (a "pyf" file) and a module that can be imported by Python. You may sometimes have to edit the signature file.
3. Compile the Python module.
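To make these steps concrete, here is a minimal sketch. It is driven entirely from Python using numpy.f2py.compile; the subroutine and module names are made up for illustration, and you still need numpy and a Fortran compiler like gfortran installed:

```python
# A minimal sketch of the F2PY workflow, driven from Python via numpy.f2py.compile
# (the names "cumsum" and "fcumsum" are invented for this example).
import numpy as np
import numpy.f2py

# Step 1: a Fortran 90 subroutine with "intent" attributes on its arguments
src = """
subroutine cumsum(x, y, n)
    implicit none
    integer, intent(in) :: n
    real(8), intent(in)  :: x(n)
    real(8), intent(out) :: y(n)
    integer :: i
    y(1) = x(1)
    do i = 2, n
        y(i) = y(i-1) + x(i)
    end do
end subroutine cumsum
"""

# Steps 2 and 3: generate the wrapper and compile an importable module
# (roughly equivalent to running "f2py -c -m fcumsum cumsum.f90" on the shell)
numpy.f2py.compile(src, modulename="fcumsum", extension=".f90", verbose=False)

import fcumsum                        # the module f2py just built in the current directory
x = np.arange(5, dtype=np.float64)
y = fcumsum.cumsum(x)                 # n is inferred from len(x); intent(out) y is returned
print(y)                              # [ 0.  1.  3.  6. 10.]
```

The same thing can, of course, be done from the shell with the f2py command itself, generating and (if needed) editing the "pyf" signature file along the way.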

Some resources:

1. Robert Johansson has a very nice set of IPython notebooks on Scientific Computing with Python. The first part of the lecture "Lecture-6A-Fortran-and-C.ipynb" provides a gentle introduction to F2PY. If you don't have Jupyter/IPython notebooks installed on your computer, a PDF of the entire set of notebooks is also available on his GitHub site.

2. A nice example is available here.

3. The official site also provides a cheat-sheet for three different ways to wrap things.

4. Another nice introduction that goes on to talk about stuff beyond the basics.

I plan on demonstrating this on a practical example shortly.

Wednesday, August 10, 2016

Block Averaging: Matlab and Python

I'd written about block averaging to estimate error bars from correlated time-series in a couple of blog posts nearly three years ago. Here are the two posts, which explain the motivation and logic behind this technique (post1 and post2).

I wrote programs to carry out this operation in both Matlab and Python.
The required input is a data-stream of correlated "x" samples. There are optional flags for turning onscreen printing on or off and for limiting the maximum block size. The default choice of maximum block size ensures that the datastream is chopped into at least 4 blocks.

The output is a set of three arrays: the block size, the block variance, and the block mean.

The program prints out the mean and variance corresponding to the largest block sizes on screen. This may or may not be the "best choice". A quick look at the plot (by turning the isPlot flag on) will help ascertain this.
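For the curious, here is a minimal Python sketch of the idea; it is not the exact program linked above, and the variable names and defaults are only illustrative:

```python
import numpy as np

def block_average(x, max_blocksize=None):
    """Block averaging for a correlated time series x (a minimal sketch).

    Returns arrays of block size, estimated variance of the mean, and block mean.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_blocksize is None:
        max_blocksize = n // 4            # default: keep at least 4 blocks

    sizes, variances, means = [], [], []
    for bs in range(1, max_blocksize + 1):
        nblocks = n // bs
        # chop the stream into nblocks blocks of length bs and average each block
        blocks = x[:nblocks * bs].reshape(nblocks, bs).mean(axis=1)
        sizes.append(bs)
        means.append(blocks.mean())
        # variance of the mean, estimated from the (nearly uncorrelated) block means
        variances.append(blocks.var(ddof=1) / nblocks)
    return np.array(sizes), np.array(variances), np.array(means)

# Example: a correlated AR(1)-like series
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for i in range(1, len(x)):
    x[i] = 0.95 * x[i - 1] + rng.normal()

bs, var, mean = block_average(x)
print(mean[-1], np.sqrt(var[-1]))   # mean and error-bar estimate at the largest block size
```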

Friday, August 5, 2016

On Writing

I heard this quote from E. L. Doctorow on my favorite language podcast, "A Way with Words":
[The act of writing is] “like driving a car at night: you never see further than your headlights, but you can make the whole trip that way.”
This beautifully-expressed thought touched a chord.

Usually when I begin writing, I have a rough notion of the ideas I want to communicate, but they are all tangled up like a ball of wool. The hope is to untangle the mess (think), cut redundant strands ("there is no writing, only rewriting"), and weave a sweater (a narrative) - to push the wool metaphor a bit.

Writing helps me think. It helps me learn. It helps me see new patterns in things I already know.

I have a simplistic theory on why writing works as a thinking and learning tool.

Our mind stores thoughts and ideas like my kids store their toys. They are all over the place.


A sentence or paragraph or story has a linear structure. It has a beginning, a middle, and an end.

In computer science terms, ideas in our mind are like graphs, ideas on paper are like queues.

The act of writing makes us examine the graph carefully, figure out the relevant or important links, and project them into a one-dimensional (or quasi-1D) narrative.
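To stretch the computer science analogy a bit further, here is a toy Python sketch; the "ideas" and the links between them are entirely made up. It uses a queue (a breadth-first traversal) to flatten a small graph of ideas into a linear order:

```python
from collections import deque

# A made-up graph of ideas and the links between them
ideas = {
    "problem": ["intuition", "example"],
    "intuition": ["model"],
    "example": ["model"],
    "model": ["caveats", "conclusion"],
    "caveats": ["conclusion"],
    "conclusion": [],
}

def linearize(graph, start):
    """Flatten a graph of ideas into a linear order using a queue (BFS)."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(linearize(ideas, "problem"))
# ['problem', 'intuition', 'example', 'model', 'caveats', 'conclusion']
```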


Here's a game you can play with a kid that captures some of these thoughts.

Look at a map of the world showing different countries. These are like the ideas in your head. Each country borders other countries or water bodies (a graph).

Now pick any two countries, say the US and Romania. Suppose the goal is to find a path from the US to Romania, keeping track of the boundaries you cross. Perhaps you want to minimize these crossings (or perhaps you want to take the scenic route).

The act of figuring out an "optimal" path forces us to project the map of the world onto a queue. If we are attentive and lucky, we might learn new things.

Tuesday, August 2, 2016

Fama and Thaler on Market Efficiency

This is a fantastic moderated conversation between Eugene Fama and Richard Thaler on the question of "Are Markets Efficient?"


While it is fashionable to bash the efficient market hypothesis (EMH) these days, the wonderful discussion highlights many of the nuances.

Fama posits that the EMH is a useful model, even if it is not perfectly true all the time. Pointing out occasional anomalies doesn't invalidate the model. Furthermore, one has to be careful about hindsight bias (bubbles for example) before rejecting the EMH.

It should be understood that the EMH is not a deterministic model in the same sense as physical laws or models (example: Newton's laws of motion). Instead, it bears resemblance to probabilistic or statistical models (example: weather models).

A single anomaly can completely reject a deterministic model.

If a model says "A implies B" and you find a counter-example where A holds but B does not, then you have to reject or amend the model "A implies B".

A real example might be the old belief that, even in the absence of air resistance, heavy objects fall faster than lighter ones. A single counter-example (or thought experiment) is enough to destroy the model.

On the other hand, anomalies don't necessarily eliminate probabilistic models.

Consider a model that says "A often implies B", such as "cigarette smoking often implies lung cancer". You find someone who smoked a pack every day and lived to 90. That example is treated as an anomaly, or "the exception that proves the rule".

EMH, perhaps, belongs to the second group.

If you think like a Bayesian, your belief in the model should decrease as the evidence against the model begins piling up.
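To make that last point concrete, here is a toy Bayesian-updating sketch; every probability below is invented for illustration and is not meant to describe the EMH debate itself:

```python
# Toy Bayesian updating: how belief in a probabilistic model falls as anomalies pile up.
p_model = 0.9                 # prior belief that the model (say, the EMH) is adequate
p_anomaly_if_model = 0.05     # chance of seeing an anomaly if the model holds
p_anomaly_if_not = 0.50       # chance of seeing an anomaly if it doesn't

for n in range(1, 6):         # observe five anomalies, one after another
    numerator = p_anomaly_if_model * p_model
    p_model = numerator / (numerator + p_anomaly_if_not * (1.0 - p_model))
    print(f"after anomaly {n}: belief in the model = {p_model:.3f}")
```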