## Monday, October 31, 2016

### On the Paradox of Skill

The relationship between talent, hard work, success, and luck is complicated.

Generally, this is what people think:

talent + hard work = success

If you read biographies of famous, successful people, this is the message that is repeated ad nauseam. The role of luck is absent or downplayed. "The harder I work, the luckier I get."

It is amazing how large a role luck plays. Robert Frank recently wrote a whole book about it. You should read this writeup in the Atlantic, or listen to this EconTalk podcast. In it, he paints a beautiful picture of how the difference between the person who comes in first and the one who comes in second isn't talent or hard work. It is, usually, luck.

In our increasingly "winner take all" world, this can have serious consequences.

This essay by Michael Mauboussin elaborates on a point first made by Stephen Jay Gould. As talented people in a given field relentlessly work hard, they do two things. They raise the bar (the average increases to a point), and they narrow the distribution of skill (the standard deviation shrinks). As relative differences in skill diminish, the role of luck is enhanced.

This paradox - dubbed the paradox of skill - is extremely counter-intuitive. In "professional" fields, where the overall level of skill is high, there is increasing reliance on luck.
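Gould's mechanism can be illustrated with a toy Monte Carlo simulation. Here each competitor's outcome is skill plus luck, and we shrink the spread of skill while holding the luck term fixed; the function and parameter values below are my own invention for illustration, not from Mauboussin's essay:

```python
import numpy as np

rng = np.random.default_rng(42)

def frac_most_skilled_wins(skill_sd, luck_sd=1.0, n=100, trials=2000):
    """Fraction of contests in which the most skilled competitor wins,
    when each competitor's outcome is skill plus random luck."""
    wins = 0
    for _ in range(trials):
        skill = rng.normal(0, skill_sd, n)
        outcome = skill + rng.normal(0, luck_sd, n)
        wins += int(np.argmax(outcome) == np.argmax(skill))
    return wins / trials

# Wide spread of skill: the most skilled competitor usually prevails.
print(frac_most_skilled_wins(skill_sd=3.0))
# Narrow spread (everyone is highly trained): the winner is mostly luck.
print(frac_most_skilled_wins(skill_sd=0.1))
```

As the skill distribution narrows, the most skilled competitor's win rate collapses toward chance, even though nobody got any worse.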

In my own field of academia, one sees instances of this phenomenon everywhere.

Consider finding a tenure-track position at a decent university. I have been on both sides of the equation. When I am evaluating applications, I am often awestruck by the generally high level of competence. Most of the serious candidates are talented, and have gotten to where they are by working extraordinarily hard. Due to the extreme imbalance in the demand and supply of potential faculty, the applicants who rise to the top often get a huge assist from Lady Luck.

This is not to claim that the winners don't deserve their success. Of course, they do. But one has to be generous to the "losers". What they lacked was luck - a factor over which they had no control, almost by definition.

What does this mean? If you are outmatched in terms of skill, you shouldn't play by the standard rules. Change the rules, or change the game.

If you are David, you don't engage with Goliath in hand-to-hand combat.

If you are Small Community College playing Alabama, you try as many trick plays as you can.

If you are investing in stocks, find illiquid small-caps, which the Warren Buffetts of the world cannot consider.

If you are planning a career, look at intersections of traditional domains, which are not particularly crowded.

## Thursday, October 20, 2016

### Cleve Moler on Householder Reflections

Two recent posts by Cleve Moler on Householder reflectors:

1. QR decomposition

2. Comparison with Gram-Schmidt

The second post includes nice examples, demonstrating the difficulty in preserving orthogonality with standard Gram-Schmidt.
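The contrast is easy to reproduce. Below is a sketch (my own, not from Moler's posts) that orthogonalizes the columns of an ill-conditioned Vandermonde matrix with classical Gram-Schmidt and compares the loss of orthogonality against NumPy's `qr`, which uses Householder reflections under the hood:

```python
import numpy as np

def classical_gram_schmidt(A):
    """QR factorization of A by classical Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # project against earlier columns
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

# Ill-conditioned test matrix: monomials sampled on [0, 1]
A = np.vander(np.linspace(0, 1, 12), 8)

Q_cgs, _ = classical_gram_schmidt(A)
Q_hh, _ = np.linalg.qr(A)  # LAPACK Householder-based QR

# Departure from orthogonality, ||Q^T Q - I||
err_cgs = np.linalg.norm(Q_cgs.T @ Q_cgs - np.eye(8))
err_hh = np.linalg.norm(Q_hh.T @ Q_hh - np.eye(8))
print(err_cgs, err_hh)  # CGS loses orthogonality; Householder stays near machine epsilon
```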

## Monday, October 17, 2016

### Redemption

WaPo has an inspiring story of redemption, "The White Flight of Derek Black".

It paints an intimate portrait of Derek Black, a man groomed to inherit the intellectual mantle of white nationalism in the US, as he confronted facts, and changed his mind.
> He had always based his opinions on fact, and lately his logic was being dismantled by emails from his Shabbat friends. They sent him links to studies showing that racial disparities in IQ could largely be explained by extenuating factors like prenatal nutrition and educational opportunities. They gave him scientific papers about the effects of discrimination on blood pressure, job performance and mental health. He read articles about white privilege and the unfair representation of minorities on television news.

The most important takeaway for me was the courage (to oppose the only family he knew and clearly loved) and intellectual honesty it must have taken to change his opinion on something so central to his belief system, in full public glare.

It gives me a new hero, and renews my hope in mankind.

## Friday, October 14, 2016

### On Existence and Uniqueness

In engineering, it isn't uncommon to approach problems with an implicit assumption that they can be solved. In fact, we take a lot of pride in this problem-solving mindset. We ask "how?", not "if?"

In a non-traditional department like mine, I have colleagues and collaborators from applied math. Their rigor and discipline prods them to approach new problems differently.

Instead of asking "how can I start solving this?", the first question they ask is usually, "does a solution exist?"

If there is no solution, there is no point in looking for one. You can't find a black cat in a dark room, if it isn't there.

If there is a solution, the next question to ask is: "is there a unique solution, or are there many, perhaps even infinitely many, possible answers?"

If there is a unique solution, any path that takes us to Rome will do. In practice, there is a preference for a path that might get us there fastest. We can begin thinking about an optimal algorithm.

If there are many possible solutions, and we seek only one, perhaps we can add additional constraints on the problem to discard most of the candidates. We can try to seek a solution that is optimal in some way.
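For linear systems, this triage is concrete: compare the rank of $A$ with the rank of the augmented matrix $[A \mid b]$ and with the number of unknowns. A minimal sketch (the matrices below are made-up examples):

```python
import numpy as np

def triage(A, b):
    """Classify the linear system A x = b by rank."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return "no solution"          # b is outside the range of A
    if rA == A.shape[1]:
        return "unique solution"      # full column rank
    return "infinitely many solutions"

A = np.array([[1., 2.], [2., 4.]])    # rank-deficient
print(triage(A, np.array([1., 3.])))  # inconsistent -> no solution
print(triage(np.eye(2), np.ones(2)))  # full rank -> unique
print(triage(A, np.array([1., 2.])))  # consistent -> infinitely many

# With infinitely many solutions, adding a constraint picks one out:
# lstsq returns the minimum-norm solution.
x, *_ = np.linalg.lstsq(A, np.array([1., 2.]), rcond=None)
```

The extra constraint in the last step (minimum norm) is exactly the "optimal in some way" criterion described above.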

There might be lessons from this mindset that are applicable to life in general.

When faced with a new problem, we might want to triage it into one of the three buckets.

Does this problem have a solution? If there is no solution, or the solution is completely out of one's control, then there is no point in being miserable about it.

As John Tukey once remarked, "Most problems are complex; they have a real part and an imaginary part." It is best to isolate the real part, and see if a solution exists.

If there is a unique solution, then one should spend time finding the best method to solve the problem.

If, like the majority of life problems (who should I marry? what career should I pick?), there are multiple solutions, then one has to spend time formulating the best constraints or optimal criteria - before looking for a method of solution.

## Saturday, October 8, 2016

I chanced upon Alon Amit's amazing answer on Quora to the question:

> I am not amazingly good at maths, so it is obvious to me whether I can solve a problem or not after 5 minutes. How do people spend hours on problems?

He paints a beautiful problem, which is easy to describe, and which anyone can have fun with. You should check it out.

> One of the bad misconceptions people have about math problem solving is that it's a mechanical process: either you know which algorithm to follow, in which case just fucking follow it and be done with it, or you don't, in which case there's nothing you can do.

## Tuesday, October 4, 2016

I recently stumbled into academictree.org which is a wonderful and growing repository of academic trees.

Simply put, it is a nice place to squander time by reveling in a weird form of narcissism. Anyway, here is mine:

## Saturday, October 1, 2016

### Curve-Fitting with Python

The `curve_fit` function from `scipy.optimize` offers a simple interface to perform unconstrained nonlinear least-squares fitting.

It uses the Levenberg-Marquardt algorithm, and wraps the more general least-squares fitting routine `leastsq` in a simpler interface.

#### Procedure

- Given a bunch of data ($x_i, f_i$) and a fitting function $f(x; a_1, a_2)$, where the $a_i$ are the parameters to be fit, and $x$ is the independent variable.
- Convert the math function to a Python function. The first argument should be the independent variable; the parameters to be fit should follow.
- Import the function `curve_fit` from `scipy.optimize`.
- Call this routine with the function and the data. It returns the best-fit parameters and the covariance matrix.

#### Example

Define the function:

```python
import numpy as np

def f(x, a1, a2):
    return np.exp(a1 * x) * np.sin(a2 * x)
```

Generate noisy data with a1 = -1 and a2 = 0.5:

```python
xi = np.linspace(0, 5.)
fi = f(xi, -1, 0.5) + np.random.normal(0, 0.005, len(xi))
```

Perform the fit:

```python
from scipy.optimize import curve_fit

popt, pcov = curve_fit(f, xi, fi)
# popt = array([-1.00109124,  0.49108962])
```
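The diagonal of the returned covariance matrix gives one-sigma uncertainty estimates on the fitted parameters. A self-contained sketch (the random seed and noise level here are arbitrary choices):

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a1, a2):
    return np.exp(a1 * x) * np.sin(a2 * x)

rng = np.random.default_rng(0)
xi = np.linspace(0, 5, 50)
fi = f(xi, -1, 0.5) + rng.normal(0, 0.005, len(xi))

popt, pcov = curve_fit(f, xi, fi)
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties on a1 and a2
```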

Plot the data and the fit:

```python
import matplotlib.pyplot as plt

plt.plot(xi, fi, 'o')
plt.plot(xi, f(xi, popt[0], popt[1]))
plt.show()
```