Sunday, November 27, 2016

Interesting Links

1. Should I Get a PhD (H/T John D. Cook)

2. Google and Lip Reading (the Verge)

3. Magnus Carlsen and Chess (Vice)

4. The Theranos Whistleblower (WSJ)

Thursday, November 17, 2016

A Visual Interpretation of IVPs

Let us explore the structure of an IVP graphically by considering a specific example:\[\dfrac{dy}{dt} = y - y^2, \quad \quad \quad y(0) = 0.1.\] Suppose we are interested in the solution \(y(t)\) over the domain \(t \in [0, 10]\).

A Jupyter notebook accompanying this post is available here on GitHub.

Let us consider the 2D domain (\(y\) versus \(t\)) on which the solution to the IVP is shown as a thick blue line. This is the solution \(y(t)\), which satisfies both the initial condition and the differential equation.

We can look at any point (t,y) on this domain, and ask "what is f(y,t) here?"

Note \(dy/dt = f(y,t)\) is the "slope"; it can be positive, negative, or zero. The only restriction is that it cannot point "backwards" in the direction of negative \(t\).

To visualize the slope at each grid point, we can set the horizontal component \(u = 1\), and the vertical component \(v = f(y,t)\), and normalize by dividing each component by \(\sqrt{u^2 + v^2}\).

Therefore, the function f(y,t) completely defines the field of arrows. The streamlines, if you will.

The initial condition \((t_0, y_0)\) tells us where to start in this field; where to "drop the feather" in the river, to be guided and carried away by the streamlines.

The Jupyter notebook linked above lets you play around with different problems, and different initial conditions. Take it for a spin.
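The direction field and the "feather drop" described above can be sketched in a few lines of numpy. This is a minimal sketch, independent of the notebook; a hand-rolled fourth-order Runge-Kutta stepper stands in for a library ODE solver:

```python
import numpy as np

def f(y, t):
    return y - y**2                    # dy/dt for the example IVP

# Direction field: u = 1, v = f(y, t), normalized to unit arrows.
t = np.linspace(0, 10, 11)
y = np.linspace(0, 1.2, 7)
T, Y = np.meshgrid(t, y)
U, V = np.ones_like(T), f(Y, T)
norm = np.sqrt(U**2 + V**2)
U, V = U / norm, V / norm              # plot with plt.quiver(T, Y, U, V)

# "Drop the feather" at (t0, y0) = (0, 0.1) and follow the field.
def rk4(f, y0, t0, t1, n=1000):
    h, y, t = (t1 - t0) / n, y0, t0
    for _ in range(n):
        k1 = f(y, t)
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(y + h * k3, t + h)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

y_final = rk4(f, 0.1, 0.0, 10.0)       # drifts toward the fixed point y = 1
```

This logistic equation has the exact solution \(y(t) = 1/(1 + 9e^{-t})\), so \(y(10) \approx 0.9996\), which the integrator reproduces.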

Friday, November 11, 2016

Quotes from Twitter


Thursday, November 10, 2016

Virial Pressure and PBCs

The total energy of a system can be decomposed into kinetic and potential parts:
\[E = K + U.\] Using the thermodynamic relation \[P = - \left(\dfrac{dE}{dV}\right)_T,\] this translates to \[P = \dfrac{NkT}{V} - \left\langle\dfrac{dU}{dV}\right\rangle_T,\] where the angle brackets denote a statistical average. I will drop the subscript \(T\) for brevity from here on.

Molecular simulations are typically carried out in boxes. Let us assume a cubic box of volume \(V = L^3\), with \(N\) particles. Let the coordinates of the particles be \(\mathbf{r}^N = \{\mathbf{r}_1, \mathbf{r}_2, ..., \mathbf{r}_N\}\).

People often like to use scaled coordinates:\[\mathbf{r}_i = \mathbf{s}_i L.\] The potential energy describes interactions between the particles, and can be thought of as \(U(\mathbf{r}^N, L)\). The explicit dependence on \(L\) will become clear shortly.

Now one can write:
\[\dfrac{dU}{dV} = \dfrac{dL}{dV}\left[\dfrac{\partial U}{\partial L} + \sum_{i=1}^N \dfrac{\partial U}{\partial \mathbf{r}_i} \cdot  \dfrac{d\mathbf{r}_i}{dL} \right].\] Using \(V = L^3\) and \(d\mathbf{r}_i/dL = \mathbf{s}_i\), and identifying the force on particle \(i\) as \(\mathbf{f}_i = -\partial U/\partial \mathbf{r}_i\), one can simplify this relation to obtain:

\[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{3V} \sum_{i=1}^N \mathbf{r}_i \cdot \mathbf{f}_i - \dfrac{1}{3L^2} \dfrac{\partial U}{\partial L} \right\rangle.\]

Finite Non-periodic Systems

For finite, non-periodic systems, the potential energy depends only on atomic coordinates, \(U(\mathbf{r}^N)\). Hence, the partial with respect to \(L\) is equal to zero, and we obtain the familiar form:

\[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{3V} \sum_{i=1}^N \mathbf{r}_i \cdot \mathbf{f}_i \right\rangle.\] It is easy to show that in \(d\) dimensions, with \(V = L^d\), this equation generalizes to: \[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{dV} \sum_{i=1}^N \mathbf{r}_i \cdot \mathbf{f}_i \right\rangle.\] For pairwise forces, \(\mathbf{f}_i = \sum_{j \neq i} \mathbf{f}_{ij}\), where \(\mathbf{f}_{ij} = - \mathbf{f}_{ji}\), this equation can be further simplified to:

\[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{6V} \sum_{i=1}^N \sum_{j \neq i}^N \mathbf{r}_{ij} \cdot \mathbf{f}_{ij} \right\rangle.\]
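As a sanity check, the last expression is easy to code for a finite, non-periodic cluster of particles. This is a sketch, not production code: the \(1/6V\) prefactor absorbs the double-counting of ordered pairs, the convention is \(\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j\) with \(\mathbf{f}_{ij}\) the force on \(i\) due to \(j\), and the unit repulsive pair force used below is a made-up illustration:

```python
import numpy as np

def pairwise_virial_pressure(pos, pair_force, V, kT):
    """Instantaneous P = NkT/V + (1/6V) sum_i sum_{j != i} r_ij . f_ij
    for a finite, non-periodic system in 3D."""
    N = len(pos)
    W = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                rij = pos[i] - pos[j]              # r_ij = r_i - r_j
                W += np.dot(rij, pair_force(rij))  # r_ij . f_ij
    return N * kT / V + W / (6.0 * V)

# Two particles a distance 2 apart, with a unit repulsive pair force:
pos = np.array([[0., 0., 0.], [2., 0., 0.]])
repel = lambda rij: rij / np.linalg.norm(rij)      # f_ij along +r_ij
P = pairwise_virial_pressure(pos, repel, V=1000.0, kT=1.0)
```

Each of the two ordered pairs contributes \(\mathbf{r}_{ij} \cdot \mathbf{f}_{ij} = 2\), so \(P = 2kT/V + 4/(6V)\), as the formula demands.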

Periodic Systems

Very often, periodic boundary conditions are used in molecular simulations.



Thus, a given particle interacts not only with the particles in the given box, but also with all their periodic images. This includes, by the way, images of itself in other periodic boxes.

One now has to contend with the entire expression, including the partial with respect to box length:
\[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{3V} \sum_{i=1}^N \mathbf{r}_i \cdot \mathbf{f}_i - \dfrac{1}{3L^2} \dfrac{\partial U}{\partial L} \right\rangle.\] Note that the dependence of \(U\) on \(L\) is real. If we expand the box, we have to choose whether we scale all the coordinates, or just expand the box, leaving the particle coordinates in the simulation box unchanged.


In either case, the interaction energy between particle \(i\) and periodic images is mediated by \(L\). Thus, ignoring this dependence makes virial pressure calculations for general (especially multibody) potentials incorrect!

However, for pairwise potentials (which constitute the bulk of simulations in the literature), it turns out that one can write the total potential energy as \[U = \dfrac{1}{2} \sum_{i=1}^N \sum_{j \neq i}^{MN} u(r_{ij}),\] where the \(j\) sum runs over the particles in all \(M\) (potentially infinite) periodic boxes. Note that since we are summing over all the periodic boxes, we don't need an explicit dependence on \(L\) - the only dependence on \(L\) is implicit, via the scaling of \(\mathbf{r}_i\).

For affine deformation, in which particle coordinates are scaled, it turns out that we indeed recover the expression:

\[P = \dfrac{NkT}{V} + \left\langle \dfrac{1}{6V} \sum_{i=1}^N \sum_{j \neq i}^N \mathbf{r}_{ij} \cdot \mathbf{f}_{ij} \right\rangle.\]

This is extraordinarily fortunate!

References

1. Manuel J. Louwerse and Evert Jan Baerends, "Calculation of pressure in case of periodic boundary conditions", Chemical Physics Letters, 421, 138-141, 2006.

This paper, while not the first, is the most recent reminder that PBCs don't play well with multibody potentials.

2. Aidan P. Thompson, Steven J. Plimpton, and William Mattson, "General formulation of pressure and stress tensor for arbitrary many-body interaction potentials under periodic boundary conditions", Journal of Chemical Physics, 154107, 2009.

This paper offers a surprisingly straightforward solution for addressing PBCs and multibody potentials.

Tuesday, November 8, 2016

Roger Cotes of Newton-Cotes Fame

We have been discussing Newton-Cotes integration rules in our class this week.

Everyone knows Newton. Not everyone knows the other guy: Roger Cotes.

Roger Cotes died young: he was only 33 when he died of a violent fever. Nevertheless, he made three important contributions:

  • helped Newton edit the second edition of the Principia
  • helped develop the Newton-Cotes formulae
  • discovered a version of the famous Euler formula: \[e^{i\theta} = \cos \theta+i \sin \theta.\]
They are beautifully summarized in this blog.

Wednesday, November 2, 2016

Kipnis and Title IX

Recently, we had a Title IX training seminar in our department. A colleague brought to our attention the fascinating case of Laura Kipnis at Northwestern University. This video summarizes the contours of the case:


Here's the rough outline. She wrote a piece in the Chronicle entitled "Sexual Paranoia Strikes Academe" (pdf), which prompted student outrage. She became the subject of a Title IX investigation, which she chronicled in another article (pdf), describing the opaque process.

Eventually, the charges against her were dropped, but the amount of time, money, and effort wasted on a stupid bureaucratic process raised important questions.
University President Morton Schapiro said many people have questioned the University’s decision to investigate the complaints at all. 
“The idea that a student shouldn’t be able to bring a Title IX complaint against a faculty member because of the faculty member’s protection under the First Amendment — that’s not my decision. That’s not Northwestern’s decision,” Schapiro told The Daily. “That’s federal law.”
The unfortunate outcome of such frivolous charges is that they de-legitimize truly outrageous violations, and de-sensitize people to them. It is sad that the scope of the original legislation has been elastically expanded to turn it into a political weapon.

Monday, October 31, 2016

On the Paradox of Skill

The relationship between talent, hard work, success, and luck is complicated.

Generally, this is what people think:

talent + hard work = success

If you read biographies of famous, successful people, this is the message that is repeated ad nauseam. The role of luck is absent or downplayed. "The harder I work, the luckier I get."

It is amazing how large a role luck plays. Robert Frank recently wrote a whole book about it. You should read this writeup in the Atlantic, or listen to this EconTalk podcast. In it, he paints a beautiful picture of how the difference between the person who comes in first and the one who comes in second isn't talent or hard work. It is, usually, luck.

In our increasingly "winner take all" world, this can have serious consequences.

This essay by Michael Mauboussin elaborates on a point first made by Stephen Jay Gould. As talented people in a given field relentlessly work hard, they do two things. They raise the bar (the average increases to a point), and they narrow the distribution of skill (the standard deviation shrinks). As relative differences in skill diminish, the role of luck is enhanced.

This paradox - dubbed the paradox of skill - is extremely counter-intuitive. In "professional" fields, where the overall level of skill is high, there is increasing reliance on luck.

In my own field of academia, one sees instances of this phenomenon everywhere.

Consider finding a tenure-track position at a decent university. I have been on both sides of the equation. When I am evaluating applications, I am often awestruck by the generally high level of competence. Most of the serious candidates are talented, and have gotten to where they are by working extraordinarily hard. Due to the extreme imbalance between the demand and supply of potential faculty, the applicants who rise to the top often get a huge assist from Lady Luck.

This is not to claim that the winners don't deserve their success. Of course, they do. But one has to be generous to the "losers". What they lacked was luck - a factor over which they had no control, almost by definition.

What does this mean? If you are outmatched in terms of skill, you shouldn't play by the standard rules. Change the rules, or change the game.

If you are David, you don't engage with Goliath in hand-to-hand combat.

If you are Small Community College playing Alabama, you try as many trick plays as you can.

If you are investing in stocks, find illiquid small-caps, which the Warren Buffetts of the world cannot consider.

If you are planning a career, look at intersections of traditional domains, which are not particularly crowded.

Thursday, October 20, 2016

Cleve Moler on Householder Reflections

Two recent posts by Cleve Moler on Householder Reflectors

1. QR decomposition

2. Comparison with Gram-Schmidt

The second post includes nice examples, demonstrating the difficulty in preserving orthogonality with standard Gram-Schmidt.

Monday, October 17, 2016

Redemption

WaPo has an inspiring story of redemption, "The White Flight of Derek Black".

It paints an intimate portrait of Derek Black, a man groomed to inherit the intellectual mantle of white nationalism in the US, as he confronted facts, and changed his mind.
He had always based his opinions on fact, and lately his logic was being dismantled by emails from his Shabbat friends. They sent him links to studies showing that racial disparities in IQ could largely be explained by extenuating factors like prenatal nutrition and educational opportunities. They gave him scientific papers about the effects of discrimination on blood pressure, job performance and mental health. He read articles about white privilege and the unfair representation of minorities on television news.
The most important takeaway for me was the courage (to oppose the only family he knew and clearly loved) and intellectual honesty it must have taken to change his opinion on something so central to his belief-system, in full public-glare.

It gives me a new hero, and a fresh lease of hope in mankind.

Friday, October 14, 2016

On Existence and Uniqueness

In engineering, it isn't uncommon to approach problems with an implicit assumption that they can be solved. In fact, we take a lot of pride in this problem-solving mindset. We ask "how?", not "if?".

In a non-traditional department like mine, I have colleagues and collaborators from applied math. Their rigor and discipline prod them to approach new problems differently.

Instead of asking "how can I start solving this?", the first question they ask is usually, "does a solution exist?"

If there is no solution, there is no point in looking for one. You can't find a black cat in a dark room, if it isn't there. 


If there is a solution, the next question to ask is: "is there one unique solution, or are there many, perhaps even infinitely many, possible answers?"

If there is a unique solution, any path that takes us to Rome will do. In practice, there is a preference for a path that might get us there fastest. We can begin thinking about an optimal algorithm.

If there are many possible solutions, and we seek only one, perhaps we can add additional constraints on the problem to discard most of the candidates. We can try to seek a solution that is optimal in some way.

There might be lessons from this mindset that are applicable to life in general.

When faced with a new problem, we might want to triage it into one of the three buckets.

Does this problem have a solution? If there is no solution, or the solution is completely out of one's control, then there is no point in being miserable about it.

As John Tukey once remarked, "Most problems are complex; they have a real part and an imaginary part." It is best to isolate the real part, and see if it has a solution.

If there is a unique solution, then one should spend time finding the best method to solve the problem.

If, like the majority of life problems (who should I marry? what career should I pick?), there are multiple solutions, then one has to spend time formulating the best constraints or optimal criteria - before looking for a method of solution.

Saturday, October 8, 2016

Inspiring Meta-Math Answer

I chanced upon Alon Amit's amazing answer on Quora to the question:
I am not amazingly good at maths, so it is obvious to me whether I can solve a problem or not after 5 minutes. How do people spend hours on problems?
He paints a beautiful problem, which is easy to describe, and which anyone can have fun with. You should check it out.
One of the bad misconceptions people have about math problem solving is that it’s a mechanical process: either you know which algorithm to follow, in which case just fucking follow it and be done with it, or you don’t, in which case there’s nothing you can do.

Tuesday, October 4, 2016

Academic Geneology

I recently stumbled into academictree.org which is a wonderful and growing repository of academic trees.

Simply put, it is a nice place to squander time by reveling in a weird form of narcissism. Anyway, here is mine:


Saturday, October 1, 2016

Curve-Fitting with Python

The curve_fit function from scipy.optimize offers a simple interface to perform unconstrained non-linear least-squares fitting.

It uses Levenberg-Marquardt, and provides a simple interface to the more general least-squares fitting routine leastsq.



Procedure


  • Given a bunch of data (\(x_i, f_i\)) and a fitting function \(f(x; a_1, a_2)\), where the \(a_i\) are the parameters to be fit, and \(x\) is the independent variable
  • Convert the math function to a Python function. The first argument should be the independent variable; the parameters to be fit should follow
  • Import the function curve_fit from scipy.optimize
  • Call this routine with the function and data. It returns the best-fit parameters and the covariance matrix.

Example


Define the function:

import numpy as np

def f(x, a1, a2):
    return np.exp(a1*x) * np.sin(a2*x)

Generate noisy data with a1 = -1 and a2 = 0.5:

xi = np.linspace(0, 5)
fi = f(xi, -1, 0.5) + np.random.normal(0, 0.005, len(xi))

Perform the fit:

from scipy.optimize import curve_fit
popt, pcov = curve_fit(f, xi, fi)

# popt = array([-1.00109124,  0.49108962])

Plot:

import matplotlib.pyplot as plt

plt.plot(xi, fi, 'o')
plt.plot(xi, f(xi, *popt))




Wednesday, September 21, 2016

Loser's Game

If you ask students who the "smartest" kid in their class is, it is likely that the person they identify is not the class topper (in terms of GPA). Why?

Toppers are superb test takers. They have mastered the art of error minimization. Sure, they are smart, and they prepare hard. So do many other students. What distinguishes them is caution, and an obsession with avoiding stupid mistakes.

They may not offer the most enterprising solutions to hard problems, or be thought of as the most "creative" people (whatever that term means). But they are good at one thing: controlling error!

Controlling error is important! In many disciplines like accounting, building bridges or spaceships, computer programming etc. it is perhaps the most important thing.

In "Winning the Loser's Game" (pdf), Charlie Ellis talks about the difference between amateur and professional tennis. In professional tennis, the winner is the player who outplays the other. Generally, the more skillful player wins.

In amateur tennis, the game is not decided by who hits more winners. Rather, the winner is the player who makes the fewest mistakes - commits fewer double faults, hits fewer shots into the net, etc.

He makes an interesting point that investing is also like that. Unless you are a professional, don't swing for the winners. Instead, focus on controlling the things that you can control, and the rest will take care of itself.

Monday, September 19, 2016

On the Political Economy

Many people are not thrilled with the choices available in this year's presidential election cycle. Given our moribund Congress, it is easy to get pessimistic.

However, I've been reading a number of interesting proposals to fix the deadlock in our political system.

1. Howard Marks on "Political Reality" (pdf)

I find his take compelling. I think "moderate" and "compromise" are positive words/attributes. Towards the end of the document, Marks offers some prescriptions for where to begin. The proposals try to make it easier for moderates and centrists to win elections.

2. Terry Moe on EconTalk...

... offers a different diagnosis, and a radically different medicine: a stronger presidency.
With some exceptions, Congress has never been capable of crafting effective policy responses to the nation’s problems, a fact that is well documented. Polarization has made a bad situation worse, but it is not the underlying cause of Congress’s core inadequacies—which are baked into the institution and not of recent vintage. Congress is an ineffective policymaker because it is wired to be that way by the Constitution, whose design ensures that legislators are electorally tied to their local jurisdictions and highly responsive to special interests. Congress is not wired to solve national problems in the national interest. It is wired to allow hundreds of parochial legislators to promote their own political welfare through special-interest politics.

Wednesday, September 14, 2016

F2PY Example: Radial Distribution Function

In a recent post, I discussed the use of F2PY for speeding up parts of python code.

The radial distribution function (RDF), or pair correlation function \(g(r)\), is a statistical mechanical property which quantifies how the density in a system of particles varies as a function of distance from a reference particle.

It is a fairly fundamental property that can easily be computed from molecular simulations. The Fourier transform of the RDF is experimentally accessible via light scattering.

For the purposes of this demonstration, we are given a bunch of points in a periodic simulation box. Our goal is to find the pair correlation function \(g(r)\), and to compare the relative advantage of using f2py to wrap a Fortran 90 subroutine against a vectorized Python implementation of the same algorithm.

The details are in this shared iPython notebook on GitHub. You should be able to both (i) see a static html rendition on the linked site, and (ii) download a jupyter notebook for further experimentation.

The bottom line was that both Python and Fortran eventually gravitated to an \(N^2\) scaling, where \(N\) is the number of particles. Since the Python code exploits vectorization, its performance is not as bad: its relative disadvantage shrank from 500x to only 10x as the number of particles was increased from 10 to 10000.
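For readers who want the gist without opening the notebook, the core of a vectorized \(g(r)\) fits in a few lines. This is a sketch, not the notebook's actual code: it assumes a cubic box of side L with the minimum-image convention, and histograms pair distances only up to \(L/2\):

```python
import numpy as np

def rdf(pos, L, nbins=50):
    """O(N^2) pair correlation g(r) for points in a cubic periodic box."""
    N = len(pos)
    rmax = L / 2.0
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                     # minimum-image convention
    r = np.sqrt((d**2).sum(axis=-1))
    r = r[np.triu_indices(N, k=1)]               # unique pairs only
    hist, edges = np.histogram(r[r < rmax], bins=nbins, range=(0.0, rmax))
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * N * (N - 1) * shell / L**3     # ideal-gas pair counts
    g = hist / ideal
    return 0.5 * (edges[1:] + edges[:-1]), g     # bin centers, g(r)
```

For an ideal gas the expected number of unique pairs in a shell is \(\frac{N(N-1)}{2} V_{shell}/V\); dividing the observed counts by this gives \(g(r) \to 1\) at large \(r\) for a homogeneous fluid.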


Thursday, September 8, 2016

Extraneous Roots

Mr. Honner takes issue with the grading rubric for a math problem:


Mistakes, even if they are made by the examiners, are useful learning grounds. Here are a couple more problems that he unearthed.

James Tanton also addresses some of these issues (issues with domain, extraneous solutions etc.) in a readable essay (pdf).

Tuesday, September 6, 2016

Veil of Ignorance

Recently, a bunch of us were chatting about the relative merits of taxing consumption (sales tax), income, and wealth (estate tax), and how one would go about designing a new tax code if we were not encumbered by existing laws.

A goal was to keep the system as simple as possible, and then to come up with imaginary cases of people who would be much worse off under the new tax code.

During the discussion, the idea of the veil of ignorance kept popping up.

Here is a quick video explainer of the concept:


And a quick introduction to the man who most recently popularized the role of this concept in designing policies with social justice in mind.


Here is a link to an EconTalk podcast (Rawls, Nozick and Justice) on related issues.

Monday, September 5, 2016

Teacher's Day

Long before Hallmark completely littered the calendar with "special days", we celebrated September 5, in India, as Teacher's Day; a day when we paused to reflect over how our teachers had shaped us.

I have been fortunate to have had many wonderful teachers. So I thought it might be a good idea to rekindle the spirit of the day, by highlighting one such teacher, every year.

Today, let me tell you about Ms. Mary Fernandes, who taught us English in 8th and 9th grade.

Like many kids, I liked to read books mostly for the stories they told. I had no appreciation for the art of good writing until Mary teacher (as we called her) brought it sharply into focus.

The difference between good and mediocre writing is often not content, but how that content is skillfully unwrapped. A good writer understands the state of the reader's mind, which once ensnared, can be coaxed to go wherever the writer wants.

Before Mary teacher, I had no love for poetry; mostly because of how we were tested on it. We had to memorize long quirky poems, including the goddamn punctuation! Any deviation was penalized. It was like memorizing a page of computer code; any small error in reproduction, and the code wouldn't compile. I hated it then, and its memory makes me angry even after all these years.

In 8th and 9th grade, I learnt how to let myself enjoy poems. I still hated the way we were tested; but I began the slow process of forgiving William Wordsworth, John Milton, and their ilk for their years of torture. The most important thing I learnt was that the best way to read poems, was to read them aloud - even if people around you gave you strange looks.

In terms of our writing ability, all of us are a work in progress. We read, we learn, we change. And our writing changes with us. In many ways, it is like our signature. With age, there are subtle shifts, lots of rounding of edges, and a move towards directness and simplicity.

Most of the time, this change is gradual, but sometimes there are drastic "phase transitions". High-school was like that for me - I heard my own voice for the first time.

Perhaps the most enduring lesson I learnt was the irresistible pull of a story well told. I remember laughing uncontrollably, while we read a Don Quixote story in Mary teacher's class. I wanted to stop laughing; I knew I was embarrassing myself. But I just couldn't. The harder I tried, the worse it got.

The bulk of my writing these days is technical or semi-technical. The goal is to illuminate, rather than to entertain. While this imposes some constraints, narratives are just as important in technical writing (unless you are writing a manual). It is perhaps the only writing lesson I try to actively cultivate in my students.

People like stories, and scientists are people. The papers I enjoy the most have many elements of good story-telling. How did we get into this bind? Who/what are the key actors/methods? How do things unfold? What is the moral of the story/paper?

For that lesson alone, I am forever grateful to Mary teacher!

Thursday, August 25, 2016

Linear Algebra Insights

3Blue1Brown has a wonderful new series on the "Essence of Linear Algebra", where he visually explores fundamental topics in linear algebra.


Worth a watch!

Sunday, August 21, 2016

Interesting Things I Learnt Last Week

1. Yoga might not be as old as we thought (Surprisingly Awesome podcast)

Apparently everyone else was in on this one a long time ago.
Goldberg is the author of The Goddess Pose: The Audacious Life of Indra Devi, the Woman Who Helped Bring Yoga to the West. Her book traces the modern Western practice of yoga to a Russian woman named Indra Devi, who was born in 1899 with the birth name Eugenia Peterson. Devi became interested in yoga after reading about it in a book written by an American new-age thinker. She studied the practice in India before introducing it to political leaders in Russia and Shanghai and, in 1947, bringing it to America, where her students included Hollywood celebrities like Greta Garbo and Gloria Swanson.
2. Why the "F" and "J" keys have bumps on them (Quora)

Short Answer: So you can type without looking at the keyboard.

Friday, August 19, 2016

Counting by Fingers

Interesting post on allowing kids to use fingers to count. I never quite understood the opposition in the first place.

Hidden near the end is an interesting technique for multiplying by 6, 7, 8, 9, or 10.


To multiply 8 by 7, say:

Algorithm:

1. Touch the "8" and "7" fingers
2. The number of fingers including the touching fingers (5, here) forms the tens place (5 * 10 = 50)
3. Multiply the number of fingers remaining on either hand (3 * 2 = 6)
4. Sum step 2 and 3 (50 + 6 = 56).

Can you figure out why this technique works?
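Before reaching for pencil and paper, you can at least convince yourself the trick always works, by brute force. A quick sketch (finger_multiply is my own name for it; the two fingers that touch encode the factors):

```python
def finger_multiply(a, b):
    """The finger trick for a, b between 6 and 10."""
    tens = (a - 5) + (b - 5)         # fingers up to and including the touch
    units = (10 - a) * (10 - b)      # product of the remaining fingers
    return 10 * tens + units

# Check the worked example, then every pair from 6 x 6 to 10 x 10:
assert finger_multiply(8, 7) == 56
assert all(finger_multiply(a, b) == a * b
           for a in range(6, 11) for b in range(6, 11))
```

Note the trick survives a "carry": for 6 x 7 the remaining fingers give 4 x 3 = 12, and 10 + 12 = 42 still comes out right.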

Tuesday, August 16, 2016

F2PY: Interfacing Fortran and Python

Python is notoriously slow with loops. One of the many ways to speed things up is to incorporate subroutines written in C or Fortran into Python.

F2PY is a program that enables us to do just that: "provide a connection between Python and Fortran languages." It lets us wrap Fortran code (you'll need a compiler like gfortran) for use in Python.

You can thus salvage legacy Fortran 77 code, or write "new" Fortran 90 code. The latter is much easier to incorporate.

Basic Steps:

1. Write a subroutine file in Fortran 90. Use the "intent" modifier to express the interface. Avoid allocatable arrays.
2. Use f2py to generate a python interface/signature file ("pyf" file) and a module that can be imported by python. You may have to edit the signature file sometimes.
3. Compile the python module.

Some resources:

1. Robert Johansson has a very nice set of iPython notebooks for Scientific Computing with python. The first part of the lecture "Lecture-6A-Fortran-and-C.ipynb" provides a gentle introduction to F2PY. If you don't have jupyter/iPython notebooks installed on your computer, a PDF of the entire set of notebooks is also available on his github site.

2. A nice example is available here.

3. The official site also provides a cheat-sheet for three different ways to wrap things.

4. Another nice introduction, which goes on to talk about stuff beyond the basics.

I plan on demonstrating this on a practical example shortly.

Wednesday, August 10, 2016

Block Averaging: Matlab and Python

I'd written about block averaging to estimate error bars from correlated time-series in a couple of blog posts nearly three years ago. Here are the two posts, which explain the motivation and logic behind this technique (post1 and post2).

I wrote programs to carry out this operation in Matlab and Python.

The required input is a data-stream of correlated "x" samples. There are optional flags for turning onscreen printing on or off, and for limiting the maximum block size. The default choice for the maximum block size ensures that the datastream is chopped into at least 4 blocks.

The output is a set of three arrays: the block size, the block variance, and the block mean.

The program prints out the mean and variance corresponding to the largest block sizes on screen. This may or may not be the "best choice". A quick look at the plot (by turning the isPlot flag on) will help ascertain this.
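For readers who don't want to download the programs, the core of the method fits in a few lines. This is a sketch; the function name and interface below are illustrative, not those of the Matlab/Python programs linked above:

```python
import numpy as np

def block_average(x, max_block=None):
    """Block-average a correlated series x; return the block sizes, the
    estimated variance of the mean, and the block means."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_block is None:
        max_block = n // 4            # default: keep at least 4 blocks
    sizes = np.arange(1, max_block + 1)
    variances, means = [], []
    for m in sizes:
        nb = n // m                   # number of complete blocks
        blocks = x[:nb * m].reshape(nb, m).mean(axis=1)
        means.append(blocks.mean())
        # variance of the overall mean, estimated from the block means
        variances.append(blocks.var(ddof=1) / nb)
    return sizes, np.array(variances), np.array(means)
```

As the block size grows past the correlation time of the series, the variance estimate should plateau; that plateau is the honest error bar.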

Friday, August 5, 2016

On Writing

I overheard this quote from E. L. Doctorow on my favorite language podcast "A Way with Words"
[The act of writing is] “like driving a car at night: you never see further than your headlights, but you can make the whole trip that way.”
This beautifully-expressed thought touched a chord.

Usually when I begin writing, I have a rough notion of the ideas I want to communicate, but they are all tangled up like a ball of wool. The hope is to untangle the mess (think), cut redundant strands ("there is no writing, only rewriting"), and weave a sweater (a narrative) - to push the wool metaphor a bit.

Writing helps me think. It helps me learn. It helps me see new patterns in things I already know.

I have a simplistic theory on why writing works as a thinking and learning tool.

Our mind stores thoughts and ideas like my kids store their toys. They are all over the place.


A sentence or paragraph or story has a linear structure. It has a beginning, a middle, and an end.

In computer science terms, ideas in our mind are like graphs, ideas on paper are like queues.

The act of writing makes us examine the graph carefully, figure out the relevant or important links, and to project them into a one-dimensional (or quasi 1D) narrative.


Here's a game you can play with a kid that captures some of these thoughts.

Look at a map of the world showing different countries. These are like the ideas in your head. Each country borders other countries or water bodies (a graph).

Now pick any two countries, say the US and Romania. Suppose the goal is to find a path from the US to Romania, keeping track of the boundaries you cross. Perhaps you want to minimize these crossings (or perhaps, you want to take the scenic route).

The act of figuring out an "optimal" path forces us to project the map of the world onto a queue. If we are attentive and lucky, we might learn new things.
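In computer-science terms, the map game is breadth-first search: the shortest path in an unweighted graph is exactly the route with the fewest crossings. A sketch, over a tiny made-up border graph (the adjacency list below is illustrative, not an accurate map):

```python
from collections import deque

def fewest_crossings(borders, start, goal):
    """Breadth-first search over the border graph; the first path
    to reach the goal crosses the fewest boundaries."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in borders.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A toy graph: each key borders the countries/water bodies in its list.
borders = {
    "US": ["Atlantic"],
    "Atlantic": ["US", "Portugal"],
    "Portugal": ["Atlantic", "Spain"],
    "Spain": ["Portugal", "France"],
    "France": ["Spain", "Germany"],
    "Germany": ["France", "Austria"],
    "Austria": ["Germany", "Hungary"],
    "Hungary": ["Austria", "Romania"],
    "Romania": ["Hungary"],
}
path = fewest_crossings(borders, "US", "Romania")
```

The queue in the code is literal: the graph of neighbors gets projected, one step at a time, onto a one-dimensional sequence of crossings.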

Tuesday, August 2, 2016

Fama and Thaler on Market Efficiency

This is a fantastic moderated conversation between Eugene Fama and Richard Thaler on the question of "Are Markets Efficient?"


While it is fashionable to bash the efficient market hypothesis (EMH) these days, the wonderful discussion highlights many of the nuances.

Fama posits that the EMH is a useful model, even if it is not perfectly true all the time. Pointing out occasional anomalies doesn't invalidate the model. Furthermore, one has to be careful about hindsight bias (bubbles for example) before rejecting the EMH.

It should be understood that the EMH is not a deterministic model in the same sense as physical laws or models (example: Newton's laws of motion). Instead, it bears resemblance to probabilistic or statistical models (example: weather models).

A single anomaly can completely reject a deterministic model.

If a model says "A implies B", and you find a counter-example, where "A does not imply B", then you have to reject or amend the model "A implies B".

A real example is the old belief that heavy objects fall faster than lighter objects, even in the absence of air resistance. A single experiment (or thought experiment) is enough to destroy the model.

On the other hand, anomalies don't necessarily eliminate probabilistic models.

Consider a model that says "A often implies B", such as "cigarette smoking often implies lung cancer". You find someone who smoked a pack every day and lived to 90. That example is treated as an anomaly, or "the exception that proves the rule".

EMH, perhaps, belongs to the second group.

If you think like a Bayesian, your belief in the model should decrease as the evidence against the model begins piling up.
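Here is a toy sketch of that Bayesian logic; the numbers (a 90% prior, and anomaly likelihoods of 5% and 30%) are illustrative assumptions, not estimates about real markets:

```python
def update(prior, likelihood_if_model, likelihood_if_not):
    """One Bayes update: P(model | anomaly observed)."""
    num = likelihood_if_model * prior
    return num / (num + likelihood_if_not * (1 - prior))

# Anomalies are assumed rare if the model holds (5%), common if not (30%).
belief = 0.9
for i in range(1, 4):
    belief = update(belief, 0.05, 0.30)
    print(f"after anomaly {i}: belief = {belief:.3f}")
# belief drops: 0.600, 0.200, 0.040
```

Each anomaly chips away at the belief; no single one demolishes it.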

Wednesday, July 27, 2016

RIPE: Reconstructing Reality and Material Structure

In Plato’s allegory of the cave, he imagines chained prisoners facing a blank wall. A raging fire burns far in the background, behind their backs. The prisoners watch shadows cast by objects passing in front of the fire on the wall ahead of them. They form ideas about the true nature of the objects, from these shadows.

One can only guess the true nature of the objects from these shadows. The 2D shadows that 3D objects cast project some information and conceal the rest.

For example, a sphere and a cylinder can both cast a circular shadow.

Perspective is everything (Google "giraffe or elephant illusion" if the video below doesn't work).


This problem bedevils characterization of materials as well. A particular measurement or test (chromatography, rheology, thermal tests, scattering, microscopy etc.) gives us but one look at an unknown sample.

If we have a fairly good idea of what the material might be, to begin with, then perhaps this is enough.

More often than not, especially with new materials, each new test narrows the scope of the possible.

The fable of the six blind men and the elephant (perhaps a more pedestrian retelling of Plato's allegory) provides a potential path out. In the story, six blind men touch different parts of an elephant and form radically different notions of what an elephant is.

The only way (for the blind men) to come up with a more realistic notion of what an elephant looks like is to find ways of combining their knowledge to come up with a consensus view.

Saturday, July 23, 2016

On Magic, Knowledge, and Wonder

“It’s still magic even if you know how it’s done.” - Terry Pratchett

The space that Science occupies in popular imagination is not always flattering. By demystifying “magic”, it is often accused of making the world less interesting.

The metaphor is sometimes that of the bully who blurts out the secret of Santa Claus to unsuspecting kids, forever destroying the magic, and ending innocence.

Science is portrayed as cold, logical, disciplined, linear, rational, and precise.

Art, on the other hand, is seen as the antithesis. Art is beautiful, creative, nonlinear, and immersive.

One is a machine; the other has a soul.
One is Vulcan; the other is Human.

This dichotomy between “thinking” and “feeling”, while convenient, is a giant fabrication. In other words, scientists, even when “doing serious science”, are more like Mr. Spock - half Vulcan, and half human.

Consider a discipline like math, which is seen as cold and calculating (pun intended). It relies heavily on leaps of imagination to propose bold conjectures, with order, beauty, and intuition as the only guides.

Substantial parts of it, in other words, are art.

Yes, it is true that Science occasionally destroys ideas that some people find magical. Often, however, a deeper beauty is revealed.

For example, the evidence-based claim that we came from something like the Big Bang, may superficially resemble the bully destroying the secret of Santa or a divine Creator.

But really, isn’t the implication far more amazing? I don’t know about you, but I find the notion that we are all made of “star-stuff” totally awesome!

What an incredible Universe we live in!

Tuesday, July 19, 2016

What I Learned Last Week

1. A single Gypsy moth may not amount to much, but collectively ... [nasa]
The ecological consequences of gypsy moth outbreaks are often cosmetic, but they can become serious. Deciduous trees can normally withstand one or two years of defoliation by caterpillars, but three or more successive years of severe defoliation can result in widespread tree mortality.
2. Orcas are dolphins too! [wnyc]

On the Leonard Lopate show, I learned that killer whales are really dolphins. The label "whale" apparently has no biological meaning - it is an informal term that generally means "big". Thus, we have whale sharks (the world's biggest fish), and blue whales (the world's biggest mammal).

3. Swordfish and drag reduction [nationalgeographic]
So Videler thinks that the gland is yet another drag-reducing adaptation. Its oil repels water and allows incoming currents to flow smoothly over the surface of the bill. That depends on the oil staying warm, but swordfish have a solution for that, too. They have modified some of their eye muscles into heat-producing organs that warm their blood and sharpen their vision as they hunt. This same heating effect could liquefy the drag-reducing oil, allowing it to ooze out of the glands just as the fish have the greatest need for speed.

Friday, July 15, 2016

Sunday Afternoon Fun

I spent a glorious afternoon playing the "can you sketch this picture without lifting your pencil?" game with my daughter. We started with a few standard ones that you can find with a simple Google Image search like:

click on image to enlarge
Soon enough, we just started making up random pictures for each other.

After a while, I told her the first part of the secret (Euler Paths), which tells you whether a solution exists.
  • identify all the points (nodes)
  • write down the number of lines that meet at each node
  • scratch off nodes with an even number of lines
  • count the number of (remaining) "odd" nodes
  • if this number is 0 or 2, then it is possible to solve the problem

As an example, the following figure
has 3 even nodes (with 2, 4, and 2 lines meeting at them), and 2 odd nodes. Thus, it is possible to sketch this "graph" without lifting your pencil.
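The checklist above translates almost line for line into a few lines of Python. A minimal sketch (it assumes the figure is connected, as these kiddie drawings invariably are; the example edge list is an invented square-with-a-diagonal figure, not the one pictured):

```python
from collections import Counter

def euler_path_exists(edges):
    """edges is a list of (node, node) pairs, one per line in the figure.
    Returns True if 0 or 2 nodes have an odd number of lines meeting there."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# A square with one diagonal: nodes 1-4, five lines.
square_with_diagonal = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(euler_path_exists(square_with_diagonal))  # → True (exactly 2 odd nodes)
```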

Showing existence is only the first part; the second part is an algorithm to actually construct a solution, once you know the problem is solvable.

Wikipedia lists two methods (both of which are over a century old): Fleury's and Hierholzer's algorithms. But for "kiddie problems" a simpler rule of thumb works in most cases:

So this is the second secret: start from an odd node! For most cases, there are multiple solutions, and it is hard to go wrong with this simple "suggestion"!

PS: If the number of odd nodes is 0, it doesn't matter where you start, and your start and finish points are exactly the same!
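Hierholzer's algorithm itself fits in a dozen lines. Here is a sketch, written iteratively and using the start-at-an-odd-node rule above; it assumes the figure is connected and passes the 0-or-2-odd-nodes test:

```python
from collections import defaultdict

def euler_path(edges):
    """Hierholzer's algorithm: walk edges until stuck, backtrack, and stitch
    the sub-tours together. Consumes each edge exactly once."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Start at an odd-degree node if there is one (the "second secret").
    start = next((n for n in adj if len(adj[n]) % 2 == 1), next(iter(adj)))
    stack, path = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)   # consume the edge in both directions
            stack.append(v)
        else:
            path.append(stack.pop())
    return path[::-1]

# The square-with-a-diagonal figure again; one valid pencil path is printed.
print(euler_path([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))
```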

Wednesday, July 13, 2016

Links

1. Defending the Traditional Lecture (NYT)
A lecture is not the declamation of an encyclopedia article. In the humanities, a lecture “places a premium on the connections between individual facts,” Monessa Cummins, the chairwoman of the classics department and a popular lecturer at Grinnell College, told me. “It is not a recitation of facts, but the building of an argument.” 
Absorbing a long, complex argument is hard work, requiring students to synthesize, organize and react as they listen. In our time, when any reading assignment longer than a Facebook post seems ponderous, students have little experience doing this. Some research suggests that minority and low-income students struggle even more. But if we abandon the lecture format because students may find it difficult, we do them a disservice. Moreover, we capitulate to the worst features of the customer-service mentality that has seeped into the university from the business world. The solution, instead, is to teach those students how to gain all a great lecture course has to give them.
2. Gravitational Waves Explained (minutephysics)


3. Kokichi Sugihara's amazing illusions


Wednesday, July 6, 2016

Force from Pairwise Potentials

Consider a pairwise potential, such as the Lennard-Jones potential, \[U(r) = 4 \epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right].\]Question: How can we get the force (magnitude and direction) from such a potential?

For simplicity let us assume that we have only two particles A and B. Specifically, what is the force on particle A due to particle B?

We know that the force is the negative gradient of the potential; therefore, \[\begin{align*}
\mathbf{f}_{AB} & = -\dfrac{dU(r_{AB})}{d\mathbf{r}_A} \\
& = -\dfrac{dU(r_{AB})}{dr_{AB}} \color{blue} {\dfrac{dr_{AB}}{d\mathbf{r}_A}}
\end{align*}\] How do we evaluate the term in blue?

Let \(\mathbf{r}_A = x_A \mathbf{e}_x + y_A \mathbf{e}_y + z_A \mathbf{e}_z\), and \(\mathbf{r}_B = x_B \mathbf{e}_x + y_B \mathbf{e}_y + z_B \mathbf{e}_z\) be the positions of the two particles. Let \(\mathbf{r}_{AB} = \mathbf{r}_B - \mathbf{r}_A\) be the vector pointing from A to B. The distance between the two particles is:
\[r_{AB} = \sqrt{x_{AB}^2 + y_{AB}^2 + z_{AB}^2},\] where \(x_{AB}^2 = (x_B - x_A)^2\), etc.

We can now try to tackle the blue term: \[\begin{align*}
\dfrac{dr_{AB}}{d\mathbf{r}_A} & = \dfrac{dr_{AB}}{dx_A} \mathbf{e}_x + \dfrac{dr_{AB}}{dy_A} \mathbf{e}_y + \dfrac{dr_{AB}}{dz_A} \mathbf{e}_z\\
& = \dfrac{1}{2 r_{AB}} \left(\dfrac{dx_{AB}^2}{dx_A} \mathbf{e}_x + \dfrac{dy_{AB}^2}{dy_A} \mathbf{e}_y + \dfrac{dz_{AB}^2}{dz_A} \mathbf{e}_z \right) \\
& = \dfrac{1}{2 r_{AB}} \left(-2 x_{AB} \dfrac{dx_A}{dx_A} \mathbf{e}_x - 2 y_{AB} \dfrac{dy_{A}}{dy_A} \mathbf{e}_y - 2 z_{AB} \dfrac{dz_{A}}{dz_A} \mathbf{e}_z \right) \\
& = -\dfrac{\mathbf{r}_{AB}}{r_{AB}} \\
& = -\hat{\mathbf{r}}_{AB}
\end{align*}\] Thus, the force is, \[\mathbf{f}_{AB}(r) = \dfrac{dU(r)}{dr} \hat{\mathbf{r}}_{AB}.\] If \(U^{\prime}(r)\) is negative (repulsive regime of LJ, for instance), the force acts along \(-\hat{\mathbf{r}}_{AB}\); it points from B toward A, pushing particle A away from particle B. If \(U^{\prime}(r)\) is positive (attractive part of LJ), the force acts along \(+\hat{\mathbf{r}}_{AB}\), pulling A toward B.
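A quick numerical sanity check of this result: compare the analytic force with a centered finite difference of the potential. The sketch below assumes \(\epsilon = \sigma = 1\) and arbitrary particle positions:

```python
import numpy as np

def lj_force_on_A(rA, rB, eps=1.0, sig=1.0):
    """f_A = U'(r) * rhat_AB, with rhat_AB pointing from A to B."""
    rAB = rB - rA
    r = np.linalg.norm(rAB)
    dU = 4 * eps * (-12 * sig**12 / r**13 + 6 * sig**6 / r**7)  # U'(r)
    return dU * rAB / r

def U(rA, rB, eps=1.0, sig=1.0):
    r = np.linalg.norm(rB - rA)
    return 4 * eps * ((sig / r)**12 - (sig / r)**6)

# Centered finite difference of U with respect to rA: f = -dU/drA.
rA, rB = np.array([0.0, 0.0, 0.0]), np.array([1.1, 0.3, -0.2])
h = 1e-6
f_num = np.empty(3)
for i in range(3):
    dp = np.zeros(3); dp[i] = h
    f_num[i] = -(U(rA + dp, rB) - U(rA - dp, rB)) / (2 * h)

print(np.allclose(lj_force_on_A(rA, rB), f_num, atol=1e-5))  # → True
```

The force also vanishes at the LJ minimum, \(r = 2^{1/6}\sigma\), as it should.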

Saturday, July 2, 2016

Links

1. John D. Cook has an interesting take on why student group projects don't work.
The best teams have people with complementary skills, but similar work ethic. Academic assignments are the opposite. There’s not much variation in skills, in part because students haven’t yet developed specialized skills, and in part because students are in the same class because they have similar interests. The biggest variation is likely to be work ethic. [...] The person doing most of the work learns that it’s best to avoid working with teams.
 2. Don't be a Grammar Nazi. A linguist lashes out:
What changed me was realizing that language isn’t some delicate cultural artifact but an integral part of being human. I found this out by reading what scholars of language — linguists, grammarians and cognitive scientists — say about the subject. It fascinated me. Language — which all human societies have in immense grammatical complexity — is far more interesting than pedantry.
3. Academic Administration as a Calling:
To be sure, no one completes a Ph.D. (as opposed to an Ed.D.) in order to enter campus administration. But for some of us, at a certain point in our careers, administrative work is no longer something to dread or to apologize for. For some of us, serving as chair of a department or dean of a college comes unbidden as a second, midcareer calling. Too often, perhaps, it calls us away from the work we were destined to do, and those tend to be the stories we hear. But sometimes, taking on administrative duties is precisely the culmination and fulfillment of that scholarly work, allowing us, for the first time, to recognize our past as prologue.

Friday, July 1, 2016

Numpy: Arrays, Vectors and Matrices

Define numpy arrays of different shapes:

import numpy as np
a = np.array([1, 2])
b = np.array([[1, 2], [3, 4]])

print(a.shape, type(a))
print(b.shape, type(b))

(2,) <class 'numpy.ndarray'>
(2, 2) <class 'numpy.ndarray'>

Numpy arrays are not exactly row or column vectors or matrices. Of course, we can construct proper row/column vectors and matrices using the np.matrix construct. We can even use a simpler Matlab/Octave-like syntax to build matrices, using ";" to start new rows, and commas or spaces to separate successive row elements.

am = np.matrix([1, 2]).T  # column vector
bm = np.matrix('1, 2; 3, 4')

print(am.shape, type(am))
print(bm.shape, type(bm))

(2, 1) <class 'numpy.matrix'>
(2, 2) <class 'numpy.matrix'>



Matrix Operations


The array "a" is not a linear algebra vector, so taking its transpose doesn't quite produce the expected result.

print(a, a.T)
[1 2] [1 2]

But it works for matrices defined as arrays of arrays.

print(b, b.T)
[[1 2] [3 4]] [[1 3] [2 4]]

To multiply vectors and matrices that are ndarrays, use the "dot" command.

print(np.dot(b, a))
[ 5 11]

For actual matrix objects, the * operator is overloaded, and can perform matrix multiplication.

print(bm * am)
[[ 5] [11]]

np.dot(bm,am)
matrix([[ 5], [11]])


Converting array to row or column vector


It is easy to convert a numpy array to a linear algebra row or column vector, either by using the "reshape" command (a.reshape(2,1)), or alternatively,
a = np.array([1, 2])
a = a[:, np.newaxis] # makes into column vector

print(a, a.shape, type(a))
[[1] [2]] (2, 1) <class 'numpy.ndarray'>

But you need to be careful with the * operator. Don't use it unless all the underlying objects are matrix types.

print(b * a)
[[1 2] [6 8]]

Each row of b was multiplied elementwise by the corresponding element of a (numpy broadcasting); this is not matrix multiplication.



Efficiency


That said, np.dot on plain array types appears to be faster than the other alternatives.

%timeit bm*am
The slowest run took 9.73 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 5.76 µs per loop
%timeit np.dot(b,a)
The slowest run took 7.84 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 1.52 µs per loop
%timeit np.dot(bm,am)
The slowest run took 7.88 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 3.3 µs per loop

Sunday, June 26, 2016

Background Material For Students Entering SciComp

Our department, Scientific Computing, is highly interdisciplinary, and we get graduate students from very different backgrounds.

Here is a list of resources I recommend incoming grad students look at before they start.

  • These video lectures by Gilbert Strang and Cleve Moler present a quick summary of ODEs and major Linear Algebra topics.
  • In addition to a compiled language like C++/Fortran/Java, it is useful to know either Matlab or Python.
  • This contains a set of extremely useful jupyter or iPython notebooks (PDF link if you don't have jupyter installed) which provide a gentle introduction to Python in Scientific Computing. It also includes several advanced topics (parallel programming, incorporating C and Fortran, version control etc.)

Friday, June 24, 2016

Good and Bad Metrics

We learnt not that long ago that 66 journals were banned by Thomson-Reuters for abusing impact factors through excessive self-citation. While the crimes committed by these journals may have been egregious, the subtle, and sometimes not-so-subtle, abuse of impact factors is pervasive.

Curiously, I don't find the crimes surprising. In fact, I would be surprised if such manipulations did not occur.

If someone (Thomson-Reuters) tells you, "I will measure your performance by this simple yardstick," and that metric has real consequences (whether libraries buy your journal), then clearly you (the publisher) are going to do everything you can to push that metric as high as you can.

If you build a simple metric or index to quantify complex stuff (academic ranking of universities, IQ to measure smartness, student performance on standardized tests to determine teacher pay etc.), which is linked to a "real" prize, you can rest assured that your metric will be gamed. As I have said before:
I gather this fascination has something to do with our inability to grapple with multidimensional complexity. We try to project a complex high-dimensional space onto a simple scalar. We like scalars because we can intuitively compare two scalars. We can order them, plot them on graphs, and run statistics on them with ease.
We can step outside the academic realm and look at a few examples.

The cases where a simple metric works best are those where the "thing" being measured is itself simple. For example, in a 100m sprint or a high jump, the only thing you care about is speed or height, respectively. I think of the underlying stuff as being "one-dimensional". There is nothing to game here (no pun intended); if you can run faster, you deserve to be champion.

One example, where a simple metric of a somewhat complex thing actually works alright, is Google PageRank. Before Google came along with the really cool idea of "one link, one vote", which reduced the complex task of organizing the relative importance of websites to solving an eigenvalue problem, web search was really hit or miss.
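To make the eigenvalue-problem remark concrete, here is a minimal power-iteration sketch of "one link, one vote" on a made-up four-page web; the link structure is invented, and 0.85 is the commonly quoted damping factor:

```python
import numpy as np

# Tiny made-up web: links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# Power iteration converges to the dominant eigenvector (the PageRank).
r = np.ones(n) / n
for _ in range(100):
    r = (1 - d) / n + d * M @ r

print(np.round(r, 3))  # page 2 collects the most votes; page 3, with no inlinks, the fewest
```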

When people did not know about the metric (PageRank in this case), it worked beautifully. Once the metric was public knowledge, and Google became a virtual monopoly in this business, the business of "search engine optimization" (SEO), which seeks to game the metric, suddenly became very lucrative.

Now Google has to do secret stuff to keep the abusers out. Their efforts, and the consolidation of the web (wikipedia is almost always the #1 link), have helped, so that the metric has not been completely compromised (or so I think, since I don't know what Google doesn't show me).

This is a useful example, since it exposes the conflict between Google Search users (who want the more relevant results to surface to the top), and websites (who want to surface to the top, regardless of relevance), that Google has to manage.

A final example, whose story perhaps has the greatest relevance to academic short-cut metrics, is the somewhat whimsical metric of FICO credit scores in the US. These scores are supposed to determine a person's credit-worthiness, and have real significance if you want to get a mortgage or car loan. A pesky problem with the score is that it treats with contempt a perfectly frugal person who pays her bills on time (in cash), and has never taken on any form of debt.

A bigger problem is the reliability of the score (see Credit Scores: Not-so-Magic-Numbers), perhaps because the key ingredients that go into the score are reasonably well-known and easily gamed. One of the heartening reactions:
Golden West Financial (WB), a longtime FICO skeptic, is one of the few mortgage lenders to minimize its use in recent years—and it credits that decision for its below-average mortgage losses. Now a subsidiary of Wachovia (WB), Golden West's delinquency rate on traditional mortgages is running at 0.75%, vs. 1.04% for the industry. Richard Atkinson, who oversees part of Golden West's mortgage unit from San Antonio, says the bank calls to verify employment, examines a borrower's stock holdings and other assets, and employs a team of appraisers who are judged not by the volume of loans but by the accuracy of the appraisal over the life of the loan. "The way we do business is a lot more costly, and cost was a big reason many competitors embraced credit scoring," he says. "But some of our best borrowers had low FICO scores and our worst had FICO scores of 750."
How great it would be if we academics adopted a similar approach. 

Wednesday, June 15, 2016

The Savitzky-Golay Filter

In 1964, Abraham Savitzky and Marcel Golay published a paper "Smoothing and differentiation of data by simplified least squares procedures" in Analytical Chemistry, which has been heralded as one of the 10 most influential papers in the journal's history.

The Savitzky-Golay filter (SGF) is a digital filter used to smooth noisy data. The basic idea is to slide a window across the dataset, fit a low-order polynomial to the points in each window, and use the fitted polynomial to estimate the smoothed value at the window's center.

Implementations are available in Octave/Matlab and in recent versions (>0.16) of scipy for python.

Here is a potential use case. I did a Lennard-Jones melt simulation using LAMMPS, and obtained the following pair correlation function g(r) [click to enlarge].

If you look closely, there is a fair amount of noise due to binning.
Let us use SciPy to smooth the noise.

import numpy as np
from scipy.signal import savgol_filter

r, gr = np.loadtxt('gr.dat', unpack=True)  # read g(r) data from disk
gsm = savgol_filter(gr, 15, 4)             # window size 15, polynomial degree 4

The second argument (15) is the window or subset size, and has to be an odd number; the third argument (4) is the degree of the polynomial to fit. When I plot the smoothed curve:

import matplotlib.pyplot as plt

plt.plot(r, gr, '.')
plt.plot(r, gsm, label='SG')

If you look closely, again:

One can experiment with the window size and the degree of the polynomial. In general, a larger window makes the curve smoother, while a higher-degree polynomial tracks the data more closely (and smooths less). The figure below shows windows of size 7 and 31 with a degree 4 polynomial.
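Since the g(r) data file isn't included here, the same effect can be demonstrated end-to-end on synthetic data; the test curve and noise level below are arbitrary choices:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for g(r): a smooth curve plus binning-like noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
clean = np.exp(-0.3 * x) * np.sin(2 * x)
noisy = clean + rng.normal(0, 0.05, x.size)

smooth = savgol_filter(noisy, window_length=15, polyorder=4)

# The filtered curve should sit closer to the truth than the raw data.
print(np.abs(noisy - clean).mean() > np.abs(smooth - clean).mean())  # → True
```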


Thursday, June 9, 2016

The Upselling of Grit

You might have seen this TED video on the importance of grit


The key pitch, as summarized by Daniel Engber at Slate, rests on two ideas: (i) grit is among the best predictors of success, and (ii) we can change our level of grit.

The pitch is successful in part because the first idea seems obvious: we all remember examples of underdogs who overcame incredible odds, triumphing over superior opponents (or overwhelming circumstances) through sheer perseverance and hard work. The second idea appeals to our sensibility of fairness by suggesting that your success is not predestined by the circumstances of birth. Rather, it is within your circle of influence.

Engber puts it this way:
[...] optimistic message that you find in Grit: It’s possible for all of us to change or, as one book puts it, to feel the triumph of a “neuroplastic transformation.” They tell us that we needn’t be the victims of our meager talents or our lousy genes.
Critical examination reveals a more pessimistic picture, one in which that ugly monster, IQ, raises its unwelcome head.

In this Vox interview, "Why IQ matters more than Grit," Brian Resnick has the following exchange with Stuart Ritchie:
BR: I found a lot of this research to be depressing. In your book, you lay out a compelling case that IQ reliably is correlated with longevity, economic success, and physical well-being. You also make it clear that IQ doesn't change all that much throughout our lives. We're kind of stuck with what we've got. I guess I find it unfair. 
SR: First of all, the most important thing to say is that it doesn’t matter if it’s depressing if that’s what the research says. One can’t deny it. 
Think about how it would be if it was the other way around; there might actually be some bad outcomes. 
Because then parents would be able to totally control their kids with bad parenting, and wreck kids’ IQs for the rest of their lives. Governments could have big influences on people’s IQs by enacting different policies toward different sets of people in the country.
It also turns out that IQ is strongly correlated with measures like emotional intelligence and grit itself.


Tuesday, June 7, 2016

Diffusivity Induced Segregation

It takes only a small difference in size or shape for particles to spontaneously demix. The famous "Brazil Nut Effect" is one common example.

There are perhaps sociological analogs, where racial, income-based, or religious clustering arises from small differences. A Google or Google-Scholar search for "auto-segregation" or  "self-segregation" brings out many of these examples.

It was with much interest that I read "Binary Mixtures of Particles with Different Diffusivities Demix" (paywalled). The abstract reads:
The influence of size differences, shape, mass, and persistent motion on phase separation in binary mixtures has been intensively studied. Here we focus on the exclusive role of diffusivity differences in binary mixtures of equal-sized particles. We find an effective attraction between the less diffusive particles, which are essentially caged in the surrounding species with the higher diffusion constant. This effect leads to phase separation for systems above a critical size: A single close-packed cluster made up of the less diffusive species emerges. Experiments for testing our predictions are outlined.
There is a non-paywalled video that shows the demixing process in the supplemental materials section.

Here is a decent commentary:
Soluble substances normally become evenly distributed throughout the solvent medium, thanks to passive molecular diffusion. The rate at which this occurs depends on the diffusion constant of the molecule concerned, whose magnitude increases with the temperature. In mixtures that have attained thermal equilibrium, particles of equal size normally exhibit the same diffusion constant. "We were interested in what happens when particles of equal size differ in their diffusion constants," says Simon Weber, first author on the new paper.

Friday, June 3, 2016

Quotables

1. Alain de Botton in "Why You Will Marry the Wrong Person"
The person who is best suited to us is not the person who shares our every taste (he or she doesn’t exist), but the person who can negotiate differences in taste intelligently — the person who is good at disagreement. Rather than some notional idea of perfect complementarity, it is the capacity to tolerate differences with generosity that is the true marker of the “not overly wrong” person. Compatibility is an achievement of love; it must not be its precondition.
2. "Forgiveness means letting go of the hope for a better past." - @LamaSuryaDas
 3. “The perfect man employs his mind as a mirror. It grasps nothing; it refuses nothing. It receives, but does not keep.” -Laozi
4. "I can see that you have a complex problem: it has a real and an imaginary part." -- John Tukey
5. Opportunities multiply as they are seized. (Sun Tzu, The Art of War)