Friday, December 29, 2017

Teaching Kids to Code

I've been on the lookout for video tutorials that teach kids in upper elementary school the basics of coding.

Some of my requirements/biases are:
  • a general-purpose fully-featured language that one can grow with. Presumably, this is the start of a multi-year commitment. This eliminates awesome, but specialized, tools like Scratch.
  • a language with rich library support, so that one can get started quickly and start prototyping. This probably eliminates most fully featured compiled languages like C++ etc. 
  • A language that is cross-platform, and can do graphics well. Use art (mathematical perhaps) as the window.
Python seems like a potential choice.

I found a superb series of YouTube lectures, which caters directly to my requirements. Here is a link to the playlist from KidsCanCode.

Saturday, December 16, 2017

Taubes, Sugar, and Fat

Last week, I listened to Shane Parrish's interview with Gary Taubes on the Knowledge Project podcast. Taubes provides an informative historical perspective on some aspects of research in nutrition science.

His view is not charitable. Perhaps deservedly so.

I have to confess that I haven't read the book "The Case Against Sugar", but I have followed Taubes' arguments for quite a while. His thesis, essentially the same as in his previous two books, is that we should ditch the "low-fat high-carb" diet for a "low-carb high-fat (and protein)" diet.

The points he makes are provocative, and interesting.

That said, I wish Shane had challenged Taubes more, and held him accountable.

This counter-point by Stephan Guyenet points to numerous reasonable flaws in Taubes' thesis. It is worth reading in its entirety, if only for the balance it provides.

A couple of other rebuttals are available here and here.

Tuesday, December 12, 2017

Randomized SVD

Dimension reduction is an important problem in the era of big data. SVD is a classic method for obtaining low-rank approximations of data.

The standard algorithm (which finds all the singular values) is one of the most expensive matrix decomposition algorithms.

Companies like Facebook or Google deal with huge matrices (big data can be big). Often, they don't care about finding all the singular values - perhaps only the first 10 or 20. They may also not need exquisite precision in the singular values. Good approximations might do just fine.

Fortunately, there are randomized algorithms for finding SVDs which rest on a relatively simple idea: approximate the range of the matrix by repeatedly multiplying it with random vectors, and work with that smaller subspace.

The algorithm is fairly simple to implement:
(Figure: the randomized SVD algorithm, from Erichson et al.)
In Octave or Matlab, the code can be implemented in about 10 lines.
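For flavor, here is roughly the same logic as a minimal numpy sketch, assuming a Gaussian test matrix, a little oversampling (p), and a couple of power iterations (q); the parameter defaults are illustrative, not tuned:

import numpy as np

def rsvd(A, k, p=10, q=2):
    # approximate the range of A with a random test matrix
    m, n = A.shape
    Y = A @ np.random.randn(n, k + p)
    # optional power iterations sharpen the approximation for slowly decaying spectra
    for _ in range(q):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)          # orthonormal basis for the sampled range
    B = Q.T @ A                     # project A onto the small subspace
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :k], s[:k], Vt[:k, :]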

The resulting truncated SVD can be a surprisingly good approximation, and can shave orders of magnitude off the computation time (the savings improve as matrices get bigger).

For python, there are decent implementations of randomized SVD in the sklearn package, and the fbpca package from Facebook. This blog post shows some code to call these routines, and provides some benchmarks.

Thursday, December 7, 2017

More is Different

Last week, I read a nearly 50-year-old essay by P. W. Anderson (h/t fermatslibrary) entitled "More is Different" (pdf). It is a fascinating opinion piece.
  • "Quantitative differences become qualitative ones" - Marx
  • Psychology is not applied biology, nor is biology applied chemistry.
This other essay on the "arrogance of physicists" speaks to a similar point:
But training and experience in physics gives you a very powerful toolbox of techniques, intuitions and approaches to solving problems that molds your outlook and attitude toward the rest of the world. Other fields of science or engineering are limited in their scope. Mathematics is powerful and immense in logical scope, but in the end it is all tautology, as I tease my mathematician friends, with no implied or even desired connection to the real world. Physics is the application of mathematics to reality and the 20th century proved its remarkable effectiveness in understanding that world, from the behavior of the tiniest particles to the limits of the entire cosmos. Chemistry generally confines itself to the world of atoms and molecules, biology to life, wonderful in itself, but confined so far as we know to just this planet. The social sciences limit themselves still further, mainly to the behavior of us human beings - certainly a complex and highly interesting subject, but difficult to generalize from. Engineering also has a powerful collection of intuitions and formulas to apply to the real world, but those tend to be more specific individual rules, rather than the general and universal laws that physicists have found. 
Computer scientists and their practical real-world programming cousins are perhaps closest to physicists in justified confidence in the generality of their toolbox. Everything real can be viewed as computational, and there are some very general rules about information and logic that seep into the intuition of any good programmer. As physics is the application of mathematics to the real world of physical things, so programming is the application of mathematics to the world of information about things, and sometimes those two worlds even seem to be merging.

Sunday, November 26, 2017

Post-Thanksgiving Links

Some links to interesting scientific content:

1. How Wikipedia Tackles Fringe Nonsense (neurologica)

2. Seven Academic-World Lies (Mariana Cerdeira)

3. Numerically Approximating Ghosts (John D Cook)

4. An Archive of Projects Using Differential Equations (Zill)

Tuesday, November 14, 2017

History of PowerPoint

The history of MS Office is riveting.

This essay in IEEE Spectrum recounts the "Improbable Origins of PowerPoint". I did not know that Xerox PARC had such a direct influence on MS Office (including MS Word).

Reading the essay, one gets a sense for how fluid the desktop computer landscape was between the advent of the Apple Lisa and Microsoft's bundling of Word, Excel, and PowerPoint.

Monday, November 13, 2017

Exporting Numpy Arrays and Matrices to LaTeX

Over the past couple of years, a lot of my "numerical experimentation" work has moved from Octave to python/numpy.

I incorporate a lot of this work into my classes and presentations (made using beamer), and having a script to translate vectors and matrices to LaTeX format is handy.

In the past, I shared a Matlab/Octave script which does this.

Here is a python/numpy script which does something similar. The script

  • autodetects integers and floats
  • allows you to control the number of decimals for floats
  • allows you to optionally render floats in scientific format
  • right-justifies entries using the bmatrix* environment (good for negative numbers)
  • suppresses small values near zero (~ 1e-16)
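The script itself is linked above; as a flavor of the approach, here is a minimal sketch of such a converter (the function name and options here are hypothetical, not the actual script's):

import numpy as np

def np2latex(M, decimals=2, scientific=False, tol=1e-14):
    # render a 1D or 2D numpy array as a LaTeX bmatrix string
    M = np.atleast_2d(M)
    if np.issubdtype(M.dtype, np.integer):
        fmt = lambda v: str(v)
    elif scientific:
        fmt = lambda v: "{:.{}e}".format(v, decimals)
    else:
        # suppress tiny values (~1e-16) that are really zero
        fmt = lambda v: "{:.{}f}".format(0.0 if abs(v) < tol else v, decimals)
    rows = [" & ".join(fmt(v) for v in row) for row in M]
    return "\\begin{bmatrix}\n" + " \\\\\n".join(rows) + "\n\\end{bmatrix}"

print(np2latex(np.array([[1.0, -2.25], [3.14159, 1e-16]])))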

Monday, November 6, 2017

Python: Orthogonal Polynomials and Generalized Gauss Quadrature

A new (to me) python library for easily computing families of orthogonal polynomials.

Getting standard (generalized) Gauss quadrature schemes is extremely simple. For example, to get 13 nodes and weights for Gauss-Laguerre integration, correct up to 50 decimal places:

pts,wts = orthopy.schemes.laguerre(13, decimal_places=50)

The numpy Polynomial package provides similar functionality:

pts, wts = numpy.polynomial.laguerre.laggauss(13)

A nice feature (besides arbitrary precision) is that you can derive custom orthogonal polynomials and quadrature rules. All you need to provide is a weight function and domain of the polynomials. From the project webpage:
import orthopy
moments = orthopy.compute_moments(lambda x: x**2, -1, +1, 20)
alpha, beta = orthopy.chebyshev(moments)
points, weights = orthopy.schemes.custom(alpha, beta, decimal_places=30)
This generates a 10-point scheme for integrating functions over the interval [-1, 1], with weight function \(w(x) = x^2\).

Friday, October 27, 2017

Science Links

1. When the Revolution Came for Amy Cuddy (Susan Dominus in the NYT)
But since 2015, even as she continued to stride onstage and tell the audiences to face down their fears, Cuddy has been fighting her own anxieties, as fellow academics have subjected her research to exceptionally high levels of public scrutiny. She is far from alone in facing challenges to her work: Since 2011, a methodological reform movement has been rattling the field, raising the possibility that vast amounts of research, even entire subfields, might be unreliable. Up-and-coming social psychologists, armed with new statistical sophistication, picked up the cause of replications, openly questioning the work their colleagues conducted under a now-outdated set of assumptions. The culture in the field, once cordial and collaborative, became openly combative, as scientists adjusted to new norms of public critique while still struggling to adjust to new standards of evidence.
2. When correlations don't imply causation, but something far more screwy! (the Atlantic)
2a. John D. Cook follows up with "negative correlations" induced by success.

3. STEM resources for students from K-PhD, and beyond (PathwaysToScience)

Tuesday, October 24, 2017

Gauss and Ceres

Carl Friedrich Gauss was an intellectual colossus, whose work informed or revolutionized broad and seemingly unrelated swathes of science and math. In computational science, his name is attached to numerous methods for solving equations, integrating functions, and describing probabilities.

Interestingly, perhaps two of his most enduring contributions - Gaussian elimination for solving systems of linear equations, and the normal or Gaussian distribution - are linked through the fascinating story of how Gauss determined the orbit of Ceres (great read!).

While there is plenty of geometry involved, this example illustrates how multiple observations of the asteroid by astronomers led to an over-determined system of equations. Assuming that these measurements were tainted by normal or Gaussian error, Gauss built the resulting "normal equations" and solved for the orbit.

When Ceres was lost to the glare of the sun, he was able to use these calculations to direct astronomers to the part of the sky where they should point their telescopes.

Saturday, October 21, 2017

Pascal's Wager

I enjoyed this recent conversation between Julia Galef and Amanda Askell on the nuances of Pascal's wager. According to wikipedia:
Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does actually exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas they stand to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (eternity in Hell).
I always thought this was something of a tongue-in-cheek argument because "of course" the argument fails the smell test. However, if we take it seriously, we find that it resists simple attempts at tearing it down. This blog post ("Common objections to Pascal's wager") outlines some of the rebuttals. It makes for interesting reading.

One of the things from the podcast that stuck with me was a comment about whether belief in climate change maps neatly onto Pascal's wager. Simplistically, let C be the claim that climate change is true, and ~C be the opposite claim. Let A denote action (taken to avert C), and ~A denote inaction (business as usual).

Then, we have the following four possibilities, A|C (action given climate change), A|~C, ~A|C, and ~A|~C.

A|C = mildly painful

An analogy might be something like an appendectomy. There is a problem (inflamed appendix or climate change), and appropriate corrective action is applied (surgical removal, CO2 reduction).

A|~C = mildly painful

An analogy would be unused insurance. You buy home insurance for a year, and nothing happens. You had to fork over premiums (which is mildly painful), but you accept that as reasonable risk against catastrophe.

~A|C = catastrophe

Piggybacking on the previous analogy, here your house is in flames and you realize you skimped on fire insurance. The external "shock" is bad (climate change or house catching fire), but your "penny-wise but pound-foolish" behavior made a bad situation much much worse.

~A|~C = mildly pleasurable

An analogy (which strikes close to home) might be skipping the annual dental checkup, and finding out nothing is wrong with your teeth. As someone once remarked to me, sometimes "pleasure is simply the absence of pain."

Note that the catastrophic outcome 3 (~A|C), with its "infinities", crowds out the others.

Hence, Pascal might argue that we should believe in both God and climate change.

Thursday, October 12, 2017

Introduce Concepts in Historical Order?

Let me confess: I have read very few scientific classics in the original.

I haven't read the Principia, the Origin of Species, or the Elements.

I had not even read Einstein's 1905 classic on Brownian motion, until a few years ago, even though half of my research is directly or indirectly animated by it.

Ever since I saw this amazing series on complex numbers, I have been wondering whether presenting the historical progression of ideas might be "better" than the standard textbook introduction. Here are some of my observations.

The historical approach (HA) is inherently interesting, because it is about ideas and the people behind them. Stories of humans exploring and pushing boundaries, regardless of domain, are fascinating. These stories often have imperfect people grappling with new ideas, getting confused by their implications, arguing back and forth, improving, and gradually perfecting them over centuries. This happened with classical mechanics, evolution, complex numbers, quantum mechanics, etc.

The standard approach (SA), on the other hand, steers away from messy pasts, leaps of intuition that came seemingly from nowhere, the entertaining bickering, and the trials and errors. It trims away the excess fat of distractions, consolidates different viewpoints, and presents a sanitized account of an idea. It is, without question, the quickest and cleanest way to learn a new concept. This is an extremely desirable feature in university courses, which have a mandate to "cover" a set of concepts, often in limited time.

Perhaps, a good practical compromise is to start with an example rooted in the historical approach to motivate the topic,  and transition to the standard textbook approach to teach the meat of the topic. It might be interesting to conclude once again with a historical perspective, perhaps mixed with a discussion of the current state of art and open questions.

Monday, September 25, 2017

Prony Method

Given N equispaced data-points \(F_i = F(t = i \Delta t)\), where \(i = 0, 1, ..., N-1\), the Prony method can be used to fit a sum of m decaying exponentials: \[F(t) = \sum_{i=1}^{m} a_i e^{b_i t}. \] The 2m unknowns are \(a_i\) and \(b_i\).

In the Prony method, the number of modes (m) in the sum of exponentials is pre-specified. There are other, more general methods.

Here is a python subprogram which implements the Prony method.
If you have arrays t and F, it can be called as:

a_est, b_est = prony(t, F, m)
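The subprogram itself is linked above; for illustration, a minimal numpy sketch of a Prony-style fit (linear prediction for the exponents, then linear least squares for the amplitudes) might look like this:

import numpy as np

def prony(t, F, m):
    # assumes uniformly spaced samples: t[i] = i*dt
    t, F = np.asarray(t, float), np.asarray(F, float)
    N, dt = len(F), t[1] - t[0]
    # 1. linear prediction: F[n] = d_1 F[n-1] + ... + d_m F[n-m]
    A = np.column_stack([F[m-1-j : N-1-j] for j in range(m)])
    d = np.linalg.lstsq(A, F[m:], rcond=None)[0]
    # 2. roots u_i of z^m - d_1 z^(m-1) - ... - d_m give b_i = log(u_i)/dt
    u = np.roots(np.concatenate(([1.0], -d)))
    b = np.log(u.astype(complex)) / dt
    # 3. amplitudes a_i by linear least squares on the model equation
    a = np.linalg.lstsq(np.exp(np.outer(t, b)), F.astype(complex), rcond=None)[0]
    return a, b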

Friday, September 22, 2017

MCMC Samplers Visualization

A really nice interactive gallery of MCMC samplers


You can choose different algorithms and target distributions, change method parameters, and observe the chain evolve.
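If you want a feel for what these demos do under the hood, here is a bare-bones random-walk Metropolis sketch; the standard normal target and the step size are illustrative choices:

import numpy as np

def metropolis(logp, x0, steps=5000, scale=1.0):
    # random-walk Metropolis: propose x + N(0, scale), accept with prob min(1, p'/p)
    x, samples = x0, []
    for _ in range(steps):
        xp = x + scale * np.random.randn()
        if np.log(np.random.rand()) < logp(xp) - logp(x):
            x = xp
        samples.append(x)
    return np.array(samples)

chain = metropolis(lambda x: -0.5 * x**2, x0=0.0)   # standard normal target
print(chain.mean(), chain.std())                    # should be ~0 and ~1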

This might come in handy next semester, when I teach a Monte Carlo class.

Tuesday, September 19, 2017

Some Useful Math Links!

1. The history of the division symbol (obelus) is fascinating! (DivisionByZero)

2. On the same blog: "What is the difference between a theorem, a lemma, and a corollary?"

3. The "glue function"

4. Free abridged Linear Algebra book from Sheldon Axler.

Saturday, September 16, 2017

Implicit Bias Test

I thoroughly enjoyed this Jesse Singal interview on Rationally Speaking on the problems with the "implicit association test" for diagnosing implicit bias.

The following Dateline video shows how the test was sold to the public as scientifically robust.


For fun, you can take the test yourself.

For the problems with the test, check out Jesse Singal's piece from earlier this year, "Psychology’s Favorite Tool for Measuring Racism Isn’t Up to the Job". It is a thoughtful essay that should be read in its entirety.
A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such. The history of the test suggests it was released to the public and excitedly publicized long before it had been fully validated in the rigorous, careful way normally demanded by the field of psychology.
Singal is careful to point out that just because the IAT is flawed doesn't mean that implicit bias doesn't exist. I liked an analogy he used in the podcast. If a thermometer is flawed, you can't use it to determine if a person has a fever. The person may or may not have a fever, but the thermometer should probably be tossed away.

Tuesday, September 12, 2017

Euler and Graph Theory

I have been enjoying Marcus du Sautoy's fine podcast series on famous mathematicians for BBC4.

Yesterday, I listened to the Leonhard Euler episode. While I always knew Euler was one of the top mathematicians of all time, his contributions are truly remarkable.

The podcast talks about how he solved the seven bridges of Konigsberg problem by inventing graph theory, and proving its first theorem. I looked at that theorem as it applies to a "kids game" in a previous blog.

Tuesday, September 5, 2017

Teacher's Day 2017

I did not expect writing this post would be so bittersweet. Last Teacher's Day, I decided I would use the occasion to highlight specific teachers, who have had an outsized impact on me.

Today, I am going to tell you about Kartic C. Khilar, or KCK as he was called at IIT Bombay. KCK was a central figure, and participant, as I navigated a period of multiple transitions.

Interestingly, I first "met" KCK even before I met him. The year I took the Joint Entrance Exam (JEE) to apply for admission to IIT, he was the principal administrator. The only reason I remember is because he had a "killer" last name (so juvenile, I know!).

Like 200,000 other rats, I studied relentlessly for two years. JEE is like academic Olympics. We trained like mental athletes: cardio, weights, pilates, the whole nine yards. Then, the starting gun went off, and we scampered. The first two thousand got in.

Miraculously, I tumbled my way into IIT Bombay first, and then to the chemical engineering department. KCK was the head of the department, when my "batch" arrived.

He taught us fluid mechanics and solid-fluid operations. He was a fantastic teacher - one of the best I've had. His lectures were crisp. He was always cheerful. And he cared about all his students - not just toppers.

He had one striking attribute: no ego. No made-up sense of self-importance, which is all the more remarkable given the power gap between teachers and students (especially in India). If you went to his office, he would listen, no matter how busy he was, or how unimportant you were.

A highlight of the undergrad program at IIT is the B. Tech project (BTP), which is the undergrad equivalent of a PhD dissertation. Again, due to a random set of circumstances, he ended up being my BTP mentor. Over the course of my last year and a half at IIT, our interaction deepened, if only because we met one-on-one on a weekly basis to discuss research.

Research in the Fluid Mechanics lab was fun. I don't think I would have embarked on a research career, if I hadn't enjoyed this experience so much. This work on "colloid-facilitated contaminant transport" with KCK and his grad student at that time - Tushar Sen - would end up becoming my first peer-reviewed publication.

I ended up at the University of Michigan as a grad student, in no small part due to his kind words. Michigan was his alma mater too. He visited Ann Arbor twice, while I was there. Once, when I was a PhD student, and later just before I started my new academic job at Florida State. Each time I went to Bombay, I would meet him; usually over lunch or dinner.

Throughout this period, he selflessly offered his mind for me to pick, and his ocean of experience for me to draw from. At several points during this journey, I abandoned hopes of an academic career. Each time, he listened without judgment, and quietly held a mirror to my desire for autonomy and passion for teaching. For better or for worse, he was instrumental in me ending up on the trajectory I am currently on.

And I couldn't be more grateful! Sometimes you try to peek over the horizon, but you can't see what a taller person who has been to more places can (in my case, that is literally true too).


In 2009, I shut the door to my office and wept, when I learnt about his untimely passing. He was 57, in great mental and physical shape, and I always expected him to be around forever.

When I first encountered KCK in 1994, I knew him as an administrator. Later at IIT he became my chairman and teacher, before becoming my BTP supervisor.

Somewhere along the way, he became a mentor, and a close friend; emails that started with "Dear Prof. Khilar" eventually started with "Dear Kartic".

Today, even though I knew it would bounce, I nearly wrote (to his familiar email address), "Dear Kartic, you are sorely missed." 

Saturday, September 2, 2017

Complex Numbers: Part Deux

I was pointed to this excellent series on complex numbers from Welch Labs, following my last post on complex numbers. It is in the 3Blue1Brown mold, with just the right dose of insight and animation. The complex number series starts with basic ideas, and ends with a discussion of Riemann surfaces.

I also came across an interesting way of proving exp(ix) = cos x + i sin x (@fermatslibrary), which I feel compelled to share, since we are already talking about complex numbers.

Let \(f(x) = e^{-ix} (\cos x + i \sin x)\).

The derivative of this function is \[f'(x) = e^{-ix} (-\sin x + i \cos x) - i e^{-ix} (\cos x + i \sin x) = 0.\] Since \(f'(x) = 0\), the function is a constant.

Also \(f(0) = 1\), which implies \(f(x) = 1\).

Thus, \(e^{ix} = \cos x + i \sin x\).
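As a quick sanity check, both facts used above (vanishing derivative, value at zero) can be verified symbolically, for example with sympy:

import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-sp.I * x) * (sp.cos(x) + sp.I * sp.sin(x))

print(sp.simplify(sp.diff(f, x)))   # 0, so f is constant
print(f.subs(x, 0))                 # 1, so f(x) = 1 everywhere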

PS: One of my students told me last week about the new podcast (Ben, Ben, and Blue) that Grant Sanderson (of 3Blue1Brown) hosts on math, computer science and education. It is delightful.

Tuesday, August 29, 2017

Anomalous Diffusion

I've been taking a deep dive into the world of anomalous diffusion over the past month. It is a fascinating subject that integrates applications from a variety of different fields.

For someone interested, I'd recommend the following resources:

1. A Physics World feature on "Anomalous diffusion spreads its wings" (pdf - currently not paywalled)

2. A YouTube video on anomalous diffusion in crowded environments



3. A gentle introduction/tutorial on normal and anomalous diffusion, which introduces the intuition and mechanics of fractional calculus

4. A more academic review of anomalous diffusion and fractional dynamics (may be paywalled)

Wednesday, August 23, 2017

If $1 = 1 sec ...

If $1 were equal to 1 second, the median US household income per year of $50,000 would correspond to half a day.

This helps put millions, billions, and trillions into perspective.

Roughly,
  • $1 million = 2 weeks
  • $1 billion = 32 years
  • $1 trillion = 300 centuries (before recorded history)
A trillion is a really large number! 
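A quick back-of-the-envelope check of these conversions ($1 = 1 second):

# how long is $X at $1 = 1 second?
seconds_per_year = 3600 * 24 * 365.25
for label, dollars in [("million", 1e6), ("billion", 1e9), ("trillion", 1e12)]:
    print("$1 {} = {:,.1f} days = {:,.1f} years".format(
        label, dollars / 86400, dollars / seconds_per_year))
# million ~ 11.6 days (about 2 weeks); billion ~ 31.7 years; trillion ~ 31,700 years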

Tuesday, August 15, 2017

Diffusion: A Historical Perspective

The paper (pdf) "One and a half century of diffusion: Fick, Einstein, Before and Beyond" by Jean Philibert traces the history of diffusion phenomena.

It starts with Thomas Graham (of dialysis fame), who perhaps made the first systematic observations, which were integrated into a phenomenological law by the German physiologist Adolf Fick in 1855, at the age of 26.

Fick observed the analogy between mass diffusion and heat conduction (now considered obvious), and piggy-backed on Fourier's law of conduction (1822). The paper cites the opening lines of Fick's work:
A few years ago, Graham published an extensive investigation on the diffusion of salts in water, in which he more especially compared the diffusibility of different salts. It appears to me a matter of regret, however, that in such an exceedingly valuable and extensive investigation, the development of a fundamental law, for the operation of diffusion in a single element of space, was neglected, and I have therefore endeavoured to supply this omission.
Next, the paper talks about the contributions of W. C. Roberts-Austen (an assistant to Thomas Graham, and his successor as Master of the Mint) to the quantification of diffusion in solids.

In 1905, Einstein integrated Robert Brown's observations of random zig-zag trajectories and Fick's phenomenological laws, with the crucial observation that it was the mean-squared displacement, and not the mean displacement that was related to diffusion.

Following Einstein's paper, the experimental work of Perrin was responsible for helping the world accept the link between the microscopic world (MSD is proportional to diffusivity and time) and the macroscopic world (flux is proportional to the concentration gradient).

It is always interesting to look at the chronological development of (now familiar) ideas. Ideas that are uncontroversial today were once strongly contested. It took centuries for scientists to come up with a comprehensive understanding, and to develop interesting applications based on it.

Saturday, August 12, 2017

Exam Question on Fitting Sums of Exponentials to Data

I wrote the question below for our PhD qualifiers. It addresses a problem I have been thinking about for over a decade now - starting from my time as a graduate student: how to fit a sum of decaying exponentials to data?

The question explores a method called the Prony method. Here is the question:

A classical problem in data analysis involves fitting a sum of exponentials to a time series of uniformly sampled observations. Here, let us suppose we are given N observations \((t_i, f_i)\), where \(t_i = i \Delta t\) for \(i = 0, 1, ..., N-1\).

We want to fit the data to a sum of two exponentials. The model equation is, \[\hat{f}(t) = a_1 e^{b_1 t} + a_2 e^{b_2 t}.\] The general nonlinear regression problem to determine \(\{a_j, b_j\}\) becomes difficult as the number of exponentials in the sum increases. A number of quasi-linear methods have been developed to address this. In the question, we will explore one of these methods, and determine the fitting parameters.

(a) First, generate a synthetic dataset \((t_i, f_i)\) with true \(a_1^* = a_2^* = 1.0\), \(b_1^* = -2.0\), \(b_2^* = -0.2\). Use \(t_0 = 0\), \(\Delta t = 1\), and N = 20. Attach a plot of the synthetic dataset. Use this dataset for numerical calculations below.

(b) If \(b_1\) and \(b_2\) are known, then we can determine \(a_1\) and \(a_2\) by linear least squares. Set \(u_1 = e^{b_1 \Delta t}\) and \(u_2 = e^{b_2 \Delta t}\). Recognize that \(e^{b_i t_j} = e^{b_i j \Delta t} = u_i^j\). Hence from the model eqn, we can get a linear system:
\begin{align}
f_0 & = a_1 u_1^0 + a_2 u_2^0 \nonumber\\
f_1 & = a_1 u_1^1 + a_2 u_2^1 \nonumber\\
\vdots & = \vdots \nonumber\\
f_{N-1} & = a_1 u_1^{N-1} + a_2 u_2^{N-1}
\end{align}
Write a program to determine \(a_1\) and \(a_2\), given the data, \(b_1\) and \(b_2\).

(c) Consider the polynomial \(p(z)\), which has \(u_1\) and \(u_2\) as its roots, \(p(z) = (z-u_1)(z-u_2) = z^2 - d_1 z -d_2 = 0\). Express \(u_1\) and \(u_2\) in terms of \(d_1\) and \(d_2\).

(d) Now we seek to take linear combinations of the equations in the linear system above, with the goal of eliminating \(a_j\). For example, consider the first three equations. Multiply the first equation by \(d_2\), the next by \(d_1\), and the third by -1, and sum them up:
\begin{align*}
d_2 f_0 & = a_1 d_2 + a_2 d_2\\
d_1 f_1 & = a_1 u_1 d_1 + a_2 u_2 d_1 \\
-1 f_2 & = -a_1 u_1^2 - a_2 u_2^2.
\end{align*}
We get \(-f_2 + d_1 f_1 + d_2 f_0 = -a_1(u_1^2 - d_1 u_1 - d_2) - a_2(u_2^2 - d_1 u_2 - d_2) = 0\), since \(p(u_i) = 0\).

We can pick the next set of three equations, and repeat the process (multiply by \(d_2\), \(d_1\), and -1 before summing up). Show that we end up with the following linear system:
\[\begin{bmatrix} f_{1} & f_0 \\ f_2 & f_1 \\
\vdots & \vdots \\
f_{N-2} & f_{N-3} \\
\end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} f_2 \\ f_{3} \\ \vdots \\ f_{N-1} \end{bmatrix}\]
Determine \(d_1\) and \(d_2\), and hence \(u_1\) and \(u_2\). From this, find the estimated \(b_1\) and \(b_2\).

(e) Once you know \(b_1\) and \(b_2\), find \(a_1\) and \(a_2\) by a linear least squares solution of the linear system in part (b).
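For what it's worth, the whole exercise collapses into a few lines of numpy; a minimal sketch, with the true parameters from part (a) hard-coded:

import numpy as np

# (a) synthetic data: a1 = a2 = 1.0, b1 = -2.0, b2 = -0.2, dt = 1, N = 20
t = np.arange(20.0)
f = np.exp(-2.0 * t) + np.exp(-0.2 * t)

# (d) least-squares solution of the shifted system for d1, d2
A = np.column_stack([f[1:-1], f[:-2]])
d1, d2 = np.linalg.lstsq(A, f[2:], rcond=None)[0]

# (c) u1, u2 are the roots of z^2 - d1 z - d2
u1, u2 = np.roots([1.0, -d1, -d2])
b_est = np.log([u1, u2])          # since u_i = exp(b_i dt), with dt = 1

# (b, e) amplitudes a1, a2 by linear least squares
a_est = np.linalg.lstsq(np.exp(np.outer(t, b_est)), f, rcond=None)[0]
print(a_est, b_est)               # recovers ~{1, 1} and ~{-2, -0.2}, up to ordering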

Wednesday, August 2, 2017

NumPy and Matlab

This post bookmarks two sites that provide handy cheat sheets of numpy equivalents for Matlab/Octave commands.

The ones for linear algebra are particularly handy, because that is one subdomain where Matlab's notation is more natural.

1. Numpy for Matlab users (Mathesaurus)

2. Cheatsheets for Numpy, Matlab, and Julia (quantecon)

Friday, July 28, 2017

Interesting Scaling Laws

I recently read Geoffrey West's book "Scale", and thought it was really great. Here are some resources to prime you for the subject.

1. TED Talk

2. Talk @ Google

3. Essay at the Edge

4. Essay on Medium

Tuesday, July 25, 2017

Russell's paradox

I came across this interesting paradox on a recent podcast. According to wikipedia:
According to naive set theory, any definable collection is a set. Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves. This contradiction is Russell's paradox.
Symbolically:
\[\text{Let } R = \{ x \mid x \not \in x \} \text{, then } R \in R \iff R \not \in R\]
There is a nice commentary on the paradox in SciAm, and a superb entry in the Stanford Encyclopedia of Philosophy.

Wednesday, July 19, 2017

Questions Kids Ask

Between my curious 4- and 8-year-olds, I got asked the following questions in the past month.

I found all of them fascinating.

1. Why are our front milk teeth (incisors) the first to fall out?
2. Why is "infinity minus infinity" not equal to zero?
3. Why don't you get a rainbow when you shine a flashlight on rain in the night?
4. How are Cheerios and donuts made (into tori)?
5. His, hers, ours, yours. Then why not "mines"?

PS: I also learned from my 4-year-old that daddy long legs aren't really spiders and don't spin webs, and that sea turtles feed on jellyfish.

Sunday, July 16, 2017

Matplotlib: Subplots, Inset Plots, and Twin Y-axes

This jupyter notebook highlights ways in which matplotlib gives you control over the layout of your charts. This is intended as a personal cheatsheet.
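As a taste, here is a compressed sketch of the three layout devices in the title (the data and placements are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)

# subplots: a 1x2 grid of axes
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, np.sin(x))

# twin y-axes: two curves with different scales on one plot
ax2b = ax2.twinx()
ax2.plot(x, np.cos(x), 'b-')
ax2b.plot(x, np.exp(x), 'r--')

# inset plot: a small axes placed in figure coordinates
axins = fig.add_axes([0.65, 0.65, 0.2, 0.2])
axins.plot(x, np.sin(x) ** 2)

plt.show()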



Friday, July 7, 2017

John Roberts Commencement Speech

This part of the address is really nice and timeless.


The transcript of the full speech is available here.

Wednesday, July 5, 2017

Joints from Marginals: Compilation

For convenience, here is a link to the three blogs in this series in one place.

1. A technique for solving the problem in a special case

2. The reason this technique works

3. The corners/edges of this technique, or how it fails for non-Gaussian marginals

Sunday, July 2, 2017

Joint from Marginals: non-Gaussian Marginals

In a previous post, I asked the question if the method described here can be used with non-Gaussian distributions.

Let us explore that by considering two independent zero mean, unit variance distributions that are not Gaussian. Let us sample \(x_1\) from a triangular distribution, and \(x_2\) from a uniform distribution.

We consider a triangular distribution with zero mean and unit variance, which is symmetric about zero (it spans \(-\sqrt{6}\) to \(+\sqrt{6}\)). Similarly, we consider a symmetric uniform distribution, which spans \(-\sqrt{3}\) to \(+\sqrt{3}\).

Samples from these independent random variables are shown below.

When we use a correlation coefficient of 0.2, and use the previous recipe, we get correlated random variables with zero mean and the same covariance matrix, but ...
... the marginals are not exactly the same!

This is evident when we increase the correlation coefficient to say 0.5.

The sharp edges of the uniform distribution get smoothened out.

Did the method fail?

Not really. If you paid attention, the method is designed to preserve the mean and the covariance matrix (which it does). It doesn't guarantee that the marginal distributions are preserved.
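A minimal numpy sketch of this experiment (the sample size is arbitrary, and \(\rho = 0.5\) is the case where the smoothing is most visible):

import numpy as np

n = 100000
# zero-mean, unit-variance triangular and uniform marginals
x1 = np.random.triangular(-np.sqrt(6), 0, np.sqrt(6), n)
x2 = np.random.uniform(-np.sqrt(3), np.sqrt(3), n)

rho = 0.5
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
Y = L @ np.vstack([x1, x2])

print(np.cov(Y))    # the covariance (and hence rho) is preserved ...
# ... but a histogram of Y[1] shows the uniform's sharp edges smoothed out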

Wednesday, June 28, 2017

Printing webpages as PDFs

PrintFriendly and PDF has a useful browser extension (tested on Chrome) that creates more readable PDFs from web content.

Here is a screenshot from a Matlab blog that I follow:



Notice that the webpage has lots of links, and a frame on the left.

When I use the "Print to File" feature directly from my Chrome browser, I get a PDF which looks like this:

It does the job, but it looks very amateurish. On more complicated websites, results can be horrendous.

Here is the same webpage, now using PrintFriendly.

Notice that the PDF is much cleaner, is well formatted, and contains all the relevant information.

Thursday, June 22, 2017

Joint from Marginals: Why?

In the previous blog post, we saw a special example in which we were able to sample random variables from a joint 2D Gaussian distribution, given the marginals and the correlation coefficient.

I listed a simple method, which seemed to work like magic. It had two simple steps:

  • Cholesky decomposition of the covariance matrix, C(Y)
  • Y = LX, where X are independent random variables

The question is, why did the method work?

Note that the covariance matrix of random variables with zero mean and unit standard deviation can be written as, \(C(Y) = E(Y Y')\), where \(E()\) denotes the expected value of a random variable. Thus, we can write the expected value of the Y generated by the method as, \[\begin{align*} E(Y Y') & = E\left(LX (LX)'\right)\\ & = L E(XX') L' \\ & = L I L'\\ & = LL' = C.\end{align*}\] Here we used the fact that the covariance of X is an identity matrix by design.

Note that this method preserves the covariance matrix (and hence the standard deviation of the marginals).

Does it preserve the mean?

Yes. \(E(Y) = E(LX) = L E(X) = 0.\)

Do the marginals have to be normal for this method to work? Would this work for any distribution (with zero mean, and unit standard deviation)?

We will explore this in a subsequent blog.

Thursday, June 15, 2017

Joint Distribution From Marginals

Consider two dependent random variables, \(y_1\) and \(y_2\), with a correlation coefficient \(\rho\).

Suppose you are given the marginal distributions \(\pi(y_1)\) and \(\pi(y_2)\) of the two random variables. Is it possible to construct the joint probability distribution \(\pi(y_1, y_2)\) from the marginals?

In general, the answer is no. There is no unique answer. The marginals are like shadows of a hill from two orthogonal angles. The shadows are not sufficient to specify the full 3D shape (joint distribution) of the hill.

Let us simplify the problem a little, so that we can seek a solution.

Let us assume \(y_1\) and \(y_2\) have zero mean and unit standard deviation. We can always generalize later by shifting (different mean) and scaling (different standard deviation). Let us also stack them into a single random vector \(Y = [y_1, y_2]\).

The covariance matrix of two such random variables is given by, \[C(Y) = \begin{bmatrix} E(y_1 y_1) - \mu_1 \mu_1 & E(y_1 y_2) - \mu_1 \mu_2 \\ E(y_2 y_1) - \mu_2 \mu_1 & E(y_2 y_2) - \mu_2 \mu_2 \end{bmatrix} = \begin{bmatrix} 1 & \rho \\ \rho  & 1 \end{bmatrix},\] where \(\mu_i\) denotes the mean of \(y_i\).

Method

A particular method for sampling from the joint distribution of correlated random variables \(Y\) begins by drawing samples of independent random variables \(X = [x_1, x_2]\) which have the same distribution as the desired marginal distributions.

Note that the covariance matrix in this case is an identity matrix, because the correlation between independent variables is zero: \(C(X) = I\).

Now we recognize that the covariance matrix \(C(Y)\) is symmetric and positive definite. We can use Cholesky decomposition \(C(Y) = LL^T\) to find the lower triangular matrix \(L\).

The recipe then says that we can draw the correlated random variables with the desired marginal distribution by simply setting \(Y = L X\).

Example

Suppose we seek two random variables whose marginals are normal distributions (zero mean, unit standard deviation) with a correlation coefficient 0.2.

The method above asks us to start with independent random variables \(X\) such as those below.

Cholesky decomposition with \(\rho = 0.2\) gives us \[L = \begin{bmatrix} 1 & 0 \\ 0.2  & 0.9798 \end{bmatrix}.\] If we generate \(Y = LX\) using the same data-points used to create the scatterplot above, we get,

It has the same marginal distribution, and a non-zero correlation coefficient as is visible from the figure above.
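The entire recipe is only a few lines of numpy; a minimal sketch of the example above (the sample size is arbitrary):

import numpy as np

rho = 0.2
C = np.array([[1.0, rho],
              [rho, 1.0]])            # target covariance matrix C(Y)
L = np.linalg.cholesky(C)             # lower-triangular factor, C = L L'

X = np.random.randn(2, 10000)         # independent standard normal marginals
Y = L @ X                             # correlated samples with covariance ~C

print(np.corrcoef(Y))                 # off-diagonal entries ~0.2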

Saturday, June 10, 2017

Links

1. "The seven deadly sins of statistical misinterpretation, and how to avoid them" (H/T FlowingData)

2. Desirability Bias (Neurologica)
[...] defined confirmation bias as a bias toward a belief we already hold, while desirability bias is a bias toward a belief we want to be true.
3. H/T John D. Cook
“Teachers should prepare the student for the student’s future, not for the teacher’s past.” — Richard Hamming
4. This xkcd cartoon on survivorship bias

Thursday, June 8, 2017

Matplotlib Styles

I created a jupyter notebook demonstrating the use of built-in or customized styles in matplotlib, mostly as a bookmark for myself.

Monday, June 5, 2017

Jupyter Notebook Tricks

Some cool Jupyter notebook tricks from Alex Rogozhnikov. Here are some that I did not know:
  • %run can execute python code from .py files and also execute other jupyter notebooks, which can be quite useful (this is different from %load, which imports external python code).
  • The %store command lets you pass variables between two different notebooks.
  • %%writefile magic saves the contents of that cell to an external file.
  • %pycat does the opposite, and shows you (in a popup) the syntax highlighted contents of an external file.
  • #19 on using different kernels in the same notebook, and #22 on writing fortran code inside the notebook




Thursday, June 1, 2017

Annotating PDFs on Linux

Most of my working day is spent reading.

Usually, this means poring over some PDF document, and scribbling my thoughts - preferably on the PDF itself. I find these markups extremely helpful, when I want to recall the gist, or when it is time to synthesize "knowledge" from multiple sources.

I use Linux on both my desktops (home and work), and the usual applications (Evince, Okular, etc.) for marking up PDFs are inadequate in one form or another. Adobe Reader, while bloated, used to do the job. But they don't release a Linux version anymore.

The solution that best fits my needs currently is Foxit Reader. Although you can't use the standard software manager (e.g., apt-get on Ubuntu) to get it, you can easily download a 32- or 64-bit version from their website.

The "installation guide" tells you how to do the rest [unzip, cd, and run the executable installer].

On my Linux Mint systems, it was easy peasy!

The software itself is intuitive. You can highlight, add text, stick in comments, and draw basic shapes. The changes you make are permanently saved into the PDF, so that when you reopen the file in another application, the changes persist.

It is cross-platform, so you can get a version on any OS (including iOS) you want.

Thursday, May 25, 2017

PyCon 2017 Talks

Some interesting Python talks (links to YouTube videos) from this year's PyCon.

1. Jake Vanderplas: The Python Visualization Landscape

2. Christopher Fonnesbeck: PyMC3

3. Eric Ma: Bayesian analysis

4. Alex Orlov: Cython

5. Brett Cannon: What's new in Python 3.6?



Tuesday, May 23, 2017

Scott Galloway's Advice to Graduates

Let me start with a confession: I love commencement ceremonies.

It's not the elaborate regalia, the tradition, or the choreographed deference that appeals to me - although those do add to the theater.

What I enjoy most is the commencement address, seeing heartfelt hugs between students and their mentors, and cheers from family members in the galleries.

Like Disney movies, they fill me with hope and optimism.

I came across this speech "No Mercy, no Malice" by Prof. Scott Galloway, which condenses so much practical wisdom. It is short; I recommend reading it in its entirety. Here's a snippet to entice you.

from L2

Sunday, May 21, 2017

Quotes

Everything that irritates us about others can lead us to an understanding of ourselves.
Carl Jung
The value of a prototype is in the education it gives you, not in the code itself.
Alan Cooper

Saving money is a non-socially-rewarded, non-observable, 1 player game. Spending money is a socially rewarded, observable, multiplayer game

Eric Jorgenson

Academic life is 10% what happens to you, and 99% making it count for multiple sections on your CV.

Shit Academics Say

"Most papers in computer science describe how their author learned what someone else already knew."
Peter Landin

"Do I not destroy my enemies when I make them my friends?"
Abraham Lincoln



Friday, May 19, 2017

Smart Machines, Complexity, and 42

In Hitchhiker's Guide to the Galaxy, there is a poignant moment when a machine is asked the answer to the ultimate question of life, the universe, and everything.

It crunches numbers for millions of years, and returns the baffling answer "42".

The scenario that Douglas Adams concocted might be amusing, but given our increasing reliance on machines, it has ripples in today's world.

Computers can often provide answers, without providing insight. For meaning-seeking humans, this can be deeply unsatisfying. Witness the unease surrounding computer-assisted proofs.

Machine learning can help us deduce models to navigate complex systems. Neural network models might start with simple rules for learning. But the models they "learn" or end up with, are anything but simple.

To keep things specific, consider programming a self-driving car. The model may start simple ("keep between the lanes"), but get hellishly complicated as numerous edge cases ("dog jumps into the road", "many people don't check their blind spot", "the night is foggy") are subsumed.

Even if the car works reasonably well, how it responds to a "black swan" situation (one it has never seen before) might be anybody's guess.

Practical models for complex systems might be insanely complicated.

A recent "Rationally Speaking" podcast touched upon many of these issues. In particular, I found the discussion of "physics thinking", which emphasizes universal models by ignoring details, and "biological thinking", which celebrates the diversity of phenomenon by focusing on details, incredibly fascinating. From the transcript:
The physics approach, you see it embodied maybe in like an Isaac Newton. A simple set of equations explains a whole host of phenomena. So you write some equations to explain gravity, and it can explain everything from the orbits, the planets, the nature of the tides, to how a baseball arcs when you throw it. It has this incredibly explanatory power. It might not explain every detail, but it maybe it could explain the vast majority of what's going on within a system. That's the physics. The physics thinking approach, abstracting away details, deals with some very powerful insights.
On the other hand, you have biological thinking. Which is the recognition that oftentimes in other types of systems, in certain types of systems, the details not only are fun and enjoyable to focus on, but they're also extremely important. They might even actually make up the majority of the kinds of behavior that the system can exhibit. Therefore, if you sweep away the details and you try to create this abstracted notion of the system, you're actually missing the majority of what is going on. The biological approach should be that you recognize the details are actually very important. And therefore they need to be focused on. 
I think when we think about technologies both approaches are actually very powerful. But oftentimes I think people in their haste to understand technology, oftentimes because technologies are engineered things, we often think of them as perhaps being more the physics thinking side of the spectrum. When in fact, because they need to mirror the extreme messiness of the real world, or there's a lot of exceptions, or they've grown and evolved over time, often it's a very organic, almost biological fashion. They actually end up having a great deal of affinity with biological systems. And systems that are amenable to biological thinking and biology approaches.

Tuesday, May 16, 2017

Purdue-Kaplan: Is Disruption Knocking?

Last month Purdue University announced they were going to acquire Kaplan University, an online for-profit institution. The NewU ...
... will be distinct from others in the Purdue system, relying only on tuition and fundraising to cover operating expenses. No state appropriations will be utilized. It will operate primarily online, but has 15 locations across the United States, including an existing facility in Indianapolis, with potential for growth throughout the state. Indiana resident students will receive a yet-to-be-determined tuition discount.
The deal has the potential to bring down tuition costs, enhance access, and extend Purdue's solid brand name. Here are some reactions to the news:

1. Purdue's official statement is, of course, positive.
Former U.S. Secretary of Education Arne Duncan said, “I’ve always had great respect for Gov. Daniels, and I’m excited by this opportunity for a world-class university to expand its reach and help educate adult learners by acquiring a strong for-profit college. This is a first, and if successful, could help create a new model for what it means to be a land-grant institution.”
2. However, questions are being raised (NPR).
The deal is eye-catching, but also part of a trend. Over the past decade dozens of nonprofit universities have contracted with private companies to expand their online offerings. For example, Arizona State University works with Pearson, and the University of Southern California with a company called 2U. Florida A&M and South Carolina State, both historically black institutions, have partnered with the University of Phoenix. In an atmosphere of ever-skinnier state budgets, these programs enable universities to reach a global market, cater to working adults, and potentially increase revenue without expensive capital investment.
3.  The faculty at Purdue is not happy (InsideHigherEd)
No faculty input was sought before the acquisition decision was made, and no assessment of its impact on Purdue’s academic quality was completed, according to the resolution. The resolution proceeded to fault a lack of transparency and a lack of an impact study on how the acquisition will affect faculty, curriculum, students and staff at Purdue. The resolution also wondered what will happen to faculty governance and academic freedom at Purdue’s newly acquired university. And it said previously Purdue’s administration has gone through University Senate structures -- which include faculty input -- when pursuing program restructuring or creation.
4.  An interview with the seller, Donald Graham, chairman of Graham Holdings.

[Q:] I see what Purdue gets from the arrangement—a jumpstart into providing online courses. But what does Graham Holdings get out of this deal? 
Graham: [...] You asked about when Graham Holdings shareholders might be rewarded. The only way we would be rewarded, the only way we would get a growing stream of revenue, would be if Purdue continued over the years to add students. In other words if the university became a big success under Purdue's leadership, we'll be part of that success. But we will not be a participant in any profits. We're out of the for-profit education business here. We will be paid for our services, and the profits if any will go to Purdue, and hopefully back into the whole educational system.
5. Some older links to universities, MOOCs and online education

Friday, May 12, 2017

Computational Thinking Classes

Here is a collection of "Computational Thinking for non-majors" type of classes:

1. My department's very own

2. Another list which links to classes from Berkeley, and Harvey Mudd.

3. A self-study course package

4. This essay by Stephen Wolfram on how to teach computational thinking 

Tuesday, May 9, 2017

Wait But Why

I first came across Tim Urban, through his interview with Julia Galef. The interview was one of my favorite episodes of Rationally Speaking.

His website "Wait But Why" is an amazing resource. It is well thought out, and provides historical, scientific, and philosophical context to many contemporary issues.

Listen to this TED talk for a quick introduction.


Sunday, April 30, 2017

Google Autodraw

Imagine playing pictionary with a computer.

Google Autodraw lets you do just that. You doodle/sketch on a board, and the computer continuously tries to guess what you are drawing. In a short amount of time, my guess is that it is going to get pretty good!

Here's an example:



Friday, April 28, 2017

History of the Logarithm of Negative Numbers

Not too long ago, I did a blog post on how matlab and python have different responses to logarithms of negative numbers.

It turns out that the history of the logarithm of negative numbers is truly fascinating, and had Leibniz, Bernoulli, Euler, and the other greats embroiled. Take a look at this article (pdf) by Deepak Bal.

Here is the abstract:
In 1712, Gottfried Leibniz and John Bernoulli I engaged in a friendly correspondence concerning the logarithms of negative numbers. The publication of this correspondence in 1745 sparked an interest in the mathematical community on the topic. In this paper I will discuss the evolution of the logarithmic concept in the time leading up to their discussion. I will then give a synopsis of the correspondence followed by a description of a paper by Leonhard Euler in which he addresses the issue.

Sunday, April 23, 2017

End of The World

As I sat bewildered and amused, I knew it was a story that I had to save for her wedding reception. Next to me, my older daughter was sobbing inconsolably, "why does it all have to end this way?"

Rewind ten minutes. It was not a conversation that was supposed to go like this.

You see, my dad is absolutely fascinated by the night sky. I thought I'd play the role of a good son and father, by testing whether talking about space would ignite my daughter's interest in the subject. Perhaps, next time they met, they could obsess over a shared interest.

Me: Do you know the name of our galaxy, M.?
M.: Of course, the Milky Way!
Me: Good! They teach you good stuff in school. Now, harder question; do you know which galaxy is right next to ours?
M.: No, which is it, Baba?
Me: Andromeda. And I bet you haven't heard this. Billions of years later, Andromeda and the Milky Way are going to smash into each other. It is going to be spectacular!


M.: (troubled) Can't we do anything to stop it?

I cracked open a laptop, and fired up a browser.

Me: Look this is Tallahassee. This is Florida. This is the Earth. This is the solar system (zooming out each time). This is the Milky Way, and this is Andromeda. We are too tiny to do anything meaningful.
M.: (definitely worried) What does that mean? Does it mean we all die?
Me: (scoffing) Oh, don't worry about that. This is going to happen after BILLIONS of years. We will all be dead long before that. In fact, perhaps the Earth will be gone before that.
M.: What do you mean, Baba?
Me: You know how the sun is a star, right?  Like all stars, it shines by burning gas. It has tons of gas, kinda like Baba's tummy. But once it runs out of most of that gas, it might expand to about 3 times its size, and gobble up Mercury, Venus, and probably Earth.

I noticed tears streaming down her cheeks. I had to console her. And I had to do it fast.

Me: But don't worry. Don't worry! This is not going to happen for BILLIONS of years more. We will all be gone by then.
M.: (now sobbing inconsolably) why does it all have to end this way?

After a few minutes, she regained her composure. I was debating whether it would be tone-deaf to talk about the real things to be scared of, like global warming, or pandemics, or..., when she interrupted me with a plea.

"Baba, can we please not talk about space, anymore?"

Tuesday, April 11, 2017

Strogatz and Art of Mathematics

For some reason, these Steve Strogatz columns from 2015 on the "Art of Mathematics" have resurfaced.

Here are two blog posts, which explain the origins, inspiration, and mechanics.

The associated website is chock full of useful resources and ideas designed to help liberal arts students appreciate the art and joy of mathematics.

Thursday, March 30, 2017

Statistics and Gelman

Russ Roberts had a fantastic conversation with Andrew Gelman on a recent podcast. It covered a lot of issues and examples, some of which were familiar.

A particularly salient metaphor, "the Garden of Forking Paths", crystallized (for me) how researchers with integrity can end up p-hacking unintentionally.
In this garden of forking paths, whatever route you take seems predetermined, but that’s because the choices are done implicitly. The researchers are not trying multiple tests to see which has the best p-value; rather, they are using their scientific common sense to formulate their hypotheses in reasonable way, given the data they have. The mistake is in thinking that, if the particular path that was chosen yields statistical significance, that this is strong evidence in favor of the hypothesis.
This is why replication studies in which "researcher degrees of freedom" are taken away have more reliable scientific content. Unfortunately, they are unglamorous. Often, in the minds of the general population, they do not replace the flawed original study.

Gelman discusses numerous such examples on his blog. These include studies on "priming" and "power poses" that have failed to replicate. Sure, there is an element of schadenfreude, but what I find far more interesting is how scientists who championed a theory react to new disconfirming data. For instance, Daniel Kahneman recently admitted that he misjudged the strength of the scientific evidence on priming, and urged readers to disregard one of the chapters devoted to it in his best-seller "Thinking Fast and Slow". Similarly, one of the coauthors of the original power poses work, Dana Carney, had the courage to publicly change her mind.

That is what good scientists do. They update their priors, when new data instructs them to do so.

This brings me to another health and nutrition story doing the rounds on the internet. It suggests a 180-degree turn on how to deal with the rising incidence of peanut allergies. Instead of keeping infants away from nuts, it urges parents to incorporate them into diets early, and often. I haven't looked at the original study carefully, but my instincts on retractions and reversals of consensus tell me to take the findings seriously.


Monday, March 27, 2017

Logarithms of Negative Numbers

A plot of log(x) looks something like the following:

As x decreases to zero, log(x) approaches negative infinity. For negative values of real x, the log function is undefined. For example, consider the following numpy interaction:

>>> import numpy as np
>>> np.log(1)
0.0
>>> np.log(-1)
__main__:1: RuntimeWarning: invalid value encountered in log
nan

If I try to do the same in Octave, I get something different, and interesting.

octave:1> log(1)
ans = 0
octave:2> log(-1)
ans =  0.00000 + 3.14159i

The answer makes sense if we expand the scope of "x" from real to complex. We know Euler's famous identity, \(e^{i \pi} = -1\). Logarithms of negative numbers exist. They just exist in the complex plane, rather than on the real number line.

Octave's answer above just takes the logarithm of both sides of Euler's identity.

We can make python behave similarly by explicitly specifying the complex nature of the argument. So while log(-1) did not work above, the following works just as expected.

>>> np.log(-1+0j)
3.1415926535897931j

For x < 0, if we plot the absolute value of the (complex) logarithm, then we get a nice symmetric plot for log(x).


Notes:

  • In matlab, the command reallog is similar to np.log

Thursday, March 23, 2017

Housel on Writing

Morgan Housel is one of my favorite writers on the subject of economics and finance. He offers three pieces of writing advice in this column.

Paraphrasing,

1. Be direct
2. Connect fields
3. Rewrite

Tuesday, March 21, 2017

Try a Pod

I am an avid podcast listener; over the past 6 years, podcasts have enriched commutes, workouts, and chores immeasurably. There has been a concerted call to evangelize for the platform ("try a pod") in the past few weeks. In 2013, I already shared what I was listening to then. Podcasts that I currently follow:

History/Politics
  • BackStory
  • My History Can Beat Up Your Politics
  • Hardcore History with Dan Carlin
  • CommonSense with Dan Carlin
  • Revisionist History
Science and Tech
  • Radiolab
  • Skeptics' Guide to the Universe
  • Science Vs
  • a16z
  • Above Avalon
  • Full Disclosure
  • Note to Self
  • Recode Decode
  • Rationally Speaking
  • Reply All
  • 50 Things That Made the Modern World
Stories
  • Snap Judgement
  • The Moth
  • Criminal
  • This American Life
  • Found
  • 99% Invisible
Language
  • The Allusionist
  • And Eat it Too!
  • A Way with Words
Economics/Business
  • EconTalk
  • Five Good Questions
  • FT Alphachat
  • How I Built This
  • Invest like the Best
  • The Knowledge Project
  • Masters in Business
  • Rangeley Capital Podcast

Others
  • Audio Dharma
  • Philosophize This
  • Educate
  • Commonwealth Club of California
  • Fareed Zakaria GPS
  • Frontline audiocast
  • In Our Time
  • Intelligence Squared
  • Intelligence Squared US
  • Left Right and Center
  • Please Explain
  • More Perfect

Tuesday, March 14, 2017

Links:

1. Doug Natelson's compilation of "advice" blog-posts (nanoscale views)

2. Are Polar Coordinates Backwards? (John D. Cook)

3. Learning Styles are baseless? (Guardian)

4. 5 Unusual Proofs (PBS YouTube Channel)

Friday, March 10, 2017

QuickTip: Sorting Pairs of Numpy Arrays

Consider the two "connected" numpy arrays:

import numpy as np
x = np.array([1992,1991,1993])
y = np.array([15, 20, 30])

order = x.argsort()
x     = x[order]
y     = y[order]

x = array([1991, 1992, 1993])
y = array([20, 15, 30])

Wednesday, March 8, 2017

Perverse Incentives and Integrity

Edwards and Roy write about scientific integrity in the face of perverse incentive systems (full citation: Edwards Marc A. and Roy Siddhartha. Environmental Engineering Science. January 2017, 34(1): 51-61. doi:10.1089/ees.2016.0223.)

Here is a table from the paper, which grapples with incentives and unintended consequences.


Worth a look!