Friday, November 30, 2018

Machine Learning and Behavioral Biases

Machine learning, using neural nets for example, helps us tease out hidden nonlinear correlations or patterns. A standard application is recognizing handwritten numerals.

You train a model on a particular dataset and test it on previously unseen data. If the training and test datasets are "similar", the predictions of the learnt model will be good.

If the test data look nothing like the training data, then the ML model will fail (e.g., train on Arabic numerals and test on Roman numerals).
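As a toy illustration of this train/test mismatch, here is a minimal sketch (assuming NumPy and scikit-learn; the synthetic Gaussian data and the logistic-regression model are illustrative choices, not anything from the original post). A classifier trained on one distribution does fine on held-out data from the same distribution, but degrades when the test data are shifted:

# Toy illustration of train/test mismatch (distribution shift).
# Assumes numpy and scikit-learn are installed; the synthetic data
# and the logistic-regression model are illustrative choices only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both clusters, mimicking
    # test data that no longer resemble the training data.
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)                        # the "training environment"
X_test_same, y_test_same = make_data(500)                # similar test data
X_test_shift, y_test_shift = make_data(500, shift=3.0)   # very different test data

model = LogisticRegression().fit(X_train, y_train)

print("in-distribution accuracy:",
      accuracy_score(y_test_same, model.predict(X_test_same)))
print("shifted-data accuracy:   ",
      accuracy_score(y_test_shift, model.predict(X_test_shift)))

On the similar test set the accuracy is high; on the shifted set it should collapse toward chance, which is the toy analog of training on Arabic numerals and testing on Roman ones.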

Our brains were trained over thousands of years of living in the wild, where survival was key. A significant fraction of that model has been seared into our hardware.

Modern times look nothing like the training data for which our brains were optimized (saber-tooth tigers, food scarcity).

Most cognitive or behavioral biases originate in this mismatch between our training dataset and test dataset.

Tuesday, November 27, 2018

Links: Math Edition

1. How paradoxes shape Mathematics (article links to presentation)

2. Paul Romer on "Jupyter, Mathematica, and the Future of the Research Paper"

3. Wilson's Matrix (Cleve Moler)

4. Is a kB 1000 bytes or 1024 bytes? (John D. Cook)

Wednesday, November 21, 2018

Interesting Quotes

"Luck is probability taken personally.” - Chip Denman (via Penn Jillette and Annie Duke)

"Commenting your code is like cleaning your bathroom - you never want to do it, but it really does create a more pleasant experience for you and your guests." - Ryan Campbell

"Almost everything that you succeed at looks easy in retrospect." - Luis Sordo Vieira (via @strogatz)

"When one teaches, two learn." - Robert Heinlein

"Nice library. Is one of these a trick book?"
"How so?"
"Like you pull it off the shelf and a hidden door opens."
"Oh. Yeah, all of them." - @ASmallFiction

"Read what you love until you love to read." - @naval