Friday, November 30, 2018

Machine Learning and Behavioral Biases

Machine learning, using neural nets for example, helps us tease out hidden nonlinear correlations or patterns. A standard application is recognizing handwritten digits.

You train the model on a particular dataset and test it on previously unseen data. If the training and test datasets are "similar", the predictions of the learned model will be good.

If the test data look nothing like the training data, the ML model will fail (e.g., train on Arabic numerals and test on Roman numerals).
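A minimal sketch of this idea, assuming scikit-learn is available: train a small neural net on handwritten digit images, then evaluate it once on held-out data drawn from the same distribution and once on a shifted distribution (here, inverted pixel intensities stand in for "Roman numerals"). Accuracy holds up in the first case and collapses in the second.

```python
# Sketch: train/test mismatch with a small neural net (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale digit images, pixel values 0..16
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Test data that resembles the training data: accuracy is high.
print("in-distribution accuracy:", model.score(X_test, y_test))

# Shift the test distribution by inverting pixel intensities: accuracy drops
# sharply, even though a human would still read these as the same digits.
X_shifted = 16.0 - X_test
print("shifted accuracy:", model.score(X_shifted, y_test))
```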

Our brains were trained over thousands of years of living in the wild, where survival was key. A significant fraction of that model has been seared into our hardware.

Modern times look nothing like the training data for which our brains were optimized (saber-tooth tigers, food scarcity).

Most cognitive or behavioral biases originate in this mismatch between our training dataset and test dataset.
