Let me relate a story to set up my thinking for this post:
During my PhD at Michigan, I worked on approximate models for polymer dynamics called slip-link (SL) models. These SL models were more accurate than the standard theoretical model. As expected, they also demanded greater computational resources: simulating my model numerically took a few hours, versus a few seconds for the standard theory.
There are applications where this trade-off, giving up speed for accuracy, is worthwhile.
Of course, there are other, richer models that are more accurate, and even more computationally expensive, than mine.
One of the people on my committee asked me: "If computational speed were infinite, would anyone care about your model?" I don't remember exactly how I responded; my guess: some mumbled gibberish.
But this is indeed a profound question, one that touches upon what I said recently about seeking too much accuracy in models. If I can numerically compute the most accurate model available for something, should I waste my time with alternatives?
Let's set aside the hypothetical "if computational speed were infinite" part of the question, and work under the premise that such computers are indeed available.
Should we then simply use ab initio quantum mechanics, or perhaps the standard model of physics (whatever that is!), to study everything?
But seriously, if you want to study the migration of birds, the mechanical properties of a spaceship, or the ups and downs of a business cycle, would you really want to study them in terms of quarks?
As Douglas Adams pointed out in The Hitchhiker's Guide to the Galaxy, our computer model may give us the "Answer to the Ultimate Question of Life, the Universe, and Everything", and yet we may be unable to comprehend it.
To misquote Richard Hamming, "the purpose of modeling is usually insight, not numbers."