The Seductiveness of Models.
When it comes to Global Warming (GW), great emphasis is put on what the computer models predict will happen, so let’s get some idea of what exactly a model is. We’ll do a thought experiment. This is a complex area, but no maths, I promise, and I’ll pick a really, really simple problem.
Imagine we’re going to build a model of the game of Snooker (or Pool) and we’ll keep it even simpler by only considering the cue ball which is hit towards one object ball. A two ball problem. We want to be able to predict where both balls will come to rest after the impact. The physics of the situation are well understood. If the cue ball hits the object ball in the centre then the object ball will roll straight on. If the cue ball hits the object ball to the left of centre, then the object ball will deflect to the right and the cue ball to the left. If the cue ball hits the object ball to the right of centre, then the reverse happens. When a ball hits the side of the table, the angle it bounces off with will be equal to the angle it came in with.
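The frictionless core of that physics fits in a few lines. Here is a minimal Python sketch (my own illustration, not part of any real model): two equal-mass balls split the cue ball’s velocity between the line of centres and the perpendicular to it, and a cushion simply reflects the velocity so the angle out equals the angle in.

```python
import math

def cushion_bounce(velocity, normal):
    """Reflect a velocity off a cushion: angle out equals angle in.
    `velocity` and `normal` are (x, y) tuples; `normal` is a unit
    vector pointing away from the cushion into the table."""
    vx, vy = velocity
    nx, ny = normal
    dot = vx * nx + vy * ny
    return (vx - 2 * dot * nx, vy - 2 * dot * ny)

def collide(cue_vel, cue_pos, obj_pos):
    """Frictionless collision of two equal-mass balls: the object ball
    takes the component of the cue ball's velocity along the line of
    centres; the cue ball keeps the perpendicular remainder."""
    dx, dy = obj_pos[0] - cue_pos[0], obj_pos[1] - cue_pos[1]
    dist = math.hypot(dx, dy)
    nx, ny = dx / dist, dy / dist            # unit line of centres
    t = cue_vel[0] * nx + cue_vel[1] * ny    # speed along that line
    obj_vel = (t * nx, t * ny)
    cue_after = (cue_vel[0] - obj_vel[0], cue_vel[1] - obj_vel[1])
    return obj_vel, cue_after
```

A dead-centre hit, `collide((1.0, 0.0), (0.0, 0.0), (1.0, 0.0))`, sends the object ball straight on and stops the cue ball dead. Move the contact point off centre and both outgoing angles change, which is exactly where a model’s sensitivity to tiny input errors comes from.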
Some basic Applied Mathematics will give us the energy loss and deflection angles due to friction and the collisions. We specify the problem in minute detail for a software developer, who then builds the model as a program to be run on a computer. At the start of each run of the model, we input the positions of the balls on the table and the direction and power with which the cue ball is hit. The model predicts plausible-looking resting positions for the balls, but in order to validate the model, we build a mechanical cueing device.
We can set this to hit the cue ball with an exact amount of force and in an exact direction. The test plan is to run the model and the cueing device against each other over repeated runs, with the positions of the balls at the end of each run used as the starting positions for the next. At the end of the first run, the model’s predicted resting positions differ from where the balls actually came to rest by 5%. On the second run, the error has grown to 8%. Long before we get to run ten, the model’s predictions are hopelessly inaccurate.
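A decent first run collapsing by run ten is what you would expect when each run inherits the previous run’s error and then adds its own. A toy calculation (purely illustrative numbers: I assume a fixed 5% of fresh error per run) shows how it compounds:

```python
per_run_error = 0.05    # assumed: each run contributes ~5% error of its own
total_error = 0.0
for run in range(1, 11):
    # each run starts from positions already off by total_error,
    # then adds its own error on top of that inherited deviation
    total_error = total_error + per_run_error * (1 + total_error)
    print(f"run {run:2d}: cumulative error = {total_error:.1%}")
```

That recurrence is geometric growth, 1.05 to the power n, minus one, and it passes 60% by run ten. In a genuinely chaotic system the growth is typically exponential and far faster than this toy suggests.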
After some head scratching, we determine that we’ve not added to the model any information about the effect of the cue tip on energy absorption. After some research on cue tips, we amend the model and rerun the experiment. The first run error has dropped to 3% but by run ten, the predictions are still hopelessly inaccurate.
The programmer suggests that doing the calculations to 10 decimal places rather than the current 5 will improve accuracy. The software change is made and the error on the first run drops to 2.5% but by run ten, it’s hopeless. They increase the precision to 20 decimal places and the first run error drops to 2.4% but the run ten predictions are still useless.
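That diminishing return from extra decimal places is characteristic of sensitive systems, and it is easy to demonstrate. The sketch below is my own illustration, using the chaotic logistic map as a stand-in for the snooker dynamics: it iterates the same formula once in ordinary double precision and once at 50 significant digits, and the two trajectories agree early on, then part company completely.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50         # 50 significant digits vs ~16 for a float

f = 0.2                        # ordinary double-precision trajectory
d = Decimal("0.2")             # high-precision trajectory, same start
diffs = []
for n in range(1, 71):
    f = 4 * f * (1 - f)        # logistic map: chaotic at r = 4
    d = 4 * d * (1 - d)
    diffs.append(abs(f - float(d)))
    if n % 10 == 0:
        print(f"step {n:2d}: |difference| = {diffs[-1]:.3e}")
```

Extra precision only delays the divergence; it never prevents it, because any error at all, including the rounding in the very last digit, roughly doubles at every step.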
Rather than believe the manufacturer’s numbers, they weigh the balls and find the weights are not 100% precisely as stated. They amend the model and rerun the sequence with only a marginal improvement in accuracy.
They know there’s something wrong with the model and in an effort to find out what that may be, they consult with someone who plays Snooker. He suggests their cueing machine may not be hitting the cue ball in exactly the centre, introducing ‘side’ or ‘spin’ which would throw off their predicted collision angles. They add a laser sighting device to the cueing machine and rerun. First run error drops to 2.1% but the results at run ten are still disastrous.
They add laser distance gauges to the table to determine within microns the exact positions of the balls and to ensure the table is completely flat. Again, a marginal first run improvement but the usual run ten disaster.
The suggestion is made that the coefficient of friction of the felt on the table may not be as uniform as they assumed. They measure it in a grid across the table and indeed find minute variations. The model is amended to include the varying friction of each patch of the table and the experiment is rerun. First run error drops to 1.8% but it’s the usual disaster by run ten.
The next idea is to examine the coefficient of elasticity of all the sides of the table. Sure enough, there is variation, even if it is tiny. This new information is programmed into the model and the experiment rerun. First run error drops to 1.4% but the usual result by run ten.
The suggestion is made that slight air currents, or tiny variations in air density during the day, may be having an effect. The table and cueing machine are moved to a climate-controlled chamber and the experiment rerun. Minor improvements result.
The wooden cue is replaced with an iron rod, which will not flex as the cue does, and the model is amended to take account of that. Some improvement ensues: first run error drops to 1.34% but by run ten, etc., etc.
And so it goes; on and on and on and on …
I have to confess at this point that though I said I’d pick a simple problem, and I did, I also know that when it comes to modelling reality, there’s no such thing as a simple problem. We understand the physics and mathematics of this problem and can come up with a first run prediction that is well within some reasonable range of error. What we cannot do is feed the result of one run into subsequent ones, because all that happens is that the error margin is soon exceeded. Any error at all, however small, will quickly magnify over subsequent runs.
This is exactly what climate simulators do and why they need such powerful computers; it’s not that the calculations are complex, it’s the simple fact that the simulation is run hundreds or thousands of times. The output from any one run may look plausible as the input for the next run, but the overall result is guaranteed to be completely wrong.
When it comes to modelling climate, we simply don’t know enough about the factors we’re including. We still don’t understand clouds, and as for turbulence, come up with the maths to handle that adequately and you’ll be on the receiving end of a Fields Medal and a Nobel Prize; Newton and Einstein will turn green with envy. We also have to consider that there may be major ‘unknown unknowns’ we’re simply not aware of. Until a few short years ago, the atmospheric phenomena whimsically called Sprites, Jets and Elves were undiscovered. What else is out there?
The people working on the Snooker problem also illustrate the biggest drawback of climate modelling: it’s not possible to validate any of the models, because there’s no way to build the equivalent of the cueing machine. We’ve simply got to believe that what they predict will happen in a few decades or hundreds of years is accurate; believing them is essentially an act of faith. The models don’t even agree with each other, sometimes by amounts greater than the temperature rise they’re predicting.
Hands up anyone who thinks further development of the snooker model would have gone on if they’d not built the cueing machine which showed them how far from reality the model’s predictions were?
Weather forecasts for tomorrow are fairly accurate, for the day after that less so, and for the third day you’re looking at a little better than evens. Does that behaviour sound familiar? A whole branch of mathematics, rather fittingly called Chaos Theory, grew out of exactly that sort of ‘spiralling unpredictably out of control’ behaviour, first seen in one of the earliest attempts to simulate the weather on a computer.
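One of the earliest such attempts was Edward Lorenz’s, and his famous three equations are simple enough to reproduce the effect in a few lines. The sketch below is my own illustration (the crude Euler integration and the step size are my assumptions, not Lorenz’s method): it runs two copies of the system whose starting points differ in the eighth decimal place and tracks how far apart they drift.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz system (his classic constants)."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.00000001)     # differs only in the 8th decimal place
max_sep = 0.0
for step in range(3000):       # 30 time units at dt = 0.01
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    sep = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
           + (a[2] - b[2]) ** 2) ** 0.5
    max_sep = max(max_sep, sep)
# long before the loop ends, the two runs bear no resemblance to each other
```

Tighten the starting difference to the fifteenth decimal place and the same thing happens, just a little later; that is the hallmark of the behaviour Chaos Theory describes.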
The essential seductiveness of models is that the feeling grows in those working on them that, with enough refinements added, a model will become 100% accurate. This can never happen.
The person who really knows these things now is the programmer, whom I shall call Harry, for reasons that are not particularly enigmatic. One evening, working late yet again amending the model, he decided to take a break and visit the bane of his life but also the source of most of his work: the cueing machine. He decided to run the experiment for himself.
He ran the whole thing and watched the usual disparity grow between the predicted and actual positions of the balls and then he had an idea. Forget the model; just run the cueing machine again but using the exact same starting positions for the balls. At the end of the first shot, the balls came to rest but not in precisely the same positions they had the first time. By the tenth shot, they were nowhere near where they had been the first time. Just to be sure, he reran the whole thing again with the same exact start positions. By run ten, the balls were in totally new positions.
He sat and thought for a while about what he’d just seen. Eventually he tidied up the equipment and left to go home early for a change. Harry never visited the cueing machine again.