Recently I’ve been working on some problems in disease modeling for influenza, and one of the problems is to calculate the basic reproduction number for a model which includes differential disease strengths in poor and rich risk groups. Calculating this number is generally done with the “Next Generation Matrix” method: one calculates two matrices of partial derivatives, inverts one and multiplies it by the other, then calculates the eigenvalues – the basic reproduction number is the largest eigenvalue of the resulting matrix. Doing this for just one risk group in the model I’m fiddling with can be done analytically in about 7 pages of notes – it involves finding the inverse of a 5×5 matrix, but this is actually quite quick to do by hand because most of the matrices involved are wide open spaces of zeros. However, once one extends the model to four risk groups the calculation becomes much nastier – it involves inverting a 20×20 matrix, then finding the eigenvalues of a product of 20×20 matrices. Even recognizing that most of the entries in these matrices are zeros, one still ends up with a fiendish hand calculation. On top of this, the non-zero entries themselves contain many separate parameters all multiplied together. I started this by hand and decided today that I want to take a shortcut – a student needs to use some basic values from this soon, and neither of us is going to get it done analytically before our deadline.
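Numerically, that recipe is only a few lines of code. Here’s a minimal sketch in R (the real thing, described below, I did in Matlab), with two tiny placeholder matrices standing in for the actual matrices of partial derivatives – the new-infection terms and the transition terms:

```r
## Minimal sketch of the next generation matrix recipe, with small
## placeholder matrices (the real ones are 20x20 in the four-risk-group case).
## Fmat holds the new-infection terms, Vmat the transition terms.
Fmat <- matrix(c(1.5, 0.5,
                 0.2, 1.0), nrow = 2, byrow = TRUE)
Vmat <- matrix(c(2.0, 0.0,
                 0.5, 1.0), nrow = 2, byrow = TRUE)

K  <- Fmat %*% solve(Vmat)        # the next generation matrix, F V^-1
R0 <- max(Mod(eigen(K)$values))   # basic reproduction number = largest eigenvalue
R0
```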

So tonight I came home and, after a nice dinner and an hour spent with my partner, I spent about an hour programming Matlab to do the calculation numerically for me. I now have the two values that my student needs, and if she needs to tweak her model it’s just a few presses of a button on my computer to get the updated reproduction number. Also, it’s a matter of a second’s work to test any other parameter in the model, and with a few loops I can produce charts of relationships between the reproduction number and any parameter. It’s nice and it was fairly trivial to program in Matlab. In this instance Matlab saved me a couple of days’ work fiddling around with some enormously tricky (though not mathematically challenging) hand calculations.
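And the charts really are just a loop wrapped around that recipe. A rough sketch of the idea (again in R; build_FV() here is a hypothetical stand-in for whatever code assembles the model’s two matrices from a given parameter value):

```r
## Hypothetical parameter sweep: plot R0 against one model parameter.
## build_FV() is a made-up placeholder for the function that assembles
## the model's new-infection and transition matrices for a parameter value.
betas <- seq(0.1, 1.0, by = 0.05)
R0s <- sapply(betas, function(beta) {
  m <- build_FV(beta)                          # returns list(F = ..., V = ...)
  max(Mod(eigen(m$F %*% solve(m$V))$values))   # spectral radius of F V^-1
})
plot(betas, R0s, type = "l",
     xlab = "transmission parameter", ylab = "basic reproduction number")
```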

On this blog a short while back I investigated a weird probability distribution I had encountered at Grognardia. For that calculation, rather than going through the (eye-bleedingly horrible) tedium of attempting to derive a mathematical expression for the probability distributions I wanted to analyze, I simply ran a simulation in R with so many runs (about 100,000) that random error was essentially stripped out and I got, to all practical purposes, the exact shape of the underlying theoretical distribution I wanted.

In both cases, it’s pretty clear that I’m using a computer to do my thinking for me.

This is very different to using a computer to run an experiment based on the theory one developed painstakingly by hand. Rather, I’m using the brute number-crunching power of modern machines to simply get the theoretical result I’m looking for without doing the thinking. That Grognardia problem involved a badly programmed loop that generated a total of 4,500,000 dice rolls just to produce one chart. On a computer with 32GB of RAM and 12 cores it took about 3 seconds – and I didn’t even have to program efficiently (I did it in R without using the vector nature of R, just straight looping like a 12-year-old). The resulting charts are so close to the analytical probability distribution that it makes no difference whatsoever that they’re empirical – that hour of programming and those 3 seconds of processor time short-circuited days and days of painstaking theoretical work to find the form of the probability distributions.
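For anyone curious what that brute force looks like, here’s the shape of it – not the actual mechanic from the Grognardia post, just “roll three d6 and keep the highest” as a stand-in, done with the same unashamedly naive looping:

```r
## Stand-in example only: "roll three d6, keep the highest" in place of the
## actual mechanic from the Grognardia post. Same brute-force strategy:
## enough runs that the empirical frequencies sit right on top of the theory.
set.seed(1)
runs    <- 100000
results <- numeric(runs)
for (i in 1:runs) {                    # deliberately naive, un-vectorised loop
  results[i] <- max(sample(1:6, 3, replace = TRUE))
}
barplot(table(results) / runs,
        xlab = "highest of three d6", ylab = "empirical probability")
```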

Obviously if I want to publish any of these things I still need to do the hard work eventually, so on balance I think that these numerical shortcuts are a good thing – they help me to work out the feasibility of a hard task, get values to use in empirical work while I continue with the analytic problems, and give me a way to check my work. But on the flip side – and much as I hate to sound like a maths grognard or something – I do sometimes wonder if the sheer power of computers has got to the point where they genuinely do offer a brutal, empirical shortcut to actual mathematical thinking. Why seek an elegant mathematical solution to a problem when you can just spend 10 minutes on a computer and get all the dynamics of your solution without having to worry about the hard stuff? For people like me, with a good enough education in maths and physics to know what we need to do, but not enough concerted experience in the hard yards to be able to do the complex nitty-gritty of the work, this may be a godsend. But from the broader perspective of the discipline, will it lead to an overall, population-wide loss of the analytical skills that make maths and physics so powerful? And if so, in the future will we see students at universities losing their deep insight into the discipline as the power of the computer gives them ways to shortcut the hard task of learning and applying the theory?

Maybe those 12 cores, 32GB of RAM, 27-inch screen and 1GB graphics card are a mixed blessing …