The Guardian reported recently that a researcher in the UK has developed a climate model for Middle Earth. Apparently he ran it for six days on a supercomputer at Bristol University, and has been able to identify different climate zones in Middle Earth. We now know that the Shire had a climate like Dunedin in New Zealand, and Mordor was like LA under smog. The paper contains some nice images of Middle Earth climate, and you can see that the researcher treated it as a serious modeling task: he divided the map into a grid and ran a proper climate model.

For me this raises an immediate and obvious question: will industrialization in Mordor and Isengard lead to global warming, and if so, how long will it be before all of Angmar is unfrozen? And should the peoples of northern Middle Earth be more worried about rising sea levels, or about the undead beasts of Angmar being freed from the ice and descending like a scourge on Eriador?

Perhaps this is the real allegorical message of Lord of the Rings: that global warming will unleash Orcs. I wonder if Dr. Lunt included Orcs and Witch-kings in his climate model?

Or maybe Mordor is where the (illusory) “missing heat” is hiding … seems as good an explanation as any for such a fantasy …

And yet it still appeals to some …

Recently Myles Allen wrote a piece for the Guardian suggesting we should use direct action to mandate that fossil fuel companies deliver carbon capture technology, and appears to juxtapose this with carbon taxes. A few global warming blogs I read took issue with the piece. I’m suspicious about the feasibility of carbon capture technology, so the idea of forcing fossil fuel extractors to implement it seems far off to me; but I believe we need to get serious about carbon, so in principle the idea appeals to me, along with a whole bunch of other emergency measures. Rational economics suggests that Allen’s policy is at best going to be no different to a carbon tax that applies an equivalent cost to carbon production, and probably less efficient, but I suspect that there is something going on here that lies outside of economic theory, and I think it can be understood by reference to a couple of public health principles and some successful public health campaigns. Basically, over the next 30 years we need to go carbon neutral, that is, to a society that emits no net carbon. If we delay, we may have to go carbon negative. Some economists think we can do this simply by taxing carbon. I want to use the Framework Convention on Tobacco Control (FCTC) to show that’s probably impossible without broader measures; and then I will use the example of HIV to show that the debate about mitigation may also be a lost cause, or at least that we shouldn’t be too confident that humanity can solve a serious problem through mitigation measures alone.

Comparing Decarbonization with the FCTC

Let’s not beat around the bush: the purpose of the tobacco control movement is to eliminate the consumption of tobacco from the face of the earth. One day there will be no smokers, because smoking is a poison. But the goal of the FCTC is to achieve a non-smoking world through the free choice of individuals – through health promotion and intervention measures aimed at reducing smoking. The FCTC proposes a variety of methods to achieve this goal, but only one of them is taxation. Taxation has been a core tool deployed against tobacco, and with devastating effect, but it has not eliminated tobacco smoking. Taxes on tobacco in Australia, for example, have increased essentially exponentially since 1985, but they haven’t achieved their goal: around 16% of Australians still smoke, and Australia (as the picture above shows) is one of the most aggressive anti-smoking nations on the planet.

And this is the thing that is relevant to decarbonization: 16% of Australians still smoke, despite huge legislative efforts to convince them to stop. Not just punitive taxation, but a whole raft of other measures: plain packaging, banning smoking in public areas, very strict measures against underage smoking, bans on advertising, forcing cigarettes to be hidden from shop counters, widespread distribution of subsidized treatments for tobacco addiction, huge investment in educating general practitioners about smoking cessation, investing overseas aid money in developing alternatives to tobacco crops, and increased funding for police action against illicit tobacco trading. With regard to children, a whole range of laws have been passed to prevent them from getting access to tobacco. Companies and public organizations – especially hospitals – have gone further, adopting rules that prevent teachers, doctors and nurses from smoking within sight of their facilities. The WHO will not employ smokers. Some states and countries have suggested a gradually increasing age-related ban, so that everyone coming of age in the west would be permanently banned from smoking – a kind of generational form of prohibition.

Yet despite this campaign, 16% of Australians still smoke. What would the equivalent measures be in a “voluntary” decarbonization scenario? Finding that massive carbon taxes had failed to prevent the use of carbon-based energy, governments would have to ban certain uses of coal or oil; force all petrol companies to use the same non-branded advertising; require all public organizations to use non-fossil fuel energy and push big private companies to do the same; pass incredibly strict air quality laws; invest aid money heavily in non-fossil fuel energy products; introduce any other public measures against carbon that could be effectively policed; and heavily subsidize all alternative energy sources.

Without these interventions, smoking rates would not have dropped to 16% – and smoking is an addictive substance. If exponentially increasing taxes cannot prevent smoking, why do carbon tax advocates think they will work to reduce carbon emissions to the required level: zero?

The lessons of HIV and AGW mitigation strategies

In the early years of the HIV epidemic, before treatments became available, the only prevention was behavioural change: wearing a condom, and always using a clean needle. In a few settings, promotion of condom use worked, but in sub-Saharan Africa HIV became a generalized epidemic before people even knew what it was, and by the time the preventive measures were understood it was widespread and devastating. In this context, mitigation through behavioural change became a completely ineffectual tactic. From the early 2000s under PEPFAR, the President’s Emergency Plan for AIDS Relief, testing and treatment strategies – essentially, adaptation strategies – became widespread in sub-Saharan Africa. This wasn’t due to any progressive plan of George Bush’s, but through dumb luck they were successful, because treatment reduces the infectiousness of treated people by about 95%. In the long term, in the face of a complete failure to effectively disseminate behavioural change in Africa, testing and treatment made huge gains in combating HIV, and now there is a lot of confidence that, if well managed and supported by international donors, these strategies may be sufficient to eliminate HIV. Those of us (like me) who grew up in the era of HIV in the west, where HIV never became a generalized epidemic and gay men responded well to condom use initiatives, were initially unwilling to believe the success of test and treat strategies – we falsely believed that our mitigation strategies would work in all settings, but we were wrong. As the evidence came in, I changed my mind, and now recognize that behavioural change for HIV (mitigation) is a tactic that works in unique settings (primarily, injecting drug users, politically connected gay men and unionized sex workers). In a generalized epidemic, such strategies fail.

Of course, global warming is a classic generalized epidemic. Mitigation won’t work by itself, but at the moment we have no alternatives – just like HIV in the 1980s. We need to do whatever is necessary to prevent further spread of the disease, but as soon as someone finds an effective treatment (carbon capture and storage) we need to switch.

Public health lessons for decarbonization

If we can’t eliminate smoking through exponentially increasing taxes, why do we think we will do better with carbon? No one really cares if people choose to smoke: it’s a personal choice, and a non-zero smoking rate is no big deal. But we need to be carbon neutral within 30 years. We couldn’t achieve the equivalent through taxation alone for smoking, so why does anyone think we can for decarbonization? Such a goal is going to require measures well beyond the FCTC, and those measures are pretty harsh. We also need to accept the possibility that mitigation measures aren’t going to work. In health, no one assumes that prevention is the only option; we also look for a cure. The same attitude needs to be applied to carbon. We need a range of strict legislative responses, and we need major investment in projects to find cures. And we need to treat this situation with the same urgency we applied to the HIV epidemic – or more. Carbon taxes alone will not be enough. We need a full range of legal interventions, now.

In case anyone hasn’t noticed, the Philippines was just hit by a monster storm that killed more than 1000 people. This is likely to be the third year in a row that the Philippines has experienced a new record-setting disaster, and it is also probably the fourth biggest storm on record anywhere in the world. Of course, others have noted that certain infamous denialists are trying to pretend that this is just a normal storm, but only idiots would believe such crap. The world has changed. In this same year Australia has had record bushfires occurring earlier than ever before; Japan has suffered at least two moretsu (extremely violent) typhoons, one of which was generally described as “never previously recorded”; Japan’s summer was excessively intense; Japan’s cherry blossom viewing season was delayed by heat; and the Southern hemisphere had the hottest year on record. Britain also had its second strongest storm in 100 years, and Somalia experienced its worst ever cyclone at the same time as Typhoon Haiyan. Natural disasters from storms, flood and fire are coming thick and fast, and every year sees a new record in at least one and often more than one of these dimensions. It’s time to recognize that we aren’t in Kansas anymore.

Scientists, of course, want to proceed steadily without jumping ahead of the evidence. For example, the current thinking in science is that the arctic won’t be ice-free for a long time (probably not till 2050, I think), because that is what the theory and models tell us; but the evidence is pointing at 2020 at the latest, and the consequences of extreme arctic melt (such as occurred in 2012) for North Atlantic countries are serious. This year Britain had to import wheat for the first time (Kellogg’s stopped producing Shredded Wheat!) because of rain-related crop failures. Hurricane Sandy’s extreme damage was directly related to the arctic ice melt – everyone knows it, but science isn’t able to prove it, so we have to just pretend that yet another extreme weather event was just random variation. Yet nothing about what happened in Sandy or Haiyan matches our understanding of normality – I am quite familiar with tsunami damage, and the pictures I am seeing on TV of the wake of Haiyan look to me exactly like the northeast coast of Japan after the 2011 tsunami. No typhoon has done that in the last 30 years, and our instincts tell us as much. We need to recognize this: the climate has jumped the shark, and science isn’t keeping up.

On the other side of the coin, economists and political scientists are used to the measured rhetoric of equilibrium, and they don’t have a language or a culture that is able to accept what is happening, because what is happening is disequilibrium. Economists are still labouring under the impression that the changes that are coming – and the changes that are happening now – can be expressed in percentages of GDP and the cold calculus that applies to growth in ordinary times. They can’t. Today 8 people died in a riot at a rice factory, because the destruction in the central Philippines is so complete that millions of people are going without food, and desperation is their watchword. The calculus of mainstream economics is not geared up for looting, for the destruction of cities, for life on the edge. And that is where people are increasingly being driven. Economics hasn’t come to terms with the concept of ecosystem services – it’s too far outside the selfish, consumerist culture of economics to make sense – but this is where we’re at. Our ecosystem has turned against us. Which means we’re fucked. Does George Mason University’s economics faculty have a department of We’re Fucked? No, which is why they’re still churning out plagiarized shit about how climate change is all wrong and stuff. Economists still think this is a problem that can be dealt with using the numerical analysis of small changes: Nicholas Stern on the one hand with his arcane trade-offs and debates about discount rates, and the Lomborgs of the world on the other hand with their ideas about balancing the future costs of adaptation with the current costs of mitigation, and angels dancing on the heads of pins that are buried in the debris of Leyte Island.

No, we’ve entered a new era: the Anthropocene. The era of We’re Fucked. We need to develop a new politics, a politics of Getting Unfucked, and we need it now, not 10 years from now when the baby boomers have finally chuffed off to the next plane and stopped complaining about ineffectual carbon taxes. We need to get desperate, and we need to do it now.

This is going to mean some radical changes. For starters, and most importantly, every developed nation needs to ban coal. Set a deadline: five years from now, anyone who owns a coal-fired power station is done for. Get rid of them. And while we’re at it the main providers of coal need to stop. Australia needs to declare: we ain’t selling no more, 2018 is it. Sorry kids, but your dope dealer is planning to retire. Canada needs to do the same. And this decision shouldn’t be enforced with pathetic halfway measures like taxes. We need to ban that shit, before the planet decides to ban us. What’s going on in Germany – closing nuclear plants and falling back on coal and gas – is absolutely criminal. Let’s not beat around the bush about this. Anyone in Germany who supports this kind of ecocide should get on a plane right now, fuck off to Tacloban, get on their knees in the salty dirt and say “I’m sorry, but your family died because I’m stupid.” There is nowhere on this earth where coal is a good idea, but a country with power and choices like Germany is absolutely behaving like an international criminal in choosing to go back to this poison. Anyone who supports such a move should be ashamed of themselves. Ten years from now people with such views will be locked up, mark my words.

We also need to give up on the idea that solar and wind are our short-term saviours. Long-term, yes, they are the siznich. But right now, we have a grid that is designed for baseload generators in centralized locations, and we need to recognize that. So we need to go nuclear. It’s the simple, clean, safe alternative to coal. Every country with a major energy economy needs to shift to a World War 2-style war economy of energy, and replace its existing plants with nuclear. Don’t fuck around with new technologies, because we’re heading into a disaster zone. We have perfectly good nuclear plant designs now, so let’s get them up and running. With robust oversight and good monitoring agencies they’ll be fine. Sure, there’ll be accidents, but the reality is that nuclear power is not that dangerous. It kills a crap-ton fewer people than coal and it’s easy to live in areas with nuclear fallout. It’s not so easy to live in areas that are too hot to grow food, too stormy to build, or too flooded to stay. And – sorry, country folks – if you build nuclear plants in the country, the accidents really don’t affect many people.

Some people say that nuclear is too expensive, that it needs heavy subsidies, but who cares? Home owners in Australia get $35 billion a year in state subsidies, and no one would dare interfere in such a sacrosanct subsidy. Why not give another $35 billion to an industry that might save us from destruction? Why quibble? And if you’re going to quibble about the cost of nuclear, then fuck, let’s make this clear: remove all state subsidies to all industries, and let them fight each other to the death. Don’t want to do that? Then stop pretending the electricity market is free of distortions, stop pretending it’s somehow above politics, and above all stop pretending it’s not going to destroy us all if we don’t interfere.

Since the Kyoto protocol was first signed in nineteen fucking whatever, people – well, economists anyway – have been trying to pretend that we can solve the global warming problem through market mechanisms. Well here we are 20 years later, and fate’s duck is crapping on our eiderdown. We don’t have a functioning market mechanism that will prevent diddly squat, and we have ascended beyond diddly squat to epic storms that wipe out cities, fires that threaten whole communities, homicidal heat and wholesale changes to the way we live. It’s time to recognize that the market has had its chance, and every oily fucker, grafter and spiv who had any chance to get in the way has spoilt the opportunity. So let’s drop the pretense and get serious. We need to move to legislative and political solutions to the most serious environmental problem the world has ever faced. Scientists and economists need to take a back seat to eco-fascists and hard-arsed decision makers. Ban coal, bring on the nukes, and let’s fix this problem the old-fashioned way – through the cold, hard application of power.

Earlier this year I posted a prediction of the minimum arctic sea ice extent, in which I used a simple regression model to predict the average September extent. My final conclusion:

My final estimate for sea ice extent in September 2013 is 4.69 million square kilometres (95% CI: 4.06 – 5.32 million square kilometres).

On October 3rd the National Snow and Ice Data Center (NSIDC) released their estimate of the September extent, which was 5.35 million square kilometres. The linked blog post gives a nice description of the main reasons why the extent recovered, and some of the competing influences on the extent this year. It also explains some of the Antarctic extent’s record gains.

The final extent for September was just 0.03 million square kilometres outside of my 95% confidence interval, or 0.05 million square kilometres outside the interval I submitted to the SEARCH September Sea Ice Outlook competition. This means that I only just missed including the observed value in my confidence interval. My prediction in July was 4th closest to the true value, beaten by NOAA, NSIDC themselves, and Barthelemy et al. It was the second closest estimate based on statistical estimation, and was beaten by two model-based estimates. My estimate was also the closest amongst all those that didn’t include the true value in their 95% confidence intervals. The lowest estimate was from Neven’s Arctic Sea Ice Blog, which just goes to show that crowd-sourced estimates aren’t necessarily the best. Watts Up With That were very close to me in June and August (they didn’t submit in July) at 4.8 million km2, but they didn’t give a confidence interval. I think this is the first time in the annual SIO competition that WUWT have come close to the mark, which just goes to show that eternal optimism has its value.

I’m pretty happy with my prediction. I had originally planned to update it for the August submission by adding July surface temperatures, and I think that would probably have bumped my prediction a little closer to the true value. I also considered doing an ensemble model, using a wide range of different statistical models and averaging the results after incorporating more covariates. I think next year I will try to be more systematic, and submit a prediction for every month using a range of modeling techniques. The key point of my model is that it accurately predicted a very large rebound from the 2012 minimum based on just a few key variables selected without much systematic basis. I think I can do better next year!

Introduction

Every year the Arctic Research Consortium of the US runs a competition to predict the mean arctic sea ice extent in September, and this year I have decided to enter. I have been an avid reader of Neven’s Arctic Sea Ice Blog for the last year, and they host predictions there too. The general idea is that in June, July and August a deadline is set for submissions of predictions of the mean sea ice extent in September, using any methods available. One can submit as an individual or as a team, professionally or personally. I thought I would put my modeling skills to the test, and see what I can do.

As background, arctic sea ice melts from May(ish) to September every year, reaching a minimum sometime in September, before the sun loses its strength and the whole area freezes up again. Over the past 20 years the melt has been strengthening, and recently extent and area have been in freefall. Records were broken in 2007 and then, spectacularly, again in 2012. Activity on Neven’s blog was frantic that year as the sea ice watchers tried to understand the enormity of the drop, and this year you can see again a whole bunch of very professional arctic observers watching the minutiae of the melting process. It’s fascinating because many of them are real experts in their field, and you can watch the joy of scientists learning new things about the world in real time, and see the enormous creativity they put into understanding the processes they are observing.

More seriously, arctic sea ice melt is expected to have significant effects on northern hemisphere weather, and understanding its accelerating destruction is important to understanding what is going to happen to northern hemisphere weather over the next 10-20 years. So, predictive modeling is not just a fun exercise, but a potentially useful tool to understand where the ice is going.

Method

[This is a relatively tech-free methods section (as stats methods sections go), so you should be able to understand the gist without any prior education in statistics.]

I used data on arctic sea ice extent and area from the National Snow and Ice Data Center (NSIDC), and northern hemisphere land-sea surface temperatures from the Goddard Institute for Space Studies (GISS, commonly called GISTEMP). I used the following variables to predict sea ice extent:

  • Sea ice extent from the previous September
  • May extent, area and northern hemisphere snow cover
  • June extent, area and northern hemisphere snow cover
  • April and May surface temperature

June surface temperature was not available. Snow cover and surface temperatures are expressed as anomalies – the latter from the 1951 – 1980 baseline, the former from some baseline I can’t remember. I also used year in the model, since it’s reasonable to assume a trend over time.

I put all these variables into a Prais-Winsten regression model in Stata/MP 12. Prais-Winsten regression fits a single outcome variable to multiple predictors under the assumption that the residuals are auto-correlated at lag 1 – a common assumption made to adjust for the serial dependence inherent in time series. Since my main interest in this task is the point estimate (mean) of sea ice extent, I could have used a simple linear regression, but this would have given overly narrow confidence intervals. I could have looked for other modeling methods, but Prais-Winsten is trivially easy in Stata, and I am lazy.
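For readers who want to see roughly what this kind of model looks like in code, here is a minimal sketch in Python rather than the Stata I actually used (statsmodels’ GLSAR class fits a regression with AR(1) errors, which is the same basic assumption Prais-Winsten makes). The file name and column names are hypothetical placeholders, not my actual data files.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per year (1979 onwards) with the predictors
# listed above, in the units described in the text.
df = pd.read_csv("arctic_predictors.csv")  # placeholder file name

predictors = ["lagged_sept_extent", "may_extent", "june_extent",
              "may_area", "june_area", "year",
              "june_snow_anomaly", "april_temp", "may_temp"]

X = sm.add_constant(df[predictors])   # adds the intercept term
y = df["sept_extent"]                 # mean September extent, million km^2

# GLSAR with rho=1 fits a linear regression assuming AR(1) autocorrelated
# residuals, broadly equivalent to Prais-Winsten regression in Stata.
model = sm.GLSAR(y, X, rho=1)
results = model.iterative_fit(maxiter=10)  # alternate between estimating rho and the betas

print(results.summary())
```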

I didn’t use any model-building philosophy, just kept all variables in the model regardless of significance. I could have tried a couple of different models in competition, or done some best-subset or backwards stepwise fitting, but given the amount of data I had (extent, area, snow cover and temperature readings for every month of the year) there was a big risk of over-fitting, so unless I crafted a careful ensemble model-fitting approach I risked producing a model that could explain everything and predict nothing. I have a day job, folks. So I just ran the one model. I may come back to the ensemble issue for the August estimate.

I first ran the model for the period 1979-2012 to check its fit and get parameter estimates. I then ran the model for the period 1979-2007, and obtained predicted values for 2008 – 2012. I did this to see if the model could accurately estimate the 2012 crash having only one prior major crash in the training data set. For the sake of interest, I then ran the model to 2011 and re-checked its predictive powers for 2012. Both are plotted in this report. I then ran the model to 2012 and used it to predict the mean September extent in 2013 with 95% prediction interval.
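Continuing the hypothetical sketch above (reusing df and predictors), the out-of-sample check looks roughly like the following. The interval here is a crude approximation based only on the residual variance, so it will be somewhat narrower than a proper prediction interval.

```python
import numpy as np
import statsmodels.api as sm

cutoff = 2007                        # repeated with cutoff = 2011 and 2012
train = df[df["year"] <= cutoff]
test = df[df["year"] > cutoff]

X_train = sm.add_constant(train[predictors])
X_test = sm.add_constant(test[predictors], has_constant="add")

fit = sm.GLSAR(train["sept_extent"], X_train, rho=1).iterative_fit(maxiter=10)

pred = fit.predict(X_test)
# Rough ~95% interval from the residual variance alone; it ignores parameter
# uncertainty and the AR(1) error structure, so treat it as indicative only.
half_width = 1.96 * np.sqrt(fit.scale)
for year, p in zip(test["year"], pred):
    print(f"{year}: {p:.2f} ({p - half_width:.2f} to {p + half_width:.2f}) million km^2")
```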

Results

The model converged and for the 1979-2012 period had an R-squared of 0.9303, indicating it explained 93% of the variance in the data. That’s quite ridiculous and highly suggestive of over-fitting. Table 1 contains the parameter estimates from this model.

Table 1: Parameter estimates from the full model (1979-2012)

Variable | Coefficient | Standard Error | T | P value | Lower CI | Upper CI
Lagged Minimum Extent | 0.01 | 0.11 | 0.06 | 0.96 | -0.22 | 0.24
May Extent | 0.56 | 0.33 | 1.67 | 0.11 | -0.13 | 1.25
June Extent | -1.04 | 0.41 | -2.51 | 0.02 | -1.89 | -0.18
May Area | -0.91 | 0.46 | -1.98 | 0.06 | -1.85 | 0.04
June Area | 1.85 | 0.32 | 5.75 | 0.00 | 1.18 | 2.51
Year | -0.07 | 0.02 | -2.63 | 0.02 | -0.12 | -0.01
June Snow Anomaly | 0.12 | 0.04 | 2.76 | 0.01 | 0.03 | 0.21
April Temp | -1.05 | 0.37 | -2.85 | 0.01 | -1.81 | -0.29
May Temp | 1.49 | 0.70 | 2.13 | 0.04 | 0.04 | 2.93
Intercept | 136.22 | 51.41 | 2.65 | 0.01 | 29.88 | 242.57

These coefficients can be interpreted as the amount by which the September mean extent varies, in millions of square kilometres, for a unit change in the given variable. So, for example, every degree increase in April temperatures reduces the September extent by just over a million square kilometres, and there is a 70,000 square kilometre decline every year. Note the conflict between area and extent, and the strange protective effect of high temperatures in May. This could be a sign of a model that is ignorant of physics and just fits numbers to get the best fit. We probably shouldn’t try to use these coefficients to understand the physics of sea ice loss!

Figure 1 shows the predictive ability of the model run from 1979-2007. All values within this time frame are “within-sample” predictions, generally with low standard error and expected to be close to the true values. Values from 2008 – 2012 are “out of sample” predictions, with wider confidence intervals and greater risk of departure from the true value.

Figure 1: Predictive fit for 2008-2012 based on 1979-2007 model run

As can be seen, the model predicts the 2007 crash very well, but doesn’t handle the 2012 crash particularly brilliantly. It does predict a new record for 2012 though, guessing at a value of 4.13 million square kilometres, just below its 2007 estimate of 4.25. This is half a million square kilometres off the true value (3.63 million square kilometres).

Figure 2 shows the same fit for the same time periods when the model is run up to 2011. In this case only the year 2012 is an out-of-sample prediction.

Figure 2: Predictive fit for 2012 based on 1979-2011 model run

This predictive fit is very good, estimating a value for 2012 of 3.90 million square kilometres – just 270,000 square kilometres off. Note that the 2012 true value is within the 95% confidence intervals for both predictive fits.

Using the model built for figure 2, I estimated the 2013 mean sea ice extent to be 4.69 million square kilometres, with 95% confidence interval 4.06 – 5.32 million square kilometres.

Conclusion

My final estimate for sea ice extent in September 2013 is 4.69 million square kilometres (95% CI: 4.06 – 5.32 million square kilometres). This is a huge recovery from September 2012, of just over 1 million square kilometres, but still historically a very low value. Given what I have read in the updates at Neven’s sea ice blog I find it hard to believe that this recovery could occur, but I also note that a lot of people have been struck by the slow start to this year’s melt, and think that unless the high summer is very unusual, melting will be slower than last year. I also note that my model has successfully predicted the previous two crashes when run to one year before them, and doesn’t do a bad job of predicting crashes even five years out. I also note that in noisy series, data points don’t tend to continue below the trend for very long, so it’s about time for a correction. However, there is some concern that the persistent cyclone in May really destroyed a lot of ice and has prepared the arctic for a catastrophic summer.

Let’s hope my model is right!

I thought it was blue …

Despite the bleating in the Guardian, I think it is still the case that there is a surprising dearth of global warming-related science fiction. This lack of effort by sci-fi writers is despite the fact that the changes are fast approaching, and most surprisingly one of the changes expected to take longest – arctic ice loss – is happening at an incredible pace before our very eyes, with potentially huge effects. We have already seen major crop losses in the UK due to flooding, and I am convinced that the flooding in the UK is due to arctic sea ice loss (or I will be convinced, I should say, if it is a regular phenomenon in the next few years). So, I’m wondering if the world faces the possibility of a major, generalized agricultural failure in our lifetime, and what that will look like. Let’s have a go at imagining it, but first let’s look at what it might be and how it might happen.

Describing a generalized agricultural failure

Only a small number of countries provide a large amount of food for the majority of the world. Wheat, for example, is primarily produced in China, the USA, the EU, Australia and Canada; rice is clustered in a small number of Asian countries and is highly dependent on monsoonal weather and water supplies. A generalized agricultural failure could easily occur if just a couple of countries experienced a simultaneous loss of productive capacity. In particular, crop failures in the USA, China, the EU and Australia would seriously disrupt the balance of food supply. Furthermore, there are a lot of countries that, due to either economic decisions or environment, are heavily dependent on imports of food. Middle Eastern countries with large areas of non-arable land and African nations that are heavily committed to cash cropping are examples of this. Many of these countries are also low- or middle-income nations with very limited emergency food supplies, which makes them very vulnerable to disruptions in international trade. Finally, some major high-income economies with serious military power – such as Japan and the UK – do not have food security, and are currently heavily dependent on international food markets. Collapses in supply for these countries would make them extremely itchy about guaranteeing overseas trade supplies.

Much of the world’s food is devoted to supplying cattle, and a lot of arable land is currently devoted to biofuels or other “non-essential” supplies (such as sugar cane or oil-producing crops). However, food is not an immediately replaceable good – being dependent on seasonal patterns, it can take a year to switch crops, but societies with poor food reserves can’t go a year while they wait. Also some crops that might be replaced in that year have a huge investment in infrastructure that their owners might not want to reverse in times of national emergency: cork, olives, vineyards and all forms of orchards can take 10 or 15 years to bring to productive capacity, so ploughing them under to grow essential foods means a potentially quite long-term reduction in food diversity. The global agricultural system is not nimble in the way that a manufacturing system might be, and is also often heavily subsidized and protected.

So a general agricultural failure would involve failure of crops in a couple of independent producers for a couple of different food types all in the same year – possibly after a couple of years of build up in which reserves were strained – and in both the northern and southern hemispheres. For maximum effect it would need to occur in some high- and some low- or middle-income countries, disrupting not just the production of food but consumption and export patterns. It would have to affect a couple of exporters to have a truly global impact, and it would have to affect foods that are used for human as well as animal consumption.

How would agricultural failure happen?

In the short- to medium-term, a generalized agricultural collapse is only going to happen if it combines some global-warming-related phenomena with some bad luck. The only global-warming-related phenomenon that seems to be reliably weird at the moment is the arctic, but this is having fairly large effects and they can probably be expected to grow more extreme. They seem to be particularly affecting the food producers in the EU and North America, so a viable near-term scenario for agricultural failure would probably be:

  • serious flooding in autumn in the EU and/or UK: due to arctic sea ice loss increasing rainfall over northern Europe
  • crop failure due to late springs and severe winters in Canada and northern/western Europe: due to weakening jet streams around the poles allowing cold air to flow further south and disrupting the Atlantic climate
  • a massive El Niño causing drought and crop failure in Australia and Latin America: obviously this is completely unrelated to global warming, but the chances of a switch to El Niño over any 5-10 year period are very high, and in a warming world the next El Niño is going to be associated with some very unpleasant high temperatures
  • a random failure of the monsoon or rainy season in east or southeast Asia: also (probably) not global warming related, but, for example, this year Japan’s rainy season – important for its rice crop – is already late and showing no signs of starting

In combination, these effects could lead to a huge loss of wheat, rice and corn crops in several major food producing nations. The likelihood is that the full global implications of the failure would not be understood until after the northern hemisphere harvest, by which time (maybe) the crops for the following season would already be laid down in the southern hemisphere. Even if governments were quick thinking enough to see the risk for the following year and mandate changes in crops, this would mean the southern hemisphere would have wasted a lot of arable land on non-essential plantings. Of course, the chances that governments would respond in time to the crisis to be able to mandate planting of only essential crops are pretty small, and although price signals might encourage some farmers to switch to essential crops, it is likely that this would take more than a year to happen – especially given the highly protected nature of agriculture in most parts of the world. So after the initial food collapse shock it is likely that there would be a second year of weak harvests, even if the weather turned good. Collapses in wheat and corn crops would be followed by a glut of cheap meat as farmers killed off unprofitable herds; the following year would see a spike in meat prices (I think this happened this year, actually).

What would a generalized agricultural collapse look like?

The collapse would likely be seen in the most vulnerable nations first, most likely those countries with limited food security and heavy subsidization of food prices. I think a lot of these countries are in the Middle East, and there have already been suggestions that the Arab Spring was related to food markets. Jared Diamond famously blamed the Rwandan genocide on pressure for farmland, and other historians have suggested an economic imperative driving the Holocaust. Even where it is not obvious, pressure over food and food prices can lead to political instability, upheaval and chaos, and this would likely be the first symptom of the collapse, as prices rose and food importers in the Middle East responded rapidly to the collapse of stocks. Unfortunately, market liberalization doesn’t happen quickly, and in any case, in the face of a general loss of supply there would be no solution for these countries: they would fall into an increasingly desperate round of riots and political upheaval, and possibly also major population movements.

Following internal tensions in the most food-insecure nations, international tensions would begin to develop between major traders and their clients. Faced with generalized crop failures in major wheat trading partners, countries would try to find new markets, but some of these (such as Australia) would also be facing lost supplies, and would likely restrict trade to ensure security of domestic supply. This would lead to tensions between trading partners, followed by a desperate scramble as countries like the UK and Japan rushed to secure supplies. The first casualty of these efforts would be the poorest nations, who would suddenly find food suppliers deserting them for lucrative western markets. At its worst this could lead to riots, seizure of property, and expulsion of businesses and representatives from high-income nations. Emergency food aid would also collapse as countries conserved resources, and this would lead to famine and disaster in countries like North Korea and parts of sub-Saharan Africa, as well as countries newly thrown into food insecurity – especially poorer Middle Eastern countries like Yemen and Iraq.

Finally, as food reserves dwindled, tensions would rise between high-income nations as they competed with each other for food supplies. In particular, the EU, Japan and China would run into conflict as they sought to outbid each other for the remaining food supplies from the Russian breadbasket areas and the Americas. In southeast Asia, piracy would become commonplace, as it also would around the Horn of Africa, and the second-tier powers would probably finance or trade with pirates as an alternative to direct conflict with the major powers. To protect these sea lanes, countries with traditional rivalries – such as Iran and Iraq in the Gulf, and China and Japan in Asia – would have to send expeditionary forces. Although Japan currently has the ability to defeat China on the high seas, a war over something as fundamental as food is one of the few situations where China might be willing to deploy its nuclear arsenal. Imagine also what would happen if America suffered a general crop failure due to widespread drought, but Canada’s crop failure was only partial…

Small countries with the ability to protect their borders and a smart farming community or government could stand to benefit from these changes, however. For example, a small country with no bad weather that responded rapidly to the food collapse by switching from cash crops to high-intensity farming of a particular food supply could feed its own community and potentially make huge amounts of money selling to major trading partners; in such a case, for a developing nation, centrally mandated rationing and calorie restriction could enable a huge accumulation of wealth through trade that could completely change the country’s future. On the other hand, countries in such a situation that are near a major regional power might suddenly find themselves annexed and subject to strict rationing as the regional power confiscated the fruits of their clever planning.

Broadly, we would see major famines across much of Africa and the Middle East, and for the first time in perhaps 50 years we would see generalized famines outside of a small region of Africa, potentially including on other continents. Political upheaval and chaos in the Middle East and parts of southeast Asia would bring down governments and lead to major population movements. Piracy and low-level national conflicts, as well as the breakup of unstable nations, would lead to violence and conflict on a large scale through complex regions like southeast Asia or East Africa. Finally, there would be the risk of major conflict between the high-income nations, ending in nuclear attacks if the collapse was broad enough.

I think this would be quite a good campaign setting … but let’s hope it stays in the realm of the imagination …

The Yellow Dragon can use Stinking Cloud at will

Today it was 26°C in Tokyo, and we had our first taste of this year’s yellow dust, the strange and nasty pollution that tends to drift over Japan from China during spring and summer. Today’s was the worst I have ever seen in 5 years in Japan – the above photograph, taken from my ground floor balcony, shows the sky at about 3pm today, just after the cloud reached us. Apparently in Matsue, in western Japan, visibility was down to 5 km. In case this seems like a strange thing to care about, let me assure you this “weather” is not pleasant: it causes sneezing, eye irritation, headaches and drowsiness in many people when it is at its worst, and I think some towns in Kyushu issued alerts advising some people – especially those with respiratory problems – to stay inside. The US army monitors this phenomenon in Korea and issues regular warnings. Of particular recent concern is the increasing concentration of what the Japanese call “PM2.5,” very small particles of pollutants less than 2.5 microns in size, which seem to arise from industrial pollution and smog, and have specific associated health concerns. According to the Global Burden of Disease 2010, ambient particulate matter pollution is the 4th biggest cause of lost disability-adjusted life years in China, and it ranks much higher as a cause of years of life lost than of years lived with disability. By way of comparison it is ranked 16th in Australia and 10th in the USA.

Some part of the yellow dust problem is natural, due to sandstorms in the interior of China, but in the past 10 years the problem has become worse and its health effects more significant. No doubt part of the concern about its health effects arises from greater awareness, but there is also a confluence of factors at work in China that create the problem: desertification, soil erosion and pollution, and industrial pollution due primarily to coal power and transport. It’s becoming increasingly clear that as China develops, it needs to make a shift away from coal power and personal transportation, and it needs to do it soon. However bad the yellow dust is by the time it reaches Japan, it is far worse in China itself, and concern is growing there about the seriousness of its health and economic effects.

This puts China on the horns of a dilemma. Development is essential to the improvement of human health, but the path China has taken to development, and the rapidity of its industrial and economic growth, are seriously affecting environmental quality. It’s possible that China is the canary in the coalmine of western development, and may be the first country to find its economic goals running up against its environmental constraints – and this despite a rapid slowing in population growth. China is going to have to start finding ways to reverse desertification, soil erosion, and particulate pollution, because it cannot afford to continue losing marginal farmland, degrading the quality of its farmland, and basing its industrial and urban growth on highly-polluting fossil fuels.

This raises the possibility that China needs to introduce a carbon tax (or better still, a carbon-pricing system) for reasons largely unrelated to global warming. A carbon pricing system with options for purchasing offsets, linked into the EU market, would potentially encourage reforestation and reductions/reversals in the rate of desertification; it would also provide economic incentives for investments in non-fossil fuel-based energy sources, probably nuclear for the long term and renewables for the short term. The government, by selling off permits, would be able to raise money to help manage the infrastructure and health needs of the poorest rural areas most in need of immediate development. These effects are important even without considering the potential huge benefits for the world from China slowing its CO2 emissions. I notice I’m not alone in this idea; Rabett Run has a post outlining the same environmental issues, and suggesting that there are many direct economic and social benefits of such a system.

This is not just of practical importance to China, but it’s rhetorically a very useful thing to note: that a lot of carbon sources (and most especially coal) have huge negative health and social consequences in their own right; raising the cost of using them and finding financial incentives to prevent or reverse deforestation is of huge benefit for a lot more reasons than just preventing runaway climate change. It would be cute indeed if China’s immediate economic and environmental problems became the cause of strong action to prevent climate change; on the other hand, it would be very sad if the focus on the AGW aspects of carbon pricing – which are a shared international burden rather than a national responsibility – led China’s decision makers to miss the other vital environmental problems it can address. Especially if failure to address those other environmental problems caused China’s economic growth and social liberalization to stall or fall backwards.

If any country is going to run up against environmental limits to growth, it is China; and if China can avoid that challenge, and the social and health problems it will cause, then there is great hope for the future of the planet. So let’s hope the Chinese can come to terms with their growing environmental challenges as adroitly as they have dealt with some of their others … and that their efforts to tackle those problems will benefit the rest of the world too.

Today’s edition of PLOS Medicine contains an article describing a possible cap-and-trade scheme for global health investment, designed around a cap-and-trade carbon permit scheme. Built on the assumption that health is a global public good, it proposes that all countries sign up to a centralized system of permits based on disability-adjusted life years (DALYs). If a country wishes to invest in a low cost-effectiveness health project, it would need to offset the poor gains arising from its own scheme by purchasing DALYs for a low-income nation. The article contains some interesting examples, including the additional cost that a low-efficiency health project in a developed nation would incur through purchasing the DALY offsets.

For example, introducing pneumococcal conjugate vaccination (PCV) in Australia is a highly cost-ineffective strategy (costing about $100,000 per DALY gained), well above the threshold defined for a cost-effective intervention in a low-income nation. In order to implement this project, Australia would have to purchase about 1300 DALY offsets, which it could do through (for example) increasing the coverage of a standardized vaccination schedule in South East Asia. Purchasing these DALY offsets through this project would add 0.2% to the cost of introducing PCV in Australia (see Box 2 in the text).
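As a back-of-envelope reading of that example (this is my own arithmetic and my own assumption that the programme’s DALY gain equals the number of offsets it must buy; the article’s Box 2 may define things differently), the quoted figures imply an offset price of only a couple of hundred dollars per DALY, which is consistent with purchasing DALYs through cheap, highly cost-effective interventions in low-income settings.

```python
# Back-of-envelope arithmetic only; the key assumption (offsets purchased equal
# the DALYs gained by the Australian programme) is mine, not the article's.
dalys_offset = 1300                  # offsets Australia would need to buy
cost_per_daly_aus = 100_000          # $/DALY for PCV in Australia (from the article)

programme_cost = dalys_offset * cost_per_daly_aus    # ~= $130 million under that assumption
offset_budget = 0.002 * programme_cost                # the quoted 0.2% add-on, ~= $260,000
implied_offset_price = offset_budget / dalys_offset   # ~= $200 per DALY purchased overseas

print(programme_cost, offset_budget, implied_offset_price)
```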

The article also gives a cute chart showing which countries would need to increase investment and which could reduce investment in global health in order to meet the conditions of the scheme, and the authors suggest a significant change in the distribution of global health funding:

19%–28% of the total increase, or US$6.8–US$10 billion, would come from the US, 5%–6% from Japan, 4%–6% from Germany, 3%–4% from France, while some of the bigger middle-income countries would also contribute substantially, with 6%–7% from China (i.e., US$2.1–US$2.7 billion), 3% from Brazil, and 2% from India. Our proposal, therefore, involves a marked change in perspective over who should contribute to meeting the health MDGs [millennium development goals], with contributions expected from large emerging economies such as China and Brazil.

This is an interesting change in perspective and also a strong statement about the extent to which a few key countries (e.g. the USA, Japan and France, which it should be noted is an ex-colonial empire) are shirking their global health responsibilities.

I don’t know whether cap-and-trade systems are the best way to solve problems of the commons – the authors claim they are and give a reference, but I don’t know if they’re on strong grounds – and I’m not sure how much of a case can be made for health as a global commons compared to, say, the climate. But even if you drop the argument about global commons and just propose this cap-and-trade system as a mechanism for enforcing global investment in health priorities, I think it’s an interesting case. Certainly, a lot of countries are failing to meet their millennium development goal (MDG) targets, and although the authors note a range of reasons that are independent of funding mechanisms, it’s fairly certain that some of the shortfall is simply due to a lack of investment, and (again, as the authors observe) extremely inefficient investment choices. From a global perspective, the amount of money Japan is going to commit from 2013 to funding PCV, with limited cost-effectiveness, to save just a small number of lives, is a terrible waste when it could be funding crucial vaccinations (like tetanus for pregnant women) in countries with very low incomes and fragile health systems. It would be interesting to see how fast these countries’ health metrics would improve if the entire world adopted a scheme that forced them to consider the most efficient health investments, from a global rather than a local perspective …


One possible consequence of the collapse of the summer arctic ice cover is that storms like Sandy will become the new normal. There are reasons to think that the freak conditions that caused Sandy to become so destructive are related to the loss of arctic ice, and although the scientific understanding of the relationship between the arctic and northern hemisphere weather in general is not robust, there seems to be at least some confidence that the ice and weather around the Atlantic are related.

It’s worth noting that what is happening in the arctic this year is well in advance of scientific expectations. The 2007 Intergovernmental Panel on Climate Change (IPCC) report, for example, predicted an ice-free arctic in about the year 2100. The cryosphere blogs, however, are running bets on about 2015 for “essentially ice free,” and no ice in 2020, as shown, for example, in this excellent post on ice cover prediction by Neven. Results presented by the IPCC are one of the main mechanisms by which governments make plans to manage climate change – in fact this was their intention – and one would think that events happening 80 years sooner than the IPCC predicted would make a big difference to the plans that governments need to consider.

One of the biggest efforts to make policy judgments based on current predictions of future effects of climate change was the Stern Review, published in 2006 and based on the best available scientific predictions in the previous couple of years. The key goal of the Stern Review was to assess the costs and benefits of different strategies for dealing with climate change, to answer the question of whether and when it was best to begin a response to climate change, and what that response should be.

The Stern Review received a lot of criticism from the anti-AGW crowd, and also from a certain brand of economists, partly because of the huge uncertainties involved in predicting such a wide range of events and outcomes so far in the future, and partly because of its particular assumptions. Of course, some people rejected it for being based on “alarmist” predictions from organizations like the IPCC, or rejected its fundamental assumption that climate change was happening. But one of the most persistent and effective criticisms of the Review was that it used the wrong discount rate, and thus overemphasized the cost of rare events in the future compared to the cost of mitigation today.

I think Superstorm Sandy and the state of the arctic ice render that criticism invalid; a better criticism of the Stern Review is now that it significantly underestimates the cost of climate change, regardless of its choice of discount rate. Here I will attempt to explain why.

According to its critics, the Stern Review used a very low discount rate when it considered future costs. A discount rate is essentially a small percentage by which future costs are discounted relative to current costs, in order to reflect the preference humans have for getting stuff now. The classic, simplest form applies an exponential reduction in costs over time with a very small rate (typically 2-5%), so that costs incurred 10 years from now are multiplied by a factor of exp(-10*rate). I use this kind of discounting in cost-effectiveness analysis, and a good rough approximation to its effects is this: if costs are incurred constantly over a human’s lifetime, only about 40% of the total costs a person might be expected to incur will actually be counted now.

For example, if I am considering an intervention today that will save a life, and I assume that life will last 80 years, then from my perspective today that life is actually only really worth about 30 years. This reflects the fact that the community prefers to save years of life now, rather than in 70 years’ time, and also the fact that a year of life saved in 20 years time from an intervention enacted today is only a virtual year of life – the person I save tomorrow could be hit by a bus next week, and all those saved life years will be splattered over the pavement. The same kinds of assumptions can be applied to hurricane damage – if I want to invest $16 billion  now on a storm surge barrier for New York, I can’t offset the cost by savings from a $50 billion storm in 50 years time, because $16 billion is worth more to people now than in 50 years’ time, even if we don’t consider inflation. I would love to have $16 billion now, but I probably wouldn’t put much stock on a promise of $16 billion in 50 years’ time, and wouldn’t change my behavior much in order to receive it[1]. Stern is accused of rejecting this form of discounting, and essentially using a discount rate of 0%, so that future events have the same value as current events.
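To make the arithmetic concrete, here is a small numerical sketch of the standard exponential discounting described above. The 3% rate is my own choice from the 2-5% range mentioned earlier, and the barrier and storm figures are simply the ones from the preceding paragraph, plugged in for illustration.

```python
import numpy as np

rate = 0.03                        # a typical discount rate from the 2-5% range
years = np.arange(80)              # one life-year received in each of the next 80 years

discounted_life_years = np.exp(-rate * years).sum()
print(discounted_life_years)           # ~31: an 80-year life "counts" as roughly 30 years today
print(discounted_life_years / 80)      # ~0.38, i.e. roughly 40% of the undiscounted total

# The storm-surge barrier example: $50 billion of damage avoided 50 years from
# now is worth much less, in today's terms, than the $16 billion spent now.
print(50 * np.exp(-rate * 50))         # ~ $11 billion in today's terms
```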

There are arguments for using this type of discounting when discussing climate change, because climate change is an intergenerational issue and high discount rates (of e.g. 3%) fundamentally devalue future generations relative to our own. Standard discounting is probably a logic that should only be applied when considering decisions made by people about issues in their own lifetimes. This defense has been made (the wikipedia link lists some people who made it), and it’s worth noting that many of the conservative economists who criticized the Stern Review for its discounting choice implicitly use Stern’s type of discounting when they talk about government debt – they complain extensively about “saddling future generations” with “our” debt, when their preferred discounting method would basically render the cost of our debt to those generations zero. This debate is perhaps another example of how economists are really just rhetoricists rather than philosophers. But for now, let’s assume that the Stern Review got its discounting wrong, and should have used a standard discounting process as described above.

The Stern Review also made judgments about the effects of climate change, largely along the lines of the published literature and especially on the material made available to the world through previous rounds of IPCC reports. For example, if you actually access the Stern Review, you will note that a lot of the assumptions it makes about the effects of climate change are essentially related to the temperature trend. That is, it lists the effects of a 2C increase in temperature, and then applies them in its model at the point that the temperature crosses 2C. For example, from page 15 of Part II, chapter 5 (the figure), we have this statement:

If storm intensity increases by 6%, as predicted by several climate models for a doubling of carbon dioxide or a 3°C rise in temperature, this could increase insurers’ capital requirements by over 90% for US hurricanes and 80% for Japanese typhoons – an additional $76 billion in today’s prices.

The methods in the Stern Review are unclear, but this seems to be suggesting that the damage due to climate change is delayed in the analysis until temperature rises by 3C[2] – which will happen many years from now, in most climate models.

The assumptions in the Stern Review seem to be that the worst effects of climate change will begin many years from now, perhaps after 2020, and many (such as increased storm damage) will have to wait until the temperature passes 2C. There seems to be an assumption of a linear increase in storm damage, for example, which loads most storm damage into the far future.

This loading of storm and drought damage into the far future is the reason the discounting issue became so important. If the storm damage is in the far future, then it needs to be heavily discounted, and the argument becomes that we should wait until much closer to the time before beginning to mitigate climate change. This argument is flawed for other reasons (you can’t stop climate change overnight, and you have to act now because it’s the carbon budget, not the rate of emissions, that matters most to future damage), but it is valid as it applies to the debate about whether we should be acting to prevent climate change or preparing for it.

However, recent events have shown that this is irrelevant. Severe storm damage and droughts are happening now, and at least around the Atlantic rim these events are probably related to the collapse of Arctic sea ice and reductions in snow albedo across the far north. Stern’s analysis assumed that most of these events would happen in the far future, not now, and as a result it has two huge flaws:

  1. It underestimates the total damage due to climate change. Most economic analyses of this kind are conducted over a fixed time frame (e.g. 100 years), but for any fixed time frame, a model that assumes a gradual increase in damage over time is going to underestimate the total amount of damage that occurs over the period relative to a model that assumes that the damage begins now. Stern couldn’t assume the damage begins now, because those kinds of things weren’t known in 2006. But it has begun now – we need to accept that the IPCC was wrong in its core predictions. That means that the total damage occurring in the next 100 years is not going to be $X per year between 2050 and 2100, but $X per year between 2010 and 2100 – nearly twice as much damage.
  2. The discount rate becomes irrelevant. Discount rates affect events far in the future, and have minimal effect now. If Stern had used a standard discount rate of 3%, then from his perspective in 2006 the current estimates of storm damage in the USA due to Sandy ($50 billion) would be about $42 billion. Also, all the damage in the USA due to Sandy is excess damage, because without the collapse of the Arctic ice fields, Sandy would probably have headed out to sea and done no damage. The estimated cost of the storm surge barrier mentioned above was $16 billion, so assuming that this cost is correct (unlikely) and that it could have been built by now (impossible), that investment alone would have been worthwhile. Whereas if we assume a storm like Sandy won’t happen until 2050, the cost of the storm from Stern’s perspective is about $14 billion, and we shouldn’t bother building the barrier now (the arithmetic is sketched below).
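That arithmetic is simple enough to reproduce. Here is a rough sketch using the $50 billion Sandy estimate and the 3% rate favoured by Stern’s critics; the small differences from the figures above come from rounding and from using continuous rather than annual compounding:

```python
from math import exp

RATE = 0.03          # the "standard" discount rate Stern's critics prefer
SANDY_DAMAGE = 50    # $billion, rough estimate of US damage from Sandy
BARRIER_COST = 16    # $billion, estimated cost of a New York storm surge barrier

def present_value(cost, years_from_now, rate=RATE):
    """Value today of a cost (or avoided cost) incurred `years_from_now` years ahead."""
    return cost * exp(-rate * years_from_now)

# Seen from 2006, when the Stern Review appeared, a Sandy-sized storm arriving in 2012:
print(f"Sandy in 2012, valued from 2006: ${present_value(SANDY_DAMAGE, 2012 - 2006):.0f}bn")
# -> about $42bn, comfortably more than the $16bn barrier

# The same storm, if it is assumed not to arrive until 2050:
print(f"Sandy in 2050, valued from 2006: ${present_value(SANDY_DAMAGE, 2050 - 2006):.0f}bn")
# -> about $13-14bn, less than the barrier costs, so the barrier looks like a bad buy
```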

This means that the main conservative criticism of the Stern Review no longer matters – all that arcane debate about whether it’s more moral to value future generations equally with our own (Amartya Sen[3]) or to focus on building wealth now and let our kids deal with the fallout (National Review Online) becomes irrelevant, because the damage has started now, and is very real to us, not just to our potential grandchildren.

The bigger criticism that needs to be made is that Stern and the IPCC got climate change wrong. The world is looking at potentially serious food shortages next year, and in the last two years New York has experienced two major storm events (remember, Irene’s storm surge was only 30cm below the level required to cause the flooding we saw this week). Sandy occurred because of a freak coincidence of three events that are all connected in some way to global warming. We need to stop saying “it’s just weather” and start recognizing that we have entered the era of extreme events. Instead of writing reviews about what this generation needs to do to protect the environment for its children, we need to be writing reviews about what this generation can do to protect itself. Or better still, stop writing reviews and start acting.

fn1: This is a problem that has beset the organized religions for millennia. An eternity in heaven is actually not equivalent to many years on earth, if you discount it at 3% a year.

fn2: Incidentally, I’m pretty sure I was taught in physics that the use of the degree symbol in representing temperatures is incorrect. Stern uses the degree symbol. Economists!!! Sheesh!

fn3: Incidentally, I think in his published work, Sen uses the standard discounting method.

Some academic has written a paper suggesting that global warming (AGW) skeptics are moon-landing conspiracy theorists, and this has sparked a bit of a controversy. One of the many complaints about it is that it recruited subjects online through a survey posted on blogs, and is therefore completely unrepresentative of skeptics in the community. I’m going to examine this a bit here, in the context of the problem of studying online communities, through everyone’s favourite example: gaming.

Suppose that you’re a lefty tree-hugging academic who wants to do a study of attitudes towards women in role-playing games. You want to find female gamers and you want their voice to be representative of all gamers in the community. There is basically only one robust way to do this: a simple random sample of the community. Since this is impossible, we usually use something that can be forced to approximate it through statistical tricks and a bit of hand-waving: the cluster-sampled face-to-face survey or the random-sampled phone survey. These can be extremely resource-intensive, and a typical poll in Australia will involve 800-1,500 people; all the polling goodness for Australia can be found here. So let’s suppose some well-funded researcher can pay Roy Morgan or Newspoll to tack a few questions about gaming onto the end of a poll (companies do this all the time). They will then get to ask the question they want to ask of about 1000 randomly selected Australians of all ages over 16 and both sexes. This means they will identify about 20 role-players, 1 of whom will be a woman. They could design a special poll that they commission separately, which oversamples 20-40 year olds, and which will get them about 50 role-players (3 women); but this yields diminishing returns, because for statistical reasons the weighting applied to an over-sampled poll reduces its accuracy. In either case the sample of gamers will be “representative” of the population, but the precision will be so poor that we will only be able to say something like “0-100% of women who game think that the gaming industry is sexist.” The only way to improve the accuracy is to recruit enough role-players that we get about 30 women; that is, about 600 role-players in a randomly selected sample of 30,000. At this size we can do prevalence estimates but no regression comparisons of males and females – for that we probably need another 30 women, or a sample size of about 60,000. We need to sample a quantifiable proportion of the Australian population to find out that, yes, female gamers think the industry is sometimes sexist.
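To put rough numbers on that precision problem, here’s a quick back-of-envelope sketch. The prevalences are just the ones implied by the figures above (about 2% of respondents role-play, and about 1 in 20 role-players is a woman); they are assumptions for illustration, not survey results.

```python
from math import sqrt

ROLEPLAYER_PREV = 0.02   # implied by ~20 role-players per 1,000 respondents
FEMALE_SHARE = 0.05      # implied by ~1 woman per 20 role-players

def expected_counts(n):
    """Expected role-players and female role-players in a simple random sample of n people."""
    roleplayers = n * ROLEPLAYER_PREV
    women = roleplayers * FEMALE_SHARE
    return roleplayers, women

def ci_halfwidth(p, n):
    """Approximate half-width of a 95% confidence interval for a proportion p estimated from n people."""
    return min(0.5, 1.96 * sqrt(p * (1 - p) / n)) if n >= 1 else 0.5

for sample_size in (1_000, 30_000, 60_000):
    rp, women = expected_counts(sample_size)
    # Assume half the women surveyed say the industry is sexist (p=0.5, the worst case for precision).
    print(f"n={sample_size}: ~{rp:.0f} role-players, ~{women:.0f} women, "
          f"estimate from the women alone is 50% +/- {ci_halfwidth(0.5, women):.0%}")
```

With one woman in the sample the interval spans essentially the whole 0-100% range; only at the 30,000-plus sample sizes does the estimate start to be usable.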

That was worth it, wasn’t it[1]? Why hasn’t someone funded this research? Governments have really crappy priorities these days – they’ll fund some guy in WA to do an internet survey of mere <i>climate skeptics</i>, but they won’t fork out the cash for a decent survey of Aussie role-players! Maybe we need to get smarter with our grant applications … so instead we notice that gamers gather in clubs and shops, and realize that we could get a reasonably representative sample of gamers by recruiting subjects there. A couple of weekends and some hard yards later, and we’ve recruited our 600 gamers[2]. Of course, our sample is no longer strictly representative of either the gaming community or the general community, because some gamers don’t play at shops (the <i>Vampire</i> crowd are hanging around the graveyard, and the cool kids are doing it at heavy metal gigs or at their local coven). Also, we’ve given up the ability to estimate population prevalences, because we don’t know how many gamers we missed in our study. But if we know something about our topic, and we work hard to recruit, and we also put up adverts in the right places and do a bit of snowball sampling (get them to invite their friends), we may be in with a chance of covering enough of the community, and getting a diverse enough range of gamers, that we’ve got something that, if not completely representative, is at least robust to criticism. The only reason to do this rather than the random sample is that it’s much cheaper, but this is a common problem in modern research: Michael Mann may be up to his ears in NASA-funded cocaine, dancing girls and cadillacs, but the rest of us are just struggling to recruit 100 sweaty-palmed nerds to fill in a two-page survey.

This is pretty much the standard way that one recruits “hard-to-reach” groups: role-players, street-based sex workers, injecting drug users, hamster-fetishists, AGW skeptics … sex work is legal (or decriminalized/licensed) in Australia now but good luck trying to recruit a nationally-representative sample of sex workers over the phone. No, you have to do the hard yards, slogging through brothels and asking if you could interview the pretty girl at the back in the cherry boots … your sample will never be nationally representative if you do this, but it will be representative of <i>something</i>, and if you target your survey selection well and do the right work, you can make your findings valid in some sense.

We could extend this basic principle to online gaming, though online gamers have a registration system and a defined world they operate in, so here we could do better: with the cooperation of the gaming community’s custodians (the companies that run the games) it would be possible to run simple random samples of gamers and get quite a good response, since it would be easy to distribute a survey to the gamers who use their servers and obtain a large and robust sample. We wouldn’t be able to get prevalence estimates, because to do that you need to randomly sample the whole community and ask them about their gaming habits; but it would be enough to examine relationships between gamers and their opinions of stuff.
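If a company did hand over (or query on our behalf) its registration list, the sampling step itself is trivial. A minimal sketch, in which the registry, sample size and ID format are all made up for illustration:

```python
import random

def draw_survey_sample(player_ids, n, seed=42):
    """Draw a simple random sample of n registered players to invite to the survey."""
    rng = random.Random(seed)   # fixed seed so the draw can be reproduced and audited
    return rng.sample(player_ids, n)

# Hypothetical registry of one million account IDs supplied by the company.
registry = [f"player_{i:07d}" for i in range(1_000_000)]
invitees = draw_survey_sample(registry, n=2_000)
print(len(invitees), invitees[:3])
```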

The same thing applies to other online communities, and perhaps more so, because online communities can be very fragmented compared to other communities: they can be international, for starters, dislocated geographically and never meeting in person. These communities can have very strong shared bonds, such as the people who comment at acrackedmoon’s website, but know nothing about each other’s physical lives. And they may be bound together by strong political ties while being completely socially unconnected. Surveying these people physically is almost impossible. We see this at both warmist and skeptic websites: the online AGW-debating world has no physical analog, and can’t be sampled through physical means. The only way to sample it with any accuracy is to sample it online.

However, there is a problem here. To justify using an online survey to recruit online skeptics/warmists for research, we need to show that the online community of skeptics/warmists is different to the rest of the community. That is, if I select 1000 ordinary Australians and get their opinions of global warming[3], I should expect that their responses will be different to those of the online community of skeptics/warmists – presumably, less inflammatory and less extreme. If I can be confident that the online community is special, contained within itself, rare and not necessarily representative of the community as a whole, then I can be fairly confident that I need to recruit them using special convenience sampling methods – but I should also conclude that existing research on the issues at play cannot simply be applied to them, which I think is what Lewandowsky did with his assumption about pre-existing factor structures.

I think that in order to understand the modern skeptic/warmist debate we need to recruit these people online. But the only reason to do this is that these groups are different, which means that we can’t simply apply existing cognitive models to them: we need to build new ones from an exploratory perspective. Lewandowsky seems not to have done the latter; instead he tried to do the former. Sadly, the skeptic blogs didn’t accommodate him in his efforts, and anyone who has done research with hard-to-reach groups knows that you need your research to be supported by trusted peers before you can implement it successfully. As a result, the survey was only conducted on warmist sites, and I would challenge any skeptic reading this to toddle over to Deltoid, look at the comments from the skeptics posting there, and then tell me with a straight face that you want those skeptics speaking on your behalf. If you want your voice heard in research, you need to take part in research. Otherwise the weirdos at Catallaxy will do it for you, and before you know it this guy will be telling Stephan Lewandowsky that AGW is a myth because the Aztecs faked the moon landing.

fn1: Incidentally, this points us at the inherent efficiency of women’s studies as a discipline. One junior academic sitting in her office could have told you that from looking at back issues of <i>Dragon</i> magazine. And she could have taught her first year students where their clitoris is before lunch. Much cheaper!

fn2: Actually I’m not really convinced that there are 600 gamers in Australia.

fn3: which I would *never* do with a 5-point scale, because all Australians would just say “3” (“don’t really care”)