Could you lie to this nice lady?

On 18th May 2019 Australia held a federal election, and the ruling Liberal/National Party (LNP) Coalition scored a victory over the Australian Labor Party (ALP) that was billed by most observers as an “upset” because opinion polls had in general been predicting a narrow ALP victory. The opinion polls predicted that the ALP would get a two-party preferred vote of 51.5% over 48.5% for the LNP, and would cruise to victory on the back of this; in fact, with 76% of the vote counted the Coalition is on 50.9% two party preferred, and the ALP on 49.1%. So it certainly seems like the opinion polls got it wrong. But did they, and why?

Did opinion polls get it wrong?

The best site for detailed data on opinion polls is the Poll Bludger, whose list of polls (scroll to the bottom) shows a persistent estimate of 51-52% two-party preferred vote in favour of the ALP. But there is a slightly more complicated story here, which needs to be considered before we go too far in saying they got it wrong. First of all you’ll note that the party-specific estimates put the ALP at between 33% and 37% primary vote, with the Greens running between 9% and 14%, while the Coalition is consistently listed as between 36% and 39%. Estimates for Pauline Hanson’s One Nation Party put her between 4% and 9%. This is important for two reasons: the way that pollsters estimate the two-party preferred vote, and the margin of error of each poll.

The first thing to note is that the final estimates of the different primary votes weren’t so wildly off. Wikipedia has the current vote tally at 41% to the Coalition, 34% to the ALP and 10% to the Greens. The LNP vote is higher than any poll put it at, but the ALP and Greens tallies are well within the range of predicted values. The big outlier is One Nation, which polled at 3%, well below predictions – and far enough below to think that the extra 2% primary vote to the Coalition could reflect this underperformance. This has big implications for the two-party preferred vote estimates from the opinion poll companies, because the two-party preferred vote is not a thing that is sampled – it is inferred from past preference distributions, from simple questions about where respondents will put their second choice, or from additional questions in the poll. So uncertainty in the primary votes of the minor parties will flow through to larger uncertainty in two-party preferred vote tallies, since these votes have to flow on. By way of example, a 1% difference in the primary vote estimate for the Greens (e.g. 9% vs. 10%) is a roughly 10% relative difference in the number of preference votes flowing on from them to the major parties. If the assumed proportion of those votes that go to the Liberals is wrong, then you can expect to see this multiplied through in the final two-party preferred vote. In the case of One Nation, some polls (e.g. Essential Research) consistently gave them 6-7% of the primary vote, when they actually got 3% – so the pool of preference votes flowing on from this party was only half the size the polls assumed. This is a unique problem for opinion polling in a nation like Australia and it raises the question: have opinion poll companies learnt to deal with preferencing in the era of minor parties?
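To make the flow-on concrete, here is a minimal sketch of how a two-party preferred estimate is inferred from primary votes plus assumed preference flows. The flow percentages here are hypothetical round numbers, not any pollster’s actual model, and the primary votes only loosely echo the figures above:

```python
# Sketch: how an error in a minor party's primary vote propagates into the
# two-party preferred (2PP) estimate. All numbers are illustrative.

def two_party_preferred(primaries, flows_to_coalition):
    """primaries: party -> primary vote share (fractions summing to ~1).
    flows_to_coalition: assumed fraction of each minor party's preferences
    flowing to the Coalition; the remainder flows to the ALP."""
    coalition = primaries["LNP"]
    alp = primaries["ALP"]
    for party, share in primaries.items():
        if party in ("LNP", "ALP"):
            continue
        coalition += share * flows_to_coalition[party]
        alp += share * (1 - flows_to_coalition[party])
    return coalition / (coalition + alp)

# Hypothetical flow assumptions: Greens mostly to ALP, One Nation mostly to LNP
flows = {"GRN": 0.20, "ONP": 0.65, "OTH": 0.50}

polled = {"LNP": 0.38, "ALP": 0.35, "GRN": 0.10, "ONP": 0.06, "OTH": 0.11}
actual = {"LNP": 0.41, "ALP": 0.34, "GRN": 0.10, "ONP": 0.03, "OTH": 0.12}

print(f"2PP from polled primaries: {two_party_preferred(polled, flows):.3f}")
print(f"2PP from actual primaries: {two_party_preferred(actual, flows):.3f}")
```

With identical flow assumptions, the polled primaries give the ALP a two-party preferred lead, while the actual primaries (with One Nation on 3%) flip it to the Coalition: the 2PP error here comes entirely from the primary vote estimates.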

The second thing to note is the margin of error of these polls. The margin of error shows the range of possible “true” values for the polled proportion. For example, if a poll estimates that 40% of people will vote Liberal with a 2% margin of error, that means the “real” proportion of people who will vote Liberal is (with 95% confidence) between 38% and 42%. For a binary question, the method for calculating the margin of error can be found here, but polls in Australian politics are no longer a binary question: we need to know the margin of error for four proportions, and this margin of error grows as a proportion of the estimate as the estimate gets smaller. For example the most recent Ipsos poll lists its margin of error as 2.3%, which suggests that the estimated primary vote for the Coalition (39%) should actually lie between 36.7% and 41.3%. The estimated primary vote for the ALP has a slightly wider margin of error relative to its size (since the estimate is smaller), and the Greens more so again. Given this, it’s safe to say that the observed primary vote totals currently recorded lie within the margins of error of the Ipsos poll. This poll did not get any estimates wrong! But it is being reported as wrong.
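For the record, the standard margin of error for a polled proportion is z·√(p(1−p)/n). A quick sketch (the sample size of 1,800 is an assumption, chosen because it reproduces Ipsos’s quoted maximum margin of error of about 2.3%) shows that while the absolute margin narrows for smaller vote shares, it widens as a proportion of the estimate:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1800  # assumed sample size; reproduces a maximum MOE of ~2.3%

for label, p in [("max (p=0.5)", 0.50), ("Coalition", 0.39),
                 ("ALP", 0.34), ("Greens", 0.13)]:
    moe = margin_of_error(p, n)
    print(f"{label}: +/-{moe * 100:.1f} points "
          f"({moe / p * 100:.0f}% of the estimate)")
```

The quoted 2.3% figure is the worst case at p = 0.5; a Greens-sized share of 13% carries a smaller absolute margin but a much larger relative one, which is exactly the uncertainty that then gets multiplied through the preference flows.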

The reason the poll is reported as wrong is the combination of these two problems: the margins of error on the primary votes of all these parties compound in the two-party preferred estimate, so that its margin of error ends up larger than 2.3%, and the plausible range for the Coalition’s two-party preferred vote inferred from this poll is probably wider than 47–51%. That’s easily wide enough for the Coalition to win the election. But newspapers never report the margin of error or its implications.

When you look at the actual data from the polls, take into account the margin of error and consider the uncertainty in preferences, the polls did not get it wrong at all – the media did in their reporting of the polls. But we can ask a second question about these polls: can opinion polls have any meaning in a close race?

What do opinion polls mean in a close race?

In most elections in Australia most seats don’t come into play, and only a couple of swing seats change, because most are safe. This election has definitely followed that pattern, with 7 seats changing hands and 5 in doubt – only 12 seats mattered in this election. Amongst those 12 seats it appears (based on the current snapshot of data) that the Coalition gained 8 and lost 4, for a net gain of 4. Of those 12 seats, 9 were held by non-Coalition parties before the election and 3 by the Coalition. Under a purely random outcome – that is, if there was nothing determining whether these seats changed hands and each was the equivalent of a coin toss – the chance of this outcome is not particularly low. Indeed, even if the ALP had a 60% chance of retaining its own seats and a 40% chance of winning Coalition seats, it’s still fairly likely that you would observe an outcome like this. A lot of these seats were on razor-thin margins, so they could literally be vulnerable to upset by something like bad weather, a few grumpy people, or a change in the proportion of donkey votes.
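One simplified way to sanity-check the coin-toss intuition: if each of the 12 seats in play were decided by a fair coin, the chance of one side coming out ahead in at least 8 of the 12 contests is nearly one in five. (This deliberately ignores who held each seat; it is a toy calculation, not a seat model.)

```python
from math import comb

# Chance of one side winning at least 8 of 12 fair coin tosses:
# sum the binomial probabilities for k = 8..12 successes out of n = 12.
n_seats = 12
p_at_least_8 = sum(comb(n_seats, k) for k in range(8, n_seats + 1)) / 2 ** n_seats
print(f"P(at least 8 of 12 coin tosses) = {p_at_least_8:.3f}")
```

At roughly 0.19, an outcome of this shape is entirely unremarkable under pure chance, which is the point: no national poll can be expected to foresee it.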

I don’t think polls conducted at the national level can be expected to tell us much about the results of a series of coin tosses. If those 12 seats were mostly determined by chance, not by any structural drivers of change, how is a poll that predicts a 51% two-party preferred vote, with a 2% margin of error, going to tell us that they will flip? It simply can’t, because you can’t predict random variation with a structural model. Basically, the outcome of this election was well within the boundaries one would expect based purely on non-systematic random error at the population level.

When a party is heading for a drubbing you can expect the polls to pick it up, but when a minor change to the status quo is going to happen due to either luck or unobserved local factors, you can’t expect polls to offer a better prediction than coin flips.

The importance of minor parties to the result

One thing I did notice in the coverage of this election was that there were a lot of seats where the Coalition was garnering the biggest primary vote but then the ALP and the Greens’ primary vote combined was almost as large or a little larger, followed by two fairly chunky independent parties. I think in a lot of elections this means that Greens and independents’ preferences were crucial to the outcome. As the Greens’ vote grows I expect it encompasses more and more disaffected Liberal and National voters, and not just ALP voters with a concern about the environment. For example in Parkes, NSW the National Party and the ALP experienced major swings against them, but the National candidate won with a two-party preferred vote swing towards him. This suggests that preferences from minor parties were super important. This may not seem important at the national level but at the local level it can be crucial. In Herbert, which the Coalition gained, two minor parties got over 10% of the vote. In Bass the combined ALP/Green primary vote is bigger than the Coalition’s, but the Liberal member is ahead on preferences, which suggests that the Greens are not giving strong preference flows to the ALP. This variation in flows is highly seat-specific and extremely hard to model or predict – and I don’t think that the opinion polling companies have any way of handling this.

Sample and selection bias in modern polling

It can be noted from the Pollbludger list of surveys that they consistently overestimated the ALP’s two-party preferred vote, which shouldn’t happen if they were just randomly getting it wrong – there appears to be some form of systematic bias in the survey results. Surveys like opinion polls are prone to two big sources of bias: sampling bias and selection bias. Sampling bias happens when the companies’ random phone dialing produces a sample that is demographically incorrect, for example by sampling too many baby boomers or too many men. It is often said that polling companies only call landlines, which should lead to an over-representation of old people – say, a sample that is 50% elderly even though the population is only 20% elderly. This problem can be fixed by weighting, in which the proportions are calculated with each response weighted to reflect the relative rarity of its demographic group in the sample. This method increases the margin of error but should handle the sampling bias problem. However, there is a deeper problem that weighting cannot fix, which is selection bias. Selection bias occurs when your sample is not representative of the population, even if demographically it appears to be. It doesn’t matter if 10% of your sample are aged 15-24, and 10% of the population is aged 15-24, if the 15-24 year olds you sampled are fundamentally different to the 15-24 year olds in the population. Some people will tell you weighting fixes these kinds of problems but it doesn’t: there is no statistical solution to sampling the wrong people.
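A toy post-stratification example (all numbers invented) shows both what weighting does and what it cannot do: it corrects the sample’s age mix to match the population, but if the people sampled within each age group differ from that group in general, the weighted estimate is still wrong.

```python
# Sketch of post-stratification weighting. The sample skews elderly;
# weights are population share divided by sample share for each group.

population_share = {"18-39": 0.35, "40-64": 0.45, "65+": 0.20}
sample_share     = {"18-39": 0.15, "40-64": 0.35, "65+": 0.50}  # skews old

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical ALP-vote proportions observed within each sampled age group:
alp_support = {"18-39": 0.45, "40-64": 0.35, "65+": 0.28}

raw      = sum(sample_share[g] * alp_support[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * alp_support[g] for g in sample_share)

print(f"raw estimate:      {raw:.3f}")
print(f"weighted estimate: {weighted:.3f}")
```

The weighting shifts the estimate because the weighted group shares now equal the population shares. But notice the hidden assumption: the 45% ALP support measured among sampled 18-39 year olds is taken to hold for all 18-39 year olds. If the young people willing to answer the phone are unrepresentative of their cohort, no weight can repair that.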

I often hear that this problem arises because polling companies only call landlines, and people with landlines are weirdos, but I checked and this isn’t the case: Ipsos, for example, draws 40-50% of its sample from mobile phones. This sample is still heavily biased though, because people who answer their phones to strangers are a bit weird, and people who agree to do surveys are even weirder. The most likely respondent to a phone survey is someone who is very bored and very politically engaged, and as time goes by I think the people who answer polls are getting weirder and weirder. If your sample is a mixture of politically super-engaged young people and the bored elderly, then you are likely to get a heavy selection bias. One possible consequence could be a pro-ALP bias in the results: the young people who answer their mobiles are super politically engaged, which in that age group means pro-ALP or pro-Green, and their responses are being given a high weight because young people are undersampled. It’s also possible that the weighting has been applied incorrectly, though that seems unlikely to be a problem across the entire range of polling companies.

I don’t think this is the main problem for these polls. There is a 2% over-estimate of the ALP two-party preferred vote but this could easily arise from misapplication of preferences. The slight under-estimate of the LNP primary vote could come from inaccuracies in the National Party estimate, for example from people saying they’re going to vote One Nation on the phone, but reverting to National or Liberal in the booth. Although there could be a selection bias in the sampling process, I don’t think this selection bias has been historically pro-ALP. I think the problem in this election has been that the fragmentation of the major party votes on both the left (to Green/Indies) and on the right (to One Nation, UAP, Hinch and others) has made small errors in sampling and small errors in assignment of preferences snowball into larger errors in the two-party preferred estimate. In any case, this was a close election and it’s hard for polls to be right when the election comes down to toss-ups in a few local electorates.

What does this mean for political feedback processes in democracies?

Although I think the problem is exaggerated in this election, I do think this is going to be a bigger problem in future as the major parties continue to lose support to minor parties. One Nation may come and go but the Greens have been on a 10% national vote share for a decade now and aren’t going anywhere, and as they get closer to winning more lower house seats their influence on election surprises will likely grow – and not necessarily in the ALP’s favour. This means that the major parties are not going to be able to rely on opinion polls as a source of feedback from the electorate about the raw political consequences of their actions and that, I think, is a big problem for the way our democracy works.

Outside of their membership – and in the case of the ALP, the unions – political parties have no particular mechanism for receiving feedback from the general public except elections. Over the last 20 years opinion polls have formed one major component of the way in which political leaders learn about the reception their policies have in the general community. Sure, they can ask their membership for an opinion, and they’ll get feedback through other segments of the community (such as the environmental movement for the Greens, or the unions for the ALP), but in the absence of opinion polls they won’t learn much about how the politically disengaged think of their policies. But in Australia under compulsory voting the politically disengaged still vote, and they still get angry about politicians, and they still have political ideals. If this broader community withdraws completely so that their opinion can no longer be gauged – or worse still, politicians learn to believe that the opinions of those who are polled are representative of community sentiment in general – then politicians will instead learn about the reception their policies receive only through the biased filter of stakeholders, the media, and their own party organs. I don’t see any of the major parties working to make themselves more accessible to community feedback and more amenable to public discussion and engagement, and I don’t think they will be able to find a way to do that even if they tried. Instead, over the past 20 years politicians have gauged the popularity of their platform from polls, and used it to modify and often to moderate their policies in between elections. Everyone hates the political leader who simply shapes their policies to match the polls, but everyone hates a politician who ignores public opinion just as much. We do expect our politicians to pay attention to what we think in between elections, and to take it into account when making policy.
If it becomes impossible for them to do this, then an important mode of communication between those who make the laws and those who don’t will be broken or worse still become deceptive.

It does not seem that this problem is going to go away or get better. This means that the major political parties are going to have to start finding new mechanisms to receive feedback from the general public – and we the public are going to have to find new ways to get through to them. Until then, expect more and nastier surprises in the future, and more weird political contortions as the major parties realize they haven’t just lost control of the narrative – they aren’t even sure what the narrative is. And since we the public learn what the rest of the public think from opinion polls as well, we too will lose our sense of what our own country wants, leaving us dependent on our crazy aunt’s Facebook posts as our only vox populi.

As people retreat from engagement with pollsters, the era of the opinion poll will begin to close. We need to build a new form of participatory democracy to replace it. But how? And until we do, how confused will we become in the democracy we have? The strange dynamics of modern information systems are wreaking havoc in our democratic systems, and it is becoming increasingly urgent that we understand how, and what we can do to secure our democracies in this strange new world of fragmented information.

But as Scott Morrison stands up in the hottest, driest era in the history of the continent and talks about building more coal mines on the back of his mandate, I don’t hold out much hope that there will be any change.


And let me tell you something
Before you go taking a walk in my world,
…you better take a look at the real world
Cause this ain’t no Mr. Rogers Neighborhood
Can you say “feel like shit?”
Yea maybe sometimes I do feel like shit
I ain’t happy about it, but I’d rather feel like shit
…than be full of shit!


There are times in life when it’s necessary to turn to the original gurus of self-righteous self-inspiration, Suicidal Tendencies. Life getting you down, you feel you can’t keep going? Crank up ST and when the boys ask you “Are you feelin’ suicidal?” yell back “I’m suicidal!” and you’ll be back on track in no time. Been meandering through some shit, making mistakes you know are your own dumb fault, and need to kick yourself back onto the straight and narrow? Gotta kill Captain Stupid is what you need. Getting played by conmen who play on your better nature, maybe take you for a ride using your religious impulses? Then you can crank up Send Me Your Money and be reminded that “Here comes another con hiding behind a collar / His only God is the almighty dollar / He ain’t no prophet, he ain’t no healer / He’s just a two bit goddamn money stealer.” That’ll get your cynical radar working again! But the Suicidals’ most useful refrain, the one that applies most often and most powerfully in this shit-stained and terrible world, is the imprecation at the beginning of the second half of their skate power classic, You Can’t Bring Me Down:

Just cause you don’t understand what’s going on
…don’t mean it don’t make no sense
And just cause you don’t like it,
…don’t mean it ain’t no good

This pure reminder of the power of bullshit over mortal men came to me today when I began to delve into the background of the latest Sokal Hoax that has been visited on the social sciences. I’d like to explore this hoax, consider how it would have panned out in other disciplines, make a few criticisms, and discuss the implications of some of their supposedly preposterous papers. So as Mikey would say – bring it on home, brother doc!

The Latest Hoax

The latest hoax comes with its own report, a massive online screed that describes what they did, why they did it, how they did it and what happened. Basically they spent a year preparing a bunch of papers that they submitted to a wide range of social studies journals in a field they refer to as “grievance studies”, which they define by saying

we have come to call these fields “grievance studies” in shorthand because of their common goal of problematizing aspects of culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity.

This definition of the field is easily the vaguest and most hand-wavey way to select a broad set of targets I have ever seen, and it’s also obviously intended to be pejorative. In fact their whole project could perhaps be described as having the “common goal of problematizing aspects of culture in minute detail” – starting with their definition of the culture.

The authors admit that they’re not experts in the field, but they spent a year studying the content, methods and style of the field, then wrote papers that they submitted to journals under fake names (one real professor gave them permission to use his name) from fake institutions. They submitted 20 papers over the year, writing one every 9 days, and got 7 published, one with a commendation; the other 13 were repeatedly rejected or still under review when somehow their cover was blown and they had to reveal the hoax.

The basic problem with the hoax

The papers they submitted are listed at the website and are pretty hilarious, and some of the papers that were published were obviously terrible (though they may have been interesting reading). Two of the papers they submitted – one on dog parks and one on immersive pornography – used fake data, i.e. academic misconduct, and two were plagiarized parts of Mein Kampf, with some words replaced to reverse them into a feminist meaning of some kind (I guess by replacing “Jew” with “men” or something).

Submitting an article based on fraudulent data is, let’s be clear, academic misconduct, and it is also extremely difficult for peer reviewers to catch. Sure it’s easy in retrospect to say “that data was fake” but when peer reviewers get an article they don’t get the raw data, they have to judge based on the summaries in the paper. This is how the Wakefield paper that led to the collapse in MMR vaccination got published in the Lancet – Wakefield made up his data, and it was impossible for the peer reviewers to know that. The STAP controversy in Japan – which led to several scientists being disgraced and one suicide – involved doctored images that were only discovered when a research assistant blew the whistle. Medicine is full of these controversies in which data is faked or manipulated and only discovered after a huge amount of detective work, or after a junior staff member destroys their career blowing the whistle. Submitting fraudulent work to peer review – a process which at heart depends on good-faith assumptions all around – is almost guaranteed to succeed, and succeeding is not an indictment of anyone.

Submitting a word-replaced Mein Kampf is incredibly tacky, tasteless and juvenile. Most academics don’t read Mein Kampf, and it’s not a necessary text for most sociological disciplines. If the journal doesn’t use plagiarism software or the peer reviewers don’t, then this is undoubtedly going to slide through, and while much of Mein Kampf is pernicious nonsense a lot of it is actually pretty straightforward descriptions of political strategies and contemporary events. Indeed the chapter they used (chapter 12 of volume 1) is really about organizing and political vision[1], with only passing references to Jewish perfidy – it’s the kind of thing that could be rendered pretty bland with a word replace. But from the description in their report one might think they had successfully published an exterminationist screed. I’m sure the hoaxers thought they were being super clever doing this, but they weren’t. Detecting plagiarism is a journal’s responsibility more than a peer reviewer’s, and not all journals can. It’s not even clear if the plagiarized text would have been easily detected by google searches of fragments if there was a suitable level of word replacement.

So several of their hoax papers were highlighting problems with the peer review process in general, not with anything to do with social studies. Of the remainder, some were substantially rewritten during review, and a lot were rejected or sent back for major revision. While people on twitter are claiming that “many papers” were accepted, in fact the most obviously problematic ones were rejected. For example the paper that recommended mistreating white students, ignoring their work and dismissing their efforts, to teach them about white privilege, was rejected three times, but people on twitter are claiming that the treatment of this paper shows some kind of problematic morality by the peer reviewers.

The next problem with the hoax is that the authors have misrepresented good-spirited, kind-hearted attempts to take their work seriously as uncritical acceptance of that work. Consider this peer review that they report[2] on a paper about whether men commit sexual violence by masturbating to fantasies of real women (more on this below):

I was also trying to think through examples of how this theoretical argument has implications in romantic consensual relationships. Through the paper, I was thinking about the rise of sexting and consensual pornographic selfies between couples, and how to situate it in your argument. I think this is interesting because you could argue that even if these pictures are shared and contained within a consensual private relationship, the pictures themselves are a reaction to the idea that the man may be thinking about another woman while masturbating. The entire industry of boudoir photography, where women sometimes have erotic pictures taken for their significant other before deploying overseas in the military for example, is implicitly a way of saying, “if you’re going to masturbate, it might as well be to me.” Essentially, even in consensual monogamous relationships, masturbatory fantasies might create some level of coercion for women. You mention this theme on page 21 in terms of the consumption of non-consensual digital media as metasexual-rape, but I think it is interesting to think through these potentially more subtle consensual but coercive elements as well

This is a genuine, good-faith effort to engage with the authors’ argument, and to work out its implications. But this peer reviewer, who clearly devoted considerable time to engaging with and attempting to improve this paper, now discovers that they were being punked the whole time, and the authors were laughing at their naivete for thinking the idea should be taken seriously. They did this work for free, as part of an industry where we all give freely of our time to help each other improve our ideas, but actually this good-faith effort was just being manipulated and used as part of a cheap publicity stunt by some people who have an axe to grind with an entire, vaguely-defined branch of academia. And note also that after all this peer reviewer’s work, the paper was still rejected – but the hoaxers are using it as ammunition for their claim that “grievance studies” takes preposterous ideas seriously. Is that fair, or reasonable? And is it ethical to conduct experiments on other academics without consent?

I would be interested to know, incidentally, if their little prank was submitted to institutional review before they did it. If I tried to pull this shitty little move in my field, without putting it through an IRB, I think my career would be toast.

But there is another problem with this hoax, which I want to dwell on in a little more detail: some of the papers actually covered interesting topics of relevance in their field, and the fact that the hoaxers think their theories were preposterous doesn’t mean they were actually preposterous. It’s at this point that the Suicidals’ most powerful rule applies: Just because you don’t understand what’s going on, don’t mean it don’t make sense.

The theoretical value of some of the hoax papers

Why don’t men use dildos for masturbation?

Let us consider first the paper the authors refer to as “Dildos”, actual title Going in Through the Back Door: Challenging Straight Male Homohysteria and Transphobia through Receptive Penetrative Sex Toy Use. In this paper the hoaxers ask why men don’t use dildos for masturbation, and suggest it is out of a fear of homosexuality and transphobia. The hoaxers say that they wrote this paper

To see if journals will accept ludicrous arguments if they support (unfalsifiable) claims that common (and harmless) sexual choices made by straight men are actually homophobic, transphobic, and anti-feminist

But is this argument ludicrous? Why don’t men use dildos more? After all, we know that men can obtain sexual pleasure from anal insertion, through prostate stimulation. There is a genre of porn in which this happens (for both cismen and transgender women), and it is a specialty service provided by sex workers, but it is not commonly practiced in heterosexual intercourse or male masturbation. Why? Men can be pretty bloody-minded about sexual pleasure, so why don’t they do this more? There could be many reasons, such as that it’s impractical, or it’s dirty, or (for couple sex) that women have a problem with penetrating men, or because men see sex toys as fundamentally feminized objects – but it could also be out of a residual homophobia, right? This seems prima facie an interesting theory that could be explored. For example, the only mainstream movie I can think of where a woman penetrates a man is Deadpool, and so it should be fairly easy to study reactions to that movie and analyze them for homophobia (reddit should be pretty good for this, or MRA websites). Understanding the reasons for this might offer new ways for men to enjoy sex, and a new diversity of sex roles for women, which one presumes is a good thing. So why is this argument ludicrous?

Why do men visit Hooters?

Another article that was published was referred to by the hoaxers as “Hooters”, actual title An Ethnography of Breastaurant Masculinity: Themes of Objectification, Sexual Conquest, Male Control, and Masculine Toughness in a Sexually Objectifying Restaurant. The article argues that men visit “breastaurants” to assert male dominance and enjoy a particular form of “authentic masculinity,” presumably in contrast to the simpler motive of wanting to be able to look at tits. The authors say they did this article to

see if journals will publish papers that seek to problematize heterosexual men’s attraction to women and will accept very shoddy qualitative methodology and ideologically-motivated interpretations which support this

But again, this is basically an interesting question. Why do men go to restaurants with scantily-clad women? They could eat at a normal restaurant and then watch porn, or just read playboy while they eat. Or they could eat and then go to a strip club. So why do they need to be served in restaurants by breasty girls? And why are some men completely uninterested in these environments, even though they’re seriously into tits? The answer that this is something about performing a type of masculinity, and needing women as props for some kind of expression of dominance, makes sense intuitively (which doesn’t mean it’s right). It’s particularly interesting that this article is being presented as preposterous by the hoaxers now just as debate is raging about why Brett Kavanaugh insisted on sharing his non-consensual sexual encounters with other men, while Bill Cosby did his on the down-low. It’s almost as if Bill and Brett had different forms of masculine dominance to express! Forms of masculine dominance that need to be explored and understood! By academics in social studies, for example!

Also note here that the tone of the hoaxers’ explanation suggests that the idea that visiting breasty restaurants is problematic is obviously wrong and that everyone agrees with them about this. In fact, many Americans of good faith from many different backgrounds don’t consider visiting Hooters to be a particularly savoury activity, and you probably won’t convince your girlfriend you’re not an arsehole by telling her she’s wrong to “problematize heterosexual men’s attraction to women” in the context of your having blown your weekly entertainment budget on a trip to Hooters. Understanding why she has problematized this behavior might help you to get laid the following week!

Do men do violence to women when they fantasize about them?

The hoaxers wrote an article that they refer to as “Masturbation”, real title Rubbing One Out: Defining Metasexual Violence of Objectification Through Nonconsensual Masturbation, which was ultimately rejected from Sociological Theory after peer review. I think this was the most interesting of their fake articles, covering a really interesting topic, with real ethical implications. The basic idea here is that when men fantasize about women without women’s consent (for example when masturbating) they’re committing a kind of sexual violence, even though the woman in question doesn’t know about this. They wrote this article

To see if the definition of sexual violence can be expanded into thought crimes

But this way of presenting their argument (“Thought crimes”) and the idea that the definition of sexual violence hasn’t already been expanded to thought crimes, is deeply dangerous and stupid. To deal with the second point first, in many jurisdictions anime or manga that depicts sex with children is banned. But in these comics nobody has been harmed. So yes, sexual violence has been extended to include thought crimes. But if we don’t expand the definition of sexual violence into thought crimes we run into some very serious legal and ethical problems. Consider the crime of upskirting, in which men take secret videos up women’s skirts and put them onto porn sites for other men to masturbate to. In general the upskirted woman has no clue she’s been filmed, and the video usually doesn’t show her face so it’s not possible for her to be identified. It is, essentially, a victimless crime. Yet we treat upskirting as a far more serious crime than just surreptitiously taking photos of people, which we consider to be rude but not criminal. This is because we consider upskirting to be a kind of sexual violence exactly equivalent to the topic of this article! This is also true for revenge porn, which is often public shaming of a woman that destroys her career, but doesn’t have to be. If you share videos of your ex-girlfriend naked with some other men, and she never finds out about it and your friends don’t publicize those pictures, so she is not affected in any way, everyone would agree that you have still done a terrible thing to her, and that this constitutes sexual violence of some kind. I’ve no doubt that in many jurisdictions this revenge porn is a crime even though the woman targeted has not suffered in any way. Indeed, even if a man just shows his friend a video of a one night stand, and the friend doesn’t know the woman, will never meet her, and has no way to harm her, this is still considered to be a disgusting act. 
So the fundamental principle involved here is completely sound. This is why porn is made – because the women are being paid to allow strangers to watch them have sex. When people sext each other they are clearly giving explicit permission to the recipient to use the photo for sexual gratification (this is why it is called sexting). Couples usually don’t sext each other until they trust each other precisely because they don’t want the pictures shared so that people they don’t know can masturbate to them without their consent. We also typically treat men who steal women’s underwear differently to men who steal other men’s socks at the coin laundry – I think the reason for this is obvious! So the basic principle at the heart of this paper is solid. Yet the hoaxers treat the idea underlying much of our modern understanding of revenge porn and illicit sexual photography as a joke.

I think the basic problem here is that while the hoaxers have mimicked the style of the field, and understand which theoretical questions to target and write about, they fundamentally don’t understand the field, and so things they consider to be ludicrous are actually important and real questions in the topic, with important and real consequences. They don’t understand it, but it actually makes sense. And now they’ve created this circus of people sneering at how bad the papers were, when actually they were addressing decent topics and real questions.

How would this have happened in other fields?

So if we treat these three papers as serious (recognizing that only two of them were published), and then discount the paper with fraudulent data (dog park) and the paper that was plagiarized (feminist Mein Kampf), we are left with just three published papers that might be genuinely bullshit, out of 20. That’s 15%, or 22% if you drop the plagiarized and fraudulent papers from the denominator. Sounds bad, right? But this brings us to our next big problem with this hoax: there was no control group. If I submitted 20 papers with dodgy methods and shonky reasoning to public health journals, I think I could get 15% published. Just a week or two ago I reported on a major paper in the Lancet that I think has shonky methods and reasoning, as well as poorly-gathered data, but it got major publicity and will probably adversely affect alcohol policy in future. I have repeatedly on this blog attacked papers published in the National Bureau of Economics Research (NBER) archives, which use terrible methods, poor quality data, bad reasoning and poor scientific design. Are 15% of NBER papers bullshit? I would suggest the figure is likely much higher. But we can’t compare because the authors didn’t try to hoax these fields, and as far as I know no one has ever tried to hoax them. This despite the clear and certain knowledge that the R&R (Reinhart and Rogoff) paper in economics was based on a flawed model and bad reasoning, but was used to inform fiscal policy in several countries, and the basic conclusions are still believed even though it has been roundly debunked.

The absence of hoaxes (or even proper critical commentary) on other fields means that they can maintain an air of unassailability while social studies and feminist theory are repeatedly criticized for their methods and the quality of their research and peer review. This is a political project, not a scientific project, and these hoaxers have gone to great lengths to produce a saleable, PR-ready attack on a field they don’t like, using a method that is itself poorly reasoned, with shonky methodology, and a lack of detailed understanding of the academic goals of the field they’re punking. They also, it should be remembered, have acted very unethically. I think the beam is in their own eye, or as the Suicidals would say:

Ah, damn, we got a lot of stupid people
Doing a lot of stupid things
Thinking a lot of stupid thoughts
And if you want to see one
Just look in the mirror


This hoax shouldn’t be taken seriously, and it doesn’t say anything much about the quality of research or academic editing in the field they’re criticizing. Certainly on the face of it some of the papers that were published seem pretty damning, but some of them covered real topics of genuine interest, and the hoaxers’ interpretation of the theoretical value of the work is deeply flawed. This is a PR stunt, nothing more, and it does nothing to address whatever real issues sociology and women’s studies face. Until people start genuinely developing a model for properly assessing the quality of academic work in multiple fields, with control groups and proper adjustment for confounders, in a cross-disciplinary team that fully understands the fields being critiqued, these kinds of hoaxes will remain just stupid stunts that play on the goodwill of peer reviewers and academics for the short-term political and public benefit of the hoaxers, but with no lasting benefit to the community being punked, and at the risk of considerable harm. Until a proper assessment of the quality of all disciplines is conducted, we should not waste our time punking others, but think harder about how we can improve our own.


fn1: I won’t link, because a lot of online texts of Mein Kampf are on super dubious websites – look it up yourself if you wish to see what the punking text was.

fn2: Revealing peer reviews is generally considered unethical, btw

Uhtred son of Uhtred, regular ale drinker, who I predict will die of injury (but will go to Valhalla, unlike you, you ale-sodden wretch)

There has been some fuss in the media recently about a new study showing no level of alcohol use is safe. It received a lot of media attention (for example here), reversed a generally held belief that moderate consumption of alcohol improves health (this is even enshrined in the Greek food pyramid, which has a separate category for wine and olive oil[1]), and led to angsty editorials about “what is to be done” about alcohol. Although there are definitely things that need to be done about alcohol, prohibition is an incredibly stupid and dangerous policy, and so are some of its less odious cousins, so before we go full Leroy Jenkins on alcohol policy it might be a good idea to ask if this study is really the bee’s knees, and does it really show what it says it does.

This study is a product of the Global Burden of Disease (GBD) project, at the Institute for Health Metrics and Evaluation (IHME). I’m intimately acquainted with this group because I made the mistake of getting involved with them a few years ago (I’m not now) so I saw how their sausage is made, and I learnt about a few of their key techniques. In fact I supervised a student who, to the best of my knowledge, remains the only person on earth (i.e. the only person in a population of 7 billion people, outside of two people at IHME) who was able to install a fundamental software package they use. So I think I know something about how this institution does its analyses. I think it’s safe to say that they aren’t all they’re cracked up to be, and I want to explain in this post how their paper is a disaster for public health.

The way that the IHME works in these papers is always pretty similar, and this paper is no exception. First they identify a set of diseases and health conditions related to their chosen risk (in this case the chosen risk is alcohol). Then they run through a bunch of previously published studies to identify the numerical magnitude of increased risk of these diseases associated with exposure to the risk. Then they estimate the level of exposure in every country on earth (this is a very difficult task which they use dodgy methods to complete). Then they calculate the number of deaths due to the conditions associated with this risk (this is also an incredibly difficult task to which they apply a set of poorly-accredited methods). Finally they use a method called comparative risk assessment (CRA) to calculate the proportion of deaths due to the exposure. CRA is in principle an excellent technique but there are certain aspects of their application of it that are particularly shonky, but which we probably don’t need to touch on here.
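To make the final CRA step concrete, here is a minimal sketch of the standard categorical population attributable fraction (PAF) calculation. The prevalences and relative risks below are invented for illustration; they are not GBD figures.

```python
# Illustrative comparative risk assessment (CRA): population
# attributable fraction (PAF) for a categorical exposure. The
# prevalences and relative risks are invented, not GBD estimates.

def paf(prevalences, relative_risks):
    """Categorical PAF: sum p*(RR-1) / (1 + sum p*(RR-1))."""
    excess = sum(p * (rr - 1) for p, rr in zip(prevalences, relative_risks))
    return excess / (1 + excess)

# Exposure categories: non-drinkers, moderate drinkers, heavy drinkers
p = [0.5, 0.4, 0.1]    # prevalence of each exposure category
rr = [1.0, 1.1, 2.0]   # relative risk of the outcome in each category

attributable_fraction = paf(p, rr)                     # about 0.12
attributable_deaths = attributable_fraction * 100_000  # of 100,000 deaths from the cause
```

Note that every input to this formula – the prevalences, the relative risks, and the death count – is itself an estimate carrying uncertainty, which is exactly where the problems discussed below come in.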

So in assessing this paper we need to consider three main issues: how they assess risk, how they assess exposure, and how they assess deaths. We will look at these three parts of their method and see that they are fundamentally flawed.

Problems with risk assessment

To assess the risk associated with alcohol consumption the IHME used a standard technique called meta-analysis. In essence a meta-analysis collects all the studies that relate an exposure (such as alcohol consumption) to an outcome (any health condition, but death is common), and then combines them to obtain a single final estimate of what the numerical risk is. Typically a meta-analysis will weight all the risks from all the studies according to the sample size of the study, so that for example a small study that finds banging your head on a wall reduces your risk of brain damage is given less weight in the meta-analysis than a very large study of banging your head on a wall. Meta-analysis isn’t easy for a lot of reasons to do with the practical details of studies (for example if two groups study banging your head on a wall do they use the same definition of brain damage and the same definition of banging?), but once you iron out all the issues it’s the only method we have for coming to comprehensive decisions about all the studies available. It’s important because the research literature on any issue typically includes a bunch of small shitty studies, and a few high quality studies, and we need to balance them all out when we assess the outcome. As an example, consider football and concussion. A good study would follow NFL players for several seasons, taking into account their position, the number of games they played, and the team they were in, and compare them against a concussion free sport like tennis, but matching them to players of similar age, race, socioeconomic background etc. Many studies might not do this – for example a study might take 20 NFL players who died of brain injuries and compare them with 40 non-NFL players who died of a heart attack. A good meta-analysis handles these issues of quality and combines multiple studies together to calculate a final estimate of risk.
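For the curious, a bare-bones fixed-effect meta-analysis looks something like this. The study numbers are invented; real meta-analyses pool on the log scale and weight by precision, which is why large studies dominate small ones.

```python
import math

# Bare-bones fixed-effect meta-analysis sketch: pool log relative
# risks, weighting each study by the inverse of its variance, so
# large precise studies dominate small noisy ones. Numbers invented.

studies = [
    # (relative risk, standard error of log RR)
    (1.8, 0.40),   # small study: big effect, wide uncertainty
    (1.1, 0.10),   # large study: small effect, narrow uncertainty
    (1.2, 0.15),
]

weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * math.log(rr)
                    for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_rr = math.exp(pooled_log_rr)    # pulled towards the big study's estimate
pooled_se = math.sqrt(1 / sum(weights))
ci_95 = (math.exp(pooled_log_rr - 1.96 * pooled_se),
         math.exp(pooled_log_rr + 1.96 * pooled_se))
```

The mechanics are simple; the hard part, as the NFL example shows, is deciding which studies deserve to be in the list at all.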

The IHME study provides a meta-analysis of all the relationships between alcohol consumption and disease outcomes, described as follows[2]:

we performed a systematic review of literature published between January 1st, 1950 and Dec 31st 2016 using Pubmed and the GHDx. Studies were included if the following conditions were met. Studies were excluded if any of the following conditions were met:


1. The study did not report on the association between alcohol use and one of the included outcomes.

2. The study design was not either a cohort, case-control, or case-crossover.

3. The study did not report a relative measure of risk (either relative risk, risk ratio, odds-ratio, or hazard ratio) and did not report cases and non-cases among those exposed and un-exposed.

4. The study did not report dose-response amounts on alcohol use.

5. The study endpoint did not meet the case definition used in GBD 2016.

There are many, many problems with this description of the meta-analysis. First of all they seem not to have described the inclusion criteria (they say “Studies were included if the following conditions were met” but don’t say what those conditions were). But more importantly their conditions for exclusion are very weak. We do not, usually, include case-control and case-crossover studies in a meta-analysis because these studies are, frankly, terrible. The standard method for including a study in a meta-analysis is to assess it according to the Risk of Bias Tool and dump it if it is highly biased. For example, should we include a study that is not a randomized controlled trial? Should we include studies where subjects know their assignment? The meta-analysis community have developed a set of tools for deciding which studies to include, and the IHME crew haven’t used them.

This got me thinking that perhaps the IHME crew have been, shall we say, a little sloppy in how they include studies, so I had a bit of a look. On pages 53-55 of the appendix they report the results of their meta-analysis of the relationship between atrial fibrillation and alcohol consumption, and the results are telling. They found 9 studies to include in their meta-analysis but there are many problems with these studies. One (Cohen 1988) is a cross-sectional study and should not be included, according to the IHME’s own exclusion criteria. 6 of the remaining studies assess fibrillation only, while 2 assess fibrillation and atrial flutter, a precursor of fibrillation. However most tellingly, all of these studies find no relationship between alcohol consumption and fibrillation at almost all levels of consumption, but their chart on page 54 shows that their meta-analysis found an almost exponential relationship between alcohol consumption and fibrillation. This finding is simply impossible given the observed studies. All 9 studies found no relationship between moderate alcohol consumption and fibrillation, and several found no relationship even for extreme levels of consumption, but somehow the IHME found a clear relationship. How is this possible?

Problems with exposure assessment

This problem happened because they applied a tool called DISMOD to the data to estimate the relationship between alcohol exposure and fibrillation. DISMOD is an interesting tool but it has many flaws. Its main benefit is that it enables the user to incorporate exposures that have many different categories of exposure definition that don’t match, and turn them into a single risk curve. So for example if one study group has recorded the relative risk of death for 2-5 drinks, and another group has recorded the risk for 1-12 drinks, DISMOD offers a method to turn this into a single curve that will represent the risk relationship per additional drink. This is nice, and it produces the curve on page 54 (and all the subsequent curves). It’s also bullshit. I have worked with DISMOD and it has many, many problems. It is incomprehensible to everyone except the two guys who programmed it, who are nice guys but can’t give decent support or explanations of what it does. It has a very strange response distribution and doesn’t appear to apply other distributions well, and it has some really kooky Bayesian applications built in. It is also completely inscrutable to 99.99% of people who use it, including the people at IHME. It should not be used until it is peer reviewed and exposed to a proper independent assessment. It is the application of DISMOD to data that obviously shows no relationship between alcohol consumption and fibrillation that led to the bullshit curve on page 54 of the appendix, which does not have any relationship to the observed data in the collected studies.
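To be clear about the problem DISMOD is trying to solve, here is a toy sketch – emphatically not DISMOD itself, just the simplest possible version of the harmonisation task: studies report risk over mismatched drink ranges, and we want a single per-drink risk slope. All numbers are invented.

```python
import math

# NOT DISMOD -- a toy version of the problem it is meant to solve.
# Each study reports a relative risk over a different range of
# drinks/day. We collapse each interval to its midpoint and fit a
# log-linear slope, log(RR) = beta * drinks, by least squares
# through the origin. All numbers are invented.

intervals = [
    # (low drinks/day, high drinks/day, reported relative risk)
    (2, 5, 1.30),
    (1, 12, 1.60),
    (0, 2, 1.05),
]

points = [((lo + hi) / 2, math.log(rr)) for lo, hi, rr in intervals]
beta = sum(x * y for x, y in points) / sum(x * x for x, _ in points)
rr_per_drink = math.exp(beta)   # multiplicative risk per additional daily drink
```

Even in this trivial sketch the output curve depends entirely on modelling choices (midpoints, log-linearity) rather than the data alone – which is how a "relationship" can appear where the underlying studies found none.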

This also applies to the assessment of exposure to alcohol. The study used DISMOD to calculate each country’s level of individual alcohol consumption, which means that the same dodgy technique was applied to national alcohol consumption data. But let’s not get hung up on DISMOD. What data were they using? The maps in the Lancet paper show estimates of risk for every African and south east Asian country, which suggests that they have data on these countries, but do you think they do? Do you think Niger has accurate estimates of alcohol consumption in its borders? No, it doesn’t. A few countries in Africa do and the IHME crew used some spatial smoothing techniques (never clearly explained) to estimate the consumption rates in other countries. This is a massive dodge that the IHME apply, which they call “borrowing strength.” At its most egregious this is close to simply inventing data – in an earlier paper (perhaps in 2012) they were able to estimate rates of depression and depression-related conditions for 183 (I think) countries using data from 97 countries. No prizes to you, my astute reader, if you guess that all the missing data was in Africa. The same applies to the risk exposure estimates in this paper – they’re a complete fiction. Sure, for the UK and Australia, where alcohol is basically a controlled drug, they are super accurate. But in the rest of the world, not so much.

Problems with mortality assessment

The IHME has a particularly nasty and tricky method for calculating the burden of disease, based around a thing called the year of life lost (YLL). Basically instead of measuring deaths they measure the years of your life that you lost when you died, compared to an objective global standard of life you could achieve. Basically they get the age you died, subtract it from the life expectancy of an Icelandic or Japanese woman, and that’s the number of YLLs you suffered. Add that up for every death and you have your burden of disease. It’s a nice idea except that there are two huge problems:

  • It weights deaths at young ages massively
  • They never incorporate uncertainty in the ideal life expectancy of an Icelandic or Japanese woman
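The YLL calculation itself is trivially simple, which is part of why the first problem bites so hard. A sketch, using a single illustrative ideal life expectancy (the GBD uses an age-specific ideal life table, with no uncertainty attached to it):

```python
# The YLL calculation, stripped to its essentials. The ideal life
# expectancy is one illustrative figure here; the GBD uses an
# age-specific ideal life table, with no uncertainty attached.

IDEAL_LIFE_EXPECTANCY = 86.0   # roughly an Icelandic/Japanese female figure

def yll(age_at_death):
    """Years of life lost: ideal life expectancy minus age at death."""
    return max(0.0, IDEAL_LIFE_EXPECTANCY - age_at_death)

# A death at 30 contributes 3.5 times the burden of a death at 70
deaths = [45, 70, 88, 30]                # ages at death (illustrative)
burden = sum(yll(a) for a in deaths)     # 41 + 16 + 0 + 56 = 113 YLLs
```

Notice that the death at 88 contributes nothing at all, while the death at 30 dominates the total – that is the age-weighting problem in a nutshell.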

There is an additional problem in the assessment of mortality, which the IHME crew always gloss over, which is called “garbage code redistribution.” Basically, about 30% of every country’s death records are bullshit, and don’t correspond with any meaningful cause of death. The IHME has a complicated, proprietary system that they cannot and will not explain that redistributes these garbage codes into other meaningful categories. What they should do is treat these redistributed deaths as a source of error (e.g. we have 100,000 deaths due to cancer and 5,000 redistributed deaths, so we should report 102,500 ± 2,500 deaths), but they don’t, they just add them on. So when they calculate burden of disease they use the following four steps:

  • Calculate the raw number of deaths, with an estimate of error
  • Reassign dodgy deaths in an arbitrary way, without counting these deaths as any form of uncertainty
  • Estimate an ideal life expectancy without applying any measure of error or uncertainty to it
  • Calculate the years of life lost relative to this ideal life expectancy and add them up

So here there are three sources of uncertainty (deaths, redistribution, ideal life expectancy) and only one is counted; and then all these uncertain deaths are multiplied by the number of years lost relative to the ideal life expectancy.
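The error treatment I’m arguing for is not hard. Here is the cancer example from above in a few lines of code: instead of silently adding redistributed deaths to a cause, treat the redistribution as an extra source of uncertainty.

```python
# Treating garbage code redistribution as uncertainty rather than
# silently adding it on. Figures follow the cancer example in the
# text: 100,000 deaths with a meaningful cancer code, plus 5,000
# garbage-coded deaths redistributed to cancer.

recorded_deaths = 100_000
garbage_deaths = 5_000

# Midpoint estimate: assume half the redistributed deaths truly
# belong here, with the full range (none of them vs. all of them)
# as the uncertainty interval.
point_estimate = recorded_deaths + garbage_deaths / 2   # 102,500
margin = garbage_deaths / 2                             # +/- 2,500
interval = (point_estimate - margin, point_estimate + margin)  # (100,000, 105,000)
```

That interval then has to be carried through the YLL multiplication, not discarded – which is precisely what the IHME don’t do.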

The result is a dog’s breakfast of mortality estimates that don’t even come close to representing the truth about the burden of disease in any country due to any condition.

Also, the IHME apply the same dodgy modeling methods to deaths (using a method that they (used to?) call CoDMoD) before they calculate YLLs, so there’s another form of arbitrary model decisions and error in their assessments.

Putting all these errors together

This means that the IHME process works like this:

  • An incredibly dodgy form of meta-analysis that includes dodgy studies and miscalculates levels of risk
  • Applied to a really shonky estimate of the level of exposure to alcohol, that uses a computer program no one understands applied to a substandard data set
  • Applied to a dodgy death model that doesn’t include a lot of measures of uncertainty, and is thus spuriously accurate

The result is that at every stage of the process the IHME is unreasonably confident about the quality of their estimates, produces excessive estimates of risk and inaccurate measures of exposure, and is too precise in its calculations of how many people died. This means that all their conclusions about the actual risk of alcohol, the level of exposure, and the magnitude of disease burden due to the conditions they describe cannot be trusted. As a result, neither can their estimates of the proportion of mortality due to alcohol.


There is still no evidence that moderate alcohol consumption is bad for you, and solid meta-analyses of available studies support the conclusion that moderate alcohol consumption is not harmful. This study should not be believed and although the IHME has good press contacts, you should ignore all the media on this. As a former insider in the GBD process I can also suggest that in future you ignore all work from the Global Burden of Disease project. They have a preferential publishing deal with the Lancet, which means they aren’t properly peer reviewed, and their work is so massive that it’s hard for most academics to provide adequate peer review. Their methods haven’t been subjected to proper external assessment and my judgement, based on having visited them and worked with their statisticians and their software, is that their methods are not assessable. Their data is certainly dubious at times but most importantly their analysis approach is not correct and the Lancet doesn’t subject it to proper peer review. This is going to have long term consequences for global health, and at some point the people who continue to associate with the IHME’s papers (they have hundreds or even thousands of co-authors) will regret that association. I stopped collaborating with this project, and so should you. If you aren’t sure why, this paper on alcohol is a good example.

So chill, have another drink, and worry about whether it’s making you fat.

fn1: There are no reasons not to love Greek food, no wonder these people conquered the Mediterranean and developed philosophy and democracy!

fn2: This is in the appendix to their study

No this really is not “the healthy one”

Today’s Guardian has a column by George Monbiot discussing the issue of obesity in modern England, that I think fundamentally misunderstands the causes of obesity and paints a dangerously rosy picture of Britain’s dietary situation. The column was spurred by a picture of a Brighton beach in 1976, in which everyone was thin, and a subsequent debate on social media about the causes of the changes in British rates of overweight and obesity in the succeeding four decades. Monbiot’s column dismisses the possibility that the growth in obesity could be caused by an increase in the amount we eat, by a reduction in the amount of physical activity, or by a change in rates of manual labour. He seems to finish the column by suggesting it is all the food industry’s fault, but having dismissed the idea that the food industry has convinced us to eat more, he is left with the idea that the real cause of obesity is changes in the patterns of what we eat – from complex carbohydrates and proteins to sugar. This is a bugbear of certain anti-obesity campaigners, and it’s wrong, as is the idea that obesity is all about willpower, which Monbiot also attacks. The problem here though is that Monbiot misunderstands the statistics badly, and as a result dismisses the obvious possibility that British people eat too much. He commits two mistakes in his article: first he misunderstands the statistics on British food consumption, and second he misunderstands the difference between a rate and a budget, which is ironic given he understands these things perfectly well when he comments on global warming. Let’s consider each of these issues in turn.

Misreading the statistics

Admirably, Monbiot digs up some stats from 1976 and compares them with statistics from 2018, and comments:

So here’s the first big surprise: we ate more in 1976. According to government figures, we currently consume an average of 2,130 kilocalories a day, a figure that appears to include sweets and alcohol. But in 1976, we consumed 2,280 kcal excluding alcohol and sweets, or 2,590 kcal when they’re included. I have found no reason to disbelieve the figures.

This is wrong. Using the 1976 data, Monbiot appears to be referring to Table 20 on page 77, which indicates an average daily intake of 2280 kCal. But this is the average per household member, and does not account for whether or not a household member is a child. If we refer to Table 24 on page 87, we find that a single adult in 1976 ate an average of 2670 kCal; similar figures apply for two adult households with no children (2610 kCal). Using the more recent data Monbiot links to, we can see that he got his 2,130 kCal from the file of “Household and Eating Out Nutrient Intakes”. But if we use the file “HC – Household nutrient intakes” and look at 2016/17 for households with one adult and no children, we find 2291 kCal, and about 2400 as recently as 10 years ago. These are large differences when they accrue over years.

This is further compounded by the age issue. When we look at individual intake we need to consider how old the family members are. If an average individual intake is 2590 kCal in 1976 including alcohol and sweets, as Monbiot suggests, we need to rebalance it for adults and children. In a household with three people we have 7700 kCal, which if the child is eating 1500 kCal means that the adults are eating close to 3100 kCal each. That’s too much food for everyone in the house, even using the ridiculously excessive nutrient standards provided by the ONS. It’s also worth remembering that the age of adults in 1976 was on average much younger than now, and an intake of 2590 might be okay for a young adult but it’s not okay for a 40-plus adult, of which there are many more now than there were then. This affects obesity statistics.
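To see how misleading the per-household-member average is, run the numbers from the text (the child’s intake of 1,500 kCal is an assumption, and the text rounds the resulting figures slightly):

```python
# The per-household-member average hides what each adult eats.
# Figures follow the text; the child's 1,500 kCal is an assumption.

household_average = 2590   # kCal per person per day, 1976, incl. alcohol and sweets
household_size = 3         # two adults and one child
child_intake = 1500        # assumed kCal per day for the child

household_total = household_average * household_size    # 7,770 kCal per day
per_adult = (household_total - child_intake) / 2        # about 3,135 kCal each
```

The same per-person average of 2,590 kCal is consistent with each adult eating well over 3,000 kCal a day – which is the figure that actually matters for adult obesity.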

Finally it’s also worth remembering that obesity is not evenly distributed, and an average intake of 2100 kCal could correspond to an average of 2500 in the poorest 20% of the population (where obesity is common) and 1700 kCal in the richest, which is older and thinner. An evenly distributed 2100 kCal will lead to zero obesity over the whole population, but an unevenly distributed 2100 kCal will not. It’s important to look carefully at the variation in the datasets before deciding the average is okay.

Misunderstanding budgets and rates

Let’s consider the 2590 kCal that Monbiot finds as the average intake of adults in 1976, including alcohol and sweets. This is likely wrong, and the average is probably more like 3000 kCal including alcohol and sweets, but let’s go with it for now. Monbiot is looking to see what has changed in our diet over the past 40 years to lead to current rates of obesity, because he is looking for a change in the rate of consumption. But he doesn’t consider that all humans have a budget, and that a small excess of that budget over a long period is what drives obesity. The reality is that today’s obesity rates do not reflect today’s consumption rates, but the steady pattern of consumption over the past 40 years. What made a 55 year old obese today is what they ate in 1976 – when they were 15 – not what the average person eats today. So rather than saying “we eat less today than we did 40 years ago so that can’t be the cause of obesity”, what really matters is what people have been eating for the past 40 years. And the stats Monbiot uses suggest that women, at least, have been eating too much – a healthy adult woman should eat about 2100 kCal, and if the average is 2590 then the average woman has been at or above her recommended energy intake every year for the past 40 years. It doesn’t matter that a woman’s intake declined to 2100 kCal in 2016, because she has been eating too much for the past 35 years anyway. It’s this budget, not changes over time, which determines the obesity rate now, and Monbiot is wrong to argue that it’s not overeating that has caused the obesity epidemic. Unless he accepts that a woman can eat 2590 kCal every year for 40 years and stay thin, he needs to accept that the problem of obesity is one of British food culture over half a century.
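A quick back-of-the-envelope calculation shows why the budget dwarfs the rate. This uses the crude rule of thumb that roughly 7,700 kCal of surplus energy corresponds to about 1 kg of body fat, ignoring metabolic adaptation entirely:

```python
# Why the budget matters more than this year's rate: a modest
# sustained daily surplus compounds enormously. Uses the crude
# rule of thumb that ~7,700 kCal of surplus energy corresponds
# to about 1 kg of body fat, ignoring metabolic adaptation.

daily_intake = 2590        # kCal/day, the 1976 average discussed above
daily_requirement = 2100   # kCal/day for a healthy adult woman
years = 40

surplus_per_day = daily_intake - daily_requirement   # 490 kCal every day
total_surplus = surplus_per_day * 365 * years        # about 7.2 million kCal
kg_naive = total_surplus / 7700   # an absurd ~930 kg -- bodies adapt,
                                  # but even 1% of that surplus retained
                                  # is roughly 9 kg of weight gain
```

The naive total is absurd precisely because bodies adapt, but the point stands: even a tiny fraction of a 490 kCal daily surplus retained over 40 years is more than enough to shift someone from a healthy weight into obesity.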

What this means for obesity policy

Somewhat disappointingly and unusually for a Monbiot article, there are no sensible policy prescriptions at the end except “stop shaming fat people.” This isn’t very helpful, and neither is it helpful to dismiss overeating as a cause, since everyone in public health knows that overeating is the cause of obesity. For example, Public Health England wants to reduce British calorie intake, and the figures on why are disturbing reading. Reducing calorie intake doesn’t require shaming fat people but it does require acknowledgement that British people eat too much. This comes down not to individual willpower but to the food environment in which we all make choices about what to eat. The simplest way, for example, to reduce the amount that people eat is not to give them too much food. But there is simply no way in Britain that you can eat out or buy packaged food products without buying too much food. It is patently obvious that British restaurants serve too much food, that British supermarkets sell food in packages that are too large, and that as a result the only way for British people not to eat too much is through constant acts of will – leaving half the food you paid for, buying only fresh food in small amounts every day (which is only possible in certain wealthy inner city suburbs), and carefully controlling where, when and how you eat. This is possible but it requires either that you move in a very wealthy cultural circle where the environment supports this kind of thing, or that you personally exert constant control over your life. And that latter choice will inevitably end in failure, because constantly controlling every aspect of your food intake in opposition to the environment where you purchase, prepare and consume food is very very difficult.

When you live in Japan you live in a different food environment, which encourages small serving sizes, fresh and raw foods, and low fat and low sugar foods. In Japan you live in a food environment where you are always close to a small local supermarket with convenient opening hours and fresh foods, and where convenience stores sell healthy food in small serving sizes. This means that you can choose to buy small amounts of fresh food as and when you need them, and avoid buying in bulk in a pattern that encourages over consumption. When your food choices fail (for example you have to eat out, or buy junk food) you will have access to a small, healthy serving. If you are a woman you will likely have access to a “woman’s size” or “princess size” that means you can eat the smaller, lower-calorie meal that your smaller energy requirements suggest is wisest. It is easy to be thin in Japan, and so most people are thin. Overeating in Japan really genuinely is a choice that you have to choose to make, rather than the default setting. This difference in food environment is simple, obvious and especially noticeable when (as I just did) you hop on a plane to the UK and suddenly find yourself confronted with double helpings of everything, and supermarkets where everything is “family sized”. The change of food environment forces you to eat more. It’s as simple as that.

What Britain needs is a change in the food environment. And achieving a change in food environment requires first of all recognizing that British people eat too much, and have been eating too much for way too long. Monbiot’s article is an exercise in denialism of that simple fact, and he should change it or retract it.

The journal Molecular Autism this week published an article about the links between Hans Asperger and the Nazis in World War 2-era Vienna. Hans Asperger is the paediatric psychiatrist on whose work Asperger’s syndrome is based, and after whom the syndrome is named. Until recently Asperger was believed to have been an anti-Nazi, someone who resisted the Nazis and risked his own career to protect some of his developmentally delayed patients from the Nazi “euthanasia” program, which killed or sterilized people with certain developmental disabilities for eugenics reasons.

The article, entitled Hans Asperger, National Socialism, and “race hygiene” in Nazi-era Vienna, is a thorough, well-researched and extensively documented piece of work, which I think is based on several years of detailed examination of primary sources, often in their original German. It uses these sources – often previously untouched – to explore and rebut several claims Asperger made about himself, and also to examine the nature of his diagnostic work during the Nazi era to see whether he was resisting or aiding the Nazis in their racial hygiene goals. In this post I want to talk a little about the background of the paper, and ask a few questions about the implications of these findings for our understanding of autism, and also for our practice as public health workers in the modern era. I want to make clear that I do not know much if anything about Asperger’s syndrome or autism, so my questions are questions, not statements of opinion disguised as questions.

What was known about Asperger

Most of Asperger’s history under the Nazis was not known in the English language press, and when his name was attached to the condition of Asperger’s syndrome he was presented as a valiant defender of his patients against Nazi racial hygiene, and as a conscientious objector to Nazi ideology. This view of his life was based on some speeches and written articles translated into English during the post war years, in particular a 1974 interview in which he claimed to have defended his patients and to have twice been saved from arrest by the Gestapo by his boss, Dr. Hamburger. Although some German language publications were more critical, in general Asperger’s statements about his own life’s work were taken at face value, and seminal works in 1981 and 1991 that introduced him to the medical fraternity did not include any particular reference to his activities in the Nazi era.

What Asperger actually did

Investigation of the original documents shows a different picture, however. Before the Anschluss (the German annexation of Austria in 1938), Asperger was a member of several far right Catholic political organizations that were known to be anti-semitic and anti-democratic. After the Anschluss he joined several organizations affiliated with the Nazi party. His boss at the clinic where he worked was Dr. Hamburger, who he claimed saved him twice from the Gestapo. In fact Hamburger was an avowed Nazi, probably an entryist to these Catholic social movements during the period when Nazism was outlawed in Vienna, and a virulent anti-semite. He drove Jews out of the clinic even before the Anschluss, and after 1938 all Jews were purged from the clinic, leaving openings that enabled Asperger to get promoted. It is almost impossible, given the power structures at the time, that Asperger could have been promoted if he disagreed strongly with Hamburger’s politics, but we have more than circumstantial evidence that they agreed: the author of the article, Herwig Czech, uncovered the annual political reports submitted concerning Asperger by the Gestapo, and they consistently judged him to be either neutral or positive towards Nazism. Over time these reports became more positive and confident. Also during the war era Asperger gained new roles in organizations outside his clinic, taking on greater responsibility for public health in Vienna, which would have been impossible if he were politically suspect, and his 1944 PhD thesis was approved by the Nazis.

A review of Asperger’s notes also finds that he did send at least some of his patients to the “euthanasia” program, and in at least one case records a conversation with a parent in which the child’s fate is pretty much accepted by both of them. The head of the institution that did the “euthanasia” killings was a former colleague of Asperger’s, and the author presents pretty damning evidence that Asperger must have known what would happen to the children he referred to the clinic. It is clear from his speeches and writings in the Nazi era that Asperger was not a rabid killer of children with developmental disabilities: he believed in rehabilitating children and finding ways to make them productive members of society, only sending the most “ineducable” children to institutional care and not always to the institution that killed them. But it is also clear that he accepted the importance of “euthanasia” in some instances. In one particularly compelling situation, he was put in charge – along with a group of his peers – of deciding the fate of some 200 “ineducable” children in an institution for the severely mentally disabled, and 35 of those ended up being murdered. It seems unlikely that he did not participate in this process.

The author also notes that in some cases Asperger’s prognoses for children were more severe than those of the doctors at the institute that ran the “euthanasia” program, suggesting that he wasn’t just a fair-weather friend of these racial hygiene ideals. He also makes the point that because Asperger remained in charge of the clinic in the post-war years he was in a very good position to sanitize his case notes of any connection with Nazis and especially with the murder of Jews. Certainly, the author does not credit Asperger’s claims that he was saved from the Gestapo by Hamburger, and suggests that these are straight-up fabrications intended to sanitize Asperger’s role in the wartime public health field.

Was Asperger’s treatment and research ethical in any way?

Reading the article, one question that occurred to me immediately was whether any of his treatments could be ethical, given the context, and also whether his research could possibly have been unbiased. The “euthanasia” program was actually well known in Austria at the time – so well known, in fact, that at one point allied bombers dropped leaflets about it on the city, and there were demonstrations against it at public buildings. So put yourself in the shoes of a parent of a child with a developmental disability, bringing your child to the clinic for an assessment. You know that if your child gets an unfavourable assessment there is a good chance that he or she will be sterilized or taken away and murdered. Asperger offers you a treatment that may rehabilitate the child. Obviously, with the threat of “euthanasia” hanging over your child, you will say yes to this treatment. But in modern medicine there is no way that we could consider that to be willing consent. The parent might actually not care about “rehabilitating” their child, and be perfectly happy for the child to grow up and be loved within the bounds of what their developmental disability allows; it may be that rehabilitation is difficult and challenging for the child, and not in the child’s best emotional interests. But faced with that threat of a racial hygiene-based intervention, as a parent you have to say yes. Which means that in a great many cases I suspect that Asperger’s treatments were not ethical from any post-war perspective.

I also suspect that the research he conducted for his 1944 PhD thesis, in addition to being unethical, was highly biased, because the parents of these children were lying through their teeth to him. Again, consider yourself as the parent of such a child, under threat of sterilization or murder. You “consent” to your child’s treatment regardless of what might be in the child’s best developmental and emotional interests, and also allow the child to be enrolled in Asperger’s study[1]. Then your child will be subjected to various rehabilitation strategies, what Asperger called pedagogical therapy. You will bring your child into the clinic every week or every day for assessments and tests. Presumably the doctor or his staff will ask you questions about the child’s progress: does he or she engage with strangers? How is his or her behavior in this or that situation? In every situation where you can, you will lie and tell them whatever you think is most likely to make them think that your child is progressing. Once you know what the tests at the clinic involve, you will coach your child to make sure he or she performs well in them. You will game every test, lie at every assessment, and scam your way into rehabilitation even if your child is gaining nothing from the program. So all the results on rehabilitation and the nature of the condition that Asperger documents in his 1944 PhD thesis must be based on extremely dubious research data. Data obtained from research subjects simply cannot be believed when some of those subjects know that their responses decide whether their child lives or dies.
Note that this problem with his research exists regardless of whether Asperger was an active Nazi – it’s a consequence of the times, not the doctor – but it is partially ameliorated if Asperger actually was an active resister to Nazi ideology, since it’s conceivable in that case that the first thing he did was give the parent an assurance that he wasn’t going to ship their kid off to die no matter what his diagnosis was. But since we now know he did ship kids off to die, that possibility is off the table. Asperger’s research subjects were consenting to a research study and providing subjective data on the assumption that the study investigator was a murderer with the power to kill their child. This means Asperger’s 1944 work probably needs to be ditched from the medical canon, simply on the basis of the poor quality of the data. It also has implications, I think, for some of his conclusions and their influence on how we view Asperger’s syndrome.

What does this mean for the concept of the autism spectrum?

Asperger introduced the idea of a spectrum of autism, with some of the children he called “autistic psychopaths” being high functioning and some being low functioning, with a spectrum of disorder between. This idea seems to be an important part of modern discussion of autism as well. But from my reading of the paper [again I stress I am not an expert] it seems that this definition was at least partly informed by the child’s response to therapy. That is, if a child responded to therapy and was able to be “rehabilitated”, they were deemed high functioning, while those who did not were considered low functioning. We have seen that it is likely that some of the parents of these children were lying about their children’s functional level, so his research results on this topic are probably unreliable, but there is a deeper problem with this definition, I think. The author implies that Asperger was quite an arrogant and overbearing character, and it seems possible to me that he was deeply flawed in assuming that his therapy would always work, and that if it failed the problem was with the child’s level of function. What if his treatment only worked 50% of the time, randomly? Then the 50% of children who failed are not “low-functioning”, they’re just unlucky. If we compare with a pharmaceutical treatment, it simply is not the case that when your drugs fail your doctor deems this to be because you are “low functioning” and ships you off to the “euthanasia” clinic. They assume the drugs didn’t work and give you better, stronger, or more experimental drugs. Only when all the possible treatments have failed do they finally deem your condition to be incurable. But there is no evidence that Asperger considered the possibility that his treatment was the problem, and because the treatment was entirely subjective – the parameters decided on a case-by-case basis – there is no way to know whether the problem was the children or the treatment. 
So to the extent that this concept of a spectrum is determined by Asperger’s judgment of how the child responded to his entirely subjective treatment, maybe the spectrum doesn’t exist?

This is particularly a problem because the concept of “functioning” was deeply important to the Nazis and had a large connection to who got selected for murder. In the Nazi era, to quote Negan, “people were a resource”, and everyone was expected to be functioning. Asperger’s interest in this spectrum and the diagnosis of children along it wasn’t just, or even primarily, driven by a desire to understand the condition of “autistic psychopathy”; it was integral to his racial hygiene conception of what to do with these children. In determining where on the spectrum they lay he was providing a social and public health diagnosis, not a personal diagnosis. His concern here was not with the child’s health or wellbeing or even an accurate assessment of the depth and nature of their disability – he and his colleagues were interested in deciding whether to kill them or not. Given the likely biases in his research, the dubious link between the definition of the spectrum and his own highly subjective treatment strategy, and the real reasons for defining this spectrum, is it a good idea to keep it as a concept in the handling of autism in the modern medical world? Should we revisit this concept, if not to throw it away then at least to reconsider how we define the spectrum and why we define it? Is it in the best interests of the child and/or their family to apply this concept?

How much did Asperger’s racial hygiene influence ideas about autism’s heritability?

Again, I want to stress that I know little about autism and it is not my goal here to dissect the details of this condition. However, from what I have seen of the autism advocacy movement, there does seem to be a strong desire to find some deep biological cause of the condition. I think parents want – rightly – to believe that it is not their fault that their child is autistic, and that the condition is not caused by environmental factors that might somehow be associated with their pre- or post-natal behaviors. Although the causes of autism are not clear, there seems to be a strong desire among some in the autism community to see it as biological or inherited. I think this is part of the reason that Andrew Wakefield’s scam linking autism to MMR vaccines remains successful despite his being struck off the medical register in the UK and his exile to America. Parents want to think that they did not cause this condition, and blaming a pharmaceutical company is an easy alternative to this possibility. Heritability is another alternative to behavioral or environmental explanations. Asperger of course thought that autism was entirely inherited, blaming it – and its severity – on the child’s “constitution”, which was his phrase for their genetic inheritance. This is natural for a Nazi, of course – Nazis believe everything is inherited. Asperger also believed that sexual abuse was due to genetic causes (some children had a genetic property that led them to “seduce” adults!). Given Asperger’s influence on the definition of autism, I think it would be a good idea to assess how much his ideas also influence the idea that autism is inherited or biologically determined, and to question the extent to which this is just received knowledge from the original researcher. On a broader level, I wonder how many conditions identified during the war era and immediately afterwards were influenced by racial hygiene ideals, and how much the Nazi medical establishment left a taint on European medical research generally.

What lessons can we learn about public health practice from this case?

It seems pretty clear that some mistakes were made in the decision to assign Asperger’s name to this condition, given what we now know about his past. It also seems clear that Asperger was able to whitewash his reputation and bury his responsibilities for many years, including potentially avoiding being held accountable as an accessory to murder. How many other medical doctors, social scientists and public health workers from this time were also able to launder their history and reinvent themselves in the post-war era as good Germans who resisted the Nazis, rather than active accomplices of a murderous and cruel regime? What is the impact of their rehabilitation on the ethics and practice of medicine or public health in the post-war era? If someone was a Nazi, who believed that murdering the sick, disabled and certain races for the good of the race was a good thing, then when they laundered their history there is no reason to think they laundered their beliefs as well. Instead they carried these beliefs into the post war era, and presumably quietly continued acting on them in the institutions they now occupied and corrupted. How much of European public health practice still bears the taint of these people? It’s worth bearing in mind that in the post war era many European countries continued to run a variety of programs that we now consider to have been rife with human rights abuse, in particular the way institutions for the mentally ill were run, the treatment of the Roma people (which often maintained racial-hygiene elements even decades after the war), treatment of “promiscuous” women and single mothers, and management of orphanages. How much of this is due to the ideas of people like Asperger, propagating slyly through the post-war public health institutional framework, carefully hidden from view by practitioners who were assiduously purging past evidence of their criminal actions and building public reputations for purity and good ethics? 
I hope that medical historians like Czech will in future investigate these questions.

This is not just a historical matter, either. I have colleagues and collaborators who work in countries experiencing various degrees of authoritarianism and/or racism – countries like China, Vietnam, Singapore, the USA – who are presumably vulnerable to the same kinds of institutional pressures at work in Nazi Germany. There have been cases, for example, of studies published from China that were likely done using organs harvested from prisoners. Presumably the authors of those studies thought this practice was okay? If China goes down a racial hygiene path, will public health workers who are currently doing good, solid work on improving the public health of the population start shifting their ideals towards murderous extermination? Again, this is not an academic question: after 9/11, the USA’s despicable regime of torture was developed by two psychologists, who presumably were well aware of the ethical standards their discipline is supposed to maintain, and just ignored them. The American Psychological Association had to amend its code in 2016 to include an explicit statement about avoiding harm, but I can’t find any evidence of any disciplinary proceedings by either the APA or the psychologists’ graduating universities to hold the psychologists accountable for their involvement in this shocking scheme. So it is not just in dictatorships that public policy pressure can lead to doctors adopting highly unethical standards. Medical, psychological and public health communities need to take much stronger action to make sure that our members aren’t allowed to give in to their worst impulses when political and social pressure comes to bear on them.

These ideas are still with us

As a final point, I want to note that the ideas that motivated Asperger are not all dead, and the battle against the pernicious influence of racial hygiene was not won in 1945. Here is Asperger in 1952, talking about “feeblemindedness”:

Multiple studies, above all in Germany, have shown that these families procreate in numbers clearly above the average, especially in the cities. [They] live without inhibitions, and rely without scruples on public welfare to raise or help raise their children. It is clear that this fact presents a very serious eugenic problem, a solution to which is far off—all the more, since the eugenic policies of the recent past have turned out to be unacceptable from a human standpoint

And here is Charles Murray in 1994:

We are silent partly because we are as apprehensive as most other people about what might happen when a government decides to social-engineer who has babies and who doesn’t. We can imagine no recommendation for using the government to manipulate fertility that does not have dangers. But this highlights the problem: The United States already has policies that inadvertently social-engineer who has babies, and it is encouraging the wrong women. If the United States did as much to encourage high-IQ women to have babies as it now does to encourage low-IQ women, it would rightly be described as engaging in aggressive manipulation of fertility. The technically precise description of America’s fertility policy is that it subsidizes births among poor women, who are also disproportionately at the low end of the intelligence distribution. We urge generally that these policies, represented by the extensive network of cash and services for low-income women who have babies, be ended. [Emphasis in the Vox original]

There is an effort in Trump’s America to rehabilitate Murray’s reputation, long after his policy prescriptions were enacted during the 1990s. There isn’t any real difference between Murray in 1994, Murray’s defenders in 2018, or Asperger in 1952. We now know what the basis for Asperger’s beliefs was. Sixty-odd years later they’re still there in polite society, almost getting to broadcast themselves through the opinion pages of a major centrist magazine. Racial hygiene didn’t die with the Nazis, and we need to redouble our efforts now to get this pernicious ideology out of public health, medicine, and public policy. I expect that in the next few months this will include some uncomfortable discussions about Asperger’s legacy, and I hope a reassessment of the entire definition of autism, Asperger’s syndrome and its management. But we should all be aware that in these troubled times, the ideals that motivated Asperger did not die with him, and our fields are still vulnerable to their evil influence.


fn1: Note that you consent to this study regardless of your actual views on its merits, whether it will cause harm to your child, etc. because this doctor is going to decide whether your child “rehabilitates” or slides out of view and into the T4 program where they will die of “pneumonia” within 6 months, and so you are going to do everything this doctor asks. This is not consent.

The media this week are exploding with news that a company called Cambridge Analytica used shadily-obtained Facebook data to influence the US elections. The data was harvested by some other shady company using an app that legally exploited Facebook’s privacy rules at the time, and then handed over to Cambridge Analytica, who then used the data to micro-target adverts over Facebook during the election, mostly aimed at getting Trump elected. The news is still growing, and it appears that Cambridge Analytica was up to a bunch of other shady stuff too – swinging elections in developing countries through fraud and honey-traps, getting Facebook data from other sources and possibly colluding illegally with the Trump campaign against campaign funding laws – and it certainly looks like a lot of trouble is deservedly coming their way.

In response to this a lot of people have been discussing Facebook itself as if it is responsible for this problem, is itself a shady operator, or somehow represents a new and unique problem in the relationship between citizens, the media and politics. Elon Musk has deleted his companies’ Facebook accounts, there is a #deleteFacebook campaign running around, and lots of people are suggesting that the Facebook model of social networking is fundamentally bad (see e.g. this Vox article about how Facebook is simply a bad idea).

I think a lot of this reaction against Facebook is misguided, does not see the real problem, and falls into the standard mistake of thinking a new technology must necessarily come with new and unique threats. I think it misses the real problem underlying Cambridge Analytica’s use of Facebook data to micro-target ads during the election and to manipulate public opinion: the people reading the ads.

We use Facebook precisely because of the unique benefits of its social and sharing model. We want to see our friends’ lives and opinions shared amongst ourselves, we want to be able to pass along things we like or approve of, and we want to be able to engage with what our friends are thinking and saying. Some people use Facebook as I do, carefully curating the content providers allowed on their feed to ensure they aren’t offensive or upsetting, and keeping out political opinions they disagree with; others use it for the opposite purpose, to engage with their friends’ opinions, see how they are thinking, and openly debate and disagree about a wide range of topics in a social forum. Many of us treat it as an aggregator for cat videos and cute viral shit; some of us only use it to keep track of friends. But in all cases the ability of the platform to share and engage is why we use it. It’s the one thing that separates it from traditional mass consumption media. This is its revolutionary aspect.

But what we engage with on Facebook is still media. If your friend shares a Fox and Friends video of John Bolton claiming that Hillary Clinton is actually a lizard person, when you watch that video you are engaging with it just as if you were engaging with Fox and Friends itself. The fact that it’s on Facebook instead of TV doesn’t suddenly exonerate you of the responsibility and the ability to identify that John Bolton is full of shit. If Cambridge Analytica micro-targets you with an ad that features John Bolton claiming that Hillary Clinton is a lizard person, that means Cambridge Analytica have evidence that you are susceptible to that line of reasoning, but the fundamental problem here remains that you are susceptible to that line of reasoning. Their ad doesn’t become extra brain-washy because it was on Facebook. Yes, it’s possible that your friend shared it, and we all know that people trust their friends’ judgment. But if your friends think that shit is reasonable, and you still trust your friends’ judgment, then you and your friends have a problem. That’s not Facebook’s problem, it’s yours.

This problem existed before Facebook, and it exists now outside of Facebook. Something like 40% of American adults think that Fox News is a reliable and trustworthy source of news, and many of those people think that anything outside of Fox News is lying and untrustworthy “liberal media”. The US President apparently spends a lot of his “executive time” watching Fox and Friends and live tweeting his rage spasms. No one forces him to watch Fox and Friends, he has a remote control and fingers, he could choose to watch the BBC. It’s not Facebook’s fault, or even Fox News’s fault, that the president is a dimwit who believes anything John Bolton says.

This is a much bigger problem than Facebook, and it’s a problem in the American electorate and population. Sure, we could all be more media savvy, we could all benefit from better understanding how Facebook abuses privacy settings, shares our data for profit, and enables micro-targeting. But once that media gets to you it’s still media, and you still have a responsibility to check whether it’s true, to assess it against other independent sources of media, to engage intellectually with it in a way that ensures you don’t just believe any old junk. If you trust your friends’ views on vaccinations or organic food or Seth Rich’s death more than you trust a doctor or a police prosecutor then you have a problem. Sure, Facebook might improve the reach of people wanting to take advantage of that problem, but let’s not overdo it here: in the 1990s you would have been at a bbq party or a bar, nodding along as your friend told you that vaccines cause autism and believing every word of it. The problem then was you, and the problem now is you. In fact it is much easier now for you to not be the problem. Back in the 1990s at that bbq you couldn’t have surreptitiously whipped out your iPhone and googled “Andrew Wakefield” and discovered that he’s a fraud who has been struck off by the GMC. Now you can, and if you choose not to because you think everything your paranoid conspiracy theorist friend says is true, the problem is you. If you’re watching some bullshit Cambridge Analytica ad about how Hillary Clinton killed Seth Rich, you’re on the internet, so you have the ability to cross reference that information and find out what the truth might actually be. If you didn’t do that, you’re lazy or you already believe it or you don’t care or you’re deeply stupid. It’s not Facebook’s fault, or Cambridge Analytica’s fault. It’s yours.

Facebook offers shady operatives like Robert Mercer the ability to micro-target their conspiracy theories and lies, and deeper and more effective reach for those lies through efficient use of advertising money and the multiplicative effect of the social network feature. It also gives them a little bit of a trust boost, because people believe their friends are trustworthy. But in the end the people consuming the media this shady group produces are still people with an education, judgment, a sense of identity and a perspective on the world. They are still able to look at junk like this and decide that it is in fact junk. If you sat through the 2016 election campaign thinking that this con-artist oligarch was going to drain the swamp, the problem is you. If you thought that Clinton’s email practices were the worst security issue in the election, the problem is you. If you honestly believed The Young Turks or Jacobin mag when they told you Clinton was more militarist than Trump, the problem is you. If you believed Glenn Greenwald when he told you the real threat to American security was Clinton’s surveillance and security policies, the problem is you. If you believed that Trump cared more about working people than Hillary Clinton did, then the problem is you. This stuff was all obvious and objectively checkable and easy to read, and you didn’t bother. The problem is not that Facebook was used by a shady right wing mob to manipulate your opinions into thinking Clinton was going to start world war 3 and hand everyone’s money to the bankers. The problem is that when this utter bullshit landed in your feed, you believed it.

Of course the problem doesn’t stop with the consumers of media but with the creators. Chris Cillizza is a journalist who hounded Clinton about her emails and her security issues before the election, and to this day continues to hound her, and he worked for reputable media organizations who thought his single-minded obsession with Clinton was responsible journalism. The NY Times was all over the email issues, and plenty of NY Times columnists like Maureen Dowd were sure Trump was less militarist than Clinton. Fox carefully curated their news feed to ensure the pussy-grabbing scandal was never covered, so more Americans knew about the emails than the pussy-grabbing. Obviously if no one is creating content about how terrible Trump is then we on Facebook are not able to share it with each other. But again the problem here is not Facebook – it’s the American media. Just this week we learn that the Atlantic, a supposedly centrist publication, is hiring Kevin D Williamson – a man who believes women who get abortions should be hanged – to provide “balance” to its opinion section. This isn’t Facebook’s fault. The utter failure of the US media to hold their government even vaguely accountable for its actions over the past 30 years, or to inquire with any depth or intelligence into the utter corruption of the Republican party, is not Facebook’s fault or ours, it’s theirs. But it is our job as citizens to look elsewhere, to try to understand the flaws in the reporting, to deploy our education to the benefit of ourselves and the civic society of which we are a part. That’s not Facebook’s job, it’s ours. Voting is a responsibility as well as a right, and when you prepare to vote you have the responsibility to understand the information available about the people you are going to vote for. 
If you decide that you would rather believe Clinton killed Seth Rich to cover up a paedophile scandal than read the Democratic Party platform and realize that strategic voting for Clinton would benefit you and your class, then the problem is you. You live in a free society with free speech, and you chose to believe bullshit without checking it.

Deleting Facebook won’t solve the bigger problem, which is that many people in America are not able to tell lies from truth. The problem is not Facebook, it’s you.


Nail them to the wall

In September 2017 Philip Morris International (PMI) – one of the world’s largest cigarette companies – introduced a new foundation to the world: the Foundation for a Smoke-Free World. This foundation will receive $80 million per year from PMI for the next 12 years and devote this money to researching “smoking cessation, smoking harm reduction and alternative livelihoods for tobacco farmers”, with the aim of drawing in more money from non-tobacco donors over that time. It is seeking advice on how to spend its research money, and it claims to be completely independent of the tobacco industry – although it receives money from PMI to the tune of almost a billion dollars, it claims to have a completely independent research agenda.

The website for the Foundation includes a bunch of compelling statistics on its front page: there is one death every six seconds from smoking, 7.2 million deaths annually, second-hand smoke kills 890,000 people annually, and smoking kills half of all its long-term users. It’s fascinating that a company that as late as the late 1990s was claiming there was no evidence its product kills has now set up a foundation with such a powerful admission of the toxic nature of its product. It’s also wrong: the most recent research suggests that two-thirds of long-term users will die from smoking. It’s revealing that even when PMI is being honest it understates the true level of destruction it has wrought on the human race.

That should serve as an object lesson in what this Foundation is really about. It’s not an exercise in genuine tobacco control, but a strategy to launder PMI’s reputation, and to escape the tobacco control deadlock. If PMI took these statistics seriously it could solve the problem it appears to have identified very simply, by ceasing the production of cigarettes and winding up its business. I’m sure everyone on earth would applaud a bunch of very rich tobacco company directors who awarded themselves a fat bonus and simply shut down their business, leaving their shareholders screwed. But that’s not what PMI wants to do. They want to launder their reputation and squirm out from under the pressure civil society is placing on them. They want to start a new business looking all shiny and responsible, and the Foundation is their tool.

PMI have another business model in mind. They are the mastermind behind iQos, the heat-not-burn product that they are trialling with huge success in Japan. This cigarette alternative still provides its user with a nicotine hit, but it does so by heating a tobacco substance rather than burning it, avoiding many of the carcinogenic combustion products of cigarettes. PMI have been touting this as the future alternative to cigarettes, and are claiming huge market share gains in Japan based on the product. Heat-not-burn technologies offer clear harm reduction opportunities for tobacco use: although we don’t know exactly how toxic they are, their toxicity is almost certainly much lower than that of cigarettes, and every smoker who switches to iQos is likely significantly reducing their long-term cancer risk. What PMI needs is for the world to adopt a harm reduction strategy for smoking, so that they can switch from cigarettes to iQos. But the tobacco control community is still divided on whether harm reduction is a better approach than prohibition and demand reduction, which between them have been very successful in reducing smoking.

So isn’t it convenient that there is a new Foundation with a billion dollars to spend on a research platform of “smoking cessation, harm reduction and alternative livelihoods.” It’s as if this Foundation’s work perfectly aligns with PMI’s business strategy. And is it even big money? Recently PMI lost a court case against plain packaging in Australia – because although their Foundation admits that smoking kills, they weren’t willing to let the Australian government sell packages that say as much – and now have to pay at least $50 million in costs. PMI’s sponsorship deal with Ferrari will cost them $160 million. They spent $24 million fighting plain packaging laws in Uruguay (population: 4 million). $80 million is not a lot of money for them, and they will likely spend as much every year lobbying governments to postpone harsh measures, fighting the Framework Convention on Tobacco Control, and advertising their lethal product. This Foundation is not a genuine vehicle for research, it’s an advertising strategy.

It’s a particularly sleazy advertising strategy when you consider the company’s history and what the Foundation claims to do. This company fought any recognition that its products kill, but this Foundation admits that the products kill, while PMI itself continues to fight any responsibility for the damage it has done. This company worked as hard as it could for 50 years to get as many people as possible addicted to this fatal product, but this Foundation headlines its website with “a billion people are addicted and want to stop”. This Foundation will research smoking cessation while the company that funds it fights every attempt to prevent smoking initiation. The company no doubt knows that cessation is extremely difficult, and that ten dollars spent on cessation achieve less than one dollar spent on preventing initiation. It’s precious PR in a time when tobacco companies are really struggling to find anything good to say about themselves.

And as proof of the PR gains, witness the Lancet‘s craven editorial on the Foundation, which argues that public health researchers and tobacco control activists should engage with it rather than ostracizing it, in the hope of finding some common ground on this murderous product. The WHO is not so pathetic. In a press release soon after the Foundation was established, they point out that it directly contravenes Article 5.3 of the Framework Convention on Tobacco Control, which forbids signatories from allowing tobacco companies to have any involvement in setting public health policy. They state openly that they won’t engage with the organization, and request that others also do not. The WHO has been at the forefront of the battle against tobacco and the tobacco industry for many years, and they aren’t fooled by these kinds of shenanigans. This is an oily trick by Big Tobacco to launder their reputation and try to ingratiate themselves with a world that is sick of their tricks and lies. We shouldn’t stand for it.

I think it’s unlikely that researchers will take this Foundation’s money. Most reputable public health journals have a strict rule that they will not publish research funded by tobacco companies or organizations associated with them, and it is painfully obvious that this greasy foundation is a tobacco company front. This means that most researchers won’t be able to publish any research they do with money from this Foundation, and I suspect this means they won’t waste their time applying for the money. It seems likely to me that it will struggle to disburse its research funds in a way that, for example, the Bill and Melinda Gates Foundation does not. I certainly won’t be trying to get any of this group’s money.

The news of this Foundation’s establishment is not entirely bad, though. Its existence is a big sign that the tobacco control movement is winning. PMI know that their market is collapsing and their days are numbered. Sure, they can try to target emerging markets in countries like China, but they know the tobacco control movement will take hold in those markets too, and they’re finding it increasingly difficult to make headway. Smoking rates are plummeting in the highest-profit markets, and they’re forced to rely on slimmer pickings in developing countries where tobacco control is growing in power rapidly. At the same time their market share is being stolen in developed countries by e-cigarettes, a market they have no control over, and as developing nations become wealthier and tobacco control strengthens, e-cigarettes grow in popularity there too. They can see their days are numbered. Furthermore, the Foundation is a sign that the tobacco companies’ previously united front on strategy is falling apart. After the UK high court rejected a tobacco company challenge to plain packaging laws, PMI alone decided not to join an appeal, and now PMI has established this Foundation – a sign that the tobacco companies are starting to lose their once-powerful alliance on strategy against the tobacco control movement. PMI admits they’ve lost, has developed iQos, and is looking to find an alternative path to the future while the other tobacco companies fight to defend their product.

But should PMI be allowed to take their path? From a public health perspective it’s a short term gain if PMI switch to being a provider of harm reducing products. But there are a bunch of Chinese technology companies offering e-cigarettes as an alternative to smoking. If we allow PMI to join that harm reduction market they will be able to escape the long term consequences of their business decisions. And should they be allowed to? I think they shouldn’t. I think the tobacco companies should be nailed to the wall for what they did. For nearly 70 years these scumbags have denied their products caused any health problems, have spent huge amounts of money on fighting any efforts to control their behavior, and have targeted children and the most vulnerable. They have spent huge amounts of money establishing a network of organizations, intellectuals and front groups that defend their work but – worse still – pollute the entire discourse of scientific and evidence based policy. The growth of global warming denialism, DDT denialism, and anti-environmentalism is connected to Big Tobacco’s efforts to undermine scientific evidence for decent public health policy in the 1980s and 1990s. These companies have done everything they can to pollute public discourse over decades, in defense of a product that we have known is poison since the 1950s. They have had a completely pernicious effect on public debate and all the while their customers have been dying. These companies should not be allowed to escape the responsibility for what they did. Sure, PMI could develop and market a heat-not-burn product or some kind of e-cigarette: but should we let them, when some perfectly innocent Chinese company could steal their market share? No, we should not. Their murderous antics over 70 years should be an albatross around their neck, dragging these companies down into ruin. 
They should be shackled to their product, never able to escape from it, and their senior staff should never be allowed to escape responsibility for their role in promoting and marketing this death. The Foundation for a Smoke-Free World is PMI’s attempt to escape the shackles of a murderous poison that it flogged off to young and poor people remorselessly for 70 years. They should not be allowed to get away with it – they should be nailed to the wall for what they did. No one should cooperate with this corrupt and sleazy new initiative. PMI should die as if they had been afflicted with the cancer that is their stock in trade, and they should not be allowed to worm out from under the pressure they now face. Let them suffer for the damage they did to human bodies and civil society, and do not cooperate with this sick and cynical Foundation.