My recent post on the case fatality ratio of the new Wuhan coronavirus sparked a long discussion about the role of European epidemics in the colonization of the new world. There is a theory that when Europeans came to the new world (the Americas, Australia, etc.) they brought with them diseases that tore through the local populations like wildfire. Because those populations had never been exposed to these diseases, the theory goes, lethality was much higher, and even diseases Europeans were used to (like influenza) were highly destructive in these immunologically naive populations.

This theory sparked my statistician’s skepticism, and also my cynicism about colonial narratives. Europeans arrived in the Americas in 1492, an era not known for its highly advanced demography, and when they arrived, counting the locals wasn’t their first priority. Epidemiology wasn’t particularly advanced at that time either, medicine was of incredibly poor quality, and accounts from that era have survived only patchily. Furthermore, I don’t see any evidence that mortality rates due to diseases like smallpox and plague have changed over time in western populations. And because our encounters with immunologically naive populations in the past 500 years have been overwhelmingly hostile, it’s hard to believe anyone bothered to adequately (let alone accurately) record what happened, and hard to imagine there have been any actual, valid studies of immunologically naive populations in modern times.

Furthermore, there has been a major revisionist movement in the west over the past 20 years which has tried to deny the reality of genocide in the Americas and Australia, and to cast the white invaders as innocent of any crimes, or at worst guilty of a few well-meaning mistakes. In Australia this has been spearheaded by Keith Windschuttle, whose The Fabrication of Aboriginal History series explicitly attempts to deny violence towards Aborigines and recast the destruction of Australian Aborigines as a consequence of disease and demographic decline. This has been pushed by national newspapers (The Australian, of course, fulfilling their role as propagandists for Satan) and our former prime minister, and its “success” has no doubt sparked similar narratives in other countries. There is even a counter-narrative in the Spanish world, the “Black Legend”, which dismisses claims of violence by Spanish conquistadores as propaganda spread by England and France. It’s very convenient for these people if they can claim that immunologically naive populations are especially vulnerable, so that population decline due to violence can be recast as the consequence of disease. They can even claim that mass movements of indigenous populations occurred due to disease, not genocide. Handy!

This led me to ask two related questions:

  1. Are immunologically naive populations actually subject to higher mortality rates when disease hits them?
  2. Did disease kill the majority of the population in the Americas, and was that disease introduced by Europeans?

The first question can be answered by looking at the history of the Black Death in Europe, and by genetic studies. The second depends on demographic and epidemiological data, and as I will show, there is essentially none, and all the accounts are extremely dodgy.

The history of diseases in naive populations

A population that is naive to a disease is referred to as a “virgin soil” population, although it appears that this name is never used to describe European populations affected by the plague (which was imported from Asia) – “virgin soil”, along with terra nullius, is a concept reserved for the new world. In fact Europe was virgin soil for the plague in the 14th century, and experienced repeated and horrific epidemics of the disease from the 14th to the 16th century, with smaller outbreaks later on. In total the Black Death is estimated to have killed 30-60% of the population of Europe, and to have precipitated huge social changes across the continent. That was 700 years ago, and yet today the case fatality rate of untreated plague remains around 60%, so 700 years of exposure to this disease hasn’t changed European susceptibility at all.

We can also see this in influenza. The H1N1 pandemic of 2009 killed only about 0.01% of the people who caught it, even though it was a new strain of influenza to which people could be expected to have no immunity. The Spanish flu probably killed 10-20% of the people it infected, but it was not especially more deadly in isolated communities that had never experienced influenza before. In Samoa, for example, it probably killed about 20% of the population after infecting 90%, which suggests it did not behave particularly egregiously in an unexposed population. Smallpox, which has existed in humans for 10,000 years, had a similar mortality rate over most of its history, with variations in that rate driven primarily by the number of people infected and the quality of the healthcare system. There is some evidence that the mortality rate was lower in Africans, who had been exposed to it for longer, but if so this took 10,000 years to manifest, which suggests that in general infectious diseases do not behave differently in “virgin soil” populations, though they can be much worse in populations with inadequate health care or infection control.

It’s worth noting that many estimates of the impact of these diseases rely on extremely dubious estimates of population. Putting aside the demographic methods of the 14th century, Samoa in 1918 was a colony managed by New Zealand, with a colonial administration so incompetent that it allowed people to disembark from a plague ship flying a yellow quarantine flag, and then mismanaged the resulting epidemic so badly that everyone on the island was infected. Did New Zealand’s colonial administration have any incentive to accurately count the population before the epidemic? Did they accurately register newborns and the elderly, or did they only record the working-age population? How good were their records? If the Samoan population was underestimated by even a small amount, the mortality rate plummets, and conclusions about the effectiveness of the disease in this naive population change significantly. And was the population even naive? Were the NZ colonial administrators recording every previous influenza epidemic on the island?

These problems are an order of magnitude worse when we try to understand what happened in native populations.

How many Spaniards went to Mexico?

Accounts of the effect of epidemics depend ultimately on our knowledge of the population affected, and population estimation is a very modern science. How was this done in 16th century America, by people who were busy slaughtering the people we now wish they had been counting? What was the variation in population estimates, and who was recording population, how, and why? Fortunately we have a partial answer to questions about how population was recorded, because the historian David P. Henige wrote a book called Numbers from Nowhere: The American Indian Contact Population Debate, much of which can be read on Google Books, that makes a lot of strong criticisms of the recording of population at that time. Sadly his specific chapters on the over-estimation of epidemics are not available online, but he does provide an analysis of accounts by Spanish reporters of the numbers of Spanish soldiers present at certain actions on the continent. As an example, he reports on the number of deaths recorded during the Noche Triste, an uprising in the city of Tenochtitlan in which the Aztecs rose against their Spanish occupiers and slaughtered them, driving them out of the city. Spanish accounts of that event – by people who were there – record the number of deaths as anywhere between 150 and 1170, with Cortes (the general in charge) recording the lowest number. Henige also notes accounts of expeditionary forces that vary by up to 10% between reporters who were on the scene, and that may not even mention Indian auxiliaries who probably far outnumbered the Spanish forces. He reports on a famous Spanish chronicler (Las Casas) who misreports the size of the continent itself by a huge amount, and notes that a room that was supposed to be filled with treasure as tribute was given radically different dimensions by different Spanish observers, as was the amount of treasure deposited in it.
He also notes huge discrepancies (up to a factor of 10) in population estimates by colonial administrations in North America. He writes:

If three record books showed Ted Williams lifetime batting average as .276, .344 and .523 respectively, or if three atlases recorded the height of Mt. Everest as 23,263 feet, 29,002 feet, and 44,083 feet, or if three historical dictionaries showed King William XIV as ruling 58 years, 72 years and 109 years, their users would have every right to be thoroughly bemused and would be justified in rejecting them all, even though in each case research could show that in each case one of the figures was correct. Yet these differences are of exactly the same magnitude as those among the sources for the size of Atahualpa’s treasure room that Hemming [an author reporting this story] finds acceptable

These are all relatively trivial examples but they make the point: almost nothing reported from the colonies in the 16th century was accurate. In the absence of accurate reporting, what conclusions can we draw about the role of infectious diseases? And what scientific conclusions can we draw about their relative mortality in virgin soil populations?

Scientific estimates of epidemic mortality in Latin America

The first thing worth noting about scientific reports of epidemic mortality in the Americas is that they often use very old sources. For example, this report on the environmental impact of epidemics in the Americas cites McNeill’s Plagues and Peoples (1977), Dobyns’s population estimates from 1966 and 1983, Cook’s work from 1983, and so on. It also relies on some dubious sources, drawing extensively on Jared Diamond’s 1997 breakout work Guns, Germs and Steel. Some of these works are criticized in Henige’s book for their credulity, and Diamond’s work has been universally canned by academics since it was published, though it has been very influential outside academia. Many of these works were written long before good computational demography was established, and though it’s hard to access them, I suspect their quality is very poor. Indeed, McNeill’s seminal work has been criticized for using the Aryan population model to explain the spread of disease in India. These works are from a time before good scholarship on some of these issues was well established.

Dobyns’s work in turn shows an interesting additional problem, which is that no one knows what caused these epidemics. In his 1993 paper Disease Transfer at Contact, Dobyns reports many different opinions on the diseases that caused the demographic collapse in South America: it may have been smallpox, or plague, or anthrax, or typhus, or influenza, or measles. Dobyns’s accounts also often note that people survived by fleeing, but never consider the possibility that they were fleeing from something other than disease. Contrast that with accounts from North America 400 years later (such as the story of the Nez Perce reported in Bury My Heart at Wounded Knee), which make clear that native Americans were fleeing violence and seeking sanctuary in Canada. There is a lot of certainty missing from these accounts, and we need to be careful before attributing population decline to disease if we don’t know what the disease was, and are relying on accounts from people who refused to consider possible alternative explanations for the social collapse they were witnessing.

This is further complicated by recent studies suggesting that the epidemic which wiped out much of the Mexican population was actually an endemic disease, which jumped from local rats to the indigenous population, spread from the mountains to the coasts (not from European coastal settlements), and had symptoms completely unrelated to European diseases. In this account, a long period of drought followed by rain triggered a population explosion of a local rat species in overcrowded settlements of native peoples, where a type of hantavirus jumped from those rats to humans and then decimated the population. The disease started inland, where the drought had been worse, and spread outward, and it primarily affected indigenous people because they were the ones forced to live in unsanitary conditions as a consequence of the slave-like working conditions forced on them by the invaders. Note that the western invaders, presumably completely naive to this disease, were not affected at all, because the main determinants of vulnerability to disease are not genetic.

Further problems with the epidemic explanation for native American population loss arise from the nature of the transatlantic crossing and the diseases it carried. The transatlantic crossing was long, and if anyone was carrying smallpox or influenza when a ship left port, the epidemic would have burnt itself out by the time the ship reached the Americas. In fact it took 26 years for smallpox to reach the continent. That’s a whole generation of people slaughtering the natives before the first serious disease even arrived. During that time coastal populations would have fled inland, social collapse would have begun, crops would have been abandoned, and some native communities would have sided with the invaders and begun to work against other native communities. In six years of world war 2 the Germans managed to kill tens of millions of Europeans, several million of them through starvation in the east, and created a huge movement of refugee populations that completely changed European demographics and social structures. What did the Spaniards do in 26 years in central America?

It is noticeable that many of the accounts from that time seem not to account for flight or violence. Accounts of that era were highly political, and often reported only information that served whatever agenda the writer was pursuing. Las Casas, for example, whose accounts are often treated as definitive population estimates, appears not to have noticed massive epidemics happening right in front of him. Others did not record any possible reasons why natives were abandoning their fields and farms, and seem not to have considered the possibility that something scarier than disease was stalking the land. The accounts are an obvious mess, with no reliable witnesses and no numbers worth considering for serious study.

Conclusion

Without good quality demographic data, or at least order-of-magnitude accuracy in population estimates, it is not possible to study the dynamics of population collapse. Without decent information on what diseases afflicted local populations, it is impossible to conclude that “virgin soil” populations were more vulnerable to specific diseases. There is considerable evidence, drawn from the European experience with plague and the global experience with influenza, that disease mortality is no different when populations are naive to the disease, and there is no solid evidence of any kind to support the opposite view in indigenous populations. Historical accounts are fundamentally flawed because of their subjectivity, their inaccuracy even when their authors’ interests were not threatened, and the unscientific nature of 16th century thought. A whole generation of conquistadores acted with extreme violence before dangerous diseases arrived on the continent, so many accounts of population collapse must reflect only war; and even after the diseases arrived, it is likely that they were no more dangerous in native populations than they were in Europe, which by the 16th century was experiencing endemic smallpox that regularly killed large numbers of people (in 18th century Europe it killed 400,000 people a year). There is no reason to think that the Americas were special, or that their local population was especially vulnerable to this or any disease.

It is important to recognize that these issues – accurate diagnosis of disease, accurate estimates of the numbers who died, and accurate population numbers – are not just academic exercises. You can’t put them aside and say “well yes, we aren’t sure what disease did it, how many people died, or what the population was, but by all accounts it was bad in the colonies.” That’s not how epidemiology works. You would never, ever accept that kind of hand-waving bullshit when it was applied to your own community. Nobody would accept it if the Chinese government said “yeah, this coronavirus seems bad, but you know there aren’t that many people affected, the population of Wuhan is anywhere from 1 million to 20 million, and we don’t even really know it’s not seasonal influenza or smallpox.” You would rightly reject that shit out of hand. It’s no different when you’re talking about any other population. We have no reason to suspect any special impact of epidemics in the Americas or Australia, and no reason to conclude that they were especially influential in the history of those regions compared to the violence inflicted on the locals – which we know happened, and of which we have many accounts. To look at the accounts we have of disease in the new world and to conclude anything beyond “it happened” is to put undue confidence in very, very vague and very poor reporting. There is no empirical evidence to support many of the claims that have been made in the past 40 years – and especially, by genocide deniers, in the past 20 years – about the role of disease in the destruction of the indigenous populations of the new world.

This matters for two reasons. First of all, it matters because it has interesting implications for how we think about the threat of disease, and how new diseases will affect naive populations when they jump from animals to humans (which is how almost all new diseases start). These diseases can be extremely dangerous, killing 30-60% of the affected people in some cases, but the reality is that for them to become pandemics they need to mutate to facilitate human-to-human transmission, and that mutation significantly reduces their mortality rates. It is rare for a disease that transmits easily to also be dangerous, and there is very little in the history of the human race to suggest otherwise. The Spanish flu pandemic of 1918 is perhaps the sole exception, and if so it should show just how rare such events are. We should, rightly, be concerned about coronaviruses, but we should also not expect that just because we’re naive to them they’re going to be extra dangerous. Diseases do what they do, and that is all.

But more importantly, we need to reject this idea that the catastrophe that unfolded in the new world between 1492 and 1973 wasn’t the fault of its perpetrators, white Europeans, and we need to reject even partial explanations based on epidemics. It was not disease that killed the people of America and Australia. There is no evidence to suggest it was, and a lot of reasons to question the limited evidence that some people present. The epidemic explanation is a nice exculpatory narrative, which tells us that even if white Europeans had approached the people of the new world with open minds and hearts in a spirit of trade and collaboration they would still have been decimated by our diseases. In this story we may have done some bad things but it doesn’t matter, because contact was inevitably going to destroy these fragile and isolated peoples. And this story is wrong. It isn’t just uncertain, it is wrong: there is nothing in the historical record to support it. If white Europeans had approached the new world in this spirit, there would have been a generation of trade and growth on both sides before the diseases struck, and then we could have helped them to escape and overcome the diseases we were familiar with, that were no more dangerous to them than they were to us. Their communities would have been better prepared to resist the social consequences of those diseases because they would not have been at war, and would not have been experiencing social collapse, overcrowding, starvation and poverty because of western genocidal policies. They would not have been forced into overcrowded and desperate accommodation on drought-stricken plains as slaves to Spanish industry, and the homegrown epidemic of 1545-48 would not have affected them anywhere near as badly. 
It’s important to understand that the tragedy that befell native Americans was caused by us, not by our diseases, and our diseases were a minor, final bit of flair on a project of destruction deliberately wrought by western invaders.

This other story – of diseases we couldn’t help but strike them down with, even if we had been pure of heart – is a genocide denier’s story. It’s self-exculpatory nonsense, built on bad statistics and dubious accounts of native life presented by biased observers. It is intended to distract and to deny, to show that even if we did a few bad things the real destruction was inevitable, because these frail and noble savages were doomed from the moment they met us. It is a racist narrative, racist because of its false assumptions about native Americans and racist because of what it assumes about the balance of mortality in the continent, racist for trying to pretend that we didn’t do everything we did. It is superficially appealing, both because it adds interesting complexity to an otherwise simple story, and because it helps to explain the enormity of what Europeans did in the Americas. But it is wrong, and it is racist, and it needs to be rejected. There is no evidence that epidemics played a major role in the destruction of native American communities, no evidence that native Americans were especially vulnerable to our diseases, and nothing in the historical record that exonerates European society from what it did. White Europeans enacted genocide on native Americans, and just a few of them happened to die of some of our diseases during the process. European society needs to accept this simple, horrible fact, and stop looking for excuses for this horrible part of our history.

Since 31st December 2019 there has been an outbreak of a new coronavirus in China. It originated in the city of Wuhan, and over the past 22 days has spread rapidly, including cases in several cities outside China. Initial reports suggested it originated in a seafood market in the city, which had me hoping it was the world’s first fish-to-human infectious disease, though I think we need to wait a while before we establish exactly where it started. It appears to have achieved human-to-human transmission, which is unusual for these zoonotic (animal-origin) viruses. International media are of course reporting breathlessly on it, and you can almost feel them salivating over the possibility of another SARS-style catastrophe. But how dangerous is it?

In this blog post I would like to use some initial data and reports to make an estimate of how dangerous this disease is, for those who might be considering traveling to (or canceling travel to) China. I’d also like to make a few comments on the reporting and politics of this disease, and infectious diseases generally.

The Case Fatality Ratio

For the sake of easy writing, let’s call this new disease Dolphin Flu, since it originated in a fish market. The main measure of the deadliness of any infectious disease is its case fatality ratio (CFR), which is the number of people who die divided by the number of people infected, multiplied by 100. It seems to be a natural law of infectious diseases that the more infectious a disease is, the less fatal it is; anyone who has played that excellent pandemic game on their phone will know that there is a cost associated with a disease being infectious, which is usually that – like the common cold – it spreads fast but kills no one. Understanding the CFR is important to understanding how nasty a disease is likely to be. Here are some benchmark CFRs:

  • Untreated HIV: 100% (i.e. 100% of people infected with HIV die if they aren’t treated)
  • Untreated Ebola: 80-90%
  • Malaria (Africa): 0.45%
  • Spanish influenza (1918): ~3%
  • Measles: 0.2%

Nature is pretty, isn’t she? It’s worth noting that Spanish Influenza was a global catastrophe, which had major political and economic consequences, so any disease with a CFR around the level of influenza that is similarly infectious is a very scary deal. Ebola and HIV are extremely deadly but also not very infectious (you have to have sex to get HIV, which means my reader(s) face almost zero risk). It’s the respiratory diseases (the lungers) that really worry us.

Calculating the Case Fatality Ratio for Dolphin Flu

In order to calculate the CFR we need to know how many people are infected and how many have died. Official government data this morning (reported here) puts the death toll at 17 people, and we can be fairly sure that’s correct, so next we need to estimate the number infected. This excellent website tells us there are 555 confirmed cases, but this is not the right number to use for the calculation, because with all of these respiratory-type diseases there are many cases who never go to a doctor and/or never get confirmed. In ‘flu season we call these “influenza-like illnesses” (ILI), and they are important to understanding how dangerous the disease actually is. In fact for many of these diseases there is an asymptomatic manifestation, in which people get the disease and never really show any symptoms. So we need an estimate of the total number of cases, including those that were not confirmed. Fortunately the excellent infectious disease team at Imperial College (who run a great course in infectious disease modeling if you have the money) have used the number of cases appearing in non-Chinese cities to estimate the total number of cases, using data about travel flows from Wuhan. Their headline estimate at this time is 4000 cases, with an uncertainty interval from 1000 to 9700.
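To give a feel for how this kind of estimate works: if you know roughly how many people fly out of Wuhan each day, the population the airport serves, and how long a case takes to be detected, you can work out the chance that any single case turns up abroad, and scale up from the handful of exported cases. Here is a back-of-envelope sketch; the parameter values below are illustrative assumptions of my own, not the Imperial College team’s actual inputs (those are in their published reports):

```python
# Back-of-envelope version of the exported-cases method.
# All parameter values are illustrative assumptions.
daily_intl_departures = 3_301      # assumed international travellers leaving Wuhan per day
catchment_population = 19_000_000  # assumed population served by Wuhan's airport
detection_window_days = 10         # assumed days from infection to detection
cases_detected_abroad = 7          # assumed number of exported cases so far

# Probability that a single infected person travels abroad before being detected
p_travel = daily_intl_departures / catchment_population * detection_window_days

# If each case exports itself independently with probability p_travel,
# expected exported cases = total_cases * p_travel, so invert:
estimated_total_cases = cases_detected_abroad / p_travel
print(f"Estimated total cases: {estimated_total_cases:.0f}")  # ~4000
```

With these made-up inputs the sketch lands near the team’s headline figure, but the real analysis also propagates uncertainty in each input, which is where the 1000-9700 interval comes from.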

Next we need some information on other diseases. The CDC website for seasonal flu tells us that in the 2017-2018 season in the USA there were 20,731,323 confirmed cases of influenza, 44,802,629 total cases (including unconfirmed) and 61,099 deaths. A Japanese research paper on the H1N1 pandemic tells me there were 637,598 total cases (including unconfirmed) and 85 deaths due to H1N1. The Wikipedia entry on H5N1 bird flu tells me there were 701 confirmed cases and 407 deaths (I think there were very few unconfirmed cases of bird flu because it was so nasty).

Putting this together, we can get the CFR for confirmed and unconfirmed Dolphin Flu, and compare it with these diseases, shown below.

  • Unconfirmed Dolphin Flu: 0.43%, ranging from 0.22% to 1.7%
  • Confirmed Dolphin Flu: 2.98%
  • Unconfirmed Seasonal Flu (2017-18 season, USA): 0.14%, ranging from 0.11% to 0.16%
  • Confirmed Seasonal Flu (2017-18 season, USA): 0.29%
  • Unconfirmed H1N1 (Japan): 0.01%
  • Confirmed H5N1 (Global): 58.06%
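The arithmetic behind those figures is nothing fancy – each one is just deaths divided by cases, times 100. A minimal sketch in Python, reproducing the central estimates from the counts quoted above (the intervals in the list come from the Imperial College uncertainty range on total case numbers):

```python
# CFR = deaths / cases * 100, using the case and death counts quoted in the text.
def cfr(deaths: int, cases: int) -> float:
    """Case fatality ratio as a percentage."""
    return deaths / cases * 100

print(f"Dolphin Flu (estimated total cases): {cfr(17, 4000):.3f}%")              # 0.425%
print(f"Seasonal flu, all cases (USA 2017-18): {cfr(61_099, 44_802_629):.2f}%")  # 0.14%
print(f"Seasonal flu, confirmed only: {cfr(61_099, 20_731_323):.2f}%")           # 0.29%
print(f"H1N1, all cases (Japan): {cfr(85, 637_598):.2f}%")                       # 0.01%
print(f"H5N1, confirmed (global): {cfr(407, 701):.2f}%")                         # 58.06%
```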

This suggests that Dolphin Flu is between 2 and 10 times as dangerous as seasonal influenza, and about as dangerous as malaria if you catch malaria in an African context (i.e. you may not be able to afford or access treatment, and you’re so used to idiopathic fevers that you don’t bother going to the doctor until the encephalitis starts).

That may not sound dangerous, but it’s worth noting that seasonal influenza is one of the most dangerous things that can happen to an adult of child-bearing age, apart from getting in a car or giving birth. It’s also worth noting that, depending on how much the Imperial College team has overestimated the number of unconfirmed cases, Dolphin Flu could be heading towards half as dangerous as Spanish influenza. We don’t yet know if it is as contagious as influenza, but if it is …

I would say at this stage that Dolphin Flu looks pretty nasty. I probably wouldn’t cancel travel, because it’s still in its early stages and the chance of actually getting it is tiny (especially if you aren’t in Wuhan). But tomorrow is Chinese New Year, the largest movement of people on the planet, so in a week I expect that it will be all over China and it may be much harder to go there without getting it. I guess in that context the decision to quarantine Wuhan makes sense – if it’s half as dangerous as Spanish Flu, it’s worth suffering the short term economic damage of shutting down one of China’s largest cities to avoid spreading a disease that could be a global catastrophe.

So, given that information, would you travel? And what decision would you make if you were an administrator of public health in China?

About Cover Ups and Authoritarianism

Media coverage of disease outbreaks almost invariably follows western stereotypes about the country where they happen. With Ebola it’s all about bushmeat-eating primitives who can’t understand modern medicine; with MERS it was secretive religious lunatics; and with anything coming from China it’s a weird mix of Sinophobia, orientalism and obsessions with China’s authoritarian government. Because China fucked up the SARS response, we can see Western media basically salivating at the chance to report on how they’re covering this up too. But it’s important to understand that unconfirmed cases are not covered up cases. With respiratory diseases there will always be unconfirmed cases and there will always be someone who slips through the net and goes traveling, spreading the disease to other cities. Indeed, with a completely new disease it’s entirely possible that there are asymptomatic cases that no health system can detect.

In fact, this time around the Chinese response has been very quick, open and transparent. They notified the WHO of the disease on 31st December, probably very soon after the first cases appeared, and the WHO Director-General has been effusive in his praise of the Chinese response. Within perhaps 10 days of notifying the WHO they had isolated the virus and developed tests, and now they have quarantined a city of 12 million people because they know that the impending Chinese New Year could create major transmission risks. Before the complete quarantine they had introduced fever checks at exit points to international destinations, another sign of taking the disease seriously. This is unlikely to be successful if the disease has an asymptomatic phase (since you can get on the plane before you have a fever), but short of blood-testing everyone in the city, there is little more that anyone could expect the government to do.

How to handle western media panic

None of this will stop western media from playing to the west’s current fear of China, and once the outbreak is over you can bet they will start talking about how the Chinese response was too authoritarian. You can also bet that the mistakes the administration inevitably makes will be discussed as if they are hallmarks of a Chinese problem, rather than mistakes any government could be expected to make when trying to control a disease that spreads at the speed of a cough. And this will all be made worse by the way western media get into an absolute lather about infectious disease stories. So be cautious about stories of Chinese cover-ups and authoritarianism, and avoid believing disease panics. Check in with the WHO’s updates, read the Imperial College website, and be wary of the western media’s over-hyping of disease threats and Chinese collapse. For a balanced view of infectious disease issues generally (and excellent coverage of the tragic, ongoing Ebola virus outbreak in DRC) I recommend the H5N1 blog. For understanding how to interpret risk, I recommend reading David Spiegelhalter’s Twitter feed. And remember, when you’re balancing risks, that getting in a car and choosing to have a child (if you’re a woman) are probably the two most dangerous things anyone in a developed nation can do in their lives. You don’t need to go to China to experience either of those risks!

Let’s hope that this disease turns out to be another fizzer, keep a level head, and don’t let western media hype scare us!


About the picture: The picture is from the Twitter thread of @CarlZha, an excellent independent Chinese voice. It’s a photo of some guys doing renovation work on a clinic somewhere in China. There isn’t actually a Zombie outbreak yet!

Are you young, American, living in America and scared about where your country is headed? Want to get out before it all goes down? Are you worried about getting shot at school or work, or by the police? Don’t think that the healthcare situation is going to get better or even stay as bad as it is? Have a pre-existing condition and don’t know how you’re going to be able to afford medicines after you turn 26 (or even now)? Are you worried about Roe vs. Wade and pretty sure your reproductive rights are going down the tube in the next few years? Noticed that the new Georgia anti-abortion bill includes ectopic pregnancies, so is actually gynocidal? Are you poor and doubt you’ll ever be able to get into a good university and make a decent career, but don’t want to be stuck in an Amazon warehouse the rest of your life because working class work no longer pays in America? Are you black and don’t want to get shot by the police, or Jewish and a little bit worried about where those Proud Boys are taking your country?

Do you need to get out? This post outlines two strategies for a simple and easy way to get out of the USA, for people aged 16-21 who are either finishing high school or finishing university, and not sure what to do next. If you’re confident that even if the Dems win the next presidential election things still aren’t going to get better, you might want to consider one of these two strategies. Both involve leaving America for Japan, and this post is to tell you how.

Strategy 1: English Teacher

Lots of young people don’t know about this, but there are lots of private English teaching companies in Japan that are always looking for staff from native English speaking countries to work in them. To get a job at an English teaching company in Japan you need three basic qualifications: you need to be a native speaker, you need a bachelor’s degree, and you should still be in possession of a face[1]. Most of the big English teaching companies do recruitment tours in the USA, but they usually also have open recruitment on their websites. You can find them pretty easily on Google. For a company like Aeon you will go to a day-long recruitment seminar that doubles as an interview, and usually you’ll get a job offer as a result. You just need to turn up looking presentable, act like you care, and be willing to work with kids. You do not need to be able to speak Japanese or have any knowledge of Japanese culture (though knowing more about Japan than “manga!” and “geisha!” would probably help).

Once you get the job the English teaching company will place you in a random city in Japan, pay for your airfare, and organize an apartment for you. This may be a share house or a one-room apartment. You’ll get paid probably 200-250k yen per month (about 1800 – 2000 USD) and will have to pay taxes and health insurance from that. Health insurance is affordable, and it covers everything: no pre-existing condition exemptions or any shit like that. It starts from the day you arrive in the country. Usually the company will help you set up a bank account, phone, etc., so even if you don’t speak Japanese you’ll be good to go. Once you arrive and get settled you can save a bit of money, and after a few months you’ll be in a position to move somewhere you like, or change companies to a better one. If you speak Japanese because you were lucky enough to study it at high school you can maybe shift to a better job. But the key thing is you’ve landed in civilization, and you’ll be safe.

The salary isn’t great but it’s enough to save money if you don’t do dumb-arsed things, and you will be able to make occasional short trips in Asia on that salary. Japan is not an expensive country and especially if you aren’t in Tokyo or Osaka it’s a super cheap place to live. The working conditions at teaching companies aren’t great (typically some evening and weekend work, and your days off may not be guaranteed to be Saturday and Sunday) but they don’t have at-will firing over here and even though you’re foreign you have all the employment rights of a local, including unemployment benefits after a minimum period of time in the job. English teachers are generally considered to be the lowest of the low among foreigners living in Japan, for reasons you’ll understand within minutes of meeting your colleagues, but it’s better to be the lowest of the low in Japan than to be middle class in America. So do it!

If you’re a high school student this option isn’t open to you (these companies require a bachelor’s degree) but you can aim for it: they don’t care where your degree is from so you can attend a local low-cost uni (I believe you guys call this “community college”?) and still get accepted when you graduate. See my special notes for high school students below.

There are also similar companies in China and Korea (see my notes on other Asian countries below). There is also an Assistant Language Teacher program where you work in schools, which is apparently a little more demanding to get into. Google is your best friend here!

Strategy 2: Japan government scholarship

The Japanese government runs a large scholarship program for students from overseas, called the Japan Government Scholarship, also known as the MEXT scholarship or Monbusho scholarship. This is available for all education levels: undergraduate, masters or PhD. You apply through your embassy (the US website is here) about now. The scholarship pays your university fees, a monthly living allowance, and a return airfare. You can apply for this for your undergraduate studies, so you apply from high school and go straight to university study in Japan. Unless you are planning on studying certain topics (e.g. Japanese literature) you don’t need to be able to speak or read Japanese: they set a Japanese test during the application process but this is used to determine what level of training you need, not to screen you out.

The amazing thing about MEXT scholarships is that they’re not very competitive – not many people know about them and not many people want to move to study in Japan – so even if you don’t have a stellar record you still have a chance. Also they don’t discriminate on race or economic background, as far as I know, and it’s a straight-up merit-based application. The allowance is not great – I think about 100k yen for undergrads and about 150k for postgrads – but you’ll get subsidized uni accommodation and won’t pay tax, so it’s perfectly viable.

If you go for Masters you need to find a supervisor who teaches in English and isn’t an arsehole – this is a big challenge – but you can do it if you try. One big benefit of the MEXT scholarship at postgrad is you get a year as a “research student” during which you don’t study in the department you’ve chosen but instead just learn Japanese. You can get really good at Japanese this way if you pay attention. Another great thing is that once you’re in the MEXT program it’s easier to go to the next step – so you can go from undergraduate to masters to PhD.

Theoretically you could go from 1st year undergraduate to the end of a post-doc on Japan government money, which would put you in Japan for 11 years and probably stand you in a good position for a permanent faculty position, which are like hens’ teeth in the USA but quite common here. ALSO, if you do undergraduate study here you have a very good chance of being able to get a job in a Japanese company when you graduate, probably quite a good one, and build a career here.

The application period is usually about now so get busy!

Special notes for high school students

Note that if you’re finishing high school you can target all of these strategies now. Apply for the MEXT scholarship and if you don’t get it, go to a local community college or whatever they’re called. Target one where you can study an Asian language: Chinese, Korean or Japanese. Then apply for MEXT again at the end of your undergraduate degree, and if you don’t get it apply to an English-teaching company in whichever country’s language you studied. You can use this English teaching job as a base to find a job in whatever field you actually want to work in, because you’ve got four years of language training under your belt and so should be able to speak the local language reasonably well. If this falls through you’re still okay, because no matter how shit your degree from that community college was, a second language is a skill you can take to the bank. You can probably then find an okay job in a US company targeting that country. This means you’re still trapped in a failing state, but at least your attempt to get out didn’t doom you to work at Starbucks (though who knows, four years from now maybe America won’t have any industry except Starbucks).

Remember, if you get the MEXT scholarship you’re going to graduate from university with no debt, proficient in a second language, and with a full career path in Japan likely right there in front of you.

Notes on other Asian countries

Most Asian countries have the English-teaching option available – you can certainly get to China or Korea if you don’t want to go to Japan, and they all have approximately the same requirements. All three countries now have functioning health insurance systems and you won’t get shot in any of them. They’re all aging and need young people, and in both Korea and Japan Americans are generally still viewed well (for now; this is changing). Obviously there are some issues with personal freedom in China, and if things continue to go south in the US-China relationship you might not feel safe from reprisals from the government. Other countries like Thailand, Vietnam etc. also have English-teaching jobs but I’m not sure about the pay and conditions – you might find you can’t save money in these countries and it becomes a kind of trap. I don’t know. But any of the high-income Asian countries is a good place to teach English.

China also offers scholarships for overseas students through the CSC. The Chinese education system is very good and if you get a degree at a good Chinese university you’re probably getting a better education than you’d expect in any American uni. I don’t know if the CSC offers scholarships to Americans (since, let’s face it, you guys suck) or what the long-term consequences of that will be for your career in either country, but it could be worth investigating. You might also want to consider Singapore, which has excellent universities, but I have no idea how it works.

A note on the long-term risks of English teaching

You can make a lifetime career as an English teacher in Japan but it won’t be well paid and you’ll remain permanently lower middle class, which is not a big deal over here (Japan is an equitable country) but also not the best working life to pursue. Most importantly, if you spend more than a few years as an English teacher straight out of uni, your employability in your home country will take a nose dive, because you have no skills or experience relevant to a real job. So you need to make an exit plan if you want to return to the west.

One option is to get an English as a second language (ESL) masters (you can do this online) and try to move into teaching English at uni, which pays slightly more and has a bit more prestige, but is a slightly riskier career (it can mean a permanent career as an adjunct, which is tough). Another option is to try and jump ship to a real company using whatever skills you’ve got, but this can take time and may not lead you to a good place. If your Japanese is good you can maybe shift to being a standard office worker, but if you have no Japanese you need to bear in mind that English teaching is a trap if you do it for more than a few years.

Bear in mind, too, that Japan is aging fast, the pool of available workers is dropping in size, and as time goes on opportunities for foreigners here (even foreigners with weak language skills) are only going to grow. Also, contrary to what you’ve heard (see below), Japan is becoming more and more open and welcoming to foreigners, even under supposedly militarist Prime Minister Abe, so things will just get easier as time passes. It’s worth risking for a year or two to try and build an escape plan, and if it doesn’t work out, what have you lost? Just be ready to jump out if you see that trap closing, before it’s too late.

Why Japan?

I’m recommending this escape plan because I know Japan: I live here and I know it’s a good place to live. You’ve probably heard that it’s expensive, treats foreigners badly and is very inward-looking. None of this is true. You’re not going to experience much racism at all, if you’re a woman you’re not going to get sexually assaulted on the train, and it’s not an expensive place to live. Rent is affordable even in Tokyo on an English teacher’s wage, your health insurance is fixed at a small proportion of your salary and is always affordable, food is good and cheap, and you can live a good life here even on low wages. You can’t live an American life of huge housing, a car, an assault rifle and all the home-delivered pizza you can eat but that’s a good thing, not a bad thing: those are the reasons your country is killing the planet and itself.

If you live in Japan you will be safe, you will be healthy, and you’ll be able to build a life for yourself even on a low income. If you want to live here long term you’ll need to learn the language (which is boring and bothersome to do); you may find that as a foreigner you are not going to be able to ascend to the peak of your career here no matter what it is. It may be hard for you to buy a place here either because your low salary precludes saving a lot of money for a deposit, or the bank won’t loan you money if you don’t have permanent residency. You won’t be able to afford to go back to America a lot unless you get out of the English teaching trade, and you will be restricted to short visits to nearby Asian countries. You’ll probably have to work hard and if you choose the wrong company after university (or the wrong post-graduate supervisor) you’ll be bullied and overworked. These are risks of moving here! But you’ll definitely have healthcare, you’ll have no risk of being shot by either crazy white guys or police, if you’re a woman you can walk safely at night no matter what time or how deserted the streets, and no matter what you earn people will show you the respect you deserve as a human being. And the government is not going crazy, nor will it.

So if you’re young and scared and worried about your future in America, and you really want to get out, consider these two strategies, and get out while you still can.


fn1: Actually I’m not sure if they care about whether you have a face. But just to be sure, apply now before some lunatic gets a chance to shoot you in the face.

Uhtred son of Uhtred, regular ale drinker, who I predict will die of injury (but will go to Valhalla, unlike you you ale-sodden wretch)

There has been some fuss in the media recently about a new study showing no level of alcohol use is safe. It received a lot of media attention (for example here), reversed a generally held belief that moderate consumption of alcohol improves health (this is even enshrined in the Greek food pyramid, which has a separate category for wine and olive oil[1]), and led to angsty editorials about “what is to be done” about alcohol. Although there are definitely things that need to be done about alcohol, prohibition is an incredibly stupid and dangerous policy, and so are some of its less odious cousins, so before we go full Leroy Jenkins on alcohol policy it might be a good idea to ask whether this study really is the bee’s knees, and whether it really shows what it claims to show.

This study is a product of the Global Burden of Disease (GBD) project, at the Institute for Health Metrics and Evaluation (IHME). I’m intimately acquainted with this group because I made the mistake of getting involved with them a few years ago (I’m not now) so I saw how their sausage is made, and I learnt about a few of their key techniques. In fact I supervised a student who, to the best of my knowledge, remains the only person on earth (i.e. the only person in a population of 7 billion people, outside of two people at IHME) who was able to install a fundamental software package they use. So I think I know something about how this institution does its analyses. I think it’s safe to say that they aren’t all they’re cracked up to be, and I want to explain in this post how their paper is a disaster for public health.

The way that the IHME works in these papers is always pretty similar, and this paper is no exception. First they identify a set of diseases and health conditions related to their chosen risk (in this case the chosen risk is alcohol). Then they run through a bunch of previously published studies to identify the numerical magnitude of increased risk of these diseases associated with exposure to the risk. Then they estimate the level of exposure in every country on earth (this is a very difficult task which they use dodgy methods to complete). Then they calculate the number of deaths due to the conditions associated with this risk (this is also an incredibly difficult task to which they apply a set of poorly-accredited methods). Finally they use a method called comparative risk assessment (CRA) to calculate the proportion of deaths due to the exposure. CRA is in principle an excellent technique but there are certain aspects of their application of it that are particularly shonky, but which we probably don’t need to touch on here.

So in assessing this paper we need to consider three main issues: how they assess risk, how they assess exposure, and how they assess deaths. We will look at these three parts of their method and see that they are fundamentally flawed.

Problems with risk assessment

To assess the risk associated with alcohol consumption the IHME used a standard technique called meta-analysis. In essence a meta-analysis collects all the studies that relate an exposure (such as alcohol consumption) to an outcome (any health condition, but death is common), and then combines them to obtain a single final estimate of what the numerical risk is. Typically a meta-analysis will weight all the risks from all the studies according to the sample size of the study, so that for example a small study that finds banging your head on a wall reduces your risk of brain damage is given less weight in the meta-analysis than a very large study of banging your head on a wall. Meta-analysis isn’t easy for a lot of reasons to do with the practical details of studies (for example if two groups study banging your head on a wall do they use the same definition of brain damage and the same definition of banging?), but once you iron out all the issues it’s the only method we have for coming to comprehensive decisions about all the studies available. It’s important because the research literature on any issue typically includes a bunch of small shitty studies, and a few high quality studies, and we need to balance them all out when we assess the outcome. As an example, consider football and concussion. A good study would follow NFL players for several seasons, taking into account their position, the number of games they played, and the team they were in, and compare them against a concussion free sport like tennis, but matching them to players of similar age, race, socioeconomic background etc. Many studies might not do this – for example a study might take 20 NFL players who died of brain injuries and compare them with 40 non-NFL players who died of a heart attack. A good meta-analysis handles these issues of quality and combines multiple studies together to calculate a final estimate of risk.
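The weighting described above is usually done with inverse-variance weights rather than raw sample sizes (larger studies have smaller standard errors, so the two largely coincide). Here is a minimal fixed-effect pooling sketch, with entirely invented study numbers, to show how a big null study pulls the pooled estimate towards no effect:

```python
import math

# Hypothetical (made-up) studies: log relative risk and its standard error.
# Larger studies have smaller standard errors, so they get more weight.
studies = [
    {"name": "small study",  "log_rr": math.log(1.8), "se": 0.40},
    {"name": "medium study", "log_rr": math.log(1.2), "se": 0.15},
    {"name": "large study",  "log_rr": math.log(1.0), "se": 0.05},
]

# Fixed-effect inverse-variance weights: w_i = 1 / se_i^2
weights = [1 / s["se"] ** 2 for s in studies]
pooled_log_rr = sum(w * s["log_rr"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {math.exp(pooled_log_rr - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log_rr + 1.96 * pooled_se):.2f})")
# → pooled RR = 1.03 (95% CI 0.94-1.13)
```

The small study’s large risk estimate is swamped by the big null study, which is exactly the behaviour you want from a meta-analysis, and exactly what goes missing when low-quality studies are let in or the weighting is done badly.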

The IHME study provides a meta-analysis of all the relationships between alcohol consumption and disease outcomes, described as follows[2]:

we performed a systematic review of literature published between January 1st, 1950 and Dec 31st 2016 using Pubmed and the GHDx. Studies were included if the following conditions were met. Studies were excluded if any of the following conditions were met:

1. The study did not report on the association between alcohol use and one of the included outcomes.

2. The study design was not either a cohort, case-control, or case-crossover.

3. The study did not report a relative measure of risk (either relative risk, risk ratio, odds-ratio, or hazard ratio) and did not report cases and non-cases among those exposed and un-exposed.

4. The study did not report dose-response amounts on alcohol use.

5. The study endpoint did not meet the case definition used in GBD 2016.

There are many, many problems with this description of the meta-analysis. First of all they seem not to have described the inclusion criteria (they say “Studies were included if the following conditions were met” but don’t say what those conditions were). But more importantly their conditions for exclusion are very weak. We do not, usually, include case-control and case-crossover studies in a meta-analysis because these studies are, frankly, terrible. The standard method for including a study in a meta-analysis is to assess it according to the Risk of Bias Tool and dump it if it is highly biased. For example, should we include a study that is not a randomized controlled trial? Should we include studies where subjects know their assignment? The meta-analysis community have developed a set of tools for deciding which studies to include, and the IHME crew haven’t used them.

This got me thinking that perhaps the IHME crew have been, shall we say, a little sloppy in how they include studies, so I had a bit of a look. On pages 53-55 of the appendix they report the results of their meta-analysis of the relationship between atrial fibrillation and alcohol consumption, and the results are telling. They found 9 studies to include in their meta-analysis, but there are many problems with these studies. One (Cohen 1988) is a cross-sectional study and should not be included, according to the IHME’s own exclusion criteria. Six of the remaining studies assess fibrillation only, while two assess fibrillation and atrial flutter, a precursor of fibrillation. Most tellingly, all of these studies find no relationship between alcohol consumption and fibrillation at almost all levels of consumption, yet their chart on page 54 shows that their meta-analysis found an almost exponential relationship between alcohol consumption and fibrillation. This finding is simply impossible given the included studies. All 9 studies found no relationship between moderate alcohol consumption and fibrillation, and several found no relationship even at extreme levels of consumption, but somehow the IHME found a clear relationship. How is this possible?

Problems with exposure assessment

This problem happened because they applied a tool called DISMOD to the data to estimate the relationship between alcohol exposure and fibrillation. DISMOD is an interesting tool but it has many flaws. Its main benefit is that it enables the user to combine studies whose exposure categories don’t match, and turn them into a single risk curve. So for example if one study group has recorded the relative risk of death for 2-5 drinks, and another group has recorded the risk for 1-12 drinks, DISMOD offers a method to turn these into a single curve representing the risk per additional drink. This is nice, and it produces the curve on page 54 (and all the subsequent curves). It’s also bullshit. I have worked with DISMOD and it has many, many problems. It is incomprehensible to everyone except the two guys who programmed it, who are nice guys but can’t give decent support or explanations of what it does. It has a very strange response distribution, doesn’t appear to handle other distributions well, and has some really kooky Bayesian applications built in. It is also completely inscrutable to 99.99% of the people who use it, including the people at IHME. It should not be used until it has been peer reviewed and exposed to a proper independent assessment. It is the application of DISMOD to data that obviously show no relationship between alcohol consumption and fibrillation that produced the bullshit curve on page 54 of the appendix, a curve that bears no relationship to the observed data in the collected studies.
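To see what any tool of this kind is trying to do – this is purely a toy illustration with invented numbers, not DISMOD’s actual method, which as I said is opaque even to most of its users – here is one crude way to collapse mismatched exposure intervals into a single per-drink risk slope:

```python
import math

# Invented (low, high, relative risk) triples: each study reports one RR
# over its own range of drinks per day, and the ranges don't match.
intervals = [
    (2, 5, 1.10),   # hypothetical study A
    (1, 12, 1.25),  # hypothetical study B
    (6, 10, 1.40),  # hypothetical study C
]

# Crude approach: assume log RR is linear in drinks (log RR = beta * d),
# evaluate each study at its interval midpoint, and fit beta by
# least squares through the origin.
num = sum(((lo + hi) / 2) * math.log(rr) for lo, hi, rr in intervals)
den = sum(((lo + hi) / 2) ** 2 for lo, hi, rr in intervals)
beta = num / den

def relative_risk(drinks):
    """Pooled relative risk at a given daily drink count."""
    return math.exp(beta * drinks)
```

The point is that every such collapse bakes in strong assumptions (here, log-linearity and midpoint evaluation), and different assumptions give different curves – which is exactly why a tool doing this needs to be open to scrutiny.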

This also applies to the assessment of exposure to alcohol. The study used DISMOD to calculate each country’s level of individual alcohol consumption, which means that the same dodgy technique was applied to national alcohol consumption data. But let’s not get hung up on DISMOD. What data were they using? The maps in the Lancet paper show estimates of risk for every African and south east Asian country, which suggests that they have data on these countries, but do you think they do? Do you think Niger has accurate estimates of alcohol consumption in its borders? No, it doesn’t. A few countries in Africa do and the IHME crew used some spatial smoothing techniques (never clearly explained) to estimate the consumption rates in other countries. This is a massive dodge that the IHME apply, which they call “borrowing strength.” At its most egregious this is close to simply inventing data – in an earlier paper (perhaps in 2012) they were able to estimate rates of depression and depression-related conditions for 183 (I think) countries using data from 97 countries. No prizes to you, my astute reader, if you guess that all the missing data was in Africa. The same applies to the risk exposure estimates in this paper – they’re a complete fiction. Sure for the UK and Australia, where alcohol is basically a controlled drug, they are super accurate. But in the rest of the world, not so much.

Problems with mortality assessment

The IHME has a particularly nasty and tricky method for calculating the burden of disease, based around a thing called the year of life lost (YLL). Instead of measuring deaths, they measure the years of your life that you lost when you died, compared to an objective global standard of the life you could have achieved: they take the age at which you died, subtract it from the life expectancy of an Icelandic or Japanese woman, and that’s the number of YLLs you suffered. Add that up for every death and you have your burden of disease. It’s a nice idea, except that there are two huge problems:

  • It weights deaths at young ages massively
  • They never incorporate uncertainty in the ideal life expectancy of an Icelandic or Japanese woman
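Numerically, the YLL scheme just described looks like this (the reference life expectancy of 86 years and the deaths are assumed here for illustration):

```python
# Each death contributes (reference life expectancy - age at death) YLLs.
REFERENCE_LE = 86.0  # roughly an Icelandic/Japanese woman's life expectancy

ages_at_death = [2, 45, 70, 85]  # hypothetical deaths

ylls = [max(REFERENCE_LE - age, 0) for age in ages_at_death]
total_yll = sum(ylls)  # 84 + 41 + 16 + 1 = 142

# The death at age 2 contributes 84 YLLs, the death at 85 just 1: deaths
# at young ages dominate the total even though each is one death. And
# REFERENCE_LE is treated as exact, with no uncertainty attached.
```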

There is an additional problem in the assessment of mortality, which the IHME crew always gloss over, called “garbage code redistribution.” Basically, about 30% of every country’s death records are bullshit, and don’t correspond with any meaningful cause of death. The IHME has a complicated, proprietary system that they cannot and will not explain, which redistributes these garbage codes into other meaningful categories. What they should do is treat these redistributed deaths as a source of error (e.g. we have 100,000 deaths due to cancer and 5,000 redistributed deaths, so we actually have 102,500 ± 2,500 deaths), but they don’t – they just add them on. So when they calculate burden of disease they use the following four steps:

  • Calculate the raw number of deaths, with an estimate of error
  • Reassign dodgy deaths in an arbitrary way, without counting these deaths as any form of uncertainty
  • Estimate an ideal life expectancy without applying any measure of error or uncertainty to it
  • Calculate the years of life lost relative to this ideal life expectancy and add them up
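Using the numbers from the cancer example above, the difference between simply adding redistributed deaths on and treating them as a source of uncertainty is easy to sketch:

```python
raw_deaths = 100_000    # deaths with a meaningful cause-of-death code
redistributed = 5_000   # "garbage code" deaths reassigned to this cause

# IHME-style: just add the redistributed deaths on, with no extra error.
point_estimate = raw_deaths + redistributed  # 105,000, spuriously exact

# Treating redistribution as uncertainty instead: the true count lies
# somewhere between 100,000 and 105,000.
centre = raw_deaths + redistributed / 2  # 102,500
half_width = redistributed / 2           # +/- 2,500
```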

So here there are three sources of uncertainty (deaths, redistribution, ideal life expectancy) and only one is counted; and then all these uncertain deaths are multiplied by the number of years lost relative to the ideal life expectancy.

The result is a dog’s breakfast of mortality estimates that don’t even come close to representing the truth about the burden of disease in any country due to any condition.

Also, the IHME apply the same dodgy modeling methods to deaths (using a method that they (used to?) call CoDMoD) before they calculate YLLs, so there’s another form of arbitrary model decisions and error in their assessments.

Putting all these errors together

This means that the IHME process works like this:

  • An incredibly dodgy form of meta-analysis that includes dodgy studies and miscalculates levels of risk
  • Applied to a really shonky estimate of the level of exposure to alcohol, that uses a computer program no one understands applied to a substandard data set
  • Applied to a dodgy death model that leaves out several sources of uncertainty, and is thus spuriously precise

The result is that at every stage of the process the IHME is unreasonably confident about the quality of their estimates, produces excessive estimates of risk and inaccurate measures of exposure, and is too precise in its calculations of how many people died. This means that all their conclusions about the actual risk of alcohol, the level of exposure, and the magnitude of disease burden due to the conditions they describe cannot be trusted. As a result, neither can their estimates of the proportion of mortality due to alcohol.

Conclusion

There is still no evidence that moderate alcohol consumption is bad for you, and solid meta-analyses of available studies support the conclusion that moderate alcohol consumption is not harmful. This study should not be believed and although the IHME has good press contacts, you should ignore all the media on this. As a former insider in the GBD process I can also suggest that in future you ignore all work from the Global Burden of Disease project. They have a preferential publishing deal with the Lancet, which means they aren’t properly peer reviewed, and their work is so massive that it’s hard for most academics to provide adequate peer review. Their methods haven’t been subjected to proper external assessment and my judgement, based on having visited them and worked with their statisticians and their software, is that their methods are not assessable. Their data is certainly dubious at times but most importantly their analysis approach is not correct and the Lancet doesn’t subject it to proper peer review. This is going to have long term consequences for global health, and at some point the people who continue to associate with the IHME’s papers (they have hundreds or even thousands of co-authors) will regret that association. I stopped collaborating with this project, and so should you. If you aren’t sure why, this paper on alcohol is a good example.

So chill, have another drink, and worry about whether it’s making you fat.


fn1: There are no reasons not to love Greek food, no wonder these people conquered the Mediterranean and developed philosophy and democracy!

fn2: This is in the appendix to their study

No this really is not “the healthy one”

Today’s Guardian has a column by George Monbiot discussing the issue of obesity in modern England, which I think fundamentally misunderstands the causes of obesity and paints a dangerously rosy picture of Britain’s dietary situation. The column was spurred by a photo of a Brighton beach in 1976, in which everyone was thin, and a subsequent debate on social media about the causes of the changes in British rates of overweight and obesity in the succeeding four decades. Monbiot’s column dismisses the possibility that the growth in obesity could be caused by an increase in the amount we eat, by a reduction in the amount of physical activity, or by a change in rates of manual labour. He seems to finish the column by suggesting it is all the food industry’s fault, but having dismissed the idea that the food industry has convinced us to eat more, he is left with the idea that the real cause of obesity is a change in the pattern of what we eat – from complex carbohydrates and proteins to sugar. This is a bugbear of certain anti-obesity campaigners, and it’s wrong, as is the idea that obesity is all about willpower, which Monbiot also attacks. The problem here, though, is that Monbiot badly misunderstands the statistics, and as a result dismisses the obvious possibility that British people eat too much. He makes two mistakes in his article: first he misreads the statistics on British food consumption, and second he misunderstands the difference between a rate and a budget, which is ironic given that he understands these things perfectly well when he comments on global warming. Let’s consider each of these issues in turn.

Misreading the statistics

Admirably, Monbiot digs up some stats from 1976 and compares them with statistics from 2018, and comments:

So here’s the first big surprise: we ate more in 1976. According to government figures, we currently consume an average of 2,130 kilocalories a day, a figure that appears to include sweets and alcohol. But in 1976, we consumed 2,280 kcal excluding alcohol and sweets, or 2,590 kcal when they’re included. I have found no reason to disbelieve the figures.

This is wrong. In the 1976 data, Monbiot appears to be referring to Table 20 on page 77, which reports a yearly average of 2280 kCal. But this is the average per household member, and does not account for whether a household member is a child. If we turn to Table 24 on page 87, we find that a single adult in 1976 ate an average of 2670 kCal; similar figures apply for two-adult households with no children (2610 kCal). In the more recent data Monbiot links to, his 2,130 kCal comes from the file “Household and Eating Out Nutrient Intakes”. But if we use the file “HC – Household nutrient intakes” and look at 2016/17 for households with one adult and no children, we find 2291 kCal, and about 2400 as recently as 10 years ago. These are large differences when they accrue over years.

This is further compounded by the age issue. When we look at individual intake we need to consider how old the household members are. If average individual intake was 2590 kCal in 1976 including alcohol and sweets, as Monbiot suggests, we need to rebalance it between adults and children. In a household with three people we have about 7770 kCal in total, which, if the child is eating 1500 kCal, means that the adults are eating over 3100 kCal each. That’s too much food for everyone in the house, even by the ridiculously generous nutrient standards provided by the ONS. It’s also worth remembering that adults in 1976 were on average much younger than adults now, and an intake of 2590 kCal might be okay for a young adult but it’s not okay for a 40-plus adult, of whom there are many more now than there were then. This affects obesity statistics.
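The rebalancing arithmetic can be sketched in a few lines. The household composition (two adults, one child) and the child’s 1500 kCal intake are the illustrative assumptions used in the text, not figures from the government tables; the products are shown unrounded:

```python
# Rebalance the 1976 per-household-member average of 2590 kCal/day
# across a hypothetical household of two adults and one child.
per_head = 2590                    # kCal/day, average per household member
household_total = per_head * 3     # three-person household
child_intake = 1500                # assumed child's intake, kCal/day
adult_intake = (household_total - child_intake) / 2

print(household_total)             # 7770
print(adult_intake)                # 3135.0
```

So even a generous assumption about the child’s share pushes each adult well past a healthy daily energy budget.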

Finally, it’s also worth remembering that obesity is not evenly distributed, and an average intake of 2100 kCal could correspond to an average of 2500 kCal in the poorest 20% of the population (where obesity is common) and 1700 kCal in the richest 20%, which is older and thinner. An evenly distributed 2100 kCal will lead to essentially zero obesity over the whole population, but an unevenly distributed 2100 kCal will not. It’s important to look carefully at the variation in the datasets before deciding the average is okay.
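The distributional point can be made concrete with subgroup figures consistent with the ones above. The middle 60% at 2100 kCal is my assumption, chosen so the overall mean balances; only the poorest- and richest-quintile figures come from the text:

```python
# Same population mean, very different obesity implications.
# Each subgroup: (people per 100 of population, mean intake in kCal/day).
subgroups = {
    "poorest 20%": (20, 2500),   # where obesity is common
    "middle 60%":  (60, 2100),   # assumed, to balance the mean
    "richest 20%": (20, 1700),   # older and thinner
}
overall_mean = sum(n * mean for n, mean in subgroups.values()) / 100

print(overall_mean)              # 2100.0
```

The headline average of 2100 kCal looks fine, while a fifth of the population is overeating by 400 kCal a day.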

Misunderstanding budgets and rates

Let’s consider the 2590 kCal that Monbiot finds as the average intake of adults in 1976, including alcohol and sweets. This is likely wrong – the adult average is probably more like 3000 kCal including alcohol and sweets – but let’s go with it for now. Monbiot is looking for what has changed in our diet over the past 40 years to produce current rates of obesity, because he is looking for a change in the rate of consumption. But he doesn’t consider that all humans have an energy budget, and that a small excess over that budget sustained over a long period is what drives obesity. Today’s obesity rates do not reflect today’s consumption rates, but the steady pattern of consumption over the past 40 years. What made a 55 year old obese today is what they have eaten since 1976 – when they were 15 – not what the average person eats today. So rather than saying “we eat less today than we did 40 years ago so that can’t be the cause of obesity”, what really matters is what people have been eating for the past 40 years. And the stats Monbiot uses suggest that women, at least, have been eating too much: a healthy adult woman should eat about 2100 kCal a day, and if the average was 2590 then the average woman has been at or above her energy requirement every year for the past 40 years. It doesn’t matter that her intake declined to 2100 kCal by 2016, because she had been eating too much for the 35 years before that. It’s this budget, not changes over time, that determines the obesity rate now, and Monbiot is wrong to argue that overeating hasn’t caused the obesity epidemic. Unless he accepts that a woman can eat 2590 kCal a day for 40 years and stay thin, he has to accept that the problem of obesity is a problem of British food culture over half a century.
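The rate-versus-budget argument can be illustrated with a toy calculation. It uses the rough rule of thumb that ~7700 kCal of surplus energy corresponds to about 1 kg of body fat – an approximation I am introducing here, not a figure from Monbiot or his sources – and a modest assumed daily excess:

```python
# Rate vs budget: a constant small daily surplus, accumulated over decades.
requirement = 2100        # kCal/day, healthy adult woman (per the text)
intake = 2200             # kCal/day, a modest 100 kCal daily excess (assumed)
years = 40

surplus_kcal = (intake - requirement) * 365 * years
kg_equivalent = surplus_kcal / 7700   # ~7700 kCal per kg of fat (rule of thumb)

print(round(kg_equivalent, 1))        # 189.6
```

In reality metabolic adaptation means nobody gains anything like this much weight, but the direction of the effect is the point: obesity today is the integral of decades of small excesses, not a function of this year’s consumption rate.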

What this means for obesity policy

Somewhat disappointingly, and unusually for a Monbiot article, there are no sensible policy prescriptions at the end except “stop shaming fat people.” This isn’t very helpful, and neither is it helpful to dismiss overeating as a cause, since everyone in public health knows that overeating is the cause of obesity. Public Health England, for example, wants to reduce British calorie intake, and its figures on why make disturbing reading. Reducing calorie intake doesn’t require shaming fat people, but it does require acknowledging that British people eat too much. This comes down not to individual willpower but to the food environment in which we all make choices about what to eat. The simplest way to reduce the amount that people eat, for example, is not to give them too much food. But there is simply no way in Britain that you can eat out or buy packaged food products without buying too much food. It is patently obvious that British restaurants serve too much food, that British supermarkets sell food in packages that are too large, and that as a result the only way for British people not to eat too much is through constant acts of will – leaving half the food you paid for, buying only fresh food in small amounts every day (which is only possible in certain wealthy inner city suburbs), and carefully controlling where, when and how you eat. This is possible, but it requires either that you move in a very wealthy cultural circle where the environment supports this kind of thing, or that you personally exert constant control over your life. And that latter choice will inevitably end in failure, because constantly controlling every aspect of your food intake in opposition to the environment where you purchase, prepare and consume food is very, very difficult.

When you live in Japan you live in a different food environment, one which encourages small serving sizes, fresh and raw foods, and low fat and low sugar foods. In Japan you are always close to a small local supermarket with convenient opening hours and fresh foods, and convenience stores sell healthy food in small serving sizes. This means you can buy small amounts of fresh food as and when you need it, and avoid buying in bulk in a pattern that encourages over-consumption. When your food choices fail (for example you have to eat out, or buy junk food) you will have access to a small, healthy serving. If you are a woman you will likely have access to a “woman’s size” or “princess size”, so you can eat a smaller meal to match your smaller energy requirements. It is easy to be thin in Japan, and so most people are thin. Overeating in Japan genuinely is a choice you have to make, rather than the default setting. This difference in food environment is simple, obvious and especially noticeable when (as I just did) you hop on a plane to the UK and suddenly find yourself confronted with double helpings of everything, and supermarkets where everything is “family sized”. The change of food environment forces you to eat more. It’s as simple as that.

What Britain needs is a change in the food environment. And achieving a change in food environment requires first of all recognizing that British people eat too much, and have been eating too much for way too long. Monbiot’s article is an exercise in denialism of that simple fact, and he should change it or retract it.

The journal Molecular Autism this week published an article about the links between Hans Asperger and the Nazis in wartime Vienna. Hans Asperger is the paediatric psychiatrist on whose work Asperger’s syndrome is based, and after whom the syndrome is named. Until recently Asperger was believed to have been an anti-Nazi: someone who resisted the Nazis and risked his own career to protect some of his developmentally delayed patients from the Nazi “euthanasia” program, which killed or sterilized people with certain developmental disabilities for eugenics reasons.

The article, entitled Hans Asperger, National Socialism, and “race hygiene” in Nazi-era Vienna, is a thorough, well-researched and extensively documented piece of work, which I think is based on several years of detailed examination of primary sources, often in their original German. It uses these sources – often previously untouched – to explore and rebut several claims Asperger made about himself, and also to examine the nature of his diagnostic work during the Nazi era to see whether he was resisting or aiding the Nazis in their racial hygiene goals. In this post I want to talk a little about the background of the paper, and ask a few questions about the implications of these findings for our understanding of autism, and also for our practice as public health workers in the modern era. I want to make clear that I do not know much if anything about Asperger’s syndrome or autism, so my questions are questions, not statements of opinion disguised as questions.

What was known about Asperger

Most of Asperger’s history under the Nazis was not known in the English language press, and when his name was attached to the condition of Asperger’s syndrome he was presented as a valiant defender of his patients against Nazi racial hygiene, and as a conscientious objector to Nazi ideology. This view of his life was based on some speeches and written articles translated into English during the post war years, in particular a 1974 interview in which he claims to have defended his patients and had to be saved from being arrested by the Gestapo twice by his boss, Dr. Hamburger. Although some German language publications were more critical, in general Asperger’s statements about his own life’s work were taken at face value, and seminal works in 1981 and 1991 that introduced him to the medical fraternity did not include any particular reference to his activities in the Nazi era.

What Asperger actually did

Investigation of the original documents shows a different picture, however. Before the Anschluss (the German annexation of Austria in 1938), Asperger was a member of several far-right Catholic political organizations that were known to be anti-semitic and anti-democratic. After the Anschluss he joined several organizations affiliated with the Nazi party. His boss at the clinic where he worked was Dr. Hamburger, who he claimed saved him twice from the Gestapo. In fact Hamburger was an avowed Nazi, probably an entryist into these Catholic social movements during the period when Nazism was outlawed in Vienna, and a virulent anti-semite. He drove Jews out of the clinic even before the Anschluss, and after 1938 all Jews were purged from the clinic, leaving openings that enabled Asperger to get promoted. It is almost impossible, given the power structures at the time, that Asperger could have been promoted if he disagreed strongly with Hamburger’s politics, but we have more than circumstantial evidence that they agreed: the author of the article, Herwig Czech, uncovered the annual political reports submitted concerning Asperger by the Gestapo, and they consistently judged him either neutral or positive towards Nazism. Over time these reports became more positive and confident. During the war Asperger also gained new roles in organizations outside his clinic, taking on greater responsibility for public health in Vienna, which would have been impossible if he were politically suspect, and his 1944 PhD thesis was approved by the Nazis.

A review of Asperger’s notes also finds that he did send at least some of his patients to the “euthanasia” program, and in at least one case records a conversation with a parent in which the child’s fate is pretty much accepted by both of them. The head of the institution that did the “euthanasia” killings was a former colleague of Asperger’s, and the author presents pretty damning evidence that Asperger must have known what would happen to the children he referred to the clinic. It is clear from his speeches and writings in the Nazi era that Asperger was not a rabid killer of children with developmental disabilities: he believed in rehabilitating children and finding ways to make them productive members of society, only sending the most “ineducable” children to institutional care and not always to the institution that killed them. But it is also clear that he accepted the importance of “euthanasia” in some instances. In one particularly compelling situation, he was put in charge – along with a group of his peers – of deciding the fate of some 200 “ineducable” children in an institution for the severely mentally disabled, and 35 of those ended up being murdered. It seems unlikely that he did not participate in this process.

The author also notes that in some cases Asperger’s prognoses for some children were more severe than those of the doctors at the institute that ran the “euthanasia” program, suggesting that he wasn’t just a fairweather friend of these racial hygiene ideals, and the author also makes the point that because Asperger remained in charge of the clinic in the post-war years he was in a very good position to sanitize his case notes of any connection with Nazis and especially with the murder of Jews. Certainly, the author does not credit Asperger’s claims that he was saved from the Gestapo by Hamburger, and suggests that these are straight-up fabrications intended to sanitize Asperger’s role in the wartime public health field.

Was Asperger’s treatment and research ethical in any way?

Reading the article, one question that occurred to me immediately was whether any of his treatments could be ethical, given the context, and also whether his research could possibly have been unbiased. The “euthanasia” program was actually well known in Austria at the time – so well known in fact that at one point allied bombers dropped leaflets about it on the town, and there were demonstrations against it at public buildings. So put yourself in the shoes of a parent of a child with a developmental disability, bringing your child to the clinic for an assessment. You know that if your child gets an unfavourable assessment there is a good chance that he or she will be sterilized or taken away and murdered. Asperger offers you a treatment that may rehabilitate the child. Obviously, with the threat of “euthanasia” hanging over your child, you will say yes to this treatment. But in modern medicine there is no way that we could consider that to be willing consent. The parent might actually not care about “rehabilitating” their child, and is perfectly happy for the child to grow up and be loved within the bounds of what their developmental disability allows them; it may be that rehabilitation is difficult and challenging for the child, and not in the child’s best emotional interests. But faced with that threat of a racial hygiene-based intervention, as a parent you have to say yes. Which means that in a great many cases I suspect that Asperger’s treatments were not ethical from any post-war perspective.

In addition, I also suspect that the research he conducted for his 1944 PhD thesis, in addition to being unethical, was highly biased, because the parents of these children were lying through their teeth to him. Again, consider yourself as the parent of such a child, under threat of sterilization or murder. You “consent” to your child’s treatment regardless of what might be in the child’s best developmental and emotional interests, and also allow the child to be enrolled in Asperger’s study[1]. Then your child will be subjected to various rehabilitation strategies, what Asperger called pedagogical therapy. You will bring your child into the clinic every week or every day for assessments and tests. Presumably the doctor or his staff will ask you questions about the child’s progress: does he or she engage with strangers? How is his or her behavior in this or that situation? In every situation where you can, you will lie and tell them whatever you think is most likely to make them think that your child is progressing. Once you know what the tests at the clinic involve, you will coach your child to make sure he or she performs well in them. You will game every test, lie at every assessment, and scam your way into a rehabilitation even if your child is gaining nothing from the program. So all the results on rehabilitation and the nature of the condition that Asperger documents in his 1944 PhD thesis must be based on extremely dubious research data. You simply cannot believe that the research data you obtained from your subjects is accurate when some of them know that their responses decide whether their child lives or dies. 
Note that this problem with his research exists regardless of whether Asperger was an active Nazi – it’s a consequence of the times, not the doctor – but it is partially ameliorated if Asperger actually was an active resister to Nazi ideology, since it’s conceivable in that case that the first thing he did was give the parent an assurance that he wasn’t going to ship their kid off to die no matter what his diagnosis was. But since we now know he did ship kids off to die, that possibility is off the table. Asperger’s research subjects were consenting to a research study and providing subjective data on the assumption that the study investigator was a murderer with the power to kill their child. This means Asperger’s 1944 work probably needs to be ditched from the medical canon, simply on the basis of the poor quality of the data. It also has implications, I think, for some of his conclusions and their influence on how we view Asperger’s syndrome.

What does this mean for the concept of the autism spectrum?

Asperger introduced the idea of a spectrum of autism, with some of the children he called “autistic psychopaths” being high functioning and some low functioning. This idea seems to be an important part of modern discussion of autism as well. But from my reading of the paper [again, I stress I am not an expert] it seems that this definition was at least partly informed by the child’s response to therapy. That is, if a child responded to therapy and was able to be “rehabilitated”, they were deemed high functioning, while those who did not were considered low functioning. We have seen that it is likely that some of the parents of these children were lying about their children’s functional level, so his research results on this topic are probably unreliable, but there is a deeper problem with this definition, I think. The author implies that Asperger was quite an arrogant and overbearing character, and it seems possible to me that he was deeply flawed in assuming that his therapy would always work, and that if it failed the problem must lie with the child’s level of function. What if his treatment only worked 50% of the time, randomly? Then the 50% of children who failed are not “low-functioning”, they’re just unlucky. If we compare with a pharmaceutical treatment, it simply is not the case that when your drugs fail your doctor deems this to be because you are “low functioning” and ships you off to the “euthanasia” clinic: they assume the drugs didn’t work and give you better, stronger, or more experimental drugs. Only when all the possible treatments have failed do they finally deem your condition incurable. But there is no evidence that Asperger considered the possibility that his treatment was the problem, and because the treatment was entirely subjective – its parameters decided on a case-by-case basis – there is no way to know whether the problem was the children or the treatment.
So to the extent that this concept of a spectrum is determined by Asperger’s judgment of how the child responded to his entirely subjective treatment, maybe the spectrum doesn’t exist?

This is particularly a problem because the concept of “functioning” was deeply important to the Nazis and had a large bearing on who got selected for murder. In the Nazi era, to quote Negan, “people were a resource”, and everyone was expected to be functioning. Asperger’s interest in this spectrum and the diagnosis of children along it wasn’t solely, or even primarily, driven by a desire to understand the condition of “autistic psychopathy”; it was integral to his racial hygiene conception of what to do with these children. In determining where on the spectrum they lay he was providing a social and public health diagnosis, not a personal diagnosis. His concern here was not with the child’s health or wellbeing, or even an accurate assessment of the depth and nature of their disability – he and his colleagues were deciding whether to kill them or not. Given the likely biases in his research, the dubious link between the definition of the spectrum and his own highly subjective treatment strategy, and the real reasons for defining this spectrum, is it a good idea to keep it as a concept in the handling of autism in the modern medical world? Should we revisit this concept, if not to throw it away then at least to reconsider how we define the spectrum and why? Is it in the best interests of the child and/or their family to apply this concept?

How much did Asperger’s racial hygiene influence ideas about autism’s heritability?

Again, I want to stress that I know little about autism and it is not my goal here to dissect the details of this condition. However, from what I have seen of the autism advocacy movement, there does seem to be a strong desire to find some deep biological cause of the condition. I think parents want – rightly – to believe that it is not their fault that their child is autistic, and that the condition is not caused by environmental factors that might somehow be associated with their pre- or post-natal behaviors. Although the causes of autism are not clear, there seems to be a strong desire among some in the autism community to see it as biological or inherited. I think this is part of the reason that Andrew Wakefield’s scam linking autism to MMR vaccines remains successful despite his disbarment in the UK and exile to America. Parents want to think that they did not cause this condition, and blaming a pharmaceutical company is an easy alternative to this possibility. Heritability is another alternative explanation to behavioral or environmental causes. Asperger of course thought that autism was entirely inherited, blaming it – and its severity – on the child’s “constitution”, which was his phrase for their genetic inheritance. This is natural for a Nazi, of course – Nazis believed everything was inherited. Asperger even believed that sexual abuse was due to genetic causes (some children supposedly had a genetic property that led them to “seduce” adults!). Given Asperger’s influence on the definition of autism, I think it would be a good idea to assess how much his ideas also influence the idea that autism is inherited or biologically determined, and to question the extent to which this is just received knowledge from the original researcher. On a broader level, I wonder how many conditions identified during the war era and immediately afterwards were influenced by racial hygiene ideals, and how much the Nazi medical establishment left a taint on European medical research generally.

What lessons can we learn about public health practice from this case?

It seems pretty clear that some mistakes were made in the decision to assign Asperger’s name to this condition, given what we now know about his past. It also seems clear that Asperger was able to whitewash his reputation and bury his responsibilities for many years, including potentially avoiding being held accountable as an accessory to murder. How many other medical doctors, social scientists and public health workers from this time were also able to launder their history and reinvent themselves in the post-war era as good Germans who resisted the Nazis, rather than active accomplices of a murderous and cruel regime? What is the impact of their rehabilitation on the ethics and practice of medicine and public health in the post-war era? If someone was a Nazi who believed that murdering the sick, the disabled and certain races for the good of the race was a good thing, then when they laundered their history there is no reason to think they laundered their beliefs as well. Instead they carried those beliefs into the post-war era, and presumably quietly continued acting on them in the institutions they now occupied and corrupted. How much of European public health practice still bears the taint of these people? It’s worth bearing in mind that in the post-war era many European countries continued to run programs that we now consider to have been rife with human rights abuse: the way institutions for the mentally ill were run, the treatment of the Roma (which often retained racial-hygiene elements decades after the war), the treatment of “promiscuous” women and single mothers, and the management of orphanages. How much of this is due to the ideas of men like Asperger, propagating slyly through the post-war public health institutional framework, carefully hidden from view by the same people who were assiduously purging evidence of their criminal actions and building public reputations for purity and good ethics?
I hope that medical historians like Czech will in future investigate these questions.

This is not just a historical matter, either. I have colleagues and collaborators who work in countries experiencing various degrees of authoritarianism and/or racism – countries like China, Vietnam, Singapore, the USA – who are presumably vulnerable to the same kinds of institutional pressures at work in Nazi Germany. There have been cases, for example, of studies published from China that were likely done using organs harvested from prisoners. Presumably the authors of those studies thought this practice was okay? If China goes down a racial hygiene path, will public health workers who are currently doing good, solid work on improving the health of the population start shifting their ideals towards murderous extermination? Again, this is not an academic question: after 9/11, the USA’s despicable regime of torture was developed by two psychologists, who presumably were well aware of the ethical standards their discipline is supposed to maintain, and simply ignored them. The American Psychological Association had to amend its code in 2016 to include an explicit statement about avoiding harm, but I can’t find any evidence of disciplinary proceedings by either the APA or the psychologists’ graduating universities over their involvement in this shocking scheme. So it is not just in dictatorships that public policy pressure can lead doctors to adopt highly unethical standards. The medical, psychological and public health communities need to take much stronger action to make sure that our members aren’t allowed to give in to their worst impulses when political and social pressure comes to bear on them.

These ideas are still with us

As a final point, I want to note that the ideas that motivated Asperger are not all dead, and the battle against the pernicious influence of racial hygiene was not won in 1945. Here is Asperger in 1952, talking about “feeblemindedness”:

Multiple studies, above all in Germany, have shown that these families procreate in numbers clearly above the average, especially in the cities. [They] live without inhibitions, and rely without scruples on public welfare to raise or help raise their children. It is clear that this fact presents a very serious eugenic problem, a solution to which is far off—all the more, since the eugenic policies of the recent past have turned out to be unacceptable from a human standpoint

And here is Charles Murray in 1994:

We are silent partly because we are as apprehensive as most other people about what might happen when a government decides to social-engineer who has babies and who doesn’t. We can imagine no recommendation for using the government to manipulate fertility that does not have dangers. But this highlights the problem: The United States already has policies that inadvertently social-engineer who has babies, and it is encouraging the wrong women. If the United States did as much to encourage high-IQ women to have babies as it now does to encourage low-IQ women, it would rightly be described as engaging in aggressive manipulation of fertility. The technically precise description of America’s fertility policy is that it subsidizes births among poor women, who are also disproportionately at the low end of the intelligence distribution. We urge generally that these policies, represented by the extensive network of cash and services for low-income women who have babies, be ended. [Emphasis in the Vox original]

There is an effort in Trump’s America to rehabilitate Murray’s reputation, long after his policy prescriptions were enacted during the 1990s. There isn’t any real difference between Murray in 1994, Murray’s defenders in 2018, or Asperger in 1952. We now know what the basis for Asperger’s beliefs were. Sixty years later they’re still there in polite society, almost getting to broadcast themselves through the opinion pages of a major centrist magazine. Racial hygiene didn’t die with the Nazis, and we need to redouble our efforts now to get this pernicious ideology out of public health, medicine, and public policy. I expect that in the next few months this will include some uncomfortable discussions about Asperger’s legacy, and I hope a reassessment of the entire definition of autism, Asperger’s syndrome and its management. But we should all be aware that in these troubled times, the ideals that motivated Asperger did not die with him, and our fields are still vulnerable to their evil influence.

 


fn1: Note that you consent to this study regardless of your actual views on its merits, whether it will cause harm to your child, etc. because this doctor is going to decide whether your child “rehabilitates” or slides out of view and into the T4 program where they will die of “pneumonia” within 6 months, and so you are going to do everything this doctor asks. This is not consent.

This week the US Congress passed a set of censorship laws, commonly called FOSTA/SESTA, that aim to prevent online sex trafficking but in practice work to shut down all forms of online sex work advertising. The laws were developed in the wake of claims that the website Backpage was being used to buy and sell trafficked women, and they basically make a website’s provider criminally liable for any sex trafficking that happens on the site. They do so by creating a trafficking exception to the section of US law that exempts internet providers from being treated as media organizations. Currently under US law websites are treated as carriers, which means they aren’t responsible for the content their users post online. This exemption is the reason that websites like Reddit, Craigslist and Facebook can host a wide range of user-generated content with impunity.

In jurisdictions where sex work is illegal, sex workers use online resources like Craigslist and Backpage to advertise their services and screen clients. Many sex workers and porn stars with a good community following also use Twitter, Instagram and other social networking services to manage their community and their client relationships, including organizing events and dates and discussing their work. But since the new law was passed all these websites have had to shut down their services or warn users that any solicitation or discussion of business is now illegal. Craigslist has shut down its personals page, which was often used by sex workers, and websites like Fetlife have had to put strict warnings on user content. Because they can be held liable under the new law for any sex work related content, they have had to tell users that no such content can be tolerated at all. At Fetlife this extends to consensual financial domination activities, and at Craigslist the only way to stop sex work related activity has been to stop all consensual dating of any kind. Because apps like Tinder are also sometimes used for sex work purposes, it’s possible that these sites too will have to toughen up their moderation and rules, though it’s unclear yet how they will do this or how serious the impact of the law will be.

The Cut has an overview of why sex workers disapprove of this law, and Vox has a summary of the history of its development and arguments about its impact. For the past few weeks sex worker rights organizations like SWOP have been providing advice to women about how to back up their online presence and what actions they may need to take to protect it, potentially including self-censorship. It is unclear at this stage what impact the law will have on online sexual activities outside of sex work, but it’s clear from Craigslist’s reaction that the effect will be chilling. For countries like the UK, Germany, Australia, Japan and Singapore, where sex work is legal to varying degrees and women can safely and legally work in brothels or advertise publicly on locally hosted websites, the effect may be minimal; but for women in countries like the USA and parts of Europe the impact will likely be huge. It will force women away from the internet and back onto the streets and into unsafe situations where they are unable to screen potential clients, cannot share information about dangerous clients, and cannot support each other or record client information for self protection. Sex worker rights organizations in the USA have been deeply concerned about the impact of these laws for months and worked hard to prevent them, but in the end the money and the politics were against them.

It is worth considering exactly why these laws were passed and who supported them. Although they were developed and pushed by conservatives and republicans, they were passed with bipartisan support and pushed by a coalition of christian conservatives and feminists. The advertising campaign was supported by liberal comedians like Amy Schumer and Seth Meyers, and after some reform it was also supported by major internet content providers and entertainment organizations like Disney. This should serve as a reminder that Disney is not a liberal organization (despite the complaints of some Star Wars fans that its liberalism wrecked the latest awful episode), and that in the American political landscape “liberals” are actually deeply conservative about sex and sexuality. In particular any feminist organization that supported this law should be ashamed of itself. This includes organizations like Feminist Current and other radical feminist groups that think prostitution is a crime against women, rather than a choice that women make. I have said before that this strain of radical feminism is deeply misogynist and illiberal, and is always willing to use state power to override the personal choices of women it sees as enemies to its cause.

These feminist movements need to recognize, though, that while tactically they may have scored a win, this strategy is very bad for women everywhere. Nothing angers a christian conservative man more than a woman who is financially and sexually independent, and sex workers are the model of a financially and sexually independent woman. Sex workers are uniquely vulnerable to legislative action and uniquely annoying to these legislators, but they’re just the canary in the coal mine. These christian conservative legislators want to destroy all forms of sexual freedom and they won’t stop at sex work. It’s unlikely that they’re shedding any tears over the fact that their pet law led Craigslist to shut down all its non-sex work dating functions – particularly since those functions were heavily used by LGBT people. You can bet that they are already looking for ways to use some kind of indecency-based argument to target a section 230 exception for LGBT people, probably arguing on obscenity or public health grounds; and I don’t doubt that ALEC and the Heritage Foundation are already wondering if there is a racketeering-based argument by which they can make a similar exception that can be used to target unions and other forms of left wing activism. It might trouble Feminist Current a little, but I doubt christian conservatives will be feeling particularly worried if Tinder has to shut down, and if this law makes it harder for consenting adults to fuck freely then conservative christians everywhere will be chuffed. Just as the 1980s alliance of feminists and christians distorted the porn industry and made it more misogynist and male dominated, laws like SESTA will distort the world of casual sex to make it more favourable to predatory men and less safe for ordinary women. Sex workers may always be first in the sights of christian conservatives but they are never last. 
Whatever your personal beliefs about paying for sex, supporting sex worker rights is always and everywhere better for women, better for LGBT people, and better for liberalism.

As a final aside, I would like to sing the praises of sex worker rights organizations. Their activism is strongly inclusive, and while their focus is obviously on protecting the rights of their sex worker membership, their viewpoint is always strongly liberal and aimed at broadening everyone’s rights. They’re strong supporters of free speech and free association, and they include everyone in their movement. As organizations they are inclusive of all sexualities and genders, they are always aware of disability rights and the needs of people with disabilities, and they are opposed to any restrictions on what consenting adults do. They are a consistent, powerful voice for liberal rights, workers’ rights, and sexual freedom. These laws will likely restrict their ability to raise their voice in support of these issues, and that ultimately weakens all our rights. Sex worker organizations are a powerful voice for good, and sex workers are not victims, but an important part of our society doing a difficult job. Wherever you are in the world, you should support these organizations and the women, men and transgender people who do this job. Hopefully with our support they can overturn these laws, and through their work and activism broaden the scope for sexual expression for all humans no matter our gender or our sexual preference.

Nail them to the wall

In September 2017 Philip Morris International (PMI) – one of the world’s largest cigarette companies – introduced a new foundation to the world: The Foundation for a Smoke Free World. This foundation will receive $80 million per year from PMI for the next 12 years and devote this money to researching “smoking cessation, smoking harm reduction and alternative livelihoods for tobacco farmers”, with the aim of drawing in more money from non-tobacco donors over that time. It is seeking advice on how to spend its research money, and it claims to be completely independent of the tobacco industry – despite receiving money from PMI to the tune of almost a billion dollars, it claims to set a completely independent research agenda.

The website for the Foundation includes a bunch of compelling statistics on its front page: there is one death every six seconds from smoking, 7.2 million deaths annually, second-hand smoke kills 890,000 people annually, and smoking kills half of all its long-term users. It’s fascinating that a company that as late as the late 1990s was still claiming there was no evidence its product kills has now set up a foundation with such a powerful admission of the toxic nature of its product. It’s also wrong: the most recent research suggests that two thirds of long-term users will die from smoking. It’s revealing that even when PMI is being honest it understates the true level of destruction it has wrought on the human race.
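As a quick back-of-the-envelope check (my arithmetic, not the Foundation’s), the front-page figures don’t even sit together comfortably: 7.2 million deaths a year works out to one death roughly every four and a half seconds, not every six.

```python
# Back-of-the-envelope check of the Foundation's front-page figures.
# Inputs are the round numbers quoted above, so treat results as ballpark.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # ~31.6 million seconds

annual_deaths = 7_200_000  # "7.2 million deaths annually"
print(f"One death every {SECONDS_PER_YEAR / annual_deaths:.1f} seconds")  # ~4.4

# Conversely, "one death every six seconds" would imply:
implied = SECONDS_PER_YEAR / 6
print(f"{implied / 1e6:.2f} million deaths per year")  # ~5.26
```

Presumably the two figures come from different sources or years, but note the pattern: the “every six seconds” line is the smaller of the two death tolls.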

That should serve as an object lesson in what this Foundation is really about. It’s not an exercise in genuine tobacco control, but a strategy to launder PMI’s reputation, and to escape the tobacco control deadlock. If PMI took these statistics seriously it could solve the problem it appears to have identified very simply, by ceasing the production of cigarettes and winding up its business. I’m sure everyone on earth would applaud a bunch of very rich tobacco company directors who awarded themselves a fat bonus and simply shut down their business, leaving their shareholders screwed. But that’s not what PMI wants to do. They want to launder their reputation and squirm out from under the pressure civil society is placing on them. They want to start a new business looking all shiny and responsible, and the Foundation is their tool.

PMI have another business model in mind. PMI are the mastermind behind iQos, the heat-not-burn product that they are trialling with huge success in Japan. This cigarette alternative still provides its user with a nicotine hit, but it does so by heating a tobacco substance rather than burning it, avoiding many of the carcinogenic products of cigarettes. PMI have been touting this as the future alternative to cigarettes, and are claiming huge market share gains in Japan based on the product. Heat-not-burn technologies offer clear harm reduction opportunities for tobacco use: although we don’t know exactly what their toxicity is, it’s almost certainly much lower than that of cigarettes, and every smoker who switches to iQos is likely significantly reducing their long term cancer risk. What PMI needs is for the world to adopt a harm reduction strategy for smoking, so that they can switch from cigarettes to iQos. But the tobacco control community is still divided on whether harm reduction is a better approach than prohibition and demand reduction, which between them have been very successful in reducing smoking.

So isn’t it convenient that there is a new Foundation with a billion dollars to spend on a research platform of “smoking cessation, harm reduction and alternative livelihoods.” It’s as if this Foundation’s work perfectly aligns with PMI’s business strategy. And is it even big money? Recently PMI lost a court case against plain packaging in Australia – because although their foundation admits that smoking kills, they weren’t willing to let the Australian government sell packages that say as much – and have to pay at least $50 million in costs. PMI’s sponsorship deal with Ferrari will cost them $160 million. They spent $24 million fighting plain packaging laws in Uruguay (population: 4 million). $80 million is not a lot of money for them, and they will likely spend as much every year lobbying governments to postpone harsh measures, fighting the Framework Convention on Tobacco Control, and advertising their lethal product. This Foundation is not a genuine vehicle for research, it’s an advertising strategy.

It’s a particularly sleazy advertising strategy when you consider the company’s history and what the Foundation claims to do. This company fought any recognition that its products kill, but this Foundation admits that the products kill, while PMI itself continues to fight any responsibility for the damage it has done. This company worked as hard as it could for 50 years to get as many people as possible addicted to this fatal product, but this Foundation headlines its website with “a billion people are addicted and want to stop”. This Foundation will research smoking cessation while the company that funds it fights every attempt to prevent smoking initiation in every way it can. The company no doubt knows that cessation is extremely difficult, and that ten dollars spent on cessation achieve what one dollar spent on preventing initiation would. It’s precious PR at a time when tobacco companies are really struggling to find anything good to say about themselves.

And as proof of the PR gains, witness the Lancet‘s craven editorial on the Foundation, which argues that public health researchers and tobacco control activists should engage with it rather than ostracizing it, in the hope of finding some common ground on this murderous product. The WHO is not so pathetic. In a press release soon after the Foundation was established they point out that it directly contravenes Article 5.3 of the Framework Convention on Tobacco Control, which forbids signatories from allowing tobacco companies to have any involvement in setting public health policy. They state openly that they won’t engage with the organization, and request that others also do not. The WHO has been at the forefront of the battle against tobacco and the tobacco industry for many years, and they aren’t fooled by these kinds of shenanigans. This is an oily trick by Big Tobacco to launder their reputation and try to ingratiate themselves with a world that is sick of their tricks and lies. We shouldn’t stand for it.

I think it’s unlikely that researchers will take this Foundation’s money. Most reputable public health journals have a strict rule that they will not publish research funded by tobacco companies or organizations associated with them, and it is painfully obvious that this greasy foundation is a tobacco company front. This means that most researchers won’t be able to publish any research they do with money from this foundation, and I suspect this means they won’t waste their time applying for the money. It seems likely to me that they will struggle to disburse their research funds in a way that, for example, the Bill and Melinda Gates Foundation does not. I certainly won’t be trying to get any of this group’s money.

The news of this Foundation’s establishment is not entirely bad, though. Its existence is a big sign that the tobacco control movement is winning. PMI know that their market is collapsing and their days are numbered. Sure, they can try to target emerging markets in countries like China, but they know the tobacco control movement will take hold in those markets too, and they’re finding it increasingly difficult to make headway. Smoking rates are plummeting in the highest profit markets, and they’re forced onto slimmer pickings in developing countries where tobacco control is growing in power rapidly. At the same time their market share is being stolen in developed countries by e-cigarettes, a market they have no control over, and as developing nations become wealthier and tobacco control strengthens, e-cigarettes grow in popularity there too. They can see the writing on the wall. Furthermore, the foundation is a sign that the tobacco companies’ previous united front on strategy is falling apart. After the UK high court rejected a tobacco company challenge to plain packaging laws, PMI alone decided not to join an appeal, and now PMI has established this foundation. The tobacco companies are starting to lose their once-powerful unity of strategy against the tobacco control movement. PMI has admitted defeat, developed iQos, and is looking for an alternative path to the future while the other tobacco companies fight to defend their product.

But should PMI be allowed to take their path? From a public health perspective it’s a short term gain if PMI switch to being a provider of harm reducing products. But there are a bunch of Chinese technology companies offering e-cigarettes as an alternative to smoking. If we allow PMI to join that harm reduction market they will be able to escape the long term consequences of their business decisions. And should they be allowed to? I think they shouldn’t. I think the tobacco companies should be nailed to the wall for what they did. For nearly 70 years these scumbags have denied their products caused any health problems, have spent huge amounts of money on fighting any efforts to control their behavior, and have targeted children and the most vulnerable. They have spent huge amounts of money establishing a network of organizations, intellectuals and front groups that defend their work but – worse still – pollute the entire discourse of scientific and evidence based policy. The growth of global warming denialism, DDT denialism, and anti-environmentalism is connected to Big Tobacco’s efforts to undermine scientific evidence for decent public health policy in the 1980s and 1990s. These companies have done everything they can to pollute public discourse over decades, in defense of a product that we have known is poison since the 1950s. They have had a completely pernicious effect on public debate and all the while their customers have been dying. These companies should not be allowed to escape the responsibility for what they did. Sure, PMI could develop and market a heat-not-burn product or some kind of e-cigarette: but should we let them, when some perfectly innocent Chinese company could steal their market share? No, we should not. Their murderous antics over 70 years should be an albatross around their neck, dragging these companies down into ruin. 
They should be shackled to their product, never able to escape from it, and their senior staff should never be allowed to escape responsibility for their role in promoting and marketing this death. The Foundation for a Smoke Free World is PMI’s attempt to escape the shackles of a murderous poison that it flogged off to young and poor people remorselessly for 70 years. They should not be allowed to get away with it – they should be nailed to the wall for what they did. No one should cooperate with this corrupt and sleazy new initiative. PMI should die as if they had been afflicted with the cancer that is their stock in trade, and they should not be allowed to worm out from under the pressure they now face. Let them suffer for the damage they did to human bodies and civil society, and do not cooperate with this sick and cynical Foundation.

Last week the Lancet Public Health published a comment piece by me about the challenges the UK health and social care system faces in the near future. This comment was linked to a research article that found a huge increase in elderly people with care needs in the UK population over the next 10 years. This article predicted that 10 years from now there will be a 25% increase in the number of people aged over 65 who have care needs, which corresponds to a numerical increase of 560,000 people. The largest growth will be in dementia-related disability, which may have been a slightly stinging finding for the government given that Prime Minister May had released a deeply unpopular policy for paying for dementia care in the same week. The article and my comment received some media coverage (see e.g. here), focusing on the impending massive increase in care needs and the risks to the NHS. My article made the point that this growth in elderly people needing care comes at a time when a unique combination of policy challenges confronts the incoming government: an underfunded social care service, an NHS in crisis, a looming workforce shortage, and the risk that Brexit will lead to an immediate loss of staff and a long term reduction in the number of staff entering the NHS. I argued simply that the British health and social care system needs more money and a commitment to expand the local workforce to make up for the looming drop off in European staff. This is particularly pressing for the social care sector, which unlike the NHS employs large numbers of very low paid staff who have a very high turnover rate and are very often European. Once Brexit hits, that turnover is going to bite, because new staff simply won’t be there to replace the high churn rate. There is no solution to this problem except to increase pay and improve working conditions so that this sector of the economy can attract and retain British workers.

The problem is not limited to social care, however: somewhere between 5% and 10% of NHS staff are recruited from Europe, which means that even if the final Brexit deal allows existing staff to stay, over the medium term natural attrition will mean that the NHS needs to increase local recruitment to cover the 5–10% of new staff who will no longer be recruited from Europe. Worse still, Brexit will hit just as the health workforce hits a wave of retirements of staff recruited from the baby boomer generation, as junior doctors show increasing signs of burnout, and as the nurses’ association talks about striking to preserve pay and conditions (the strikes themselves will not necessarily be a crisis – though I’m sure Jeremy Hunt can turn them into one! – but the underlying problems they signify will be). It takes 10 years to make a new doctor and about 7 years to make a new nurse, so the entire workforce planning system in the UK needs to be restructured and enhanced rapidly in the next 1-2 years if the UK health and social care system is to be ready to handle this. To be clear, the issues are huge: a rapid increase in disability and health risks in elderly British people occurring after a decade of leakage of staff back to the EU, as a generation of older staff retire, and just as the cut to the nurse’s bursary and NHS funding leads to a shortfall in new staff, with no way to make it up through EU recruitment. This will affect every aspect of coverage, quality of care, equality of access, and timeliness of access in a system that is already struggling to handle basic pressures.

Today the Nuffield Trust released a report that adds to the pressures revealed by the article I was commenting on, by discussing additional health system pressures that will arise from leaving the EU. This report finds that:

  • If the Brexit agreement does not properly support UK citizens abroad and the welfare sharing arrangements they benefit from, 190,000 elderly Britons will return home and cost the government an extra 500 million pounds a year
  • If these elderly Britons return home they will require hospital beds equivalent to two new hospitals to care for them
  • If the NHS cannot continue to recruit nurses from the EU there will be a shortfall of 20,000 by 2025
  • The 350 million pounds a week that was supposed to be saved by leaving the EU was always a myth, but in the first two years after leaving there may be more money to pay for health and social care – if the government is willing to spend it

The publication I commented on predicted an extra 560,000 people with care needs by 2027; this Nuffield Trust report finds 190,000 more elderly people the study didn’t cover, suggests they will have significant care needs currently being (basically) paid for by Europe, and quantifies the shortfall in staff I identified. It’s worth noting that the NHS employs 320,000 nurses, so the 20,000 shortfall is about 6% of the workforce, but this 6% shortfall comes at a time when a large number of nurses will be retiring, and at about the same time as the current reduced nursing student cohort hits the workforce. A lot of these numerical details are very hard to predict, but it appears likely that there is going to be a major reduction in a nursing workforce that is already not well stocked by OECD standards. Nurses are the bedrock of a functioning health system, and although there is no international evidence on the best nursing levels, a rapid decrease in numbers can only be a bad thing, especially if combined with a rapid increase in health care demand.
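The rough arithmetic behind these figures is simple enough to sketch (my own back-of-the-envelope calculation from the round numbers above, so treat the outputs as ballpark):

```python
# Back-of-the-envelope arithmetic from the round numbers quoted above.
nhs_nurses = 320_000      # current NHS nursing workforce
shortfall = 20_000        # projected shortfall in EU-recruited nurses by 2025
print(f"Shortfall as share of workforce: {shortfall / nhs_nurses:.1%}")  # ~6%

returnees = 190_000           # elderly Britons projected to return home
extra_cost = 500_000_000      # extra annual cost to government, in pounds
print(f"Implied cost per returnee: £{extra_cost / returnees:,.0f} per year")  # ~£2,600
```

These are crude averages, of course – the real costs will depend on the case mix of those who return – but they show the scale of the problem: a roughly 6% hole in the nursing workforce opening just as demand surges.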

This problem will face whoever wins the election in two weeks, since a lot of these pressures are the result of population ageing and a Brexit decision we are supposed to believe is set in stone. But any party that does not have a plan to increase the health workforce, to restore funding to social care, and to improve payment, retention, credentialling and work conditions for the workers at the bottom of the social care hierarchy, is not serious about the depth and seriousness of the crisis the NHS faces. Although the Tories like to talk about working better rather than increasing funding, the reality is that the NHS desperately needs more money; and so long as Labour continue to dance around the issue of exactly how they will handle free movement, they present no serious plans to handle the looming workforce crisis. The British people voted for Brexit without having any clear information about what it would mean for the social care sector, while Boris Johnson flounced around the country in a bus that was advertising a clear lie. Now the election looms, and both parties have to come up with policies to handle this unavoidable crisis on a 10 year deadline. I think from a brutally practical standpoint, the real winner of this election will be the party that loses it, because whoever wins is going to be held responsible not just for Brexit’s short term economic damage, but for the long-term health and social care crisis that neither party is properly prepared to deal with.

The NHS needs more money and more staff. Without it, unless the winning party can deliver a truly miraculous Brexit deal, the UK health and social care system is heading for two decades of increasing and unavoidable crisis. I’m not confident that anyone in British politics is ready to deal with this problem, or even listening to the warnings. Let’s hope, for the sake of Britain’s elderly population, that I’m wrong.

In the wake of the Republicans’ catastrophic inability to repeal Obamacare, many people have begun to accept that the Patient Protection and Affordable Care Act is the new basis on which the US health system will be built. This means that for the foreseeable future, assuming the Republicans are not able to suddenly develop a competent and coherent health financing agenda, progress towards universal health coverage (UHC) in the USA will depend upon improvements of and reform to the free market system as it is regulated by Obamacare. Obamacare is unusual among developed nation health financing systems for its heavy reliance on private insurers as the fundamental providers of risk pooling, as opposed to most other health financing systems where some form of government insurer provides the overwhelming majority of national health financing. For a lot of critics of Obama and Clinton from the left this is seen as a failure, and a sign that they are neoliberal sellouts: under this view of health financing reform, no market-based system will work and Obama sold out his own supporters when he put forward a plan that did not include single payer or a public option. For conservative policy makers in non-crazy countries – for example the UK[1] or Canada – and also in developing countries moving towards UHC, this offers an opportunity to see whether a free market approach to health financing can deliver the key goals of universal coverage and financial risk protection. The problem for conservative thinkers on health care is that there seems to be very little evidence that free market systems work, and the problem for left wing critics of Obamacare is that there is no evidence single payer could have been delivered in the modern US political environment. So for both far left critics and moderate right wing admirers of Obamacare the obvious question is: can UHC be achieved without a single payer system?

This week’s issue of the Journal of the American Medical Association has published an opinion piece addressing this issue. Entitled Achieving universal health coverage without a single payer: Lessons from 3 countries, it gives a brief overview of how Singapore, Germany and Switzerland have achieved UHC with at least nominally non-single-payer systems. It attempts to address some of the key differences between these systems and the USA, and some ways in which the health market in those countries is different. Since JAMA is behind a paywall, I thought I would give a brief summary of a few of these points.

First, the article opens with a clanger, asserting that “Universal coverage is a top priority not only for Democrats but also for President Trump,” which does lead one to wonder how critical the authors are. It then goes on to summarily dismiss one of the key ideas raised by Republicans for making private health coverage more affordable in the US: high risk pools. The intention of a high risk pool is that patients with high cost or pre-existing conditions be offered insurance from a special fund financed by the government, thus removing them from the main private insurance risk pool and enabling insurance companies to reduce the cost of mainstream health insurance products. The problem with this model is that it is enormously expensive and there is no evidence that it works. The article points out that no US government will be able to justify the amount of money required to properly finance high risk pools, which would probably cost upwards of US$8 billion a year. It also notes that – contra Paul Ryan’s assertion that pre-ACA high risk pools worked great – most of the state-based high risk pools in the pre-ACA era were hideously expensive and did not work. The article also points out that a preferred strategy of some left-wing critics of Obamacare – shifting high risk patients onto Medicare – may also not work, since Medicare is already a high risk pool and expanding it by dumping in the highest cost patients will be impossible without increased funding (the article uses the language of sustainability, about which I’m suspicious because of its origins, but it cites well-respected sources on the challenges of continuing to finance Medicare if it is treated as a high risk pool).

So given this, the only way that a private system will be able to achieve universal coverage is if everyone is enrolled in insurance, and insurance is properly financed. The article describes the systems in Singapore, Germany and Switzerland, and how each of them forces all its citizens into insurance coverage. For example, about Singapore it says:

Singapore institutes compulsory contributions from employers on behalf of their employees to create medical savings accounts. Employees maintain these accounts for health care expenses such as health and disability insurance premiums, hospitalization, surgery, rehabilitation, end-of-life care, and outpatient services. Those failing to pay their premiums are subject to garnished wages and other legal actions that can force payment of back premiums, penalties, and interest. Unemployed or low-income individuals are eligible for government subsidies that enable them to pay for the premiums.

and it points out that Germans are enrolled automatically in “private” funds that take a guaranteed 7.3% of their income. It’s hard to imagine any such plan being popular in the modern US, where the individual mandate has been subjected to years of withering don’t-tread-on-me type criticism and the idea of paying an income-based premium is terrifying to the GOP’s donors. In Switzerland and Singapore, where the systems do not use tax-based payments, they have government subsidies for (according to the article) up to a quarter of their population. So these systems – which by all accounts are functioning, affordable and tolerated by their citizens – share Obamacare’s key tactics of means-tested subsidies and individual mandates.

The article also makes the point that these systems have a very healthy free market structure, with much more vibrant private markets than the USA:

Germany in 2015, for example, had 124 sickness funds and 42 private health insurance companies, and the average resident of Switzerland in 2011 could choose from 59 health insurers offering coverage, with the 5 largest insurers covering 43% of the population. By comparison, in California, a state with approximately half Germany’s population, only 7 firms covered more than 95% of privately insured individuals in 2011, with the 3 largest firms covering 75%. In Massachusetts, with a population slightly smaller than Switzerland’s, 3 insurance companies enrolled 79% of individuals with private insurance.

I think this might be pushing the comparison a little bit, because many of the “sickness funds” in Germany are likely union-run or industry-based mutual associations with very strict management criteria, non-profit structures and guaranteed membership, and they may be regionally based so not actually directly competing with each other[2]. Also, I’m very confident that all three countries studied have rigorous price regulation and strict government oversight of providers (hospitals and clinics), so that they cannot for example price gouge the insurance provider for an infamous $500 band aid as they can in the USA. It’s much easier for private insurers to compete with each other for market share when they know what the cost of the insurance payout is likely to be, and can be confident that the provider won’t charge them arbitrary amounts, and I suspect that this certainty also removes a whole layer of administrative staff at both provider and insurer, for which the US system is infamous.

Having given an overview of these systems, the article draws a simple conclusion and gives a firm recommendation: Obamacare needs tougher enforcement of a more punishing individual mandate. I think this conclusion is only partially correct, because it misses the role of price regulation and cross-subsidization from general taxation in protecting these private markets. The article is therefore a little strong in concluding that the USA can definitely achieve universal health coverage without at least, for example, introducing a public option to every marketplace (or at least to the rural areas). But it does make the point that a better regulated insurance market with better subsidies and a much tougher mandate would likely encourage competition, and achieve universal health coverage (or close to it) without driving up costs. It certainly seems that the architects of Obamacare knew this and had a long-term plan for its expansion and improvement, and assuming the world survives Kim Il-sung’s birthday this weekend, hopefully the Democrats will be back in power in the USA soon enough to begin taking the next steps along that road. I’m not convinced yet, but it is still possible that Obamacare could show the way to a genuinely private, free-market alternative for achieving UHC without single payer. In my view, however, if Obamacare (and human civilization!) does survive the Trump presidency, it is likely to become an increasingly state-regulated and state-run system rather than a robust private marketplace, because introducing a public option, slowly squeezing out private providers, and then making health insurance premiums fully means-tested and tax-based is a much more reliable way to make everyone happy.

Still, for genuinely interested conservative policy-makers outside of America (whose “conservatives” have no interest in anything resembling policy), the next few years of Obamacare offer an exciting opportunity to develop new pathways to UHC. Given the complexity of movement towards UHC in some low-income countries, and the very limited government finances in many of them, it would be interesting to see whether Obamacare’s rollout, expansion and improvement offers a new and more viable pathway to UHC than those currently on offer. I’m not holding my breath, but it will be interesting to see what lessons we can learn from this new and quite unique approach to one of America’s (and the developing world’s) big remaining problems.

First we have to survive the Trump presidency, though.


fn1: Caveats on the use of “non-crazy” should be inserted here, especially after Brexit

fn2: Interestingly, these sickness funds sound a lot like the non-profit mutuals that Obamacare was supposed to encourage, and which US “conservative” critics of Obamacare constantly sneer at and declare completely unviable.
