• Today’s issue of PLOS Medicine contains an interesting debate between Australia’s own anti-smoking paladin, Simon Chapman, and Professor Jeff Collin from Scotland, over whether governments should introduce a license for smokers. Chapman puts the case for a license, while Collin opposes it, and the debate is refreshingly free of jargon or paywalls, so quite accessible to non-public health types. I think the license is an interesting idea: basically, anyone who wants to smoke would be required to pay a fee to obtain a license, and no one without a license could purchase cigarettes. Licenses would be available for various quantities of cigarettes, and by registering the licenses in a central database it would be possible to ensure that people could only consume within the licensed amount. Those who want to give up smoking could turn in their license and get a refund on all the years’ fees they’ve paid, plus interest. Meanwhile, the government would be able to accurately track smoking statistics, which is very useful from a public health perspective. Chapman also suggests that, just like a driver’s license, one should be required to pass a test to get the license, thus in his words ensuring

    that new smokers were making an informed choice, something the tobacco industry has long declared that it believes applies to smokers’ decisions

    and guaranteeing that people who take up smoking have been required to inform themselves of its risks and of the difficulty of giving up. Chapman’s article also offers arguments to dismiss claims that a license would be intrusive, discriminate against the poor, or stigmatize smokers, and proposes a gradual lifting of the minimum age for acquiring the license in order to make new smokers rarer and rarer. He compares the license with a license to drive or own a gun and, quite interestingly, with a prescription for pharmaceuticals, which he represents as a kind of temporary license. By its own lights, it is quite a strong argument.

    The opposing case by Collin takes a more structural, less drug-user-focused approach to the challenge of reducing smoking rates. He argues that we should continue to focus on regulating the tobacco companies to combat what he calls an “industrial epidemic,” and says we should strengthen measures which

    should centre on changing a system of manufacture and promotion of such harmful products centred on the corporation, an institution that is staggeringly ill-suited to such roles when viewed from a public health perspective

    He suggests that further measures targeting users are both discriminatory and stigmatizing, and that increasing attempts to manipulate prices and cost barriers will punish existing poor smokers the most (and smoking, at least in developed nations, is a much bigger problem amongst the poor). This is a point that Chapman had disputed, but Chapman’s argument against it is at least partly based on dismissing these complaints as crocodile tears from the tobacco industry and its front organizations – of which I sincerely doubt Collin is a member. Collin argues, furthermore, that the idea of a tobacco smoker’s license is fundamentally illiberal, and grounds most extant bans of tobacco users’ behavior in a liberal philosophical framework:

    Smoke-free policies have been recognised and understood as unambiguously liberal measures rather than authoritarian intrusions on personal freedom. In advancing a case focused on the protection of non-smokers, workers, and children, such legislation conforms to JS Mill’s classic formulation of the harm principle in On Liberty: “(t)he only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”

    His argument, then, is that we should avoid anti-tobacco legislation that targets the users themselves, except to prevent harm to others, and focus instead on the source of the harm (the corporations). He even suggests that the imposition of licenses would represent a propaganda “gift” to the industry, and further punish poor people who smoke relative to the wealthier.

    Overall I think Collin’s arguments are less coherent and consistent, but I am inclined towards his position on the issue. I think the license would probably be a good idea from a public health perspective, but it represents an unnecessary curtailment of individual liberty. It doesn’t actually have any serious civil liberties implications – registering smokers is not the beginning of the police state – but it does shift the focus of efforts away from the source of the harm to its most immediate victims, and it does play a stigmatizing role. Collin also observes that the major goals of the Framework Convention on Tobacco Control (FCTC) are institutional, and in many countries have not been achieved, and that it is better to work on systems for improving countries’ ability to meet those goals than to divert our efforts towards restricting users’ behavior. I agree with him on this point: many countries are a long way from a proper implementation of the basic goals of the FCTC – higher tobacco taxes, curbs on illicit tobacco, and indoor smoking restrictions, for example – and strengthening those countries’ ability to resist tobacco company money and marketing is a much better goal for anti-smoking activists. The reality is that smoking in the developed world is in decline and will continue to decline, and as a result the tobacco companies are aggressively targeting developing nations. It is in those developing nations that activists should be fighting a battle for improved governance and institutional structures that will help those countries protect their health systems from this “industrial epidemic.”

    The debate raises a related issue for me, which is: have some countries gone far enough in their anti-tobacco measures? Australia, for example, having now passed plain packaging laws, has pretty much made smoking as unattractive and difficult as it can without actually banning it. Should we stop there? The reason this is an issue for me is that I play a violent sport, and I recognize that violent sports represent a deliberate choice by people to take risks with their health in pursuit of a certain pleasure. So does drinking to get drunk, and so does casual sex, both activities of which I approve. At some point we have to recognize that people have the right to trade health for fun, and although that doesn’t give people carte blanche to, for example, go surfing in a frankenstorm or dance naked in front of lions, it does mean that at some point we have to draw a line beyond which public health measures must stop. From a public health perspective, so long as anyone is smoking, “more needs to be done.” But from a civil liberties perspective, at some point the barriers to smoking and anti-smoking education are such that we can safely say that people who take up the habit know the risks and are suitably reminded of them, and that there is no reason to further intrude on their personal decisions. Have some developed nations reached that point? For Australia at least I’m not sure there is much more that can be done except to introduce a license, or introduce the rolling bans mentioned in Chapman’s article. Do we need to go that far, or is the current status quo sufficient? Should the anti-tobacco lobby in Australia scale back its national attention to simple vigilance against new tobacco industry efforts, and instead begin focusing more of its energy on the other countries in the Western Pacific where smoking remains a serious and growing problem?

    There comes a point where you have to accept that the activity harms no one else, the person engaged in it is willing and aware of the risks, and the activity faces enough everyday barriers that anyone pursuing it must be committed and really want to do it. At that point, perhaps public health organizations need to step back, and instead of further restricting the behavior, defend the right of those engaged in it to do so, and to get healthcare for the problems it causes. This is what we do now for mountain-climbing and rugby, two very dangerous but well-respected activities. I think it is possible that in some developed nations, smoking has reached that point, and maybe in those countries enough has been done.

  • Figure 1: Dwarven character creation flow chart

    Following yesterday’s post, here I present flow charts for the best survival options for Dwarves (Figure 1) and Halflings (Figure 2). Both charts are based on CART analysis of the simulation data generated for yesterday’s post.

    Figure 2: Flow chart for halfling fighter creation

    For dwarves, weapon and armour choice is crucial, and weapon finesse is a decision so bad that it actually negatively affects survival: with two feats to choose, wasting one on weapon finesse is a very bad idea. For halflings, like humans, toughness is only important if the PC doesn’t have good constitution, and weapon choice is only important for clumsy fighters. Note that if a halfling has no strength bonus, constitution is irrelevant to survival but dexterity is important. This is also true for humans, though weapon choice is not important in their case – presumably because they don’t suffer the size penalty on damage dice. For weak dwarves the constitution bonus is also not important, but both weapon and armour type make a difference to survival.

    Elven decision rules will be posted later…

  • Introduction

    In previous posts in this series, I showed the differences between fighter builds, and especially that “fast fighters” are a weak decision that is particularly bad for halflings and elves even though they are the more agile races. In this post I will approach the question of fighter builds from a different angle, that of the most effective choice of feats, armour and weapons for given attribute scores. Ultimately, the aim of this work is to develop decision models (expressed as flowcharts) for PC development. We will do this through a generalized version of the simulations run to date, in combination with classification and regression tree (CART) methods.

    Methods

    For this study a completely random character generation method was developed. This simulation program generated random races, ability scores, weapon and armour types and feats subject to the rules in the online Pathfinder System Reference Document (SRD). Weapons were restricted to three choices: rapier, longsword and two-handed sword. Armour types were studded leather, scale, chain shirt and chain mail. There were eight possible feats: improved initiative, dodge, shield focus, weapon focus, power attack, desperate battler, weapon finesse and toughness. Ability scores were generated uniformly within the range 9 to 18, and racial modifiers then applied: the human +2 bonus was applied randomly to the three physical attributes. Feats were assigned randomly, with humans having three feats and non-humans two. All fighters with a one-handed weapon were given a light wooden shield. Halflings were given size benefits and disadvantages as described in the SRD. Initial investigation revealed that ability score values were only important in broad categories: ability scores that gave bonuses greater than 0 were good, and bonuses of 0 or less were bad. For further analysis, therefore, all ability scores were categorized into those that gave a bonus of +1 or greater vs. those that did not.

    All fighters were pitted in one-to-one melee combat against an Orc, which had randomly determined hit points and the fully operative ferocity special ability. This happened in a cage deep beneath Waterdeep, so no one could run away. Winners were promised a stash of gold and the chance to buy a farm on the Sword Coast, but were actually subsequently press-ganged into military service in the far south, where most of them died of dysentery. A million fights were simulated.
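    The generation-and-combat loop described above can be sketched in miniature in Python. This is a simplified illustration, not the authors' actual simulation: the orc's statistics, the attack and damage arithmetic, and the ferocity threshold below are invented stand-ins for the full SRD rules, and feats, races and shields are omitted entirely.

```python
import random

# Illustrative stand-ins for SRD values: (dice, sides) damage, armour AC bonus
WEAPONS = {"rapier": (1, 6), "longsword": (1, 8), "two-handed sword": (2, 6)}
ARMOUR = {"studded leather": 3, "scale": 5, "chain shirt": 4, "chain mail": 6}

def bonus(score):
    """Standard d20 ability bonus: +1 per 2 points above 10."""
    return (score - 10) // 2

def roll(dice, sides):
    return sum(random.randint(1, sides) for _ in range(dice))

def random_fighter():
    """Uniform ability scores in 9-18, random weapon and armour (feats omitted)."""
    return {
        "scores": {a: random.randint(9, 18) for a in ("str", "dex", "con")},
        "weapon": random.choice(sorted(WEAPONS)),
        "armour": random.choice(sorted(ARMOUR)),
    }

def fight(fighter):
    """One cage match against an orc; returns True if the fighter survives."""
    s = fighter["scores"]
    hp = 10 + bonus(s["con"])
    ac = 10 + ARMOUR[fighter["armour"]] + max(bonus(s["dex"]), 0)
    dice, sides = WEAPONS[fighter["weapon"]]
    orc_hp = random.randint(4, 12)          # "randomly determined hit points"
    while True:
        # the fighter attacks first (orc AC of 13 is an assumed value)
        if random.randint(1, 20) + bonus(s["str"]) + 1 >= 13:
            orc_hp -= roll(dice, sides) + bonus(s["str"])
        if orc_hp <= -3:                    # ferocity: the orc fights on below 0 hp
            return True
        # the orc, still ferocious, strikes back
        if random.randint(1, 20) + 4 >= ac:
            hp -= roll(1, 12) + 3
        if hp <= 0:
            return False

survivors = sum(fight(random_fighter()) for _ in range(10_000))
print(f"survival rate: {survivors / 10_000:.1%}")
```

    A million iterations of this loop, stratified by race, feats and equipment, is the kind of dataset the CART analysis consumes.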

    Once data had been collected it was analyzed using classification and regression tree (CART) models implemented in R. CART models enable data to be divided into groups based on patterns within the predictor variables, which enables complex classification and decision rules to be made. Although it is more complex and less reliable than standard regression, CART enables the data to be divided into classification groups without the formulaic restrictions of classical linear models. Results of CART models can be expressed as a kind of flowchart describing the relationship between variables, with ultimate classification giving an estimate of the probability of observing the outcome. In this case the outcome was a horrible death at the hands of an enraged orc, and the probability of this outcome is expressed as a number between 0 and 1. CART results were presented separately by race, in case different races benefited from different choices of feats.
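    The splitting rule at the heart of CART is easy to state: at each node, choose the predictor whose binary split most reduces impurity in the outcome. A minimal pure-Python sketch of a single split choice follows, using Gini impurity over binary predictors and a binary outcome; the variable names are invented for illustration, and the actual analysis used R's tree-fitting machinery rather than anything this crude.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 outcomes: 2p(1-p)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """Return (variable, impurity_reduction) for the best single binary split."""
    base, n = gini(labels), len(labels)
    best_var, best_gain = None, -1.0
    for var in rows[0]:
        left = [lab for row, lab in zip(rows, labels) if row[var]]
        right = [lab for row, lab in zip(rows, labels) if not row[var]]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
        if base - weighted > best_gain:
            best_var, best_gain = var, base - weighted
    return best_var, best_gain

# Toy data: a strength bonus perfectly separates survivors, dodge is noise
rows = [{"str_bonus": s, "dodge": d} for s in (True, False) for d in (True, False)] * 25
labels = [1 if row["str_bonus"] else 0 for row in rows]
print(best_split(rows, labels))  # → ('str_bonus', 0.5)
```

    Recursing on each half of the data, and stopping when no further split gains enough, yields exactly the kind of flowchart shown in the figures.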

    Some univariate analysis was also conducted to show the basic outline of some of the (complex) relationships between variables in this dataset. Univariate analysis was conducted in Stata, and CART was conducted in R.

    Results

    Of the million brave souls who “agreed” to participate in this experiment, 498,000 (49.8%) survived. Survival varied by race, with 55% of humans surviving and only 45% of halflings making it out alive. Some initial analysis of proportions suggested quite contradictory results for the different feats, with some feats appearing to increase mortality. For example, 47% of those with improved initiative survived, compared to 51% of those without; and 46% of those with shield focus, compared to 52% of those without. This probably represents the opportunity cost of choosing these feats, or some unexpected confounding effect from some other variable.

    The three combinations of ability scores and feats with the highest number of observations and the best survival rate were:

    • Dwarf with +3 strength, +3 dexterity, +3 constitution, chain mail armour, rapier, weapon focus and desperate battler (15 observations, 100% survival)
    • Dwarf with +3 strength, +0 dex, +4 con, scale armour, two-handed sword, toughness and weapon focus (13 observations, 100% survival)
    • Dwarf with +3 strength, +2 dex, +3 con, studded leather armour, longsword, desperate battler and power attack (13 observations, 100% survival)

    Despite the apparent success of Dwarves, a total of 55% of all unique combinations of ability scores, feats, weapon and armour types with 100% survival were in humans. The majority of the most frequent survival categories appeared to be in non-humans, however – this bears further investigation.

    CART results varied by race. For humans, ability scores were most important; for dwarves, weapon type and armour type were important, while constitution was largely irrelevant. For elves and halflings, the only important feat was toughness; weapon finesse was only important for humans, and sometimes only as a negative choice. The key results from the CART analysis were that strength is the single most important variable, followed by dexterity for elves and halflings, or constitution for dwarves; and then by decisions about armour and weapons. Feats are largely relevant only for those with weak ability scores.

    As an example, the CART results for humans are presented as a flowchart in Figure 1 (click to enlarge). It is clear that after strength and dexterity, heavy armour and constitution are important determinants of survival. Weapon finesse is only important as a feat to avoid for those with low dexterity – for those with high dexterity it is largely irrelevant. Toughness primarily acts as a counter-balance to poor constitution in those with high dexterity and strength.

    Figure 1: Character creation decision model for humans

    Decision models for other races will be uploaded in future posts.

    Conclusion

    This study once again shows that strength is the single most important ability for determining survival in first level fighters, and that feats are largely used to improve survival chances amongst those who already have good ability scores. In previous posts dexterity appeared to be irrelevant, but analysis with CART shows that the absence of a dexterity bonus makes a large difference to survival – those with no dexterity score bonus do not benefit from feat choices, while those who have a dexterity bonus can benefit further by careful choice of armour and feats. Although previous posts found that “tough” fighters have a very high survival rate, this post finds that constitution is not in itself a priority ability score. By following the decision model identified in this study, players can expect to generate a fighter with the highest average survival chance given their ability scores.

  • Today’s Guardian reports on an exchange of letters between Salman Rushdie and John le Carre, from 1997, in which they disagree vehemently about the limits of free speech. At this point in his career Rushdie was in hiding from Islamic fundamentalists, and le Carre was in trouble for criticizing Israel – which of course put him in line for claims of anti-semitism, about which he was most outraged. Unfortunately, 10 years earlier he had apparently claimed that “Nobody has a god-given right to insult a great religion,” and Rushdie was apparently incensed that le Carre should suddenly be demanding victim status after the religious “thought police” turned on him.

    The subsequent exchange – which the Guardian now reports both sides have declared they regret – is a hilarious example of how debates on freedom of expression were conducted before the existence of blogs. Apparently, they are conducted viciously through the medium of newspapers. But the letters themselves read like something straight out of a modern blog flame war – further proof, if any were needed, that the medium has not really changed the message or its tone.

    Some of these exchanges are quite pretty, though. le Carre goes in heavy with his concerns about the girl in the mail room getting her hands blown off, and demands a less colonialist approach to the topic of freedom of expression (though thankfully he doesn’t apply this to Rushdie himself, just his admirers). Less colonialist? Since when is it colonialist to criticize the Iranian regime for putting a price on a writer’s head? Rushdie may be a self-canoniser, but a threat to the Iranian regime he is not. Were he some lunatic militarist with actual political power, pushing for the reoccupation or isolation of Iran, le Carre might have a point – but a religious critic?

    In reply, Rushdie thanks le Carre for “refreshing our memories as to what a pompous ass he is” and adds that “‘ignorant’ and ‘semi-literate’ are dunces’ caps he has skilfully fitted on his own head.” Isn’t it just like reading an exchange on one of the better major bloggers’ sites, when they have one of their blog wars? Only all of it on the Guardian letters page.

    I haven’t read Rushdie’s work, but I find it hard not to take his side on the matter. I’ve no doubt that le Carre’s experience of drawing the ire of the Jewish “thought police,” as Rushdie describes them, was much less frightening than Rushdie’s, but one would have hoped it would have given him a hint as to how hard it might be to be in the firing line, whether figuratively or literally. Whether you think his attack on Islam was warranted or not, and whether you think it deserves the ire of Muslims, the fatwa was an outrageous response and even if purely symbolic is still a Very Bad Thing. I would have thought one could have a nuanced debate about colonialism, revolutionary defensiveness, and the responsibilities of western authors, without ignoring the egregious nature of the response, or belittling Rushdie’s genuine difficulties after the fatwa was declared. And if I were Rushdie, I’d certainly be mighty wrathful with writers who failed to defend my rights.

    All of which makes for some entertaining reading, 15 years after the fact, and reminds us that modern blogwars do not necessarily have a lower tone than public debate showed before the invention of this anonymous medium. I guess it just significantly increases the amount that gets said (and thus, by application of basic theorems, the number of debates that get Godwinned). In the case of your average blogger, this is probably not a net positive for the world – but had Rushdie and le Carre been blogging between 1985 and 2000, it would have been quite fascinating, I’m sure.

    If only the internet had been invented sooner, we could have been given the pleasure of blogposts by such luminaries as Orwell, Rushdie, Abbie Hoffman … imagine the colour and light such blogs would bring to the medium. Imagine if Steinbeck had a blog during the Great Depression, or Dr. Seuss in the lead up to world war 2. I doubt it would have changed anything, but it would certainly have been great reading…

  • The Guardian has a short video featuring three British actors reading war poems. The first and last are Sean Bean reading Wilfred Owen’s Anthem for Doomed Youth and The Last Laugh. Wilfred Owen is one of my favourite poets but I’ve never heard his poems read professionally before. It’s quite moving, though I’m not sure what I think of the idea of actors reading war poems on remembrance day – the slickness and glamour of it is a bit offputting. Nonetheless, if you’re a fan of Boromir or just want to hear some sad poetry read by professionals, it’s worth a look.

  • Congratulations America! With the American electorate[1] having given a resounding endorsement[2] of the policies of the Revolutionary Islamic Socialist Party of Kenya, America will finally see a form of healthcare financing reform. Depending on who you read, this reform seems to be either an insane policy that will bankrupt America, or not much change. I think I speak in concert with 314,731,000 Americans when I declare that I’m no expert on American healthcare – let’s face it, a system that complex is hardly going to be comprehensible to mere mortals – but from my position of limited knowledge I’m inclined towards the latter view. But in health financing, not much change can mean a lot to the minority of the population who are most vulnerable to healthcare-related financial catastrophe, and so not much change is probably, in this case, a Very Good Thing. Just how good will become more apparent over the next few years, and I’m guessing that for health system researchers around the world Obama’s election victory is a huge boon, because it means they can watch what is pretty much the only largely private health financing system in the developed world being reformed from a radically different perspective to the standard vision of universal health cover.

    Although reading conservative commentators one gets the impression that Obamacare is a massive socialist-fascist system of monolithic oppression, in reality it appears to be an attempt to impose careful, minimalist regulation on the system, to ensure that it maintains its character of essentially privatized healthcare insurers, but regulates it to improve efficiency and reduce inequality. The efficiency improvements are intended to reduce long-term growth in costs, and the inequality improvements to ensure that everyone gets coverage of some kind, regardless of ability to pay or pre-existing conditions. These latter improvements are intended to eliminate the problem of the uninsured without disrupting the essentially private nature of the marketplace for health insurance. Whether this will work or not is a big gamble, but in the long term it could have huge benefits economically and socially for ordinary Americans.

    I’m struck by the extent to which the problem of healthcare-related financial catastrophe is researched in developing countries but left largely undescribed in the USA. I’m also struck by the ease with which developing nations like Indonesia, the Philippines, Thailand and other places have been able to introduce innovative financing schemes, while the USA has languished. So I thought, while taking a break from a busy work schedule, that I would consider an alternative to Obamacare based on a careful restructuring of the entire US insurance market, using the existing Medicare system as a base. I lack any in-depth knowledge about the American system, and so this post is entirely speculative, but it gives an opportunity to think about ways of gradually moving from a private to a public system, using primarily market means, and allowing the users of the system to determine the final mix of private and public insurers through their consumption decisions. Once again, it’s entirely and completely speculative, being done purely for fun, and comments demolishing it on all its particulars are welcomed, nay, encouraged.

    First, though, a word about the flaws in the current Medicare system.

    Does Medicare work?

    The New England Journal of Medicine (NEJM) has been running a series of opinion pieces (and some research) on health policy reform for a while now, and in the week of Obama’s reelection it published a fascinating article describing the failings of Medicare. The key message of this article is that Medicare fails both as an insurance package and as a cost containment mechanism. I was shocked to discover that Medicare does not include a cap on costs, so although it is an insurance package it doesn’t stop beneficiaries’ out-of-pocket expenses from destroying their budget. Compare this with, for example, Japan’s universal insurance scheme, implemented in 1961, which has a cap on personal expenses and has been responsible for restraining costs to below the OECD average of 9.6% (according to Wikipedia[3]). Granted, other universal health coverage schemes are universal, so they have better risk sharing (Medicare is for the elderly), but still … the USA is the richest country in the world, you’d think sorting this out wouldn’t be soooo hard. According to the NEJM article, in 2009 15% of Medicare recipients faced payments of US$5,000 or more, when the maximum(?) income for pensioners in the USA is something like $15,000. In studies of financial catastrophe in developing nations, this sort of statistic is considered disastrous, though it should be noted that the stats in the article aren’t sufficient to identify rates of financial catastrophe[4]. The article then notes that because of the lack of a cap, Medicare recipients often pay for secondary insurance to cover the out-of-pocket expenses. This has the dual effect of increasing their insurance costs and, if they choose a good insurance package, encouraging unnecessary use of medical care, since a good secondary package makes healthcare effectively free at the point of use and thus increases costs. The article also references a paper suggesting that half of America’s increase in healthcare costs over the last 40 years can be sheeted home to the growth of private health insurance (I haven’t read this reference and have no idea how good it is). The article’s recommendation is that the government should put a cap on Medicare costs while simultaneously restricting the ability of insurance companies to cover Medicare’s out-of-pocket costs, and it references many other reports that have suggested the same thing.
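    The arithmetic of a cap is trivial, but its effect on exposure is dramatic. As a sketch (the 20% coinsurance rate below is an illustrative assumption, not Medicare's actual cost-sharing schedule, which varies by part and service):

```python
def out_of_pocket(billed, coinsurance=0.20, cap=None):
    """Beneficiary's share of billed costs under flat coinsurance,
    truncated at an annual cap if one exists."""
    share = billed * coinsurance
    return share if cap is None else min(share, cap)

# A bad year for a pensioner on ~$15,000: $50,000 of billed care
print(out_of_pocket(50_000))             # uncapped share: $10,000 -- catastrophe
print(out_of_pocket(50_000, cap=1_500))  # capped share: $1,500 -- an insurable loss
```

    With a cap in place, the secondary insurance layer loses most of its purpose, which is exactly the point the article is making.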

    On the basis of that report, Medicare hardly seems to be a good starting point for health insurance reform, does it?

    An alternative vision for Obamacare: extending Medicare

    Given Obama’s approach to healthcare reform, it seems that a fundamental assumption of any alternative vision is that it should not radically alter current market structures. Obamacare appears to be, fundamentally, a suite of regulatory changes to the current marketplace. He hasn’t suggested, for example, nationalizing all existing insurers to form a single-payer government-run monolith. So, any alternative vision for Obamacare that is going to be consistent with Obama’s obvious preference for creeping incrementalism is going to need to use existing systems to achieve its goals. How can we do this? Let’s try building on Medicare.

    The first step of the Faustian plan would be to put a cap on expenses under Medicare – looking at the tables in the NEJM, about $1500 seems like a good limit. Then, to achieve a gradualist change in the American healthcare system, Faustuscare would consist of a simple decision to allow anyone to enrol in Medicare. In Japan the cost of the single-payer insurance system varies by prefecture, so Obama could implement a similar system: anyone can join Medicare, paying a rate that varies according to the population and its distribution in their state. This would make Faustuscare cheap in the most populous and youngest states (just as it is in Japan). The one condition on Medicare would be that it can’t ban people from joining on the basis of pre-existing conditions, and has no age-dependent pricing structure… or, if you want to be really brutal, the price a member pays is fixed by the age at which they join, not their current age.

    The idea, of course, is to use the power of the government to tax rich idlers like Mitt Romney. Obama fixes the cost of joining Medicare at less than that of the popular big medical plans, and makes up the shortfall from general taxation. It’s almost certain that making Medicare available to people under 65 – even those with pre-existing conditions – is going to reduce overall risk, so he can afford to lower prices. Then, he offers companies a further concession – they can move employees to the new system at some reduced rate, provided that they share half of the difference with their employees. With such a condition he is going to recruit lots of new members quickly, and everyone who gets recruited is going to essentially get a pay rise.

    The plan here is obvious – use the power of general taxation to supplement a reasonably priced health insurance plan, with no health-related joining conditions, to undercut existing insurance companies. The new entrant to the insurance market already has everyone over 65 as a customer, and by introducing the (equality-improving) cap on payments, has caused a lot of those seniors to ditch their existing supplemental insurance. In order to compete with this new market entrant the existing companies are going to have to find a way to drop prices and do away with pre-existing-illness conditions. The result of this will be a massive, across-the-board efficiency gain. The likely survivors of the government’s entry to the market will be the HMOs, which are already ruthlessly efficient, comparatively cheap, and already offer reasonably good health outcomes. Obama can choose to restrain Medicare’s power to ensure that some insurers survive in a mixed market, or he can use the power of general taxation to force them all out of business, nationalizing them one by one as they fold. I would recommend the former, since the American health market is obviously built on competition between both providers and commissioners. Keeping Medicare in the market as the insurer of last resort will ensure that the other insurers lower their prices and/or offer a basic package that is competitive with Medicare, but they will still offer “bonus” packages that appeal to the rich or the health-obsessed.

    I have a suspicion that much of this plan could be achieved through administrative rather than legislative changes. It can be sold as a partially free market solution to the health insurance problem, and I suspect a lot of big companies would jump at the chance to shift their insurance payments to such a system. I think the American system needs two things: competition at the bottom of the market, and plans that don’t discriminate on pre-existing conditions. Any such plan needs to be able to recruit low-risk people to balance its risk profile, and will (probably) also need some form of subsidy. Medicare is the obvious vehicle, since it already exists, and offering it at reasonable cost to young people could potentially rapidly expand its coverage. Since it is already huge, further expansion of coverage would give it additional power to negotiate cost-cutting with providers – which would force other insurers to do the same.

    America’s problem in reforming its health system gradually (rather than the crash-through or crash approach of the original NHS) is to find a way to manipulate free markets to be equitable. Obama appears to be taking the road of regulation, but the alternative is nationalisation by stealth, and Medicare offers the vehicle by which to do this. What do you think?

    fn1: Well, six swing states anyway

    fn2: When results are measured to at least two decimal places

    fn3: I really should be able to do better than this

    fn4: I’ve not done a literature search but I have a strong suspicion that healthcare-related financial catastrophe – a very real phenomenon in the modern USA – is better understood in developing nations than it is in the USA. What does this say about health services researchers’ attitudes towards the world?

  • Christianity’s fundamental promise is of eternal life, and the risk of refusing to accept God’s grace is generally accepted to be eternal damnation. While the truth of these statements is still subject to debate, there is little empirical evidence of the benefit of eternal life, and little research exploring the possible drawbacks of a decision to forego evil in exchange for the promise of eternal salvation. In a world of finite resources, decisions about how best to dispose of available resources while alive need to take into account the long-term and (if certain cosmological properties are shown to hold) potentially eternal consequences of the choice between good and evil. In this blog post, we will examine the costs and benefits of baptism and rejection of sin from an econometric standpoint. Of specific interest in this blog post is the relationship between the benefits of accepting God’s grace and the discount rate society applies to years of life not yet lived.

    The immediate use of an analysis of the costs and benefits of accepting God’s grace is obvious, but from a wider perspective a clear understanding of the economic aspects of this theological decision may help us to understand the persistence of evil in a world where humans have free will, and to answer the eternal question: why does evil exist in a world shaped according to God’s will?

    Methods

    Standard cost-effectiveness analysis methods were applied to two simple decision problems. The first decision problem is the question of whether or not to baptize a child, on the assumption that baptism grants the child God’s grace, causing them to live a holy life but to lose the benefits that might accrue to an evil-doer. The analysis was then extended to consider a problem implicit in a great deal of modern rhetoric about the soul and sexuality, viz: if homosexuality is a choice, and that choice leads only to hell, is it cost-effective to choose to be homosexual? This question was answered in terms of numbers of partners foregone, and quality-adjusted life years gained from the sacrifice.

    The basic decision problem: whether to baptize

    The basic decision problem was addressed using standard measures of effectiveness. It was assumed that were a child to be baptized they would be eligible to enter heaven upon their death, and would thus be able to live forever. Were they not to be baptized, they were assumed to enter hell at death. Each year of life lived was assumed to grant the individual a full quality-adjusted life year (QALY); each year in heaven (from now until the rapture, i.e. infinite years from now) was also assumed to grant 1 QALY; while entry into hell was considered to grant 0 QALYs. All QALYs were discounted using the standard formula, and the effect of the discount rate on the benefits of each decision was calculated over three different life expectancies: 45 years (enlightenment-era), 70 years (biblical lifespan) and 80 years (the life expectancy granted by modern materialist living). Effectiveness was then assessed for a wide range of discount rates, varying from 0.5% to 5%. The difference in QALYs gained (the incremental effect) was then calculated for all these scenarios.

    Cost-effectiveness calculation for the baptism problem

    Having calculated the incremental effect of baptism, the cost was then calculated under the assumption that evil people make more money. This assumption is implicit in, for example, Mark 8:36, when Jesus asks

    What good is it for a man to gain the whole world, yet forfeit his soul?

    which suggests that doing good requires some form of material sacrifice. This is, of course, also obvious in the early doctrine of the Dominican and Franciscan orders, and much of pre-enlightenment religious debate focused on this struggle between material goods and goodness.

    This contrast was modeled by a variable \alpha, which represents the percentage of additional annual income an unbaptized sinner earns relative to a person living in grace. For example, if a sinner earns 10% more than a convert, then \alpha=0.1. Then, assuming a fixed average income for God-fearing individuals, we can calculate the lost income due to being good. This is the incremental cost of salvation. From this calculated incremental cost and the incremental benefit, we can estimate an incremental cost-effectiveness ratio (ICER), and estimate whether the decision to baptize is cost-effective.

    In keeping with standard practice as used by, for example, the National Institute for Health and Clinical Excellence, we set the basic income of one of the saved to be the mean income of the UK, and define baptism as “cost-effective” if its ICER falls below a threshold of three times the annual mean income of the UK. We also establish a formula for the cost-effectiveness of salvation, based on the relative difference in income between the good and the evil, the discount rate, and the human lifespan.

    All income in future years was discounted in the same way as future QALYs.

    The costs and benefits of voluntary homosexuality

    Finally, we address a problem implicit in some forms of modern Christian rhetoric, that of the wilful homosexual. Many religious theorists seem to think (either implicitly or openly) that homosexuality is a choice. If so, then the choice can be modeled in terms of an exchange of sexual partners for eternal damnation. In this analysis, we calculated the number of sexual partners a potentially homosexual male will forego over a 20 year sexual career commencing at age 15. We assumed that all life years before age 15 are irrelevant to the calculation (that is, we assumed that all individuals make a choice at age 15 as to whether to be good or evil), and that a person foregoing homosexuality will have 0 partners. Other assumptions are the same as those made above. The ICER for being good was then calculated as the cost in foregone sexual partners (discounted over a wide range of rates) divided by the QALYs gained through foregoing this lifestyle and gaining access to heaven.

    Faustian discount rates and the problem of heavenly utilities

    Commonly used discount rates range from 3 to 5%, but these are potentially inconsistent with the discount rates preferred by evil-doers. In this study we did not model differential discount rates between evil-doers and the elect, but we did consider one special case: that in which everyone observes a discount rate equal to that observed by Dr. Faust. As is well known, Dr. Faust sold his soul to Mephistopheles in exchange for earthly power, and after 24 years his soul was taken into hell. Since he knew the time frame at the beginning of the deal, this implies that he was following a discount rate sufficient to rate all time more than 24 years in the future at 0 value. Under standard discounting practice such a rate does not exist, but we can approximate it by the rate necessary to value all time more than 24 years in the future at no more than 5% of current value. This discount rate, which we refer to as the Faustian Discount Rate, is approximately 12.5%. All scenarios were also tested under this discount rate.
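    For the curious, the Faustian Discount Rate can be checked with a few lines of Python (a sketch, assuming the standard continuous discounting used throughout, so that value t years ahead is scaled by exp(-rt)):

    ```python
    import math

    # The Faustian Discount Rate: the rate r at which all value more than
    # 24 years in the future is worth no more than 5% of present value,
    # i.e. the r solving exp(-24 * r) = 0.05.
    faustian_rate = -math.log(0.05) / 24

    print(round(faustian_rate, 4))  # -> 0.1248, i.e. approximately 12.5%
    ```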

    A further problem is that of calculating utility weights for a year spent in heaven or hell. Given the lack of empirical data on the utility of a year in heaven, and the paucity of first-hand accounts, we assumed that a year in heaven was equivalent to a year without pain or suffering of any kind, i.e. one full QALY. According to the site What Christians Want to Know, Revelation 4:8 describes heaven as

    a constant chant of holy angels that are continually proclaiming Holy, Holy, Holy over the throne of God.  The Mercy Seat in heaven where God sits is surrounded by magnificent angels full of glory and power that proclaim and bless the holy name of God without ceasing.  Some of these are described as beasts, full of eyes, with six wings and neither rest day or night in their proclaiming the holiness of God.

    For those of us who don’t enjoy doom metal, this would probably have a utility value of less than one. In the interests of a conservative analysis, we assign heaven a utility of 1.

    A similar problem applies to assigning utilities for hell. Many people claim to have been to hell and back, but their accounts of their time at a Celine Dion concert are not convincing and it is unlikely that accurate data on the state of hell exists. Popular conception of hell suggests a realm of eternal torture, but it is worth noting that in standard burden of disease studies even the most unpleasant and torturous diseases – such as end states of cancer, AIDS, and severe disability – are assigned positive utility weights, often quite a lot higher than 0. It is therefore reasonable to suppose that hell should be assigned a positive but small utility. However, again in the interests of conservative analysis, we assign a utility weight of 0 to a year spent in hell – that is, it is equivalent to death.

    Results

    Incremental benefit of salvation

    The formula for the incremental benefit of salvation can be derived as

    LY_{g}=\frac{\exp(-rl)}{r}

    where

    • LY_{g} is the incremental benefit of being good, in QALYs
    • r is the discount rate
    • l is the human life expectancy

    Figure 1 charts this incremental benefit over a wide range of discount rates for three different life expectancies.

    Figure 1: Incremental benefit of salvation for three different life expectancies

    It is clear that as the discount rate increases the incremental benefit of salvation decreases rapidly. At the Faustian Discount Rate, the incremental benefit of salvation is a mere 0.03 QALYs for a 45 year life expectancy, or 0.0004 for a human with an 80 year life expectancy. That is, even if Faustus had been offered and then rejected his bargain at birth, and expected to live to 45 years only, he would have seen the benefit to himself as being only about 0.03 years of life, due to his tendency to discount the value of years far in the future.
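    These figures can be reproduced directly from the formula above (a sketch in Python, with r and l as defined in the bullet list):

    ```python
    import math

    def incremental_benefit(r, l):
        """Discounted QALYs gained from an eternity in heaven that
        begins at death (age l), under discount rate r: exp(-r*l)/r."""
        return math.exp(-r * l) / r

    faustian = -math.log(0.05) / 24  # the Faustian Discount Rate, ~12.5%

    print(round(incremental_benefit(faustian, 45), 2))  # -> 0.03 QALYs
    print(round(incremental_benefit(faustian, 80), 4))  # -> 0.0004 QALYs
    ```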

    The cost-effectiveness of baptism

    We now consider the cost-effectiveness of baptism. Let the income of one of the saved be given by c_{g}, and that of an evil-doer be c_{e}=(1+\alpha)c_{g}. Then the income foregone in order to enter heaven is given by the formula

    C=\alpha c_{g}(\frac{1-\exp(-rl)}{r})

    where all parameters are defined as before. Then the incremental cost effectiveness ratio (incremental cost divided by incremental benefit) is

    ICER=\alpha c_{g}(\exp(rl)-1)

    The ICER is plotted in figure 2 for two common life expectancies across a range of values of the discount rate, assuming a mean annual income of 26,000 pounds and that evil-doers earn 10% more income than the saved.

    Figure 2: Incremental cost-effectiveness of salvation for two different life expectancies

    At a Faustian Discount Rate, life expectancy of 70 years, and 26,000 pound mean income, the ICER for baptism is 16,202,218 pounds per QALY gained.
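    This headline number follows from the ICER formula above; a quick sketch using the exact Faustian rate, ln(20)/24:

    ```python
    import math

    def icer(alpha, c_g, r, l):
        """Incremental cost-effectiveness ratio of baptism, in pounds
        per QALY gained: alpha * c_g * (exp(r*l) - 1)."""
        return alpha * c_g * (math.exp(r * l) - 1)

    faustian = math.log(20) / 24  # ~12.5%

    # Evil-doers earn 10% more, mean income 26,000 pounds, 70-year lifespan
    print(round(icer(0.1, 26000, faustian, 70)))  # -> roughly 16.2 million
    ```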

    We can estimate a general condition on society’s discount rate for baptism to be cost-effective, in terms of the additional income gained by being evil and the life expectancy. This formula is given by:

    r \le \frac{1}{l}\ln\Bigl(\frac{3+\alpha}{\alpha}\Bigr)

    For a life expectancy of 70 years, assuming that the damned earn 10% more than the saved, the required discount rate for baptism to be cost-effective is 4.9% or less; if the damned earn 20% more this threshold drops to 4.0%. It is clear that damnation doesn’t have to be much more materially rewarding before it becomes attractive even under quite reasonable discount rates.

    The costs and benefits of voluntary homosexuality

    We now consider the situation of a callow 15 year old youth, considering embarking on a life of sodomite sin. What should he choose? Obviously, from the perspective of a simple youth, the costs need to be weighed up in terms of foregone lovers. Assuming an average of five sexual partners a year, a sexual career beginning at age 15 (which is set to time 0 in this analysis) and lasting 20 years, and the same conditions on discount rates, eternal damnation, etc. as described above, a simple formula for the number of partners this man would be foregoing by refusing to choose the love that dare not speak its name can be derived as

    p=\frac{5}{r}(1-\exp(-20r))

    and from this the incremental cost effectiveness ratio (measured in partners foregone per QALY gained) as

    ICER=5\Bigl(\frac{1-\exp(-20r)}{1-\exp((15-l)r)}\Bigr)

    Note that this ICER is almost independent of the human lifespan. It is in fact almost linear in the discount rate (Figure 3). At the Faustian Discount Rate, the potential gay man is looking at a cost of 4.6 lovers foregone for every QALY gained. Note that these values change for different average annual numbers of lovers.

    Figure 3: Incremental cost-effectiveness of foregoing a life of sodomy
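    Both formulas can be evaluated directly (a sketch in Python, assuming five partners a year and a 20-year career as above):

    ```python
    import math

    def partners_foregone(r, years=20, rate_per_year=5):
        """Discounted number of partners foregone over a sexual career
        starting at time 0 (age 15): (5/r) * (1 - exp(-20r))."""
        return rate_per_year * (1 - math.exp(-years * r)) / r

    def icer_partners(r, l, years=20, rate_per_year=5):
        """Partners foregone per QALY gained, per the ICER formula above."""
        return (rate_per_year * (1 - math.exp(-years * r))
                / (1 - math.exp((15 - l) * r)))

    faustian = math.log(20) / 24  # the Faustian Discount Rate, ~12.5%

    print(round(partners_foregone(faustian), 1))   # -> 36.8 discounted partners
    print(round(icer_partners(faustian, 70), 1))   # -> 4.6 partners per QALY
    ```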

    It might be possible to construct an experiment that assessed individuals’ discount rates using this formula: their answers to the question “how many years of life would you give up to win an additional 5 lovers” could be used to identify their value of r.

    Conclusion

    In Mark 8:36, Jesus asks the rhetorical question

    What good is it for a man to gain the whole world, yet forfeit his soul?

    Although usually presented as having no clear answer, this question is actually quite easy to investigate empirically, and conclusions can be drawn about the cost-effectiveness analysis it implies. The results presented here show that, in general, the good gained by forfeiting one’s soul is quite great, and the decision to forego baptism and live a life of evil (including wilful homosexuality) is generally the best decision one would expect a rational actor to make. At very low life expectancies and unrealistically low discount rates it is more beneficial to forego evil and embrace salvation, but at the discount rates usually used by economists, and assumed to reflect rational decisions made by ordinary individuals, salvation is not a profitable course of action.

    These findings have interesting theological implications. First, we note that the Church is most likely to gain converts in a society which has a very low discount rate – but in general, the societies where the Church first took hold were societies with high rates of infant mortality and all-cause mortality, which were likely to put a low value on the later years of life – that is, to have high discount rates. But such societies are not naturally sympathetic to the message of eternal damnation, unless they can be convinced to forego rationality in moral decision making. This might explain the Church’s historical resistance to scientific endeavour, and willingness to foment superstitious practices.

    These findings also explain Christianity’s historical opposition to usury. It is naturally the case that buying something today and paying for it later – i.e. borrowing – is inconsistent with a very low discount rate, which values future years of lost income almost as highly as income now. Furthermore, usurers operating in the open market will set interest rates well above 0.5%, and it is likely that the practice of usury plus the publishing of interest rates will encourage a society with higher discount rates (in fact, it is likely that this would be encouraged by the lending class). This directly undermines the church’s lesson of salvation, which depends on very low discount rates to work.

    Finally, low discount rates are often associated with environmentalism – care for future generations, priority setting that considers costs in the distant future, etc. – but on the central issue of our time (global warming) many of the born-again religious organizations that most fervently preach the message of salvation also vehemently oppose any message of custodianship and environmental care. These organizations would probably make better progress in convincing people to give up the joys of the here-and-now for an indeterminate heaven (that seems to involve a lot of noise pollution) if they could find a theoretically consistent approach to discount rates.

    This post has shown a simple explanation for the problem of evil: most people operate with discount rates closer to Dr. Faust than to St. Christopher, and as a result they are unlikely to accept the distant benefits of heaven over the joys of the material world. Until the church can find a way to convince us that all our tomorrows are as important as today, the problem of evil will never be solved.

  • One possible consequence of the collapse of the summer arctic ice cover is that storms like Sandy will become the new normal. There are reasons to think that the freak conditions that caused Sandy to become so destructive are related to the loss of arctic ice, and although the scientific understanding of the relationship between the arctic and northern hemisphere weather in general is not robust, there seems to be at least some confidence that the ice and weather around the Atlantic are related.

    It’s worth noting that what is happening in the arctic this year is well in advance of scientific expectations. The 2007 Intergovernmental Panel on Climate Change (IPCC) report, for example, predicted an ice free arctic in about the year 2100. The cryosphere blogs, however, are running bets on about 2015 for “essentially ice free,” and no ice in 2020, as shown, for example, in this excellent post on ice cover prediction by Neven. Results presented by the IPCC are one of the main mechanisms by which governments make plans to manage climate change – in fact this was their intention – and one would think that events happening 80 years sooner than the IPCC predicts would make a big difference to the plans that governments need to consider.

    One of the biggest efforts to make policy judgments based on current predictions of future effects of climate change was the Stern Review, published in 2006 and based on the best available scientific predictions in the previous couple of years. The key goal of the Stern Review was to assess the costs and benefits of different strategies for dealing with climate change, to answer the question of whether and when it was best to begin a response to climate change, and what that response should be.

    The Stern Review received a lot of criticism from the anti-AGW crowd, and also from a certain brand of economists, partly because of the huge uncertainties involved in predicting such a wide range of events and outcomes so far in the future, and partly because of its particular assumptions. Of course, some people rejected it for being based on “alarmist” predictions from organizations like the IPCC, or rejected its fundamental assumption that climate change was happening. But one of the most persistent and effective criticisms of the Review was that it used the wrong discount rate, and thus it overemphasized the cost of rare events in the future compared to the cost of mitigation today.

    I think Superstorm Sandy and the arctic ice render that criticism invalid, and instead a better criticism of the Stern Review should now be that it significantly underestimates the cost of climate change, regardless of its choice of discount rate. Here I will attempt to explain why.

    According to its critics, the Stern Review used a very low discount rate when it considered future costs. A discount rate is essentially a small percentage by which future costs are discounted relative to current costs, in order to reflect the preference humans have for getting stuff now. The classic, simplest discount rate simply applies an exponential reduction in costs over time with a very small rate (typically 2-5%), so that costs incurred 10 years from now are reduced by a factor of exp(-10*rate). I use this kind of discounting in cost-effectiveness analysis, and a good rough approximation to its effects is to assume that, if costs are incurred constantly over a human’s lifetime, only about 40% of the total costs a person might be expected to incur will actually be counted now.

    For example, if I am considering an intervention today that will save a life, and I assume that life will last 80 years, then from my perspective today that life is actually only worth about 30 years. This reflects the fact that the community prefers to save years of life now, rather than in 70 years’ time, and also the fact that a year of life saved in 20 years’ time from an intervention enacted today is only a virtual year of life – the person I save tomorrow could be hit by a bus next week, and all those saved life years will be splattered over the pavement. The same kinds of assumptions can be applied to hurricane damage – if I want to invest $16 billion now on a storm surge barrier for New York, I can’t offset the cost by savings from a $50 billion storm in 50 years’ time, because $16 billion is worth more to people now than in 50 years’ time, even if we don’t consider inflation. I would love to have $16 billion now, but I probably wouldn’t put much stock in a promise of $16 billion in 50 years’ time, and wouldn’t change my behavior much in order to receive it[1]. Stern is accused of rejecting this form of discounting, and essentially using a discount rate of 0%, so that future events have the same value as current events.
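    The 40% approximation and the 30-year figure both come out of the same arithmetic (a sketch, assuming a 3% continuous discount rate over an 80-year life):

    ```python
    import math

    RATE = 0.03  # 3% annual discount rate
    LIFE = 80    # years of life saved by the intervention

    # Present value of one life-year per year for LIFE years:
    # the integral of exp(-RATE * t) from 0 to LIFE.
    discounted_years = (1 - math.exp(-RATE * LIFE)) / RATE

    print(round(discounted_years))            # -> 30 years, from today's view
    print(round(discounted_years / LIFE, 2))  # -> 0.38, i.e. roughly 40%
    ```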

    There are arguments for using this type of discounting when discussing climate change, because climate change is an intergenerational issue and high discount rates (of e.g. 3%) fundamentally devalue future generations relative to our own. Standard discounting is probably a logic that should only be applied when considering decisions made by people about issues in their own lifetimes. This defense has been made (the wikipedia link lists some people who made it), and it’s worth noting that many of the conservative economists who criticized the Stern Review for its discounting choice implicitly use Stern’s type of discounting when they talk about government debt – they complain extensively about “saddling future generations” with “our” debt, when their preferred discounting method would basically render the cost to those generations of our debt at zero. This debate is perhaps another example of how economists are really just rhetoricists rather than philosophers. But for now, let’s assume that the Stern Review got its discounting wrong, and should have used a standard discounting process as described above.

    The Stern Review also made judgments about the effects of climate change, largely along the lines of the published literature and especially on the material made available to the world through previous rounds of IPCC reports. For example, if you actually access the Stern Review, you will note that a lot of the assumptions it makes about the effects of climate change are essentially related to the temperature trend. That is, it lists the effects of a 2C increase in temperature, and then applies them in its model at the point that the temperature crosses 2C. For example, from page 15 of Part II, chapter 5 (the figure), we have this statement:

    If storm intensity increases by 6%, as predicted by several climate models for a doubling of carbon dioxide or a 3°C rise in temperature, this could increase insurers’ capital requirements by over 90% for US hurricanes and 80% for Japanese typhoons – an additional $76 billion in today’s prices.

    The methods in the Stern Review are unclear, but this seems to be suggesting that the damage due to climate change is delayed in the analysis until temperature rises by 3C[2] – which will happen many years from now, in most climate models.

    The assumptions in the Stern Review seem to be that the worst effects of climate change will begin many years from now, perhaps after 2020, and many (such as increased storm damage) will have to wait until the temperature passes 2C. There seems to be an assumption of a linear increase in storm damage, for example, which loads most storm damage into the far future.

    This loading of storm and drought damage into the far future is the reason the discount issue became so important. If the storm damage is in the far future, then it needs to be heavily discounted, and the argument becomes that we should wait until much closer to the time to begin mitigating climate change. This argument is flawed for other reasons (you can’t stop climate change overnight; you have to act now because it’s the carbon budget, not the rate of emissions, that is most important to future damage), but it is valid as it applies to the debate about whether we should be acting to prevent climate change or to prepare for it.

    However, recent events have shown that this is irrelevant. Severe storm damage and droughts are happening now, and at least in the Atlantic rim these events are probably related to the collapse of the arctic ice load, and reductions in snow albedo across the far north. Stern’s analysis was based on most of these events happening in the far future, not now, and as a result his analysis has two huge flaws:

    1. It underestimates the total damage due to climate change. Most economic analyses of this kind are conducted over a fixed time frame (e.g. 100 years), but for any fixed time frame, a model that assumes a gradual increase in damage over time is going to underestimate the total amount of damage that occurs over the period relative to a model that assumes that the damage begins now. Stern couldn’t assume the damage begins now, because those kinds of things weren’t known in 2006. But it has begun now – we need to accept that the IPCC was wrong in its core predictions. That means that the total damage occurring in the next 100 years is not going to be $X per year between 2050 and 2100, but $X per year between 2010 and 2100 – nearly twice as much damage.
    2. The discount rate becomes irrelevant. Discount rates affect events far in the future, and have minimal effect now. If Stern had used a standard discount rate of 3%, then from his perspective in 2006 the current estimates of storm damage in the USA due to Sandy ($50 billion) would be about $42 billion. Also, all the damage in the USA due to Sandy is excess damage, because without the collapse of the arctic ice fields, Sandy would probably have headed out to sea, and done 0 damage. The estimated cost of the storm surge barrier mentioned above was $16 billion, so assuming that this cost is correct (unlikely) and it could have been built by now (impossible), that investment alone would have been worthwhile. Whereas if we assume a storm like Sandy won’t happen until 2050, the cost of the storm from Stern’s perspective is $14 billion, and we shouldn’t bother building the barrier now.
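    The discounting in point 2 can be sketched as follows (assuming a 3% continuous rate, with costs in billions of dollars viewed from Stern’s 2006 vantage point):

    ```python
    import math

    RATE = 0.03  # the standard discount rate Stern is accused of not using

    def present_value(cost, years_ahead, r=RATE):
        """Discount a future cost back to the present: cost * exp(-r * t)."""
        return cost * math.exp(-r * years_ahead)

    # Sandy's ~$50bn of damage in 2012, viewed from 2006 (6 years ahead)
    print(round(present_value(50, 6)))      # -> 42 ($ billion)

    # The same storm assumed not to occur until 2050 (44 years ahead)
    print(round(present_value(50, 44), 1))  # -> 13.4 ($ billion; ~13.6 with
                                            #    annual compounding, hence ~$14bn)
    ```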

    This means that the main conservative criticism of the Stern Review is now irrelevant – all that arcane debate about whether it’s more moral to value our future generations equally with now (Amartya Sen[3]) or whether we should focus on building wealth now and let our kids deal with the fallout (National Review Online) becomes irrelevant, because the damage has started now, and is very real to us, not to our potential grandchildren.

    The bigger criticism that needs to be put is that Stern and the IPCC got climate change wrong. The world is looking at potentially serious food shortages next year, and in the last two years New York has experienced two major storm events (remember Irene’s storm surge was only 30cm below the level required to achieve the flooding we saw this week). Sandy occurred because of a freak coincidence of three events that are all connected in some way to global warming. We need to stop saying “it’s just weather” and start recognizing that we have entered the era of extreme events. Instead of writing reviews about what this generation needs to do to protect the environment for its children, we need to be writing reviews about what this generation can do to protect itself. Or better still, stop writing reviews and start acting.

    fn1: This is a problem that has beset the organized religions for millennia. An eternity in heaven is actually not equivalent to many years on earth, if you discount it at 3% a year.

    fn2: Incidentally, I’m pretty sure I was taught in physics that the use of the degree symbol in representing temperatures is incorrect. Stern uses the degree symbol. Economists!!! Sheesh!

    fn3: Incidentally, I think in his published work, Sen uses the standard discounting method.

  • A few weeks ago I put up a post about the research challenges in studying online communities, in which I suggested that online surveys are an essential but flawed tool for the study of communities that are largely defined by their internet presence rather than their physical presence. My post was in the context of the controversy over Lewandowsky’s analysis of data from online climate “skeptics,” but in contrasting online communities with marginalized or stigmatized groups, I implied that online surveys are part of a broader problem in research, dating from before the internet, in how to access people who are not easy to trace or very rare.

    Today’s issue of the journal PLOS Medicine has an article about ethical considerations in online research, which is published in the context of online surveys of medical treatments and genome studies. The paper, available open access here, has a nice description of the types of bias that enter online studies, and also a discussion of the ethical implications that arise both from the nature of these biases and from the general properties of online surveys. They mention the possibility that data will be shared and the importance of telling participants, something which I suspect neither Lewandowsky nor the follow-up survey at WUWT explained to their participants; they also mention the possibility of data sales and the additional complexities of obtaining consent from non-identifiable participants.

    The article doesn’t make any revolutionary claims or present any strong judgments about whether online research is bad, good or better than other forms of research, but it does note the growth of this type of research, discusses its applicability and generalizability, and points out some of the ethical implications that arise from the ease with which people can conduct online research. I was certainly surprised to read the kind of data people are willing to exchange with online research organizations of the type identified in the article and I wonder if there is a broader problem here in that our technological ability to collect private and sensitive data from strangers has outpaced the community’s understanding of the risks and ethics of such practice. If the wash-up of the Lewandowsky affair has you wondering about the broader issues surrounding that type of research, then the article is certainly worth reading.

  • image

    Last night I went to a work-related dinner with foreign guests, and my students did the stereotypical ’80s Japanese thing of subjecting the foreigners to every gross food that Japanese people eat, without warning. Uterus, raw octopus with horseradish, frogs legs, stewed guts, grilled chicken skin, pickled plums, some kind of tiny fish that tastes like socks… and inago, grilled grasshopper. Me being one too many sheets to the wind (and having secret theories about the morality of insect eating), I tried it.

    Inago is actually very good. It is sweet-salty, crunchy and bite sized. I was expecting some kind of gross prawn-brain flavour that needs to be washed down with beer (and carries the attendant risk of an explosion of fishy aftertaste, which the Japanese call namakusai and appreciate). It has none of these things, so is vastly superior as a beer snack to either dried squid or deep-fried whole prawns. I recommend grasshopper, though I can’t vouch for the effect of eating a whole bowl.

    If you want to try this “delicacy,” the old-fashioned restaurant chain Hanbei (半兵ヱ) sells them, and has an English menu. There is one in Shibuya near the station. Good luck!