• When I played AD&D I think one of the first aspects of its magic system I dropped was the material components. It’s a shame, but they just represented too much of a constraint on what was already a hideously underpowered class (especially at first level). Some of the material components even for first level spells are quite challenging to provide, and they’re consumed in the casting of the spell. Consider, for example, the following spells:

    • Alarm: A tiny bell and a very fine piece of silver wire
    • Armor: A piece of finely cured leather that has been blessed by a priest
    • Color Spray: A pinch each of powder or sand colored red, blue and yellow
    • Dancing Lights: A bit of phosphorus or wychwood, or a glowworm
    • Friends: Chalk, lampblack and vermillion
    • Identify: A pearl worth 100 gp and an owl feather soaked in wine
    • Light: A firefly or a piece of phosphorescent moss
    • Protection from Evil: Powdered silver

    and so on. The spells Burning Hands, Detect Magic, Charm Person and Magic Missile require no material components of any kind. These material components are very cool and really add to the romance and style of wizards, but they’re an enormous burden, especially on low level wizards. A first level wizard starts with 20-50 gp, so will not be able to cast Identify and probably can’t afford the ingredients for Protection from Evil, Dancing Lights or Color Spray in most medieval settings. That’s without considering the difficulty of carrying phosphorus, glow-worms and phosphorescent moss. Some of these spells also can’t be cast in the casting time given in their description, because the ingredients need to be steeped, smeared or scattered in a circle. Find Familiar, much more powerful than its 3rd Edition version, requires 1,000 gp of herbs and incense. Even Sleep is probably beyond the reach of a lot of wizards, requiring as it does a pinch of sand – sand would have been a rare sight in 12th Century Glastonbury, I’m willing to bet. So here you have a first level wizard with 40 gp, and before he goes adventuring he needs to gather together a piece of silver wire, several portions of powdered silver, a collection of tiny bells, some phosphorescent moss, some sand and a drop of bitumen (!! for Spider Climb).

    One can imagine what happens if the party kills a gnome, who has a small adamantite file in his toolkit. The file is worth 50 gp and everyone else just wants to sell it, but the Wizard recognizes here an opportunity to make himself self-sufficient in powdered minerals, and snaffles it up. A libertarian party would probably charge him a 200 gp premium for it[1]. And at higher levels it gets ridiculous, of course:

    • Invisibility: An eyelash encased in gum arabic[2]
    • Melf’s Minute Meteors: nitrite[3], sulphur, pine tar and a (reusable) fine tube of gold worth 1,000 gp
    • Evard’s Black Tentacles: a piece of tentacle from a giant octopus or squid
    • Feeblemind: a handful of clay, crystal, glass or mineral spheres
    • Chain Lightning: A piece of fur, an amber, glass, or crystal rod, and a small silver pin for each experience level of the wizard

    Some of these material components are very, very difficult to get hold of. I doubt I could get most of them easily, even living in Tokyo. If one were to rigorously adhere to the spell components rules, every wizard would need the regular services of an alchemist, a silversmith, a blacksmith, and a couple of other extremely talented craftspeople; the wizard would also need to be very assiduous about cutting up and preserving any roadkill or adventure-kill he or she came across. There’s no doubt that this sort of thing makes these PCs much more interesting, but it also makes them virtually unplayable, because it essentially restricts the number of spells the PC knows in any one day, as well as the number they can cast – effectively it puts a bunch of spells beyond the PC’s reach at any time, while maintaining daily limits on those that the player does have the ability to use. A good example is Identify: a wizard at first level can’t use it, but by second level may be able to afford a pearl of suitable value. They can then cast the spell; but they can only cast it once, on one object, and they can’t cast it in the dungeon because they can only memorize two spells a day and they need Shield and Magic Missile in the dungeon. So the party stumbles upon a ring that may be of great use right there and then, but the wizard can’t cast the spell even though it was a week’s work to find the owl feather and the pearl. So then they have to wait till they leave the dungeon, at which point they have a second item to identify but they can’t do so because they don’t have enough ingredients. Alternatively, suppose that the wizard has spent all their treasure on pearls and owl feathers; they can still only cast the spell once today, because they couldn’t memorize more than two spells; but the party is pressed, and has found a magic sword and armour that they really need to use now, in the dungeon. Even though the wizard has spent his last money on two pearls and two owl feathers, he can only identify one item today.

    Suppose then, that instead of using the standard approach to magic of AD&D, one introduced a simpler system in which a wizard can cast any spell they know as often as they like, provided they have the material components. This would mean that the wizard would usually have some spells (such as Burning Hands) on rotation, but I don’t see this as a bad thing. A first level wizard with Burning Hands once per round at will can do 1d3+2 hp damage per round to anyone within combat range (save for half). It’s not a game changer; free use of Magic Missile makes a high level wizard pretty scary, doing 10-25 damage per round with no saving throw, but a few tweaks on minor spells (e.g. fixing Magic Missile at a maximum of two missiles) would easily solve that problem. Alternatively, you could give these spells simple material components: Magic Missile could require an arrow per missile, for example. Burning Hands could require the wizard to be carrying a lit flame source, which is extinguished by the spell. This would reduce the spell to the potency of WFRP 3rd Edition, where wizards have basically unlimited spell use but mostly have to use one every other round.
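    As a rough check of the at-will Burning Hands numbers above, here is a quick simulation. The flat 50% save chance and the round-down halving are my assumptions for illustration, not rules quotes:

    ```python
    import random

    random.seed(1)

    def burning_hands_damage(save_chance=0.5):
        # One casting: 1d3+2 damage, halved (rounded down) on a successful save.
        # The 50% save chance is an assumed average, not a rulebook figure.
        dmg = random.randint(1, 3) + 2
        if random.random() < save_chance:
            dmg //= 2
        return dmg

    trials = 100_000
    avg = sum(burning_hands_damage() for _ in range(trials)) / trials
    print(f"average damage per round: {avg:.2f}")   # about 2.8
    ```

    Somewhere under three points a round at will is steady but hardly game-breaking, which is the point being made above.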

    Even for high level spells with simple components, like the Bigby’s Hand spells, this method wouldn’t lead to infinite amounts of spell casting. Bigby’s Hand requires a glove; no one can realistically carry more than, say, 10 gloves in their equipment if they also have to carry: a small bag full of crystal spheres; a collection of test tubes carrying the components for Melf’s Minute Meteors and Invisibility; 8 or 10 small pouches of different powders, nitrites and the like; a sheath or case with several different rods; some vials of acids, pure water, tears, etc; additional pouches carrying fur, bits of leather, feathers and wings; a jar with a pickled piece of a giant octopus tentacle; a small cage of fireflies; a pestle and mortar to crush gems with; a couple of miniature platinum swords; and a collection of iron, silver, and bronze mirrors. Sure, this would make the task of spell-casting a little like a complex system of inventorying, but you could handle it, I’m sure, and if it’s hard for the player imagine how complex it is for the PC! You could also argue that if a Wizard is carrying components for more than, say, 5 spells on their person, they can’t cast a spell every round (they need a round to find the item[4]).

    Furthermore, one could introduce different effects for more imaginative components. E.g. Invisibility lasts a round longer if the eyelash is from a thief (handy if you have a thief in the party); the component is never destroyed if the eyelash is from an Invisible Stalker. Water from another plane makes a spell that uses it more powerful, and the effect of spells like Identify is enhanced with more expensive pearls or more esoteric feathers (e.g. from a Sphinx). Expending a magic arrow adds one to the damage of a Magic Missile spell, and so on. You could also rule that every time a wizard is struck in combat one of their more fragile components is damaged or destroyed (randomly determined). It would also make wizards very eager to kill or capture each other, since they can loot their rivals’ components as well as their spell book.

    Power limits could be obtained easily by dividing wizards into specialties, so that from first level they are limited only to conjuring or evocation, etc. Many RPGs do this, so that wizards have access to very few spells over their career. This would prevent a single wizard from being able to cast Burning Hands (alteration), Magic Missile (evocation), Charm Person (enchantment), and Chill Touch (necromancy). I would make the conjuration, divination and abjuration specialties common to all wizards and then force them to choose one of the other four.

    fn1: libertarian parties probably last as long as the first Cure Light Wounds spell, and then decide socialism is the way to go.

    fn2: According to Wikipedia, gum arabic was an extremely valuable export commodity and is an essential ingredient in soft drinks, and the Sudanese president recently implied he could bring down the western world by suspending its export.

    fn3: I find it hard to believe that nitrite was readily available in the medieval world; nitrates were, in the form of saltpeter, but again that’s not exactly your common-or-garden middle-ages corner store product.

    fn4: This could be a good rule for PCs with more than 5 magic items in general, I think.

  • In the Australian state of New South Wales, final year mathematics exams were held a few days ago and the Sydney Morning Herald reports the advanced maths exam was “cruel and difficult.” Students on some message board are posting sad messages saying they might as well not have bothered because it was so hard, and some teacher says:

    I am appalled that an examination committee could set such a difficult paper which gives the competent student little chance to show what they know

    Poor kids! I was interested in this because when I did my year 12 (in South Australia) in 1990, the NSW assessment was famously challenging, and we were in awe of the effort the students put in. There’s a certain pride that comes from completing a year 12 advanced maths exam, and I can understand why, even if the results are scaled (so you don’t fail if the exam was too hard), it’s discouraging and mean to put out an exam that is too hard for the subject content. I’m also interested because in my opinion Australians are much more numerate than the British, but much less so than the Japanese, and I’m interested in our educational trajectory.
    Fortunately, the Herald also gives an example from this exam, and here it is:

    A game is played by throwing darts at a target. A player can choose to throw two or three darts.

    Darcy plays two games. In Game 1, he chooses to throw two darts, and wins if he hits the target at least once. In Game 2, he chooses to throw three darts, and wins if he hits the target at least twice.

    The probability that Darcy hits the target on any throw is p, where 0 < p < 1.

    (i) Show that the probability that Darcy wins Game 1 is 2p – p².

    (ii) Show that the probability that Darcy wins Game 2 is 3p² – 2p³.

    (iii) Prove that Darcy is more likely to win Game 1 than Game 2.

    (iv) Find the value of p for which Darcy is twice as likely to win Game 1 as he is to win Game 2.

    So I’m interested to know … do my readers think this is challenging? I did it on a single sheet of paper in 10 minutes yesterday, and it really didn’t seem tough. Admittedly I should be able to do this stuff quickly, but when I compare it to the work I did in 1990 it doesn’t seem very hard at all. Questions i and ii are basic applications of probability theory, without even any conditional or joint probability questions; part ii requires use of basic combinatorics, but I remember this stuff was not too hard in year 12 when I did it. Questions iii and iv are trivial exercises in problem solving with quadratics: you need to do a sign diagram for iii and complete the square of a quadratic for iv, but if you can’t identify and solve such a problem in year 12 surely you have stuffed up somewhere? Also, you don’t need to get i and ii right to do iii and iv, which in my opinion is very far from cruel. I would have been very happy to see that option in an exam when I was doing year 12! Basically, the first two questions are year 11 level probability (at most!) and the last two are year 10 functions.
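    For anyone who wants to check their working, the whole question can be verified in a few lines of Python. The closed forms and the root below follow from the algebra in the question itself; nothing here is from the Herald’s marking scheme:

    ```python
    import math

    def p_game1(p):
        # Game 1: at least one hit from two throws = 1 - P(miss both)
        return 1 - (1 - p)**2              # = 2p - p^2

    def p_game2(p):
        # Game 2: at least two hits from three throws = exactly two + all three
        return 3 * p**2 * (1 - p) + p**3   # = 3p^2 - 2p^3

    # (i), (ii): the closed forms match; (iii): Game 1 is always more likely,
    # since the difference factorises as 2p(1 - p)^2 > 0 for 0 < p < 1
    for q in (0.1, 0.5, 0.9):
        assert abs(p_game1(q) - (2*q - q**2)) < 1e-12
        assert abs(p_game2(q) - (3*q**2 - 2*q**3)) < 1e-12
        assert p_game1(q) > p_game2(q)

    # (iv): setting 2p - p^2 = 2(3p^2 - 2p^3) and dividing through by p
    # gives 4p^2 - 7p + 2 = 0, whose only root in (0, 1) is:
    p = (7 - math.sqrt(17)) / 8
    print(round(p, 4))                     # 0.3596
    assert abs(p_game1(p) - 2 * p_game2(p)) < 1e-9
    ```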

    So I’m wondering, have standards slipped in Australia in the last 20 years, or am I turning into one of those teachers I hated when I was at university, who say “this is trivial high school maths” as they introduce a path integral that can only be solved numerically? I’m pretty sure it’s the former (or the question the Herald gave is not representative) and 38% of people who answered the poll on the Herald website agree with me. Dissenting opinions (and reminiscences about the horrors of your own school days) are welcome in comments…

    Update: I found on reddit some photos of two other questions: question 5 and question 7. I think these both look tough though I think I could do question 7 (I think you use differentiation and a change of variables in part i, then ii and iii are just straight nasty old manipulation; though maybe part i is induction). I’ve always been terrible at trigonometry, and I remember fluffing a question very similar to (possibly the same as!) number 5 in my exam in 1990. I don’t think I’d do better this time round. But I’m not sure that this material is excessive for a year 12 maths exam; maybe question 7 is more a first year university question …? But I don’t think so. Kids should be doing series and induction in year 12 for sure …

  • The Guardian has 6 pictures from an early collection of Tolkien’s sketches for the Hobbit, that were apparently discovered recently. I particularly like number 3, which despite its roughness gives the sky and Smaug a certain vitality.

  • In comments to my post on shitty GMing it has been suggested that the problem simply came down to a GM who was running the game as a “neutral arbiter,” and that had I known that I wouldn’t have felt hard done by. Putting aside the particular exigencies of that case, I don’t believe that it’s possible for a GM to be a truly neutral arbiter, nor do I think that it’s particularly desirable. Here I shall give some reasons why it’s not possible, with examples from the module that we played during the particular case in question (which is available online here), and then give my preferred role for the GM in play.

    The Problem of GM Preferences

    The GM participates in the game for his or her own fun, and is not actually a referee in the strict sense of the word. Every GM brings their own preferences for gameplay and interaction to the table, and it’s inevitable that the GM will reward play that matches their preferences, and discourage play that doesn’t. In a one-off game this may not be noticeable but in an ongoing group the players get used to the GM’s preferences and change their play accordingly (usually). The players are usually aware that the GM also needs to enjoy the game, and they do tend to adapt accordingly. But if they don’t, the GM has – and will generally use – a variety of techniques to ensure that the game will be rewarding for the GM as well as the players. I don’t think it’s possible for a GM to remain neutral while pursuing their own fun.

    The Problem of Shared Experience

    Much of an RPG in practice proceeds according to a series of descriptions by the GM, and responses by the players. How the players respond depends on their understanding of what the GM told them, and in my experience as both player and GM, what the players understand of what the GM told them is very different to either a) what the GM expects them to understand or b) what the GM thinks they understand. Things the GM thinks are obvious remain completely invisible to the players; things the players focus on are irrelevant to the GM. It becomes the GM’s responsibility to do something about this: whether the response is one of correcting player misconceptions or riffing off those misconceptions, neither response is neutral. The two candidates for a genuinely neutral response are to correct the players’ misconceptions (so there is no risk that the shared experience is corrupted by the medium of expression) or to ignore them (being “neutral”). I think many people who think it’s possible for a GM to be neutral couldn’t even agree on which of these actions is the mark of a neutral GM, or even which is possible. In reality I think the concept of a neutral arbiter relies, in gaming just as in real life, on the assumption that information exchange is perfect. This just doesn’t happen in games, and it’s no one’s fault that players suddenly yell “I’ll jump out the window!” when you’ve just described a subterranean room with no windows; it happens all the time. Players are tired, checking facebook, drinking beer, reading a spell description, checking whether they have used up that item … and you’re imparting a crucial piece of information that they not only fail to hear but fail to realize is crucial.

    This problem is especially pernicious where the game depends on setting-specific knowledge. In this case the “neutral” GM has to decide which aspects of setting knowledge the PCs already know (and thus what the players can learn for nothing) and what they are supposed to find out the hard way. This is not the kind of information that has even a concept of neutrality attached to it.

    The Problem of Knowledge

    Everyone who comes to an RPG has their own specific knowledge and real life experience, and this has a significant bearing on their understanding of the game world. What people believe is possible or impossible, what they think their PCs can and can’t do, what they even think of doing with their PCs, depends on their understanding of the world they’re in. Recently Hill Cantons reproduced a few “design notes” from two popular RPGs, and the attitude towards knowledge in one of them (Chivalry and Sorcery, I think) was noteworthy:

    We believe that it is necessary to provide a coherent world if fantasy roleplaying is to be a coherent activity…[Feudalism] also has the virtue of being a real way of life, existing for well over 1000 years in Europe…The feudal system was a working culture, and thus it can be used to very good effect as a model on which to base a fantasy role playing culture that will also work, often to the finest detail.

    This kind of attitude towards setting obviously assumes that everyone playing understands what a feudal world is and how it works. But this is almost never true. Lots of people know almost nothing about the “real way of life” under feudalism, and everyone brings their own prejudices and misconceptions to the setting. The most important of these prejudices and misconceptions are, obviously, those of the GM. What is possible politically, socially and financially in a feudal world is completely dependent on the GM, and there is no sense in which the GM can be neutral in arbitrating this stuff. Provided you stick to a set of disconnected module-based dungeon crawls this may not be an issue, but as soon as you aim for a game more complex than killing people and stealing their stuff, conflict between GM and players over assumptions and knowledge will enter the game.

    This conflict also occurs in task resolution and challenges. A GM who is experienced in rock climbing and mountaineering will have a different concept of what is possible in these settings than one who is experienced in surfing or computing. I think lots of gamers are know-it-all nerds who think they have a good grounding in a wide range of knowledge, but in general they’re straight-up wrong about most of their wikipedia-based insights; and often very stubborn about defending them to boot. The GM may think he or she is being neutral in arguing that it’s not possible to do X, but if there is someone in the group who is familiar with X and didn’t learn about it yesterday on a dodgy message board, they’re likely to misinterpret the GM’s neutrality as pig-headed stupidity. The GM is not a database of unbiased knowledge; which way their biases lean depends heavily on what they know and what they don’t, and how they value the knowledge they do have.

    The Problem of Facilitation

    The GM is usually charged with the task of resolving conflicts within a group that is often composed of people with little in common except their desire to game together. This manifests most commonly as a need to control the more ebullient and aggressive players, and to draw out the shyer and more timid players. It’s not possible to do this and remain neutral, because it involves favoring some people and being stern with others. Furthermore, the GM often has to resolve conflicts about actions and consequences, and occasionally quite bitter disputes about (for example) treasure, PC conflict, and game direction. Sometimes the GM has to shut up a player who is dominating the game beyond any kind of reasonably allotted time, and if a player is disrupting the group it is usually the GM who is charged with the task of deciding what to do (and communicating it to that player). Who, if not the GM, gets charged with the task of delicately explaining to the neckbeard that they stink and need to wash before attending sessions? Oh, the joy of GMing. And when the GM does this they bring their own social biases and problems to the fore, and usually don’t stay neutral for very long – and they are usually responding to a group dynamic that they only have partial control over. It’s very hard for GMs to stay neutral in these situations, just as it’s hard for GMs to avoid playing favorites, or getting pissed off with particular players and acting irrationally, and so on. Some players just have a style that a GM will like or hate, and it will be rewarded or punished accordingly. This is not neutrality.

    The Demands of the Module

    Using the Rahasia module as an example, we can see a few immediate situations where the GM is tasked with a non-neutral stance by the designer, or set challenges that demand a departure from running the game-as-written. The Rahasia module introduction suggests that the GM

    Encourage the players to think of ways of capturing and defeating the witches without inflicting physical damage

    and the game is built on the assumption that GM and players will go along with this idea. This sets up a framework – including penalties of lost experience points – that is very far from neutral. Furthermore, the background information about the dungeon itself is very limited and not much at all is said about the structure of the dungeon. The trapdoor through which I climbed to my death is described thus:

    Directly behind the statue, in the floor of the temple, is a secret door that opens over a staircase to the lower treasure room

    No information is given anywhere about whether secret doors are locked or how to handle them, so the decision to make the room accessible to anyone from below is implicitly up to the GM. A decision to allow access is a decision by the GM to make the dungeon more dangerous; it might be taken unthinkingly or deliberately, but it’s not a neutral decision. Especially in light of this statement about the golem in that room:

    This golem hopelessly outclasses any typical party, so the players must think of a way past this creature (the robes work, of course)

    This statement makes it clear that the adventure is not supposed to funnel the players into conflict with the golem; they aren’t at any point meant to be its match. Instead, the GM has to at least give the players a chance to stop and assess the situation and find a way to know that the golem is there. Allowing them to access the foot of the statue as soon as they enter the dungeon is not consistent with the intention of the module, but the module nowhere makes clear a way to avoid this. The GM’s decisions about trap doors, use of portals, and ways of passing through the dungeon are tied in with the nature of this final beast, and the option of playing the module “as written” is a dead one. The GM must choose a non-neutral position on this module in order to run it in the sense that it was intended.

    The Fallacy of Behaviorism

    Another common view I read on the internet about GMing and player reactions is the idea that players “learn” from their mistakes, and the GM has a role as a “teacher” to help them understand the risks of the world they’re in. This is particularly common in old school play, in my experience. I think this is both fallacious and patronizing. It’s patronizing because we’re all adults, and I don’t give up hours of my downtime to be schooled in the harsh “realities” of fantasy life by a self-important neckbeard. I want to play in a shared world where my understanding of that world is assumed to be an adult’s understanding, and my mistakes are handled, not judged. But it’s also fallacious. Adults don’t learn in this way, and punishing adults for their mistakes is pointless; thinking that adults will learn this way is a classic example of a fallacy based on regression to the mean. Furthermore, what the GM may think is a mistake, the players may think was a reasonable action. On top of this, there is an additional piece of behaviorist nonsense. Most of us learnt the game as teenagers being taught by bad teenage GMs in fairly immature social settings. If this behaviorist approach to learning from “mistakes” has any truth to it, by the time we get to game as mature adults we’re going to be well past correction, and will be gaming primarily based on the experiences of our (mostly crap) teen years. If so, “teaching” us is going to have to be done some other way, and is going to involve the GM coming down from their neutral pedestal to make judgements about what is wrong with our play style. But who’s to say, given the backgrounds of the adult participants in this hobby, that it’s the players who made all the mistakes? Just as likely it’s the GM who needs to be “taught” about their mistakes. The best approach is to drop this ideal altogether and accept that everyone involved in the game is probably flawed, and that their flaws and mistakes demand understanding rather than “teaching.”

    The GM as Facilitator

    I think the GM is inherently biased: he or she is there to enjoy a game, and wants the game to run in a way that entertains him or her. But on top of this, the GM is charged with preparing for the game, managing conflicts, and ensuring that the players have fun. These conflicting tasks are inconsistent with a neutral position, just as the players’ role is inconsistent with a purely selfish one (they are also meant to be aware of the work the GM has put in, his or her desire to enjoy the game, and the needs and perspectives of their fellow players). The GM thus functions best as a facilitator, ensuring that the players enjoy a game full of challenges and exciting situations, in which they will have fun and everyone will get what they are looking for. A neutral GM cannot make this happen, and I don’t believe it’s possible for someone to be a neutral GM to start with. There are too many conflicting pressures and responsibilities for the GM to remain neutral in all circumstances. By pretending that this is possible, we simply create a set of false assumptions and expectations that let everyone down: better to understand everyone’s biases and perspectives upfront, and respond accordingly, than to try and pretend they can all be hidden or put aside during an activity that, in its own way, can be as frantic, demanding and engrossing as anything else that adults do.


  • In looking at the cost-effectiveness of health interventions in fantasy communities we have shown that the infinite lifespan of elves creates analytical problems, and other commenters have suggested that the cost-effectiveness of clerical interventions to reduce infant mortality should be balanced against the need for clerics to go to war. Well, Professor John Quiggin at the Crooked Timber blog recently broached the issue of doing a benefit-cost analysis of US military spending, and has found that the US defense department has killed a million Americans since 2001. His benefit-cost analysis is really just an exercise in peskiness, though it does have a valid underlying point, and I think actually you could show with a simple cost-effectiveness analysis that the wars of the last 10 years have, under quite reasonable assumptions, not been a cost-effective use of American money. Of course, we don’t make judgments about military spending on cost-effectiveness or cost-benefit grounds.

    In comments at Crooked Timber[1], I listed a few examples of how US Defense Department money could be better spent, and one of those examples was vaccination. Obviously, disease eradication would be a very good use of this money, because of its long-term implications, but in thinking about the cost-effectiveness (or cost-benefit) of this particular intervention, I think we can see another clear example of how these purely economic approaches to important policy debates just don’t work. So, here I’m going to look at this in a little more detail, and give some examples of how we can come to outrageous policy conclusions through looking at things through a purely econometric lens. I think I came to this way of thinking by considering the cost-effectiveness of interventions in elven communities, and ultimately it’s relevant to the debate on global warming, because a common denialist tactic is to demand that AGW abatement strategies be assessed entirely in terms of cost-benefit analyses, which are very hard to do and, as one can see from the comments thread at Crooked Timber, are anathema to supporters of the military establishment. As we can see here, they also break down in quite plausible real-life circumstances.

    The Problem of Disease Eradication

    So, you’re the US president in 2001, and you’re reading a book on goats to some schoolkids, and as happens in this situation, you have to make a snap decision about how to spend US$200 billion over the next 10 years. You could spend it going to war with a small nation that harbours terrorists; let’s suppose that if you don’t, your country will be subject to one 9/11-style attack every year for the next 20 years (until OBL dies). If you do, you’ll commit your own and the next administration to spending US$200 billion. Is this a good use of your money? US$200 billion to save about 50,000 US lives over 20 years, minus the casualties (wikipedia tells me it’s about 5000). So you get a net benefit of 45,000 lives, or US$4,444,444 per life – this actually comes under the US government’s US$5 million-per-life-saved threshold, so it’s a viable use of your money. But one of your alternatives is to spend the money on eradicating HIV using a vaccine that was recently developed, and it has been shown that by spending US$200 billion over 10 years you could eliminate HIV from the face of the earth. You don’t care about the face of the earth, but you need to eradicate it everywhere to make Americans safe from it. Should you ignore the terrorist attacks and spend the money?
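    The arithmetic in that paragraph is easy to mis-type, so here it is as a sanity check, using the figures straight from the text:

    ```python
    # Sanity check of the war-on-terror cost-effectiveness figures above.
    cost = 200e9               # US$ spent over 10 years
    lives_saved = 50_000       # ~2,500 deaths a year from one 9/11-scale attack, over 20 years
    casualties = 5_000         # war dead (the Wikipedia figure cited in the text)

    net_lives = lives_saved - casualties               # 45,000
    cost_per_life = cost / net_lives
    print(f"US${cost_per_life:,.0f} per life saved")   # US$4,444,444
    assert cost_per_life < 5_000_000                   # under the US$5m-per-life threshold
    ```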

    For a standard cost-effectiveness analysis you would calculate the incremental benefit (in lives saved) from this vaccine compared to the war on terror. Lives saved in the future are discounted at a fixed rate (usually about 3% a year), so they decline in value over the term of the intervention. But the problem with this calculation for disease eradication (specifically) is that the term of the intervention is infinite. All future lives saved forever go into the calculation. The actual formula for this calculation is the integral, over all time, of the lives saved at time t multiplied by the negative exponential of (discount rate × t)[2]. Usually we model a policy over 20 or 30 years, giving a finite result; but in this case we have to model the benefit over all future time, and the number of lives saved each year grows with the population – and whenever that growth rate keeps pace with or exceeds the discount rate, the integral diverges. So even with furious discounting we get an infinite benefit from eradicating the disease. Not only does this make comparing disease eradication decisions – e.g. smallpox vs. HIV – impossible, but it makes comparing disease eradication to any other policy objective impossible, and it tells us – quite reasonably, I should say – that we should bend all our health care resources to this task.
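    Written out explicitly, with r the discount rate and L(t) the lives saved at time t (notation I’m introducing here, not from the footnote):

    ```latex
    % Discounted benefit of an intervention saving L(t) lives at time t
    B = \int_0^\infty L(t)\, e^{-rt}\, \mathrm{d}t

    % A constant stream L(t) = L_0 converges:
    B = \int_0^\infty L_0\, e^{-rt}\, \mathrm{d}t = \frac{L_0}{r}

    % A stream growing with population, L(t) = L_0 e^{gt}:
    B = \int_0^\infty L_0\, e^{(g-r)t}\, \mathrm{d}t =
      \begin{cases}
        \dfrac{L_0}{r-g} & \text{if } g < r \\[4pt]
        \infty           & \text{if } g \ge r
      \end{cases}
    ```

    Disease eradication is exactly the case where the stream of beneficiaries keeps growing forever, so the divergent branch is the one that matters.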

    In this case, the president of the USA should decide not to go to war because 20 September 11ths are a small price to pay for the eradication of HIV. Eventually Osama bin Laden will give up[3]; HIV won’t. But the stupidity of this decision doesn’t end here. If it costs a trillion dollars to eradicate HIV, the president would be better off defunding his army and paying the price than not; and if Mexico were to invade, killing a million Americans, the infinite benefit of having eradicated HIV would still outweigh the loss.

    Now, one argument against this logic is that you shouldn’t include the yet-unborn in a policy evaluation; yet this is standard practice. For example, in considering the cost-effectiveness of different interventions to reduce HIV transmission, we might run a model on the 15-64 year old population, and when we do this we allow for maturity into and out of the population; if we run the model for more than 15 years we are implicitly allowing the yet-unborn into the model. Furthermore, you could surely argue that modeling disease eradication without including the unborn devalues the whole concept – what is disease eradication except a project to protect the unborn generations of the future?

    So we can’t use econometric analyses by themselves to assess the value of interventions, because a perfectly reasonable economic analysis of a valid healthcare goal throws up an impossible contradiction. The world expects – with significant help from Bill Gates, I might add – to eliminate polio by 2015 and with the recent announcement of a vaccine for malaria you can bet that the international health movement will turn its gaze on that bastard protozoan next. And there is no economic argument you can mount against spending money on it – even if the cost is everything you own.

    Implications for the Global Warming Debate

    A common argument mounted by “hard-headed realists” and AGW deniers is that money spent on AGW mitigation needs to be justified by a solid cost-benefit analysis, because the alternative is to spend this money on targeting real problems now, especially in third world countries (often also the countries most vulnerable to AGW’s worst effects). Money spent on infant mortality now, they argue, is far better than money spent on AGW mitigation in the future – even if you accept that the negative effects of AGW are a certainty. This is a particularly powerful argument since we don’t have solid evidence for exactly how bad the effects of AGW will be, and we know that the future benefits of reducing infant mortality now are huge. This economic defense will usually also depend on discount rates – we’re much more interested in lives saved now than in the future, and AGW mitigation’s effects will be felt in the future, not now. Exactly what the relative benefits of mitigation will be is highly sensitive to the discount rate chosen.

    In this case, though, one can argue: well, let’s spend the entire defense department’s money on eradicating HIV. If we test everyone in Africa every 6 months – surely possible with the full funding of the US military on the case – and treat them immediately (or, hey, just treat everyone in Africa with anti-HIV drugs for the next 30 years – let’s put them in the water!) then we can eliminate HIV, and save an infinite number of lives. It’s guaranteed on both cost-benefit and cost-effectiveness grounds, with the added benefit that you don’t need to quibble over the discount rate – it’s guaranteed to be cost-effective under any finite discount rate. The natural argument against this will be that someone might invade America. But we can say in response to this, “uh uh! Precautionary principle! You don’t know how bad that invasion will be or even if it will happen.” If the precautionary principle doesn’t apply to the putative risks of AGW, why should it apply to defense? Or rather, if we need to attach a monetary value to the future risks of AGW, why not attach one to the future invasion of the USA? And when we do, it will be of lower value than the benefits from elimination of HIV, even if the entire population is wiped out during the invasion.

    Which brings us back to the simple problem that we can’t assess any policy in isolation using only the econometric tools at our disposal. Everyone understands this, of course, which is why people on the Crooked Timber thread are bridling at Professor Quiggin’s analysis. They attach additional, non-economic considerations to these problems. But one of the rear-guard actions of the anti-AGW movement is to demand that we use exclusively economic methods for assessing the value of AGW mitigation – and it was in response to this fiction that the Stern review was commissioned. I think it needs to be recognized that these econometric tools offer false clarity, and only apply within a very limited framework, that of limited improvements in a limited temporal framework (pacemakers vs. aspirins, essentially). Defense, disease elimination, and AGW mitigation lie outside that framework. This should be abundantly clear to anyone who has tried to do a cost-effectiveness calculation of the relative merits of slavery and genocide for elven communities. It’s just a shame that most economists haven’t bent their mind to these truly important questions; fortunately, we at the C&C University are here to help with the more profound philosophical questions. No, don’t thank me, we do it for free. Or, alternatively, pick apart the argument in the comments … I’m eager to hear how a valid mathematical framework can be constructed for the analysis of disease eradication goals, because it’s relevant to my work…

    Update

    Actually while I was watching a band in Kichijoji at 3am last night I realized that my interpretation of the formula for total effectiveness in the disease eradication case was wrong[5]. Ultimately, the benefits that accrue from disease eradication are approximately (1/(discount rate))*average number of lives saved in any year. So for a discount rate of 3% and 1,000,000 lives saved per year from (e.g.) eradicating malaria you would get a total benefit of about 33 million lives. It’s not infinite but it’s very very large. So the general argument holds, but it is possible to compare disease eradication programs. Note that there’s an argument that can be made for a lower discount rate in the case of disease eradication (it is all about saving future generations, not the current generation) and even a small change in the discount rate makes a big difference to the outcome. Also, under certain conditions (exponential population growth bigger than the discount rate) the benefits of disease eradication are infinite; I think most people expect the population to stabilize at 7 billion though so this doesn’t apply on earth.
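    The corrected figure can be checked numerically: integrating discounted lives saved over all future time converges to (lives per year)/(discount rate), a very large but finite total. A sketch, using the figures above (3% discount rate, a million lives a year):

    ```python
    import math

    r = 0.03        # annual discount rate
    L = 1_000_000   # lives saved per year (e.g. malaria eradication)

    # Closed form: integral of L * exp(-r*t) dt from 0 to infinity = L / r
    closed_form = L / r  # ~33.3 million lives: very large, but finite

    # Numerical check: midpoint-rule integration, truncated at 2,000 years
    # (the discounted tail beyond that is vanishingly small)
    dt = 0.01
    numerical = sum(L * math.exp(-r * (k + 0.5) * dt) * dt
                    for k in range(int(2_000 / dt)))

    print(f"closed form: {closed_form:,.0f}, numerical: {numerical:,.0f}")

    # If lives saved instead grow exponentially at rate g, the integral becomes
    # L/(r - g): finite only while g < r, which is the infinite-benefit caveat above.
    ```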

    fn1: for historical reasons I comment there as sg

    fn2: or something similar

    fn3: Actually it’s an interesting question, isn’t it? If you ignore a terrorist who is incapable of waging a conventional war on you, refuse to give in to his demands, mount a purely law-enforcement operation to prevent his worst excesses, and wait him out, how long will it be before he just gives up and goes away? How long can OBL recruit people for if killing Americans leads to … nothing? And if after a few years the US said quietly to the Taliban, “we’ll give you a billion a year in aid if you get rid of him,” how long would it be before he had no safe bases?

    fn4: I find this very interesting. A few years ago it was getting hard to find doctors in the west who would perform circumcisions on babies; ten years ago doctors were equivocal on the issue and there has been a long-standing community opposition to circumcision for non-medical reasons; yet now we’re recommending it (and funding it!) en masse in African countries. I wonder how Americans would have felt if, in 1987, Nelson Mandela or Robert Mugabe had come to the USA and suggested that the solution to their growing HIV problem was to circumcise all adult gay men?

    fn5: I did this calculation only recently, so I really should have got this right from the start…

  • Definitely in the 1%

    I found this in the tumblr We are the 53%, your go-to page for people who vote against their own interests. Darth Vader and his stormtroopers are classic examples of people who voted against their own interests. You’re an exemplar for us all, Lord Vader!

  • No Place for the Warm-hearted

    This is the plan for a campaign setting in one of the earlier eras of my Compromise and Conceit campaign setting, to be run in English using Warhammer Fantasy Role-play 3. This campaign will be set in Svalbard in summer 1635, early in the period of time in which Europe began to rediscover magic, through infernalism. I discussed some reasons for the Svalbard setting some time ago, and I’ve recently done a little research that suggests setting it in the 17th century gives me an opportunity to combine political intrigue, pirates and polar exploration. It also gives a chance to test a campaign setting where the environment is itself an adversary for the PCs, and to explore some more of the political and infernal concepts of the Compromise and Conceit setting. The last adventure enabled my players to explore the complex and violent politics of the French and Indian war, and ultimately to change the course of American history. Maybe this time we can explore the possibilities inherent in Scandinavia.

    Svalbard in 1635: Political Context

    This era is the beginning of a long period of infernal exploration, and the near end of the Age of Discovery, which was still playing out in Northern Europe and the Arctic. Svalbard had only been discovered 40 years previously, and was not yet controlled by any single power. Instead, companies from different nations – primarily England, Denmark, France and Holland – would come to Svalbard in the summer for whaling and seal hunting, establishing camps along primarily national lines and hunting furiously during the limited months of sunlight. The nation states that backed these companies had limited authority out in the wilderness of Svalbard, and the whaling companies would come into often violent conflict with each other – even with companies from the same nation. These whaling companies were essentially freebooters, pirates with a semi-official backing from their home nation, and they would use quite vicious methods to ensure access to the lucrative whaling zones of what was then known as Spitsbergen. Political and mercantile tensions from Europe would be played out in these freezing waters.

    The main nation with a solid, long-term interest, however, was Denmark: at this time Denmark, Norway and Sweden had united under the Kalmar Union and had also absorbed Iceland, which had accepted Lutheranism 80 years earlier after the beheading of its last Catholic priest. By adding Spitsbergen to its crown Denmark would control all the islands of the Arctic, and access to the fabled Northwest Passage. It would also be able to exert control over lucrative whaling regions, and all the fisheries and any natural resources of those islands. During the middle part of the 17th century the Danish crown turned its attention to consolidating complete power over the union of Scandinavian nations, and although unable to back its claims of sovereignty over Svalbard with military force, was undoubtedly up to mischief on the island. With the rediscovery of magic in Europe, the Lutheran church also found itself facing a resurgence of interest in Odinism and paganism, and so the church too needed to extend its powers across the distant archipelago.

    Svalbard itself is a harsh environment for piracy or adventure, and in fact until 1634 no one had ever wintered on the island. The Little Ice Age was well underway, which meant sea ice on the northern and eastern edges of Svalbard for 9-10 months of the year, and freezing temperatures all year round. The north-eastern side of the archipelago was as yet unexplored, and even traversing the main island (Spitsbergen) was a formidable challenge for 17th century explorers. Against this political and environmental backdrop the Danish were attempting to establish a permanent presence on the island, sufficient to guarantee a long-term hold over the Arctic and its lucrative whale oil trade. At this time the full promise of Infernalism, and the materials and technologies it would make available to Europe, had not yet been revealed, and resources like whale oil were of great importance.

    Svalbard in 1635: Infernal Context

    With Shakespeare only recently dead and Marlowe long in his grave, the groundwork had been laid for the expansion of infernalism across Europe. Marlowe’s objections to the use of Demonology to bolster the power of King and God had been washed away in blood under suspicious circumstances 40 years earlier, giving Shakespeare 20 years to preach the gospel of Infernalism. His lessons had taken hold but the full benefits – magical and technological – that would flow from Infernalism, as well as its future challenges, were not yet known, and a diverse array of magical schools and colleges flourished throughout Europe. Their understanding of magic was fragmented and their power limited; Descartes had not yet written his Meditations or Principles, and the systematization of magic – as well as its restriction to a handful of schools – was not to come until the end of the century, under Newton, Leibniz and the years after the Glorious Revolution in England. For the period from Shakespeare’s death until the English civil war magic remained a kind of cottage industry, and its practitioners a diverse and unruly bunch.

    Settlements on Svalbard

    There are five main locations on Svalbard, numbered in the map above:

    1. Smeerenburg (“Blubber Town”): The Dutch settle at Smeerenburg in the summer, and hunt whales from here. Their activity is so frenzied, and the sights the settlement offers so disgusting, that the town was given the name “blubber-town” by those who work there. The Danes were driven out of Smeerenburg a few years ago, and now only a few Danish traders visit during the period of activity.
    2. Danskoya (Ny-Alesund): The combined settlement of Danish and Dutch whalers forms the de facto political base for these two nations, as well as a resupply base for Smeerenburg, which is further north, and the official point of communication with the English and French whalers to the south. This town is equally frenzied in its pursuit of whale oil, but also contains some non-whaling commercial activities, primarily hunting and trapping. It is also the first area of Svalbard to be turned into a permanent settlement. Just south of Danskoya is a small French settlement, called Refuge Francaise, largely dependent upon Danskoya for protection and resupply.
    3. The Silent Tower: A group of Norwegian monks have set up a small monastery here, in the ruins of an ancient stone tower that no one seems able to account for. The tower provides excellent protection from the elements and seems to have a permanent supply of fresh water, and the monks are able to winter in the tower. They have been doing so for at least the last 10 years, and no one really knows anything about them: they have taken a vow of silence, and most people assume that they see the long months of winter darkness as an opportunity for contemplation undistracted from the concerns of the mortal world.
    4. Ice Fjord: The main base of the London Whaling Company and the unofficial English government outpost, Ice Fjord has the best weather conditions in summer and is also blessed with the permanent monastery on its northern side. The London company wrested this base by force from the Danes a few years earlier, and although Danish boats may now dock here and some traders come and go, there is a tacit agreement that the Danes will engage in no whaling south of Prins Karls Forland, giving the British free rein over the whole south-western half of Spitsbergen. This doesn’t mean the companies don’t come into conflict, of course.
    5. Bell Sound: The base of the English Muscovy company, famous for having opened up trade with the Russians, but also for having lost a major sea battle with the London company a few years ago and having been driven into Bell Sound, a much less profitable whaling location than Ice Fjord. The two companies regularly come into conflict. There are rumours that the Muscovy company has begun to focus on overland exploration, and may also be prospecting inland of its camp, but of course no one knows anything about the commercial activities of this company.

    Aside from a few small survival huts set up in between the main outposts, these are the only established settlements on the island. Until 1635 the island was completely silent and dark in winter, save for the Silent Tower; it becomes a hive of frenzied activity in summer, focused on the mass slaughter of the whales that throng to the island. Against this backdrop various tales of murder, piracy, industrial espionage, sabotage and theft will be played out every summer. Anyone who survives the summer will leave the island rich with whale oil, but the death rate, like the stakes, is high.

    The First Adventure

    In 1634 the Danish wintered for the first time in their temporary settlement at Danskoya. The first winter squad consisted of only seven men, well supplied and dug into a deep and well-built shack. When the first Danish explorers arrived in spring 1635 the hut was empty, the men all gone, and some signs of a struggle could be seen. The Danish are concerned that one of the other companies on the island also over-wintered there, and launched a daring mid-winter raid to kill the Danish crew. If so, this has alarming implications both for what the other companies are willing to do and for their winter-survival technology. The Danish whaling company needs to send a squad of adventurers to Spitsbergen to investigate who did it and how. Once they know this they are to kill the people responsible. They will travel there under the guise of guards for a Danish royal expedition, which aims to draw maps of the whole archipelago over the next few summers. This expedition will spend the first summer traveling up the west coast conducting initial soundings and exploration, and so the PCs will be able to visit every settlement over the course of a few weeks, giving them a good sense of who is where and what they are doing. With the cartographer as cover, they can then visit any settlement they need to for further investigations.

    Simple, surely?

    This is to be my last post on what I’ve learnt from John Dower’s War Without Mercy, and it is also to be my most speculative. Did the feverish anti-Japanese propaganda of the Pacific war era influence at all the allies’ decision to engage in large-scale bombing of urban areas in Japan, and/or their decision to use nuclear weapons? In this sense I’m not interested in whether these tactics were “right” or “wrong,” though I think we can all take it as read that a decision to drop a nuclear weapon on a city is definitely wrong in anything except the most extreme of circumstances. My question is more about whether the decisions themselves, and our subsequent interpretation of them (which remains controversial), are clouded by the propaganda that was being used at the time, and by general beliefs about Japanese and allied behavior in the war, as they existed then and exist now.

    I have always accepted what for this post I will call the “standard” view of the urban bombing campaign and the nuclear attacks: that in the absence of convincing proof that they would be destroyed as a nation the Japanese were not going to surrender and were going to fight a long and protracted military campaign that would lead to the deaths of millions of Japanese and potentially hundreds of thousands of allied soldiers. In the standard view, the allies discovered on Okinawa that the invasion of the mainland was going to be a hideous affair, and decided to use terror bombing to bring the war to a close so that they didn’t have to expend many lives. This view can even take the pesky form of having been for the good of the Japanese too: I don’t think it’s hard to find examples of people saying that fewer civilians died in the bombing campaign than would have died if the allies had invaded the mainland.

    I have also read Dresden, which contains a passionate defense of the terror bombing of German cities on strategic grounds and argues that the frantic German efforts to defend major cities represented a huge drain on their military resources and hastened the end of the war. I’m inclined to accept this view of the strategic value of the terror bombings of Germany, and against the backdrop of all the horrors of that war I can understand why Stalin was pleading with the allies to do more of the same. But just because it worked in Germany doesn’t mean it was strategically necessary in the Pacific, and my suspicion is that decisions about when to start the bombing, how intense to make it, and why it was necessary, were influenced by the extreme propaganda about Japan. We have established that there was an eliminationist sentiment to this propaganda, that it was extremely racist and that the underlying principles of the propaganda were believed by the public and war planners alike. We also know that the allies got up to all manner of nasty war crimes in the Pacific, were not particularly inclined to see the Japanese as human, and that just as their behavior towards Japanese was different to Germans, so was their propaganda. So it doesn’t seem a stretch to me to imagine that the allies were also inclined to favor brutal tactics, and that decisions about the necessity of these tactics would be colored by some genuinely held beliefs about how unreasonable, crazy, childish and brutal the Japanese were. Also underlying the allied response to the Japanese is a need to remind the other “sub-humans” of the Pacific that rising up against the accepted international order is a very bad idea, and a fear that the Japanese “lesson” might be learnt by others in Malaysia and Indonesia. 
There are also a few examples from Dower’s book of specific beliefs about the unwillingness of the Japanese to surrender, and specific actions taken by the allies that suggest that the terror bombings weren’t embarked on reluctantly or purely for military/strategic reasons. I’ll cover these first.

    Beliefs About the Chances of Surrender

    The allies based their understanding of Japanese war-time thinking on a whole suite of crazy sociological theories about the Japanese psyche: that the nation was stuck in a child-like stage of development, that they were crazy, that they could not be reasoned with, and that they could not be trusted. Many allied planners seemed to think that the Japanese would use any kind of honourable or conditional surrender as a chance to regroup before attacking again, and the Japanese were generally viewed as treacherous and shifty. Dower describes the generally held view that the Japanese would need to be thoroughly defeated, possibly “to the last man” because their nation had a suicide psychology and needed a “psychological purge.” Allied planners may have expected the Japanese to behave as a nation the way they (also erroneously) believed Japanese individuals behaved: preferring suicide to surrender. Furthermore, Japanese treachery and savagery meant that only by the complete destruction of their current order could the Japanese desire to dominate Asia be prevented. Allied propaganda also maintained that the Japanese were “patient” and sinister (common traits ascribed to Orientals) and would happily wait 100 years to launch another war of domination, as Germany had done after World War 1, and so the only way to prevent them going to war again was their complete destruction. This view is particularly interesting because there really was no historical basis for thinking that the Japanese had a long-standing interest in dominating their region – they had chosen isolation over expansion, and their modern international military campaigns came only at the end of the 19th century, against China in 1894 and Russia in 1905. The allies were nonetheless willing to believe that the war represented a manifestation of some constant belief in Japanese culture.

    Lack of Interest in Surrender

    In addition to a general belief that Japanese did not surrender, allied soldiers and their leaders did not show much interest in obtaining surrender from their enemies. In military engagements allied soldiers would kill soldiers who did surrender, or would refuse to accept a surrender and force Japanese soldiers to fight on to their deaths. Dennis Warner reports this exchange between two high-ranking officers on Bougainville:

    “But sir, they are wounded and want to surrender,” a colonel protested [to a major general] at the edge of the cleared perimeter after a massive and unsuccessful Japanese attack.

    “You heard me, Colonel,” replied [the major general], who was only yards away from upstretched Japanese hands. “I want no prisoners. Shoot them all.”

    They were shot.

    Accounts from Marines on Okinawa suggest the same behavior there, and not just towards soldiers: marines also killed civilians. This account from a war correspondent summarizes the battlefield philosophy of the Americans:

    What kind of war do civilians suppose we fought, anyway? … We shot prisoners in cold blood, wiped out hospitals, strafed lifeboats, killed or mistreated enemy civilians, finished off the enemy wounded, tossed the dying into a hole with the dead, and in the Pacific boiled the flesh off enemy skulls to make table ornaments for sweethearts, or carved their bones into letter openers.

    This was published in The Atlantic Monthly in 1946, when the memories and philosophies of the war were still clear in people’s minds and admitting such atrocities was still acceptable. By now, of course, we look back on our soldiers as having fought for a noble cause, and no longer discuss the barbarity of the time. It’s clear from these accounts that the mistreatment of prisoners and refusal to accept surrender crossed service branches (navy, air force and army) and was condoned at all levels of command. It’s also clear that the blood-letting on Okinawa was not entirely the fault of Japanese unwillingness to surrender, which suggests that whatever judgments military planners were making about a battle on the mainland, the numbers of dead they expected to see were at least partly a product of their own soldiers’ misconduct. With such a lack of interest in either accepting surrender or treating the enemy population kindly, perhaps they were inclined to see a protracted campaign of urban destruction as a good thing on its own terms?

    Destruction for its Own Sake

    The saddest example of this interest in destruction as an end in itself is the final air raid on Tokyo. This happened on the night of August 14th, just hours before the Japanese officially surrendered, and when everyone on both sides knew the surrender was going to happen. The raid was the biggest of the war, consisting of 1014 planes, and suffered not a single loss. The planes had not yet returned to their bases when Japan’s unconditional surrender was announced. There is no chance that this raid was necessary, or that even a single death it caused could possibly have advanced the end of the war by even a heartbeat. It is perhaps the clearest example of simple cruelty on the part of the allies, in which a city was destroyed merely for the sake of it. From this act we can see that the allies valued destruction for its own sake, and were acting on Churchill’s demand to lay all the cities of Japan to ash, even where they didn’t need to.

    The Question of the Bombings

    This leads us to the question at the heart of this post: could the allies have negotiated an end to the war in some other way, without the use of terror bombing and atomic weapons; could they have used less terror bombing and no atomic attacks? Were their decisions driven by a desire to destroy as much of Japan as possible, rather than purely strategic concerns? And if their decisions were based on a genuine belief that the Japanese would not surrender and would fight to the last, to what extent was that belief correct, and was it at least partially clouded by their own stereotypes of and fantastic notions about the Japanese psyche? What portion of the decision to destroy Hiroshima and Nagasaki was strategic, what portion was cruel, and what portion was based on misconceptions about the Japanese psyche that were, ultimately, founded in racism?

    The decision to end the war in this way may also have been driven by the desire to assert colonial power over Asia – a conditional surrender would probably have meant allowing the Japanese to retain some colonial possessions, and the implication from this would be that Asia could control its own destiny. Furthermore, they needed to end the war before the Soviets invaded Japan. But it seems to me that there are other approaches they could have taken: for example, after Okinawa they could have ceased all aggressive action targeting civilians, used their overwhelming naval power to enforce Japan’s isolation, and just waited them out. I don’t know, but I have never heard from any source that the allies genuinely attempted to negotiate surrender before the bitter end. One doesn’t hear stories of attempts to subvert the military clique in charge, to foment civil disorder, or to use captured Japanese soldiers as propaganda tools – it’s as if they just all assumed such actions would be impossible, and I think these assumptions may have been wrong.

    In essence then, I strongly suspect that much of the barbarity of the final year of the war, and especially the terror-bombing campaign, was unnecessary and was driven by a complex mix of racist and colonialist beliefs. I think the allies may have been able to negotiate a different end to the war, but they didn’t believe it was possible due to racist assumptions about “orientals,” and they didn’t want to because they wanted to punish the Japanese and inflict a defeat on them that would send a signal throughout Asia. I think this means that, while in retrospect the bombing of Japan has been painted as a necessary tactic, it can only be portrayed as such if we accept the racist premises of the propaganda of the time, and overlook the wanton cruelty of the allied forces. Is a more realistic historical interpretation that allied thinking about Japan and the Japanese was deeply flawed, and the policy of mass destruction that “won” the war was both unnecessary and heavily influenced by this same racist worldview?

  • The Chief Whip insists you toe the party line…

    Yesterday Australia passed a carbon pricing scheme, over the strenuous objections of the opposition. In fact, the opposition’s objections were so strenuous that their leader, Tony Abbot, has promised a “blood oath” to revoke the legislation.

    I guess he’s thinking of a blood oath in the demonological sense of signing a contract in blood to make it more binding. It’s the natural extension of Tony Abbot’s rather unfortunate recent admission that the only promises he makes that can be trusted are promises that are written down. This surely means that promises written in blood are much more manly and believable than those written in mere ink.

    This opens up a few worrying questions for me:

    • Does Tony Abbot secretly believe that contract law should be changed to make blood-based signatory agreements more powerful, and if so how?
    • Is this an extension of his willingness to “sell his arse” to a willingness to “sell his soul”? And if so what kind of policy-making process does this represent?
    • Given the paucity of soul in the nasty little blighter, and given he can only sell it once, how much policy benefit can we gain from a government that functions in this way?
    • Given he used to be a monk and now he’s become a demonologist, is this further evidence that he’s not really very trustworthy?
    • Given he used to be a monk and now he’s become a demonologist, is this more of an indictment of him or the catholic church?
    • This kind of language seems very fitting for a role-player, something I never suspected Abbot of being capable of. Is he actually a fantasy role-player, and if so is his party aware of how damning this is for his electoral prospects? Do they seriously think the mortgage belt is going to vote for someone that nerdy?
    • If he’s a role-player, what system does he use, is he a GM or player, and where does he fall on the Gamist-Narrative-Simulationist debate?

    The obvious good point of this “blood oath” is that he has finally made his position on demonology explicit. The current minority government is in the hands of the Australian Labor Party, who are widely rumoured to have sold their souls en masse to satan in order to gain admission to the party (or at least, to get the numbers for pre-selection). It’s also generally accepted that they will eat their own young and no act of treachery is too low for them. Of course rumours have long abounded that the Liberal Party are just as bad, but their god-fearing family-loving image has saved them from general acceptance of this rumour. At least now Abbot has admitted that, yes, shock! everyone in politics is up to their necks in satan’s semen, and we can all heave a sigh of relief and get back to analyzing the polls.

    Politically this pledge could be a disaster for Abbot. As if suspicions of satanism and (omfg!) role-playing were not bad enough, it will probably be very hard to undo the legislation without revoking the tax cuts that came with it, which is obvious political suicide. Furthermore the only practical way he can revoke it is to get it through the Australian Senate, which is currently controlled by the realms of faerie (the Greens). Long-standing agreements between the Seelie Court, the CIA and Rupert Murdoch mean that the only way that Abbot will be able to drive through his legislation is likely to be a double-dissolution election, which means that Abbot will have to go to the next election with the pledge that he will “hold another election within 6 months of this one.” That’s not going to be popular in a country where only two things are compulsory: apathy and voting.

    While overall it’s nice to see Abbot finally embracing the inevitable spiritual compromises necessary to succeed in Australian politics, and being so open about it, I don’t think this is going to be good for the party. Also, how is he going to manage to resist Satan’s demands for compulsory abortion and gay marriage?

  • Today I am celebrating my first publication in my new job, and since it’s about a topic I’ll probably be coming back to a lot in the next year, I thought I’d cover it here. It’s not much of a publication – just a letter in the journal Addiction – but it covers what I think is an interesting topic, and it shows some of the complexity of modern health policy analysis. The article, entitled Equity Considerations in the Calculation of Cost-Effectiveness in Substance Use Disorder Populations[1], can be found here[2]. It’s only 400 words, so I thought I’d explain what I’m trying to say here in more detail. The background I’m presenting here may be useful for some future material I’m hoping to post up here. I’ll give a brief overview of the “cost effectiveness” concept, explain the problem that I’m addressing in this paper, and then give a (slightly) mathematical example in extremis to show where cost-effectiveness analysis might lead us. I’ll also add some final thoughts about cost-effectiveness analysis (CEA) in fantasy populations, with perhaps a final justification for genocide. Or at least an argument for why Elves should always consider it purely on cost-effectiveness grounds.

    Cost-Effectiveness Analysis, QALYs and the IDU Weight

    Traditional epidemiological analysis of interventions is pretty simple: cholera, for example, kills X people, so let’s prevent it. However, we run into problems when we have limited resources and need to compare two different interventions (e.g. turning off a pump vs. handing out disinfectant pills). In this situation we need to work out which intervention is more effective, and we do this by assessing the cost per life saved under each intervention – if turning off the pump is cheaper and saves more lives, then it’s better. Mathematically, this is usually represented as the ratio of the cost difference between the intervention and some control (the incremental cost) to the effect difference (the incremental effect); this ratio is the incremental cost-effectiveness ratio (ICER). This is what I used in assessing clerical interventions to prevent infant mortality. However, when we are dealing with chronic diseases the incremental effects become harder to measure, because a lot of interventions for chronic illness don’t actually save lives: they extend life, or they improve the quality of life a person experiences before they die. In this case we use Quality-Adjusted Life Years (QALYs). These are usually defined by conducting a study in which people are asked how they would weight a year of their life under some condition relative to a year in full health – or, more usually, relative to their health as it is now. For example, blindness in one eye might be assigned a QALY weight of 0.9 relative to being fully sighted. There is some interesting debate about whether these ratings should be assessed by those who have the condition or by the community as a whole; the logic here can be perverse and complex and is best avoided[4].
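    To make the ICER concrete, here’s a minimal Python sketch using the pump-vs-pills example; all the costs and lives-saved figures are invented purely for illustration.

    ```python
    # Minimal ICER calculation; all numbers below are invented for illustration.
    def icer(cost_new, cost_control, effect_new, effect_control):
        """Incremental cost-effectiveness ratio: extra cost per extra
        unit of effect (here, per additional life saved)."""
        return (cost_new - cost_control) / (effect_new - effect_control)

    # Turning off the pump vs. doing nothing (hypothetical figures):
    pump = icer(cost_new=5000, cost_control=0, effect_new=120, effect_control=80)
    # Handing out disinfectant pills vs. doing nothing (hypothetical figures):
    pills = icer(cost_new=9000, cost_control=0, effect_new=140, effect_control=80)

    print(pump, pills)  # -> 125.0 150.0: the lower ICER (the pump) is the better buy
    ```

    The intervention with the lower ICER delivers each unit of benefit more cheaply, which is all “more cost-effective” means here.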

    So in essence, you rate one year of life as having the value of 1 when fully healthy, and then other states are rated lower. We can use the issue of Voluntary Testing and Counselling as an HIV intervention to see how this works.

    Example: Voluntary Testing and Counselling

    It’s fairly well established that good post-test counselling can successfully reduce a person’s risk behavior, so if you can get people at high risk of HIV (e.g. men who have sex with men (MSM)) to undergo voluntary testing, you can catch their HIV disease at an early stage and get them to change their behavior. In theory, doing this quickly and effectively enough will reduce the rate at which HIV spreads. Furthermore, catching HIV earlier means initiating treatment earlier (before it becomes symptomatic), and early treatment with anti-retroviral drugs leads to longer survival[5]. However, discovering one is HIV positive is not a pleasant experience, and knowing you are HIV positive lowers your overall quality of life even if the disease is asymptomatic. So if the survival benefits of early testing don’t outweigh the loss of utility, then it’s not worth it. Ten years ago, for example, when treatment extended your life by perhaps 10% but testing reduced your remaining quality of life from 1 to 0.9, the benefits might not have outweighed the costs. Additionally, treatment is expensive, and it might be more cost-effective on a population level to run health promotion campaigns that reduce risk behavior: reduced risk behavior means fewer infections, which means fewer QALYs lost to HIV.
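    The “ten years ago” trade-off is easy to check with back-of-envelope arithmetic. The 10% life extension and the drop in utility from 1 to 0.9 come from the paragraph above; the 20-year remaining lifespan is an assumption of mine.

    ```python
    # Back-of-envelope check of the testing trade-off described above.
    # The 20-year remaining lifespan is an assumed figure for illustration.
    remaining_years = 20

    qalys_untested = remaining_years * 1.0       # live out your years at full utility
    # Tested: life extended by 10%, but utility drops from 1.0 to 0.9
    qalys_tested = remaining_years * 1.10 * 0.9

    print(qalys_tested < qalys_untested)  # -> True: testing comes out behind here
    ```

    With these numbers, 22 years at 0.9 (19.8 QALYs) loses to 20 years at full health, which is exactly the bar-room logic the next paragraph describes.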

    In essence, it’s a kind of rigorous implementation of the old bar room logic: sure I’d live longer if I didn’t drink, but why would I want to?

    Recently, however, some analysts have introduced a sneaky new concept, in which they apply a weight to all QALY calculations involving injecting drug users (IDUs). The underlying logic for this is that IDU is a mental illness, and people with a mental illness have a lower utility than people without. This weight is applied to all QALY calculations: so a year of life as a “healthy” IDU is assigned a value of, e.g. 0.9, and all other HIV states (for example) are given a value of 0.9 times the equivalent values for a non-IDU.

    What is Wrong with the IDU Weight

    This has serious ramifications for cost-effectiveness and, as I observe in my article, fucks up any attempt to get a cost-effectiveness analysis past the British NICE, since it breaks their equity rule (a rule that exists for good reason). In addition to its fundamentally discriminatory nature, it’s also technically a bit wonky, and in my opinion it muddles cost-effectiveness analysis (“which treatment for disease X provides better value for money?”) with cost-benefit analysis (“who should we spend our money on?”). It’s fine to do the latter as well as the former, but to blur them together implicitly is very dangerous.

    Technical Wonkiness

    Suppose you have a population of IDUs with a weight of 0.9, and you need to compare two interventions to prevent the spread of HIV. One possible intervention is methadone maintenance treatment (MMT), which is very good at reducing the rate at which IDUs take injecting risks. You want to compare this with some other, broader-based intervention (e.g. voluntary testing and treatment, VTT, which also affects MSM and low-risk people). The average QALY for an MSM with asymptomatic HIV is about 0.9 (to pick a common value). Because you’ve applied the weight to IDUs but not to (e.g.) MSM, the average QALY for an IDU with asymptomatic HIV is 0.9*0.9=0.81. Now suppose that you implement MMT: this intervention reduces the risk of transmission of HIV, but it also treats IDUs’ mental illness, so the weight for all the successfully-treated IDUs drops away and you gain 0.09 QALYs for every IDU you treat; you then gain a further 0.1 QALYs for every case of HIV prevented by the MMT intervention. This means that, if the two interventions cost roughly the same amount, VTT has to be almost twice as effective as MMT at preventing HIV to be considered equally cost-effective. That is, the cost-effectiveness of MMT is exaggerated relative to VTT by dint of your weighting decision – even though half of the benefits credited to MMT don’t actually have anything to do with reducing the spread of HIV (which implies it can prevent half as much HIV for the same QALY gains). On the other hand, if you implement an intervention that doesn’t treat IDU but does prevent HIV in IDUs (such as needle exchange), its effectiveness will be under-estimated due to the IDU weight. In both cases, introducing the cost-benefit element to the analysis has confused your outcome.
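    The arithmetic in this example can be sketched in a few lines of Python. The 0.9 weight and the 0.9 utility for asymptomatic HIV are the values used above; crediting MMT with one prevented case per treated IDU is a simplifying assumption of mine for the sake of the comparison.

    ```python
    # Re-running the MMT vs VTT arithmetic from the example above.
    idu_weight = 0.9
    q_hiv = 0.9                      # asymptomatic HIV, unweighted
    q_idu_hiv = idu_weight * q_hiv   # 0.81, as in the text

    gain_weight_removal = q_hiv - q_idu_hiv   # ~0.09 per IDU treated by MMT
    gain_case_prevented = 1.0 - q_hiv         # ~0.10 per HIV case prevented

    # MMT is credited with both components (assuming, for simplicity,
    # one case of HIV prevented per treated IDU); VTT only prevents cases.
    gain_mmt = gain_weight_removal + gain_case_prevented
    gain_vtt = gain_case_prevented

    # At equal cost, VTT must prevent almost twice as many cases as MMT:
    print(round(gain_mmt / gain_vtt, 2))  # -> 1.9
    ```

    Roughly half of MMT’s credited gain (the 0.09 from removing the weight) has nothing to do with preventing HIV, which is precisely the distortion the text describes.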

    Opening Pandora’s Box

    The real problem with this IDU weight, though, is if we decided to extend the logic to all cost-effectiveness analysis where identifiable groups exist. For example, we could probably argue that very old people have lower QALYs than younger people, and any intervention which affects older people would gain less benefit than one which affects young people. An obvious example of this is anything to do with road accidents: consider, for example, mandatory eye testing vs. raising the minimum driving age. Both would result in lower rates of injury (and thus gain QALYs) but the former would primarily affect older people, and so would be assigned lower effectiveness, even if it prevented a hugely greater number of injuries[6]. When we start considering these issues, we find we’ve opened Pandora’s box, and particularly we’ve taken ourselves to a place that no modern health system is willing to contemplate: placing a lower value on the lives of the old, infirm, or mentally ill. As is often the case with social problems, the marginalized and desperate (in this case, IDUs) are the canaries in the coalmine for a bigger problem. I don’t think any health system is interested in going down the pathway of assigning utility weights to otherwise healthy old people (or MSM, or people with depression, or…)

    An Example in Extremis

    Let’s consider an obscene example of this situation. Suppose we apply a weight, let’s call it beta, to some group of recognizable people, whom we call “the betamaxes.” Now imagine that these people are the “carriers” for a disease that doesn’t afflict them at all (i.e. doesn’t change their quality of life) but on average reduces the quality of life of those who catch it to a value alpha. Suppose the following conditions (for mathematical simplicity):

    • The people who catch the disease are on average the same age as the betamaxes (this assumption makes comparison of life years easier; breaking it simply applies some ratio effects to our calculation)
    • The disease is chronic and incurable, so once a member of the population gets the disease their future quality of life is permanently reduced from 1 to alpha
    • One betamax causes one case of disease in his or her life
    • Preventing the disease is possible through health promotion efforts, but costs (e.g.) $10000 per disease prevented
    • Betamaxes are easily identifiable, and identifying and killing a betamax costs $10000

    I think we can all see where I’m going here. Basically, under these (rather preposterous) conditions, killing a betamax prevents a case of disease at the same price as the health promotion campaign, but the betamax’s lost life-years count for only beta each in the weighted calculus. Identifying and killing betamaxes therefore registers as a net QALY gain whenever 1-alpha > beta (equivalently, alpha < 1-beta) – a condition that an unweighted analysis (beta=1) can never satisfy. Obviously permanent quarantine (i.e. institutionalization) could also be cost-effective.
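    Under the assumptions listed above, the calculus can be checked mechanically. All the values of alpha and beta below are invented, and the 40-year remaining lifespan is an added assumption of mine.

    ```python
    # The betamax thought experiment: net weighted QALY change from
    # killing one betamax. All values are invented for illustration.
    def net_gain(alpha, beta, years=40):
        """Prevent one case (the victim keeps 1.0 rather than alpha per year),
        at the cost of the betamax's remaining years, weighted at beta."""
        return ((1 - alpha) - beta) * years

    # Severe disease, heavily discounted betamaxes: killing "pays off"
    print(net_gain(alpha=0.4, beta=0.3) > 0)  # -> True
    # Mild disease, or no weight at all (beta = 1): it never does
    print(net_gain(alpha=0.9, beta=1.0) > 0)  # -> False
    ```

    The point is that the conclusion flips entirely on beta, a number chosen about the group rather than the disease.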

    This may seem like a preposterous example (it is), but there’s something cruel about these calculations that makes me think this weighting process is far from benign. Imagine, for example, the relative QALY weights of people with dementia and their carers; schizophrenia and the injuries caused by violence related to mental health problems; or paedophilia. I think this is exactly why health systems avoid applying such weights to old people or the mentally ill. So why apply them to IDUs?

    Cost-Effectiveness Analysis in Fantasy Communities

    There’s an obvious situation where this CEA process breaks down horribly: if you have to apply it to elves. Elves live forever, so theoretically every elf is worth an infinite amount of QALYs. This means that if a chronic disease is best cured by drinking a potion made of ground up human babies, it’s always cost-effective for elves to do it, no matter how concentrated the baby souls have to be. If a human being should ever kill an elf due to some mental health problem, then it’s entirely reasonable for the elven community to consider exterminating the entire human community just in case[7]. Conversely, any comparison of medical interventions for chronic disease amongst elves on cost-effectiveness grounds is impossible, because all treatments will ultimately produce an infinite gain in QALYs: this means that spending the entire community’s money on preventing a single case of HIV has an incremental cost effectiveness of 0 (it costs a shitload of money, but saves an infinite number of QALYs). But so does spending the entire community’s money to prevent a single case of diabetes. How to compare?

    Similar mathematical problems arise for Dwarves, who have very long lives: you’d have to give them a weight of 0.25 (for being beardy bastards) or less to avoid the same problems vis-à-vis the use of humans in medicinal treatment that arise with elves.

    This might explain why these communities have never gone for post-scarcity fantasy. When you have an infinite lifespan, no intervention of any kind to improve quality of life is cost-effective. You might as well just live in squalor and ignorance, because doing anything about it is a complete waste of money.

    Cost Effectiveness Analysis as a Justification for Goblin Genocide

    Furthermore, we can probably build a mathematical model of QALYs in an AD&D world: some people have better stats than others, so they probably have better quality of life. We could construct a function in terms of the 6 primary stats, and obviously goblins come out of this equation looking pretty, ah, heavily downward weighted. Given that they lead short and brutish lives, and are prone to kill humans when the two communities interact, the obvious effect of weighting their QALYs from this mathematical model is pretty simple: kill the fuckers. The QALY gains from this (and the low cost, given the ready availability and cheap rates of modern adventurers) make it a guaranteed investment. In fact, compared to spending money paying clerics to prevent infant mortality, it could even be cost-effective.

    Conclusion

    Cost-effectiveness analysis needs to be applied very carefully to avoid producing perverse outcomes, and the logical consequences of applying weights to particular groups on the basis of their health state are not pretty. To avoid smuggling a cost-benefit analysis into a cost-effectiveness situation, we should never weight people “objectively” to reflect their poor health in dimensions other than the one under direct consideration in the analysis. Furthermore, even if we are comfortable with a “discriminatory” weight of the “oh come on! they’re just junkies!” sort, it can still have perverse outcomes, leading to over-estimates of the cost-effectiveness of treatments for the mental illness in question compared to other interventions. And finally, we should never ever ever allow this concept to become popular amongst elven scholars.

    I’ll be coming back to this topic over the next few months, I think, in a way I hope is quite entertaining for my reader(s). Stay tuned…

    fn1: The slightly cumbersome title arose because the journal now doesn’t like to refer to “substance abuse” or “substance abusing populations” so I had to change it to the un-verbable “Substance Use Disorder”

    fn2: If you download the pdf version it comes with a corker of a letter about French tobacco control policy[3]

    fn3: Which is a contradiction in terms, surely?

    fn4: For a full explanation of this and other matters you can refer to the famous text by Drummond, which is surprisingly accessible

    fn5: In fact we are now looking at very long survival times for HIV – up towards 30 years, I think – provided that we initiate good quality treatment early, and so it is no longer necessarily a death sentence, if one assumes a cure will be available within the next 30 years

    fn6: This applies even if you ignore deaths and focus only on short-term minor injuries, and thus avoid the implicit bias in comparing old people with young people (interventions that save life-years in old people will always be less “effective” than those that save life-years in young people, unless the effect of the intervention is very short-lived, because old people have fewer years of life to save).

    fn7: In fact you can go further than this. All you need is for an elven propagandist to argue that there is a non-zero probability that a single crazy human will kill a single elf at any point in the future, and the expected value of QALYs lost will always be greater than the QALY cost of killing all humans on earth, no matter how small the probability that the human would do this