• In comments to my post on shitty GMing it has been suggested that the problem simply came down to a GM who was running the game as a “neutral arbiter,” and that had I known that I wouldn’t have felt hard done by. Putting aside the particular exigencies of that case, I don’t believe that it’s possible for a GM to be a truly neutral arbiter, nor do I think that it’s particularly desirable. Here I shall give some reasons why it’s not possible, drawing examples from the module we played in the case in question (which is available online here), and describe my preferred role for the GM in play.

    The Problem of GM Preferences

    The GM participates in the game for his or her own fun, and is not actually a referee in the strict sense of the word. Every GM brings their own preferences for gameplay and interaction to the table, and it’s inevitable that the GM will reward play that matches those preferences and discourage play that doesn’t. In a one-off game this may not be noticeable, but in an ongoing group the players get used to the GM’s preferences and usually change their play accordingly: they are generally aware that the GM also needs to enjoy the game, and they tend to adapt. If they don’t, the GM has – and will generally use – a variety of techniques to ensure that the game is rewarding for the GM as well as the players. I don’t think it’s possible for a GM to remain neutral while pursuing their own fun.

    The Problem of Shared Experience

    Much of an RPG in practice proceeds as a series of descriptions by the GM and responses by the players. How the players respond depends on their understanding of what the GM told them, and in my experience as both player and GM, what the players understand of what the GM told them is very different to either a) what the GM expects them to understand or b) what the GM thinks they understand. Things the GM thinks are obvious remain completely invisible to the players; things the players focus on are irrelevant to the GM. It becomes the GM’s responsibility to do something about this, and whether the response is one of correcting player misconceptions or riffing off those misconceptions, neither response is neutral. A putatively “neutral” GM could either correct the players’ misconceptions (so there is no risk that the shared experience is corrupted by the medium of expression) or ignore them (staying “hands off”). I think many people who believe a GM can be neutral couldn’t even agree on which of these actions is the mark of a neutral GM, or on which is possible. In reality I think the concept of a neutral arbiter relies, in gaming just as in real life, on the assumption that information exchange is perfect. This just doesn’t happen in games, and it’s no one’s fault that players suddenly yell “I’ll jump out the window!” when you’ve just described a subterranean room with no windows; it happens all the time. Players are tired, checking Facebook, drinking beer, reading a spell description, checking whether they have used up that item … and you’re imparting a crucial piece of information that they not only fail to hear but fail to realize is crucial.

    This problem is especially pernicious where the game depends on setting-specific knowledge. In this case the “neutral” GM has to decide which aspects of setting knowledge the PCs already know (and thus what the players can learn for nothing) and what they are supposed to find out the hard way. This is not the kind of information that has even a concept of neutrality attached to it.

    The Problem of Knowledge

    Everyone who comes to an RPG has their own specific knowledge and real life experience, and this has a significant bearing on their understanding of the game world. What people believe is possible or impossible, what they think their PCs can and can’t do, what they even think of doing with their PCs, depends on their understanding of the world they’re in. Recently Hill Cantons reproduced a few “design notes” from two popular RPGs, and the attitude towards knowledge in one of them (Chivalry and Sorcery, I think) was noteworthy:

    We believe that it is necessary to provide a coherent world if fantasy roleplaying is to be a coherent activity…[Feudalism] also has the virtue of being a real way of life, existing for well over 1000 years in Europe…The feudal system was a working culture, and thus it can be used to very good effect as a model on which to base a fantasy role playing culture that will also work, often to the finest detail.

    This kind of attitude towards setting obviously assumes that everyone playing understands what a feudal world is and how it works. But this is almost never true. Lots of people know almost nothing about the “real way of life” under feudalism, and everyone brings their own prejudices and misconceptions to the setting. The most important of these prejudices and misconceptions are, obviously, those of the GM. What is possible politically, socially and financially in a feudal world is completely dependent on the GM, and there is no sense in which the GM can be neutral in arbitrating this stuff. Provided you stick to a set of disconnected module-based dungeon crawls this may not be an issue, but as soon as you aim for a game more complex than killing people and stealing their stuff, conflict between GM and players over assumptions and knowledge will enter the game.

    This conflict also occurs in task resolution and challenges. A GM who is experienced in rock climbing and mountaineering will have a different concept of what is possible in these settings than one who is experienced in surfing or computing. I think lots of gamers are know-it-all nerds who think they have a good grounding in a wide range of knowledge, but in general they’re straight-up wrong about most of their wikipedia-based insights, and often very stubborn about defending them to boot. The GM may think he or she is being neutral in arguing that it’s not possible to do X, but if there is someone in the group who is familiar with X and didn’t learn about it yesterday on a dodgy message board, they’re likely to interpret the GM’s “neutrality” as pig-headed stupidity. The GM is not a database of unbiased knowledge; which way their biases lean depends heavily on what they know and what they don’t, and how they value the knowledge they do have.

    The Problem of Facilitation

    The GM is usually charged with the task of resolving conflicts within a group that is often composed of people with little in common except their desire to game together. This manifests most commonly as a need to control the more ebullient and aggressive players, and to draw out the shyer and more timid players. It’s not possible to do this and remain neutral, because it involves favoring some people and being stern with others. Furthermore, the GM often has to resolve conflicts about actions and consequences, and occasionally quite bitter disputes about (for example) treasure, PC conflict, and game direction. Sometimes the GM has to shut up a player who is dominating the game beyond any reasonably allotted time, and if a player is disrupting the group it is usually the GM who is charged with deciding what to do (and communicating it to that player). Who, if not the GM, gets charged with the task of delicately explaining to the neckbeard that they stink and need to wash before attending sessions? Oh, the joy of GMing. And when the GM does this they bring their own social biases and problems to the fore, and usually don’t stay neutral for very long – and they are usually responding to a group dynamic over which they have only partial control. It’s very hard for GMs to stay neutral in these situations, just as it’s hard for GMs to avoid playing favorites, or getting pissed off with particular players and acting irrationally, and so on. Some players just have a style that a GM will like or hate, and it will be rewarded or punished accordingly. This is not neutrality.

    The Demands of the Module

    Using the Rahasia module as an example, we can see a few immediate situations where the GM is tasked with a non-neutral stance by the designer, or set challenges that demand a departure from running the game-as-written. The Rahasia module introduction suggests that the GM

    Encourage the players to think of ways of capturing and defeating the witches without inflicting physical damage

    and the game is built on the assumption that GM and players will go along with this idea. This sets up a framework – including penalties of lost experience points – that is very far from neutral. Furthermore, the background information about the dungeon itself is very limited, and not much at all is said about its structure. The trapdoor through which I climbed to my death is described thus:

    Directly behind the statue, in the floor of the temple, is a secret door that opens over a staircase to the lower treasure room

    No information is given anywhere about whether secret doors are locked or how to handle them, so the decision to make the room accessible to anyone from below is implicitly up to the GM. A decision to allow access is a decision by the GM to make the dungeon more dangerous; it might be taken unthinkingly or deliberately, but it’s not a neutral decision. Especially in light of this statement about the golem in that room:

    This golem hopelessly outclasses any typical party, so the players must think of a way past this creature (the robes work, of course)

    This statement makes it clear that the adventure is not supposed to funnel the players into conflict with the golem; they aren’t at any point meant to be its match. Instead, the GM has to at least give the players a chance to stop and assess the situation and find a way to know that the golem is there. Allowing them to access the foot of the statue as soon as they enter the dungeon is not consistent with the intention of the module, but the module nowhere makes clear a way to avoid this. The GM’s decisions about trap doors, use of portals, and ways of passing through the dungeon are tied in with the nature of this final beast, and the option of playing the module “as written” is a dead one. The GM must choose a non-neutral position on this module in order to run it in the sense that it was intended.

    The Fallacy of Behaviorism

    Another common view I read on the internet about GMing and player reactions is the idea that players “learn” from their mistakes, and the GM has a role as a “teacher” to help them understand the risks of the world they’re in. This is particularly common in old school play, in my experience. I think this is both fallacious and patronizing. It’s patronizing because we’re all adults, and I don’t give up hours of my downtime to be schooled in the harsh “realities” of fantasy life by a self-important neckbeard. I want to play in a shared world where my understanding of that world is assumed to be an adult’s understanding, and my mistakes are handled, not judged. But it’s also fallacious. Adults don’t learn this way, and punishing adults for their mistakes is pointless; the belief that such punishment works is a classic fallacy built on regression to the mean. Furthermore, what the GM may think is a mistake, the players may think was a reasonable action. On top of this, there is an additional piece of behaviorist nonsense. Most of us learnt the game as teenagers being taught by bad teenage GMs in fairly immature social settings. If this behaviorist approach to learning from “mistakes” has any truth to it, by the time we get to game as mature adults we’re going to be well past correction, and will be gaming primarily on the basis of the experiences of our (mostly crap) teen years. If so, “teaching” us is going to have to be done some other way, and is going to involve the GM coming down from their neutral pedestal to make judgements about what is wrong with our play style. But who’s to say, given the backgrounds of the adult participants in this hobby, that it’s the players who picked up all the bad habits? Just as likely it’s the GM who needs to be “taught” about their mistakes. The best approach is to drop this ideal altogether and accept that everyone involved in the game is probably flawed, and that their flaws and mistakes demand understanding rather than “teaching.”

    The GM as Facilitator

    I think the GM is inherently biased: he or she is there to enjoy a game, and wants the game to run in a way that entertains him or her. But on top of this, the GM is charged with preparing for the game, managing conflicts, and ensuring that the players have fun. These conflicting tasks are inconsistent with a neutral position, just as the players’ role is inconsistent with a purely selfish one (they are also meant to be aware of the work the GM has put in, his or her desire to enjoy the game, and the needs and perspectives of their fellow players). The GM thus functions best as a facilitator, ensuring that the players enjoy a game full of challenges and exciting situations, in which they will have fun and everyone will get what they are looking for. A neutral GM cannot make this happen, and I don’t believe it’s possible for someone to be a neutral GM in the first place. There are too many conflicting pressures and responsibilities for the GM to remain neutral in all circumstances. By pretending that this is possible, we simply create a set of false assumptions and expectations that let everyone down: better to understand everyone’s biases and perspectives upfront, and respond accordingly, than to try and pretend they can all be hidden or put aside during an activity that, in its own way, can be as frantic, demanding and engrossing as anything else that adults do.


  • In looking at the cost-effectiveness of health interventions in fantasy communities we have shown that the infinite lifespan of elves creates analytical problems, and other commenters have suggested that the cost-effectiveness of clerical interventions to reduce infant mortality should be balanced against the need for clerics to go to war. Well, Professor John Quiggin at the Crooked Timber blog recently broached the issue of doing a benefit-cost analysis of US military spending, and has found that the US defense department has killed a million Americans since 2001. His benefit-cost analysis is really just an exercise in peskiness, though it does have a valid underlying point, and I think actually you could show with a simple cost-effectiveness analysis that the wars of the last 10 years have, under quite reasonable assumptions, not been a cost-effective use of American money. Of course, we don’t make judgments about military spending on cost-effectiveness or cost-benefit grounds.

    In comments at Crooked Timber[1], I listed a few examples of how US Defense Department money could be better spent, and one of those examples was vaccination. Obviously, disease eradication would be a very good use of this money, because of its long-term implications, but in thinking about the cost-effectiveness (or cost-benefit) of this particular intervention, I think we can see another clear example of how these purely economic approaches to important policy debates just don’t work. So here I’m going to look at this in a little more detail, and give some examples of how we can come to outrageous policy conclusions by looking at things through a purely econometric lens. I came to this way of thinking by considering the cost-effectiveness of interventions in elven communities, and ultimately it’s relevant to the debate on global warming, because a common denialist tactic is to demand that AGW abatement strategies be assessed entirely in terms of cost-benefit analyses, which are very hard to do and, as one can see from the comments thread at Crooked Timber, are anathema to supporters of the military establishment. As we can see here, they also break down in quite plausible real-life circumstances.

    The Problem of Disease Eradication

    So, you’re the US president in 2001, and you’re reading a book on goats to some schoolkids, and as happens in this situation, you have to make a snap decision about how to spend US$200 billion over the next 10 years. You could spend it going to war with a small nation that harbours terrorists; let’s suppose that if you don’t, your country will be subject to one 9/11-style attack every year for the next 20 years (until OBL dies). If you do, you’ll commit your own and the next administration to spending US$200 billion. Is this a good use of your money? US$200 billion to save about 50,000 US lives over 20 years, minus the casualties (wikipedia tells me it’s about 5000). So you get a net benefit of 45,000 lives, or US$4,444,444 per life – this actually comes under the US government’s US$5 million-per-life-saved threshold, so it’s a viable use of your money. But one of your alternatives is to spend the money on eradicating HIV using a vaccine that was recently developed, and it has been shown that by spending US$200 billion over 10 years you could eliminate HIV from the face of the earth. You don’t care about the face of the earth, but you need to eradicate HIV everywhere to make Americans safe from it. Should you ignore the terrorist attacks and spend the money?
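
    As a quick check of that arithmetic, here is a minimal sketch in Python, using only the assumptions stated above (none of these figures are real estimates):

        # Back-of-envelope cost-effectiveness of the war, per the assumptions above.
        cost = 200e9          # US$200 billion over 10 years
        lives_saved = 50_000  # deaths averted over 20 years of prevented attacks
        casualties = 5_000    # war dead (the post's wikipedia figure)

        net_lives = lives_saved - casualties   # 45,000
        cost_per_life = cost / net_lives       # ~US$4,444,444

        # Under the US$5 million-per-life threshold, so the war "passes".
        print(f"US${cost_per_life:,.0f} per life saved")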

    For a standard cost-effectiveness analysis you would calculate the incremental benefit (in lives saved) from this vaccine compared to the war on terror. Lives saved in the future are discounted at a fixed rate (usually about 3% a year), declining in value over the term of the intervention. But the problem with this calculation for disease eradication (specifically) is that the term of the intervention is infinite: all future lives saved, forever, go into the calculation. The actual formula for this calculation is the integral of the negative exponential of (discount rate × time t), multiplied by the lives saved at time t[2]. Usually we model a policy over 20 or 30 years, giving a finite result; but in this case we need to model the benefit over all future time, and – so I reasoned at the time; see the Update below – integrating over an infinite range gives an infinite benefit. So even with furious discounting we get an infinite benefit from eradicating any disease. Not only does this make comparing disease eradication decisions – e.g. smallpox vs. HIV – impossible, but it makes comparing disease eradication to any other policy objective impossible, and it tells us – quite reasonably, I should say – that we should bend all our health care resources to this task.
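
    In code, the discounted-benefit calculation just described looks something like this (a minimal sketch; the constant 1,000,000 lives per year and the 30-year horizon are illustrative values, chosen to match the figures used in the Update below):

        # Discounted benefit: integrate exp(-r*t) * L(t) over the intervention term.
        import math

        def discounted_benefit(lives_per_year, r=0.03, horizon_years=30, dt=0.01):
            """Numerically integrate exp(-r*t) * L dt from 0 to horizon_years."""
            total, t = 0.0, 0.0
            while t < horizon_years:
                total += math.exp(-r * t) * lives_per_year * dt
                t += dt
            return total

        # A 30-year program saving 1,000,000 lives/year gives a finite figure;
        # the question in the post is what happens as the horizon goes to infinity.
        print(discounted_benefit(1_000_000, horizon_years=30))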

    In this case, the president of the USA should decide not to go to war because 20 September 11ths are a small price to pay for the eradication of HIV. Eventually Osama bin Laden will give up[3]; HIV won’t. But the stupidity of this decision doesn’t end here. If it costs a trillion dollars to eradicate HIV, the president would be better off defunding his army and paying the price than not; and if Mexico were to invade, killing a million Americans, the infinite benefit of having eradicated HIV would still outweigh the loss.

    Now, one argument against this logic is that you shouldn’t include the yet-unborn in a policy evaluation; yet this is standard practice. For example, in considering the cost-effectiveness of different interventions to reduce HIV transmission, we might run a model on the 15-64 year old population, and when we do this we allow for maturity into and out of the population; if we run the model for more than 15 years we are implicitly allowing the yet-unborn into the model. Furthermore, you could surely argue that modeling disease eradication without including the unborn devalues the whole concept – what is disease eradication except a project to protect the unborn generations of the future?

    So we can’t use econometric analyses by themselves to assess the value of interventions, because a perfectly reasonable economic analysis of a valid healthcare goal throws up an impossible contradiction. The world expects – with significant help from Bill Gates, I might add – to eliminate polio by 2015 and with the recent announcement of a vaccine for malaria you can bet that the international health movement will turn its gaze on that bastard protozoan next. And there is no economic argument you can mount against spending money on it – even if the cost is everything you own.

    Implications for the Global Warming Debate

    A common argument mounted by “hard-headed realists” and AGW deniers is that money spent on AGW mitigation needs to be justified by a solid cost-benefit analysis, because the alternative is to spend this money on targeting real problems now, especially in third world countries (often also the countries most vulnerable to AGW’s worst effects). Money spent on infant mortality now, they argue, is far better than money spent on AGW mitigation in the future – even if you accept that the negative effects of AGW are a certainty. This is a particularly powerful argument since we don’t have solid evidence for exactly how bad the effects of AGW will be, and we know that the future benefits of reducing infant mortality now are huge. This economic defense will usually also depend on discount rates – we’re much more interested in lives saved now than in the future, and AGW mitigation’s effects will be felt in the future, not now. Exactly what the relative benefits of mitigation will be is very sensitive to the discount rate.

    In this case, though, one can argue: well, let’s spend the entire defense department’s money on eradicating HIV. If we test everyone in Africa every 6 months – surely possible with the full funding of the US military on the case – and treat them immediately (or, hey, just treat everyone in Africa with anti-HIV drugs for the next 30 years – let’s put them in the water!) then we can eliminate HIV, and save an infinite number of lives. It’s guaranteed on both cost-benefit and cost-effectiveness grounds, with the added benefit that you don’t need to quibble over the discount rate – it’s guaranteed to be cost-effective under any finite discount rate. The natural argument against this will be that someone might invade America. But we can say in response to this, “uh uh! Precautionary principle! You don’t know how bad that invasion will be or even if it will happen.” If the precautionary principle doesn’t apply to the putative risks of AGW, why should it apply to defense? Or rather, if we need to attach a monetary value to the future risks of AGW, why not attach one to the future invasion of the USA? And when we do, it will be of lower value than the benefits from elimination of HIV, even if the entire population is wiped out during the invasion.

    Which brings us back to the simple problem that we can’t assess any policy in isolation using only the econometric tools at our disposal. Everyone understands this, of course, which is why people on the Crooked Timber thread are bridling at Professor Quiggin’s analysis. They attach additional, non-economic considerations to these problems. But one of the rear-guard actions of the anti-AGW movement is to demand that we use exclusively economic methods for assessing the value of AGW mitigation – and it was in response to this demand that the Stern review was commissioned. I think it needs to be recognized that these econometric tools offer false clarity, and only apply within a very narrow framework: that of marginal improvements over a bounded time horizon (pacemakers vs. aspirins, essentially). Defense, disease elimination, and AGW mitigation lie outside that framework. This should be abundantly clear to anyone who has tried to do a cost-effectiveness calculation of the relative merits of slavery and genocide for elven communities. It’s just a shame that most economists haven’t bent their minds to these truly important questions; fortunately, we at the C&C University are here to help with the more profound philosophical questions. No, don’t thank me, we do it for free. Or, alternatively, pick apart the argument in the comments … I’m eager to hear how a valid mathematical framework can be constructed for the analysis of disease eradication goals, because it’s relevant to my work…

    Update

    Actually while I was watching a band in Kichijoji at 3am last night I realized that my interpretation of the formula for total effectiveness in the disease eradication case was wrong[5]. Ultimately, the benefits that accrue from disease eradication are approximately (1/(discount rate)) × (average number of lives saved in any year). So for a discount rate of 3% and 1,000,000 lives saved per year from (e.g.) eradicating malaria you would get a total benefit of about 33 million lives. It’s not infinite, but it’s very, very large. So the general argument holds, but it is possible to compare disease eradication programs. Note that there’s an argument to be made for a lower discount rate in the case of disease eradication (it is all about saving future generations, not the current generation), and even a small change in the discount rate makes a big difference to the outcome. Also, under certain conditions (exponential population growth faster than the discount rate) the benefits of disease eradication are infinite; I think most people expect the population to stabilize at 7 billion, though, so this doesn’t apply on earth.
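
    For a constant number of lives saved per year, the integral from earlier in the post evaluates in closed form to L/r, which is where the 1/(discount rate) factor comes from; a one-line check with the figures above:

        # Closed form of the discounted benefit for constant lives saved:
        # integral of L * exp(-r*t) dt from 0 to infinity = L / r.
        r = 0.03         # discount rate
        L = 1_000_000    # lives saved per year (eradicating malaria, say)
        print(L / r)     # ~33.3 million lives: very large, but finite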

    fn1: for historical reasons I comment there as sg

    fn2: or something similar

    fn3: Actually it’s an interesting question, isn’t it? If you ignore a terrorist who is incapable of waging a conventional war on you, refuse to give into his demands, mount a purely law-enforcement operation to prevent his worst excesses, and wait him out, how long will it be before he just gives up and goes away? How long can OBL recruit people for if killing Americans leads to … nothing? And if after a few years the US said quietly to the Taliban, “we’ll give you a billion a year in aid if you get rid of him,” how long would it be before he had no safe bases?

    fn4: I find this very interesting. A few years ago it was getting hard to find doctors in the west who would perform circumcisions on babies; ten years ago doctors were equivocal on the issue and there has been a long-standing community opposition to circumcision for non-medical reasons; yet now we’re recommending it (and funding it!) en masse in African countries. I wonder how Americans would have felt if, in 1987, Nelson Mandela or Robert Mugabe had come to the USA and suggested that the solution to their growing HIV problem was to circumcise all adult gay men?

    fn5: I did this calculation only recently, so I really should have got this right from the start…

  • Definitely in the 1%

    I found this in the tumblr We are the 53%, your go-to page for people who vote against their own interests. Darth Vader and his stormtroopers are classic examples of people who voted against their own interests. You’re an exemplar for us all, Lord Vader!

  • No Place for the Warm-hearted

    This is the plan for a campaign setting in one of the earlier eras of my Compromise and Conceit campaign setting, to be run in English using Warhammer Fantasy Role-play 3. This campaign will be set in Svalbard in summer 1635, early in the period of time in which Europe began to rediscover magic, through infernalism. I discussed some reasons for the Svalbard setting some time ago, and I’ve recently done a little research that suggests setting it in the 17th century gives me an opportunity to combine political intrigue, pirates and polar exploration. It also gives a chance to test a campaign setting where the environment is itself an adversary for the PCs, and to explore some more of the political and infernal concepts of the Compromise and Conceit setting. The last adventure enabled my players to explore the complex and violent politics of the French and Indian war, and ultimately to change the course of American history. Maybe this time we can explore the possibilities inherent in Scandinavia.

    Svalbard in 1635: Political Context

    This era is the beginning of a long period of infernal exploration, and the near end of the Age of Discovery, which was still playing out in Northern Europe and the Arctic. Svalbard had only been discovered 40 years previously, and was not yet controlled by any single power. Instead, companies from different nations – primarily England, Denmark, France and Holland – would come to Svalbard in the summer for whaling and seal hunting, establishing camp in bases along primarily national lines and hunting furiously during the limited months of sunlight. The nation states that backed these companies had limited authority out in the wilderness of Svalbard, and the whaling companies would come into often violent conflict with each other – even with companies from the same nation. These whaling companies were essentially freebooters, pirates with a semi-official backing from their home nation, and they would use quite vicious methods to ensure access to the lucrative whaling zones of what was then known as Spitsbergen. Political and mercantile tensions from Europe would be played out in these freezing waters.

    The main nation with a solid, long-term interest, however, was Denmark: at this time Denmark, Norway and Sweden had united under the Kalmar Union, and had also absorbed Iceland, which had accepted Lutheranism 80 years earlier after the beheading of its last Catholic priest. By adding Spitsbergen to its crown, Denmark would control all the islands of the Arctic, and access to the fabled Northwest Passage. It would also be able to exert control over lucrative whaling regions, and all the fisheries and any natural resources of those islands. During the middle part of the 17th century the Danish crown turned its attention to consolidating complete power over the union of Scandinavian nations, and although unable to back its claims of sovereignty over Svalbard with military force, was undoubtedly up to mischief on the island. With the rediscovery of magic in Europe, the Lutheran church found itself facing a resurgence of interest in Odinism and paganism, and so the church, too, needed to extend its powers across the distant archipelago.

    Svalbard itself is a harsh environment for piracy or adventure, and in fact until 1634 no one (the monks of the Silent Tower aside) had ever wintered on the island. The Little Ice Age was well underway, and this meant sea ice on the northern and eastern edges of Svalbard for 9-10 months of the year, and freezing temperatures all year round. The north-eastern side of the archipelago was as yet unexplored, and even traversing the main island (Spitsbergen) was a formidable challenge for 17th century explorers. Against this political and environmental backdrop the Danish were attempting to establish a permanent presence on the island sufficient to guarantee a long-term hold over the arctic, and its lucrative whale oil trade. At this time the full promise of Infernalism and the materials and technologies it would make available to Europe had not yet been revealed, and resources like whale oil were of great importance.

    Svalbard in 1635: Infernal Context

    With Shakespeare only recently dead and Marlowe long in his grave, the groundwork had been laid for the expansion of infernalism across Europe. Marlowe’s objections to the use of Demonology to bolster the power of King and God had been washed away in blood under suspicious circumstances 40 years earlier, giving Shakespeare 20 years to preach the gospel of Infernalism. His lessons had taken hold, but the full benefits – magical and technological – that would flow from Infernalism, as well as its future challenges, were not yet known, and a diverse array of magical schools and colleges flourished throughout Europe. Their understanding of magic was fragmented and their power limited; Descartes had not yet written his Meditations or Principles, and the systematization of magic – as well as its restriction to a handful of schools – was not to come until the end of the century, under Newton and Leibniz, in the years after the Glorious Revolution in England. For the period from Shakespeare’s death until the English civil war, magic remained a kind of cottage industry, and its practitioners a diverse and unruly bunch.

    Settlements on Svalbard

    There are five main locations on Svalbard, numbered in the map above:

    1. Smeerenburg (“Blubber Town”): The Dutch settle at Smeerenburg in the summer, and hunt whales from here. Their activity is so frenzied, and the sights the settlement offers so disgusting, that the town was given the name “blubber-town” by those who work there. The Danes were driven out of Smeerenburg a few years earlier, and now only a few Danish traders visit during the period of activity.
    2. Danskoya (Ny-Alesund): The combined settlement of Danish and Dutch whalers forms the de facto political base for these two nations, as well as a resupply base for Smeerenburg, which is further north, and the official point of communication with the English and French whalers to the South. This town is equally frenzied in its pursuit of whale meat, but also contains some non-whaling related commercial activities, primarily hunting and trapping. It is also the first area of Svalbard to be turned into a permanent settlement. Just South of Danskoya is a small French settlement, called Refuge Francaise, and largely dependent upon Danskoya for protection and resupply.
    3. The Silent Tower: A group of Norwegian monks have set up a small monastery here, in the ruins of an ancient stone tower that no one seems able to account for. The tower provides excellent protection from the elements and seems to have a permanent supply of fresh water, and the monks are able to winter in the tower. They have been doing so for at least the last 10 years, and no one really knows anything about them: they have taken a vow of silence, and most people assume that they see the long months of winter darkness as an opportunity for contemplation undistracted by the concerns of the mortal world.
    4. Ice Fjord: The main base of the London Whaling Company, and also the unofficial English government outpost. The Ice Fjord base has the best weather conditions in summer and is also blessed with the permanent monastery on its northern side. The London company wrested this base by force from the Danes a few years earlier, and although Danish boats may now dock here and some traders come and go, there is a tacit agreement that they will engage in no whaling south of Prins Karls Forland, giving the British free rein over the whole south-western half of Spitsbergen. This doesn’t mean they don’t come into conflict, of course.
    5. Bell Sound: The base of the English Muscovy Company, famous for having opened up trade with the Russians, but also for having lost a major sea battle with the London company a few years ago and having been driven into Bell Sound, a much less profitable whaling location than Ice Fjord. The two companies regularly come into conflict. There are rumours that the Muscovy Company has begun to focus on overland exploration, and may also be prospecting inland of its camp, but of course no one knows anything about the commercial activities of this company.

    Aside from a few small survival huts set up in between the main outposts, these are the only established settlements on the island. Until 1635 the island was completely silent and dark in winter, save for the Silent Tower; it becomes a hive of frenzied activity in summer, focused on the mass slaughter of the whales that throng to the island. Against this backdrop various tales of murder, piracy, industrial espionage, sabotage and theft will be played out every summer. Anyone who survives the summer will leave the island rich with whale oil, but the death rate, like the stakes, is high.

    The First Adventure

    In 1634 the Danish wintered for the first time in their temporary settlement at Danskoya. The first winter squad consisted of only seven men, well supplied and dug into a deep and well-built shack. When the first Danish explorers arrived in spring 1635 the hut was empty, the men all gone, and some signs of a struggle could be seen. The Danish are concerned that one of the other companies on the island also over-wintered there, and launched a daring mid-winter raid to kill the Danish crew. If so, this has alarming implications both for what the other companies are willing to do and for their winter-survival technology. The Danish whaling company needs to send a squad of adventurers to Spitsbergen to investigate who did it and how. Once they know this they are to kill the people responsible. They will travel there under the guise of guards for a Danish royal expedition, which aims to draw maps of the whole archipelago over the next few summers. This expedition will spend the first summer traveling up the west coast conducting initial soundings and exploration, and so the PCs will be able to visit every settlement over the course of a few weeks, giving them a good sense of who is where and what they are doing. With the cartographer as cover, they can then visit any settlement they need to for further investigations.

    Simple, surely?

  • This is to be my last post on what I’ve learnt from John Dower’s War Without Mercy, and it is also to be my most speculative. Did the feverish anti-Japanese propaganda of the Pacific war era have any influence on the allies’ decision to engage in large scale bombing of urban areas in Japan, and/or their decision to use nuclear weapons? In this sense I’m not interested in whether these tactics were “right” or “wrong,” though I think we can all take it as read that a decision to drop a nuclear weapon on a city is definitely wrong in anything except the most extreme of circumstances. My question is more about whether these decisions (which remain controversial), and our subsequent interpretation of them, are clouded by the propaganda that was being used at the time, and by the general beliefs about Japanese and allied behavior in the war, as they existed then and exist now.

    I have always accepted what for this post I will call the “standard” view of the urban bombing campaign and the nuclear attacks: that in the absence of convincing proof that they would be destroyed as a nation the Japanese were not going to surrender, and were going to fight a long and protracted military campaign that would lead to the deaths of millions of Japanese and potentially hundreds of thousands of allied soldiers. In the standard view, the allies discovered on Okinawa that the invasion of the mainland was going to be a hideous affair, and decided to use terror bombing to bring the war to a close so that they didn’t have to expend so many lives. This view can even be extended to claim that the bombing was for the good of the Japanese too: I don’t think it’s hard to find examples of people saying that fewer civilians died in the bombing campaign than would have died if the allies had invaded the mainland.

    I have also read Dresden, which contains a passionate defense of the terror bombing of German cities on strategic grounds and argues that the frantic German efforts to defend major cities represented a huge drain on their military resources and hastened the end of the war. I’m inclined to accept this view of the strategic value of the terror bombings of Germany, and against the backdrop of all the horrors of that war I can understand why Stalin was pleading with the allies to do more of the same. But just because it worked in Germany doesn’t mean it was strategically necessary in the Pacific, and my suspicion is that decisions about when to start the bombing, how intense to make it, and why it was necessary, were influenced by the extreme propaganda about Japan. We have established that there was an eliminationist sentiment to this propaganda, that it was extremely racist and that the underlying principles of the propaganda were believed by the public and war planners alike. We also know that the allies got up to all manner of nasty war crimes in the Pacific, were not particularly inclined to see the Japanese as human, and that just as their behavior towards the Japanese was different to their behavior towards the Germans, so was their propaganda. So it doesn’t seem a stretch to me to imagine that the allies were also inclined to favor brutal tactics, and that decisions about the necessity of these tactics would be colored by some genuinely held beliefs about how unreasonable, crazy, childish and brutal the Japanese were. Also underlying the allied response to the Japanese was a need to remind the other “sub-humans” of the Pacific that rising up against the accepted international order is a very bad idea, and a fear that the Japanese “lesson” might be learnt by others in Malaysia and Indonesia. There are also a few examples from Dower’s book of specific beliefs about the unwillingness of the Japanese to surrender, and specific actions taken by the allies that suggest that the terror bombings weren’t embarked on reluctantly or purely for military/strategic reasons. I’ll cover these first.

    Beliefs About the Chances of Surrender

    The allies based their understanding of Japanese war-time thinking on a whole suite of crazy sociological theories about the Japanese psyche: that the nation was stuck in a child-like stage of development, that they were crazy, that they could not be reasoned with, and that they could not be trusted. Many allied planners seemed to think that the Japanese would use any kind of honourable or conditional surrender as a chance to regroup before attacking again, and the Japanese were generally viewed as treacherous and shifty. Dower describes the generally held view that the Japanese would need to be thoroughly defeated, possibly “to the last man,” because their nation had a suicide psychology and needed a “psychological purge.” Allied planners may have expected the Japanese to behave as a nation the way they (also erroneously) believed Japanese as individuals behaved, preferring suicide to surrender. Furthermore, Japanese treachery and savagery meant that only by the complete destruction of their current order could the Japanese desire to dominate Asia be prevented. Allied propaganda also maintained that the Japanese were “patient” and sinister (common traits ascribed to Orientals) and would happily wait 100 years to launch another war of domination, as Germany had done after World War 1, and so the only way to prevent them going to war again was their complete destruction. This view is particularly interesting because there really was no historical basis for thinking that the Japanese had a long-standing interest in dominating their region – they had chosen isolation over expansion, and their first modern international military campaigns, against China in 1894 and Russia in 1904-5, were recent developments. The allies were nonetheless willing to believe that the war represented a manifestation of some constant belief in Japanese culture.

    Lack of Interest in Surrender

    In addition to a general belief that the Japanese did not surrender, allied soldiers and their leaders did not show much interest in obtaining surrender from their enemies. In military engagements allied soldiers would kill enemy soldiers who did surrender, or would refuse to accept a surrender and force Japanese soldiers to fight on to their deaths. Dennis Warner reports this exchange between two high-ranking officers on Bougainville:

    “But sir, they are wounded and want to surrender,” a colonel protested [to a major general] at the edge of the cleared perimeter after a massive and unsuccessful Japanese attack.

    “You heard me, Colonel,” replied [the major general], who was only yards away from upstretched Japanese hands. “I want no prisoners. Shoot them all.”

    They were shot.

    Accounts from Marines on Okinawa suggest the same behavior there, and not just towards soldiers: marines also killed civilians. This account from a war correspondent summarizes the battlefield philosophy of the Americans:

    What kind of war do civilians suppose we fought, anyway? … We shot prisoners in cold blood, wiped out hospitals, strafed lifeboats, killed or mistreated enemy civilians, finished off the enemy wounded, tossed the dying into a hole with the dead, and in the Pacific boiled the flesh off enemy skulls to make table ornaments for sweethearts, or carved their bones into letter openers.

    This was published in The Atlantic Monthly in 1946, when the memories and philosophies of the war were still clear in people’s minds and admitting such atrocities was still acceptable. By now, of course, we look back on our soldiers as having fought for a noble cause, and no longer discuss the barbarity of the time. It’s clear from these accounts that the mistreatment of prisoners and refusal to accept surrender crossed the service branches (navy, air force and army) and was found at all levels of command. It’s also clear that the blood-letting on Okinawa was not entirely the fault of Japanese unwillingness to surrender, which suggests that whatever judgments military planners were making about a battle on the mainland, the death tolls they were anticipating were at least partly a product of their own soldiers’ misconduct. With such a lack of interest in either accepting surrender or treating the enemy population kindly, perhaps they were inclined to see a protracted campaign of urban destruction as a good thing on its own terms?

    Destruction for its Own Sake

    The saddest example of this interest in destruction as an end in itself is the final air raid on Tokyo. This happened on the night of August 14th, just hours before the Japanese officially surrendered, and when everyone on both sides knew the surrender was going to happen. The raid was the biggest of the war, consisting of 1014 planes, and suffered not a single loss. The planes had not yet returned to their bases when Japan’s unconditional surrender was announced. There is no chance that this raid was necessary, or that even a single death it caused could possibly have advanced the end of the war by even a heartbeat. It is perhaps the clearest example of simple cruelty on the part of the allies, in which a city was destroyed merely for the sake of it. From this act we can see that the allies valued destruction for its own sake, and were acting on Churchill’s demand to lay all the cities of Japan to ash, even where they didn’t need to.

    The Question of the Bombings

    This leads us to the question at the heart of this post: could the allies have negotiated an end to the war in some other way, without the use of terror bombing and atomic weapons; could they have used less terror bombing and no atomic attacks? Were their decisions driven by a desire to destroy as much of Japan as possible, rather than purely strategic concerns? And if their decisions were based on a genuine belief that the Japanese would not surrender and would fight to the last, to what extent was that belief correct, and was it at least partially clouded by their own stereotypes of and fantastic notions about the Japanese psyche? What portion of the decision to destroy Hiroshima and Nagasaki was strategic, what portion was cruel, and what portion was based on misconceptions about the Japanese psyche that were, ultimately, founded in racism?

    The decision to end the war in this way may also have been driven by the desire to assert colonial power over Asia – a conditional surrender would probably have meant allowing the Japanese to retain some colonial possessions, and the implication from this would be that Asia could control its own destiny. Furthermore, they needed to end the war before the Soviets invaded Japan. But it seems to me that there are other approaches they could have taken: for example, after Okinawa they could have ceased all aggressive action targeting civilians, used their overwhelming naval power to enforce Japan’s isolation, and just waited them out. I don’t know, but I have never heard from any source that the allies genuinely attempted to negotiate surrender before the bitter end. One doesn’t hear stories of attempts to subvert the military clique in charge, to foment civil disorder, or to use captured Japanese soldiers as propaganda tools – it’s as if they just all assumed such actions would be impossible, and I think these assumptions may have been wrong.

    In essence then, I strongly suspect that much of the barbarity of the final year of the war, and especially the terror-bombing campaign, was unnecessary and was driven by a complex mix of racist and colonialist beliefs. I think the allies may have been able to negotiate a different end to the war, but they didn’t believe it was possible due to racist assumptions about “orientals,” and they didn’t want to because they wanted to punish the Japanese and inflict a defeat on them that would send a signal throughout Asia. I think this means that, while in retrospect the bombing of Japan has been painted as a necessary tactic, it can only be portrayed as such if we accept the racist premises of the propaganda of the time, and overlook the wanton cruelty of the allied forces. Is a more realistic historical interpretation that allied thinking about Japan and the Japanese was deeply flawed, and the policy of mass destruction that “won” the war was both unnecessary and heavily influenced by this same racist worldview?

  • The Chief Whip insists you toe the party line…

    Yesterday Australia passed a carbon pricing scheme, over the strenuous objections of the opposition. In fact, the opposition’s objections were so strenuous that their leader, Tony Abbott, has promised a “blood oath” to revoke the legislation.

    I guess he’s thinking of a blood oath in the demonological sense of signing a contract in blood to make it more binding. It’s the natural extension of Tony Abbott’s rather unfortunate recent admission that the only promises he makes that can be trusted are promises that are written down. This surely means that promises written in blood are much more manly and believable than those written in mere ink.

    This opens up a few worrying questions for me:

    • Does Tony Abbott secretly believe that contract law should be changed to make blood-based signatory agreements more powerful, and if so how?
    • Is this an extension of his willingness to “sell his arse” to a willingness to “sell his soul”? And if so what kind of policy-making process does this represent?
    • Given the paucity of soul in the nasty little blighter, and given he can only sell it once, how much policy benefit can we gain from a government that functions in this way?
    • Given he used to be a monk and now he’s become a demonologist, is this further evidence that he’s not really very trustworthy?
    • Given he used to be a monk and now he’s become a demonologist, is this more of an indictment of him or the catholic church?
    • This kind of language seems very fitting for a role-player, something I never suspected Abbott to be capable of. Is he actually a fantasy role-player, and if so is his party aware of how damning this is for his electoral prospects? Do they seriously think the mortgage belt is going to vote for someone that nerdy?
    • If he’s a role-player, what system does he use, is he a GM or player, and where does he fall on the Gamist-Narrative-Simulationist debate?

    The obvious good point of this “blood oath” is that he has finally made his position on demonology explicit. The current minority government is in the hands of the Australian Labor Party, who are widely rumoured to have sold their souls en masse to satan in order to gain admission to the party (or at least, to get the numbers for pre-selection). It’s also generally accepted that they will eat their own young and that no act of treachery is too low for them. Of course rumours have long abounded that the Liberal Party are just as bad, but their god-fearing, family-loving image has saved them from general acceptance of this rumour. At least now Abbott has admitted that, yes, shock! everyone in politics is up to their necks in satan’s semen, and we can all heave a sigh of relief and get back to analyzing the polls.

    Politically this pledge could be a disaster for Abbott. As if suspicions of satanism and (omfg!) role-playing were not bad enough, it will probably be very hard to undo the legislation without revoking the tax cuts that came with it, which is obvious political suicide. Furthermore, the only practical way he can revoke it is to get repeal through the Australian Senate, which is currently controlled by the realms of faerie (the Greens). Long-standing agreements between the Seelie Court, the CIA and Rupert Murdoch mean that the only way Abbott will be able to drive through his legislation is likely to be a double-dissolution election, which means that Abbott will have to go to the next election with the pledge that he will “hold another election within 6 months of this one.” That’s not going to be popular in a country where only two things are compulsory: apathy and voting.

    While overall it’s nice to see Abbott finally embracing the inevitable spiritual compromises necessary to succeed in Australian politics, and being so open about it, I don’t think this is going to be good for the party. Also, how is he going to manage to resist Satan’s demands for compulsory abortion and gay marriage?

  • Today I am celebrating my first publication in my new job, and since it’s about a topic I’ll probably be coming back to a lot in the next year, I thought I’d cover it here. It’s not much of a publication – just a letter in the journal Addiction – but it covers what I think is an interesting topic, and it shows some of the complexity of modern health policy analysis. The article, entitled Equity Considerations in the Calculation of Cost-Effectiveness in Substance Use Disorder Populations[1], can be found here[2]. It’s only 400 words, so I thought I’d explain what I’m trying to say in more detail here; the background may also be useful for some future material I’m hoping to post. I’ll give a brief overview of the “cost effectiveness” concept, explain the problem I’m addressing in the paper, and then give a (slightly) mathematical example in extremis to show where cost-effectiveness analysis might lead us. I’ll also add some final thoughts about cost-effectiveness analysis (CEA) in fantasy populations, with perhaps a final justification for genocide. Or at least an argument for why Elves should always consider it purely on cost-effectiveness grounds.

    Cost-Effectiveness Analysis, QALYs and the IDU Weight

    Traditional epidemiological analysis of interventions is pretty simple: cholera, for example, kills X people, so let’s prevent it. However, we run into problems when we have limited resources and need to compare two different interventions (e.g. turning off a pump vs. handing out disinfectant pills). In this situation we need to compare which intervention is more effective, and we do this by assessing the cost per life saved under each intervention – if turning off the pump is cheaper and saves more lives, then it’s better. This is usually represented mathematically as the ratio of the cost difference between the intervention and some control (the incremental cost) to the effect difference (the incremental effect); this ratio is the incremental cost-effectiveness ratio (ICER). This is what I used in assessing clerical interventions to prevent infant mortality. However, when we are dealing with chronic diseases the incremental effects become harder to measure, because a lot of interventions for chronic illness don’t actually save lives: they extend life, or they improve the quality of life a person experiences before they die. In this case we use Quality-Adjusted Life Years (QALYs). These are usually defined by conducting a study in which people are asked how they would weight a year of their life under some condition relative to a year in full health – or, more usually, relative to their health as it is now. For example, blindness in one eye might be given a QALY weight of 0.9 relative to being fully-sighted. There is some interesting debate about whether these ratings should be assessed by those who have the condition or by the community as a whole; the logic here can be perverse and complex and is best avoided[4].
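
    As a minimal sketch of the ICER definition (the numbers in the example are purely illustrative; “effects” can be lives saved or QALYs gained):

        def icer(cost_intervention, cost_control, effect_intervention, effect_control):
            """Incremental cost-effectiveness ratio: incremental cost per incremental effect."""
            return (cost_intervention - cost_control) / (effect_intervention - effect_control)

        # e.g. a treatment costing $10m more than control and saving 4 more lives:
        print(icer(12e6, 2e6, 5, 1))   # $2.5m per incremental life saved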

    So in essence, you rate one year of life as having the value of 1 when fully healthy, and then other states are rated lower. We can use the issue of Voluntary Testing and Counselling as an HIV intervention to see how this works.

    Example: Voluntary Testing and Counselling

    It’s fairly well-established that good post-test counselling can successfully reduce a person’s risk behavior, so if you can get people at high risk of HIV (e.g. men who have sex with men (MSM)) to undergo voluntary testing, you can catch their HIV infection at an early stage and get them to change their behavior. Done quickly and effectively, this will in theory reduce the rate at which HIV spreads. Furthermore, catching HIV earlier means initiating treatment earlier (before it becomes symptomatic), and early treatment with anti-retroviral drugs leads to longer survival[5]. However, discovering one is HIV positive is not a pleasant experience, and knowing you are HIV positive lowers your overall quality of life even if the disease is asymptomatic. So if the survival benefits of early testing don’t outweigh the loss of utility, it’s not worth it. Ten years ago, when treatment extended your life by perhaps 10% but a positive diagnosis reduced the quality weight of your remaining years from 1 to 0.9, the benefits might not have outweighed the costs. Additionally, treatment is expensive, and it might be more cost-effective at a population level to run health promotion campaigns that reduce risk behavior: reduced risk behavior means fewer infections, which means fewer QALYs lost to HIV.
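
    To make those (made-up) numbers concrete: an untested person with L years left at a quality weight of 1 accrues L QALYs; a tested person lives 10% longer at a weight of 0.9, for 1.1*L*0.9 = 0.99*L QALYs. On these figures testing loses QALYs outright, before we have even counted its cost.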

    In essence, it’s a kind of rigorous implementation of the old bar room logic: sure I’d live longer if I didn’t drink, but why would I want to?

    Recently, however, some analysts have introduced a sneaky new concept, in which they apply a weight to all QALY calculations involving injecting drug users (IDUs). The underlying logic is that IDU is a mental illness, and people with a mental illness have a lower utility than people without. This weight is applied to all QALY calculations: a year of life as a “healthy” IDU is assigned a value of, say, 0.9, and all other states (the various stages of HIV, for example) are given a value of 0.9 times the equivalent values for a non-IDU.

    What is Wrong with the IDU Weight

    This has serious ramifications for cost-effectiveness and, as I observe in my article, fucks up any attempt to get a cost-effectiveness analysis past the British NICE, since it breaks their equity rule (which exists for good reason). In addition to its fundamentally discriminatory nature, it’s also technically a bit wonky, and in my opinion it muddles cost-effectiveness analysis (“which treatment for disease X provides better value for money?”) with cost-benefit analysis (“who should we spend our money on?”). It’s fine to do the latter as well as the former, but to blur them together implicitly is very dangerous.

    Technical Wonkiness

    Suppose you have a population of IDUs with a weight of 0.9, and you need to compare two interventions to prevent the spread of HIV. One possible intervention is methadone maintenance treatment (MMT), which is very good at reducing the rate at which IDUs take injecting risks. You want to compare this with some other, broader-based intervention (e.g. voluntary testing and treatment, VTT, which also reaches MSM and low-risk people). Say the average QALY weight for an MSM with asymptomatic HIV is about 0.9 (to pick a common value). Because you’ve applied the weight to IDUs but not to (e.g.) MSM, the average QALY weight for an IDU with asymptomatic HIV is 0.9*0.9=0.81. Now suppose you implement MMT: this intervention reduces the risk of transmission of HIV, but it also treats the IDU’s mental illness, so the weight drops away for every successfully-treated IDU and you gain 0.09 QALYs per IDU treated; you then gain a further 0.1 QALYs for every case of HIV prevented by the MMT intervention. This means that VTT has to be almost twice as effective as MMT to be considered cost-effective, if they cost roughly the same amount. That is, the cost-effectiveness of MMT is exaggerated relative to VTT by dint of your weighting decision – even though half of the benefits credited to it have nothing to do with reducing the spread of HIV (which means MMT can prevent half as much HIV as VTT and still show the same QALY gains). On the other hand, if you implement an intervention that doesn’t treat IDU but does prevent HIV in IDUs (such as needle exchange), its effectiveness will be under-estimated due to the IDU weight. In both cases, introducing the cost-benefit element to the analysis has confused your outcome.
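
    Here’s a minimal sketch of this accounting in Python (the function and numbers are my own, lifted from the example above, purely to make the double-counting visible):

        # Illustrative numbers only, taken from the example in the text.
        IDU_WEIGHT = 0.9   # the contested mental-illness weight for IDUs
        HIV_WEIGHT = 0.9   # QALY weight for asymptomatic HIV in a non-IDU

        def qaly(hiv=False, idu=False):
            # Annual QALY weight under the IDU-weighting scheme.
            q = HIV_WEIGHT if hiv else 1.0
            return q * IDU_WEIGHT if idu else q

        # MMT credit per HIV-positive IDU: curing the addiction removes the weight...
        cure = qaly(hiv=True) - qaly(hiv=True, idu=True)         # 0.81 -> 0.90, +0.09
        # ...and each case of HIV then prevented in a weight-free ex-IDU adds:
        prevent_treated = qaly() - qaly(hiv=True)                # 0.90 -> 1.00, +0.10
        # VTT credit per case of HIV prevented in an (unweighted) MSM:
        prevent_msm = qaly() - qaly(hiv=True)                    # +0.10
        # Needle exchange prevents HIV in still-weighted IDUs, so it gets only:
        prevent_idu = qaly(idu=True) - qaly(hiv=True, idu=True)  # +0.09

        print(round(cure, 2), round(prevent_treated, 2),
              round(prevent_msm, 2), round(prevent_idu, 2))

    On these numbers MMT banks 0.09 QALYs per treated IDU before it prevents a single infection, while needle exchange is docked a slice of its gains for serving the “wrong” population.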

    Opening Pandora’s Box

    The real problem with this IDU weight, though, comes if we extend the logic to all cost-effectiveness analyses where identifiable groups exist. For example, we could probably argue that very old people have lower QALYs than younger people, so any intervention which affects older people would gain less benefit than one which affects young people. An obvious example is anything to do with road accidents: consider mandatory eye testing vs. raising the minimum driving age. Both would reduce rates of injury (and thus gain QALYs), but the former primarily affects older people, and so would be assigned lower effectiveness even if it prevented a far greater number of injuries[6]. When we start considering these issues we find we’ve opened Pandora’s box, and in particular we’ve taken ourselves to a place that no modern health system is willing to contemplate: placing a lower value on the lives of the old, infirm, or mentally ill. As is often the case with social problems, the marginalized and desperate (in this case, IDUs) are the canaries in the coalmine for a bigger problem. I don’t think any health system is interested in going down the path of assigning utility weights to otherwise healthy old people (or MSM, or people with depression, or…).

    An Example in Extremis

    Let’s consider an obscene example of this situation. Suppose we apply a weight, let’s call it beta, to some group of recognizable people, whom we call “the betamaxes.” Now imagine that these people are the “carriers” for a disease that doesn’t afflict them at all (i.e. doesn’t change their quality of life) but on average reduces the quality of life of those who catch it to a value alpha. Suppose the following conditions (for mathematical simplicity):

    • The people who catch the disease are on average the same age as the betamaxes (this assumption makes comparison of life years easier; breaking it simply applies some ratio effects to our calculation)
    • The disease is chronic and incurable, so once a member of the population gets the disease their future quality of life is permanently reduced by a factor of alpha
    • One betamax causes one case of disease in his or her life
    • Preventing the disease is possible through health promotion efforts, but costs (e.g.) $10000 per disease prevented
    • Betamaxes are easily identifiable, and identifying and killing a betamax costs $10000

    I think we can all see where I’m going here. Under these (rather preposterous) conditions, killing a betamax and preventing a case through health promotion cost the same $10000, but killing destroys only beta QALYs per year of the betamax’s remaining life while gaining 1-alpha QALYs per year for the would-be victim. So whenever alpha < 1-beta – that is, whenever the QALY loss the disease inflicts exceeds the weighted value of a betamax’s year of life – identifying and killing betamaxes comes out of the analysis as a cost-effective intervention in its own right, and the harsher the weight, the closer it comes to matching the health promotion campaign dollar-for-dollar. Obviously permanent quarantine (i.e. institutionalization) could also be cost-effective.
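
    And a toy version of the betamax calculation (my framing of the conditions above; everything is net QALYs per $10000 spent, per year of remaining life):

        # Toy model of the betamax example: net QALYs per $10000 spent,
        # per year of remaining life. alpha and beta as defined in the text.
        def promotion(alpha):
            # Health promotion: one case prevented; the would-be victim
            # lives at quality 1 instead of alpha.
            return 1 - alpha

        def killing(alpha, beta):
            # Killing: the same case prevented, but a life the analysis
            # values at beta per year is destroyed.
            return (1 - alpha) - beta

        for alpha, beta in [(0.5, 0.9), (0.5, 0.3), (0.2, 0.3)]:
            print(alpha, beta, round(promotion(alpha), 2),
                  round(killing(alpha, beta), 2))
        # Killing shows a net QALY gain exactly when alpha < 1 - beta: the
        # harsher the weight, the more "cost-effective" the genocide becomes.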

    This may seem like a preposterous example (it is), but there’s something cruel about these calculations that makes me think this weighting process is far from benign. Imagine, for example, applying relative QALY weights to people with dementia and their carers; to people with schizophrenia and those injured in violence related to mental health problems; or to paedophiles. I think this is exactly why health systems avoid applying such weights to old people or the mentally ill. So why apply them to IDUs?

    Cost-Effectiveness Analysis in Fantasy Communities

    There’s an obvious situation where this CEA process breaks down horribly: if you have to apply it to elves. Elves live forever, so theoretically every elf is worth an infinite number of QALYs. This means that if a chronic disease is best cured by drinking a potion made of ground-up human babies, it’s always cost-effective for elves to do it, no matter how concentrated the baby souls have to be. If a human being should ever kill an elf due to some mental health problem, then it’s entirely reasonable for the elven community to consider exterminating the entire human community just in case[7]. Conversely, any comparison of medical interventions for chronic disease amongst elves on cost-effectiveness grounds is impossible, because all treatments ultimately produce an infinite gain in QALYs: spending the entire community’s money on preventing a single case of HIV has an incremental cost-effectiveness ratio of 0 (it costs a shitload of money, but saves an infinite number of QALYs). But so does spending the entire community’s money to prevent a single case of diabetes. How to compare?
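
    (To spell out the arithmetic: ICER = cost/QALYs gained, and for an immortal patient the QALYs gained by any permanent improvement are infinite, so the ICER is 0 no matter what the cost. Every possible intervention ties for first place, and the ranking collapses.)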

    Similar mathematical problems arise for Dwarves, who have very long lives: you’d have to give them a weight of 0.25 (for being beardy bastards) or less to avoid the same problems vis-à-vis the use of humans in medicinal treatment that arise with elves.

    This might explain why fantasy communities never become post-scarcity societies. When you have an infinite lifespan, cost-effectiveness analysis can never single out any quality-of-life intervention as worth funding. You might as well just live in squalor and ignorance, because doing anything about it is a complete waste of money.

    Cost Effectiveness Analysis as a Justification for Goblin Genocide

    Furthermore, we can probably build a mathematical model of QALYs in an AD&D world: some people have better stats than others, so they probably have better quality of life. We could construct a weighting function in terms of the 6 primary stats, and obviously goblins come out of this equation looking pretty, ah, heavily downward-weighted. Given that they lead short and brutish lives, and are prone to kill humans when the two communities interact, the obvious implication of weighting their QALYs by this mathematical model is pretty simple: kill the fuckers. The QALY gains from doing so (and the low cost, given the ready availability and cheap rates of modern adventurers) make it a guaranteed investment. In fact, compared to spending money paying clerics to prevent infant mortality, it could even be cost-effective.
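
    Purely for fun, a sketch of what such a weighting function might look like – the stats, weights and functional form are entirely my own invention:

        # Entirely hypothetical: QALY weight as the average of the six AD&D
        # stats, normalized so an all-18s character scores 1.0.
        def qaly_weight(str_, dex, con, int_, wis, cha):
            return (str_ + dex + con + int_ + wis + cha) / (6 * 18)

        human  = qaly_weight(10, 10, 10, 10, 10, 10)  # the average peasant: ~0.56
        goblin = qaly_weight(8, 12, 8, 6, 6, 5)       # my guess at a goblin: ~0.42
        print(round(human, 2), round(goblin, 2))
        # Fold in the short, brutish lifespan and the occasional murdered
        # farmer, and the adventurers start to look like a public health program.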

    Conclusion

    Cost-effectiveness analysis needs to be applied very carefully to avoid producing perverse outcomes, and the logical consequences of applying weights to particular groups on the basis of their health state are not pretty. We should never weight people “objectively” to reflect their poor health in dimensions other than the one under direct consideration in the cost-effectiveness analysis, lest we end up applying a cost-benefit analysis to a cost-effectiveness question. And even if we are comfortable with a “discriminatory” weight, of the “oh come on! they’re just junkies!” sort, it can still have perverse outcomes, leading to over-estimates of the cost-effectiveness of treatments for the mental illness itself compared to other interventions. Finally, we should never ever ever allow this concept to become popular amongst elven scholars.

    I’ll be coming back to this topic over the next few months, I think, in a way I hope is quite entertaining for my reader(s). Stay tuned…

    fn1: The slightly cumbersome title arose because the journal now doesn’t like to refer to “substance abuse” or “substance abusing populations” so I had to change it to the un-verbable “Substance Use Disorder”

    fn2: If you download the pdf version it comes with a corker of a letter about French tobacco control policy[3]

    fn3: Which is a contradiction in terms, surely?

    fn4: For a full explanation of this and other matters you can refer to the famous text by Drummond, which is surprisingly accessible

    fn5: In fact we are now looking at very long survival times for HIV – up towards 30 years, I think – provided that we initiate good quality treatment early, and so it is no longer necessarily a death sentence, if one assumes a cure will be available within the next 30 years

    fn6: This applies even if you ignore deaths and focus only on short-term minor injuries, and thus avoid the implicit bias in comparing old people with young people (interventions that save life-years in old people will always be less “effective” than those that save life-years in young people, unless the effect of the intervention is very short-lived, because old people have fewer years of life left to save).

    fn7: In fact you can go further than this. All you need is for an elven propagandist to argue that there is a non-zero probability that a single crazy human will kill a single elf at any point in the future, and the expected value of QALYs lost will always be greater than the QALY cost of killing all humans on earth, no matter how small the probability that the human would do this

  • A novel I picked up in (surprise!) Iceland, Zombie Iceland is exactly what it says, no more and no less. It’s the tale of a group of survivors in Iceland after a mysterious gas explosion at a local geothermal power plant turns the good folk of Reykjavik into zombies, written by a journalist and comedian called Nanna Arnadottir. The book is billed on its official website as a kind of kooky travel guide, and certainly contains a lot of interesting information (in footnotes) about Iceland, which is cool. In addition to wanting to give foreigners information about her country, Ms. Arnadottir seems to have an excellent nerdish pedigree, according to her website:

    The Nannasaurus is a small, bipedal nerdivorous dinosaur of the Theropoda genus native to Reykjavík, Iceland.

    The Nannasaurus’ distinctive features include small feet, tiny nose, as well as a decent rack and a sizable brain.

    The Nannasaurus’ diet predominantly consists of horror films, fantasy books, kókómjólk and whales tears.

    It is widely agreed among respectable scholars that the Nannasaurus’ dream is to own a pet Taun Taun and make goats cheese in her basement.

    Also she has a blog, though it doesn’t seem to be much in use.

    The basic theme of the book is your standard zombie fare: outbreak happens, people have to survive. The particular points that make it interesting are that a) it is set in Iceland – and not just nominally: individual streets and place names are given, and we even encounter a zombie Bjork – b) it sets the zombification in a particularly modern context and c) the lead character is self-consciously Zombie Aware: her childhood was spent playing a kind of zombie-preparation LARP with her dad, so their house is basically set up for a zombie plague and she has prepared herself for the inevitable…

    Which when you think about it is completely reasonable. There’s no chance of people in the modern world seeing the shambling dead in their street and not understanding the context: as a material threat, the zombie plague is not going to win through unpreparedness. So the lead character, Barbara, has a “bug-out bag,” a kind of survival backpack; and she and her dad have stocked their basement with food and an old generator, and even considered survival tactics. Unfortunately, they haven’t factored in the behavior of her antisocial sister Loa, or her idiot brother Jonsi.

    The book proceeds on this basis, and follows all the usual tropes, except that they happen in Iceland and are interspersed with a range of footnotes describing Icelandic life. There is also a moment of Icelandic cultural insertion, where one character’s death is described in a classic Icelandic poetic form. But this book’s biggest contribution to the zombie genre is its incorporation of this self-referentialism into the story. Everyone knows what a zombie is, no one is going to be surprised by zombification, and there is a lot of debate about the particulars of zombie science. This is exactly what I’d expect to happen. Furthermore, there is a chapter in which the zombie plague is tracked through facebook updates, which is exactly what one expects would happen in a modern plague. Google is no doubt already tracking flu alerts on Google+, and I bet they’re keeping an eye out for zombie alerts too.

    Unfortunately, these good points are somewhat impaired by the fact that the book is terribly written and one of the main characters, Loa, is completely awful. Kind of fun awful, but awful. This makes it kind of hard going at times. It’s a short book, however, and the zombie plague self-referentialism makes it interesting, as does the comedy aspect, so if you’re interested in spending 3000 ISK on a badly written book that has some interesting new ideas to add to the zombie-lite (Zlit?) genre, then I recommend it. Otherwise, you can probably skip it…

  • This is a novel about a magician-policeman set in modern London. The policeman, Peter Grant, is drafted from the normal police service to work for a special investigations department that consists of a single policeman, Inspector Nightingale, and takes on all the investigations into things that no one else believes are real. In order to work in this department, Grant must also become the apprentice to Inspector Nightingale, and thus also begins learning the rudiments of “modern” magic – that is, magic as systematized by Sir Isaac Newton back in the day.

    In essence, then, this is a kind of Harry Dresden story, but set in London rather than Chicago, and featuring a policeman rather than a private investigator. It’s the first, apparently, of a series. I hope no one from Chicago will be displeased with or misinterpret me when I say that London is a much more romantic and interesting setting for a novel of this kind than Chicago, and this is not the first novel to use London’s historical complexity and its modern multicultural mish-mash as a setting for the bizarre or the unusual: Gaiman’s Neverwhere and Mieville’s Un Lun Dun are two other good examples of stories in this field, and both draw heavily on London’s peculiar synthesis of the historical and the modern to lend their tales an additional edge of romance that more uniformly modern cities cannot offer. It’s particularly well-suited to a magical policeman story because, well, because London is a city full of crime and trouble. It has a violent and depressing history, and a violent and depressing present, which makes it a bad place to live but a very good place to set a fantastic story of this kind – especially since in novels all the little irritations of London life can be ignored, and the picture can be painted using the broad brushstrokes of history, crime, modernity and multiculturalism.

    Which is what we get in this book. Something is up in London, and Grant has to investigate a spate of murders connected to it. The something-that-is-up is connected to a violent grudge that has passed down through history, and is being played out in the very modern setting of post-2007 Covent Garden. There is also a conflict between the different rivers of London – whose spirits are personified in some amusing human forms, and appear to have come to an “arrangement” with the various departments and authorities of the British government. Grant is investigating all this while also studying magic as a new apprentice, trying to get laid, and trying to enjoy his new life as a freshly-graduated police constable. Much of the context of the story is very ordinary and very real – he goes to pubs I’ve walked past, in streets I’ve frequented, and talks of very recent real events that we’re all familiar with. The author also appears to be familiar with police culture and language, and we get a lot of very British policing attitudes coming through. Also, interestingly for a novel set in London, the author is very aware of the city’s overcrowded and multiracial culture, and this is very smoothly worked through the story so that, for those of us who have lived in London, it really does feel like the London we know rather than the sanitized all-cockney-all-the-time London that, say, the Imperial War Museum likes to present. The lead character is himself mixed-race – his mother West African, his father British – and grew up on a council estate in Peckham. Witness reports are particularly amusing because they present us with such a classic hodge-podge of London life as it is now. There’s a classic report of two hare krishnas beating up their troop leader: one is a New Zealander and the other is from Hemel Hempstead. It’s so mundane, and so spot-on in its mundanity, and I think this mundane realism serves to ground the magic and mystery of the story very well, so that you really can believe that someone like Peter Grant can learn to use magic and see ghosts and work for the Metropolitan Police Force.

    The book also shares with the Dresden Files a well-constructed (so far) theory of and structure for the magic Grant uses. It’s very different to Dresden’s, and a nice attempt to imagine how magic might feel when you work it – he talks of forms that you can feel in your mind and have to learn to understand like music. There’s also the first exposition I’ve ever come across of why magic might need to use language to be cast, and why it must be Latin at that. On top of that, the book also attempts to explain magic’s bad effect on modern technology, and Grant of course begins experimenting on this issue as soon as he discovers it, so that not only can we generate a working theory of why the problem happens, but he can use it as an investigative tool and find ways to safeguard his stuff. This is how I imagine a modern wizard would work, and it’s very well done in the story. His depiction of ghosts in modern terms, and his attempts to understand all of magic in terms of the language of computers and science that he grew up with, are also very interesting and, I think, quite a new take on the genre.

    The book is well written and overall the flow of the story is good, though I thought Grant’s first case was a trifle too complex near the end, and I’m not sure I understood the relationship between the rivers of London and the case Grant was working on – maybe that will come later. The characters are good and believable and the setting very powerfully like the London I know, but the author has woven into it all the powerful romance of the city we see in the history books, so that while you always feel like you’re in modern London you don’t forget that this is a London built on layers of history and rich with magic and power. I think in this use of setting the book is definitely superior to a Dresden novel (and, on balance, better written too), and gives a richer and more nuanced vision of a modern magician than Dresden does.

    Other comparisons to the Dresden Files also fascinated me while reading this book. The Rivers of London is to the Dresden Files as Coronation Street is to Beverly Hills 90210. In the Dresden Files, the creatures of faerie are always supernaturally beautiful and amazing, and they and the bad guys live in enormous wealthy villas. Dresden gets his girl and is constantly being offered sex by crazed sex-goddess super girls, and when he gets a dog it’s a great big supernatural hound of a thing that is more dangerous than most monsters. Finally, of course, there is a lot of heavy weaponry. In The Rivers of London, the spirits live in quite mundane buildings – one of the spirits of the river is a traveller – and the characters’ homes are nothing special. Grant doesn’t get his girl, and is either warned off the spirit girls or gets an erection around them while they completely ignore him. He also inherits a dog as a by-product of a crime scene, but it’s a stupid little ordinary lap dog that doesn’t have any special powers and is a bit overweight and not very helpful. And the only gun that appears in the story is fired once and then disappears; no one can find it, and its appearance is frankly shocking because people in London – even criminals – just don’t use guns. I’ve no doubt that things will get more upmarket as the books go on, but it’s an interesting contrast between the artistic styles of the two countries: just like in soaps and dramas, this story conveys that sense of humility and shame-faced shuffling it’s-not-quite-good-enough-is-it?, frayed-carpet and slightly daggy cardigan atmosphere that the British are willing to put into presentations of their own culture. In short, it lacks the brashness of a similar American story. Usually I’m inclined to prefer the brash and the beautiful in American stories to the grotty and mundane in British ones, but in this case I think I like the ordinariness that bleeds out of the pages of this book. I think it helps me to understand Grant as a newly-minted wizard cop better than I understood Harry Dresden.

    This is an excellent story, overall, and if this series improves as it goes along (which most series like this do, I think) then it’s well worth getting into. I heartily recommend this tale!

  • Not Marion Zimmer Bradley

    The Warlord Chronicles are a low fantasy Arthurian reinterpretation by Bernard Cornwell, author of the famous Sharpe series. They’re also an attempt at historical fiction, setting the Arthurian legend in a gritty, realistic depiction of Britain during the 5th century AD, based on the few historical accounts of the time that mention Arthur and/or the wars he may have been involved in. This is an excellent and challenging project, because Britain in 500 AD was a nasty, poor place that doesn’t much resemble the settings of high fantasy at all, and of course it’s difficult to write an Arthurian story with a solid historical base while also incorporating the fantasy elements of swords in stones, druidic magic and ladies in lakes. Cornwell treads a nice line here, and negotiates a lot of complex story elements very well.

    The story is narrated by the orphan Derfel, raised by Merlin and uniquely blessed because he survived a druid’s death pit when he was a child. Derfel goes from strength to strength over the three books and becomes a friend, ally and confidante of Arthur, who in this story is Uther Pendragon’s spurned bastard son. Derfel is writing the story from the vantage point of old age, and is simultaneously sad, cynical and romantic. He describes an epic set against three major political conflicts, and driven by the personal conflict Arthur suffers for his whole life. The political conflicts are the war with the Saxons, who are arriving in boats on the eastern shores of England at a high rate, and every year attempt to capture land from the British; the conflict between the British kings over petty issues of land, wealth and old grudges; and the conflict between pagans and Christians within England. The latter two conflicts are undermining the former, until Arthur takes charge and tries to unite all of Britain against the Saxons. Unfortunately Arthur himself is riven with conflict: he wants peace, but he values oaths over personal ambition and will not betray the oath he made to Uther Pendragon to respect Uther’s son Mordred as king of all Britain. The conflict between Arthur’s different oaths, his own unwillingness to take the throne, and his personal demons is best described by Guinevere: she characterizes Arthur as a wagon led by the twin horses of ambition and conscience, and they won’t pull together. Thus the whole tragedy of Arthurian Britain is the story of Arthur’s attempts to navigate these three political conflicts while struggling with his own personal problems. In this sense it can seem like a classic exposition of the “great man” theory of history, but on another level it’s easy to see that almost none of Arthur’s decisions, ultimately, make much difference: he’s constantly negotiating the minefield of his allies’ and followers’ personal ambitions, their foolish mistakes and cunning schemes, and a variety of external disasters. This makes for a very rich and complex historical drama.

    The problem of druidic magic and the conflict between druids and Christians is also handled very nicely in the story. We repeatedly see situations where Merlin or his acolyte Nimue use magic, and we’re expected to believe it’s real and its effects are genuine; but at the same time it’s also clear that Merlin is a famous trickster, and many of the effects of magic either depend on the imagination of the targets, or could have occurred by luck. Merlin and Nimue’s greatest enchantments can easily be explained away as natural phenomena, but from the perspective of a 5th century warrior they are clearly powerful magic. This is a very cool trick, and Cornwell carries it off well through the central imaginative achievement of his book: he really gets you to believe that you are reading about 5th century warriors. With the possible exception of Arthur and Derfel (see below), you really do feel that the characters are superstitious, ignorant pagans, and that their belief systems and ideals are radically different from ours, so you get a strong impression of reading about a different world. This is a hard thing to achieve, since it means that at times Cornwell must have had to ruthlessly stamp out his own conscience and desire to feel sympathy with his characters, because from a modern perspective the people of 5th century Britain are a generally repulsive lot, and the world they have created is harsh and cruel. In making us believe this world, Cornwell had to be hard on his characters, but the result is a cultural milieu that is believably pagan, barbaric and superstitious, so that when Nimue enchants a badger’s head and points it at her enemies you can really believe that it affects them, even if the magic is not real.

    But this construction of an alternative world is also perhaps the biggest flaw of the book, for two reasons: the world itself is unpalatable, and I don’t believe Cornwell was able to extend it to include the narrator and the key character. I have complained elsewhere about the rape and violence in these “gritty,” “realistic” worlds, and Cornwell’s world has this in spades. Essentially the practice of war is as follows: when you win a battle you kill all your enemy’s soldiers, and when you capture their families you rape the women, and anyone you don’t want to enslave you kill immediately. Before battle you must ritually insult and provoke your enemies, and a routine part of this process is a series of inventive promises about what you’re going to do to your enemies’ women. Throughout the story this process is repeated, and the threat of what will happen is constantly being raised in strategic discussion. Cornwell doesn’t handle this in a salacious or erotic way: it’s a background fact of war, handled in planning (families are sent away, and battle tactics are designed to preserve them), accepted by all and sundry as a terrible thing but seen as the right of victors, always avenged when the chance presents itself, and not really sexualized at all. Also, as far as I could tell, rape is not presented as a man’s right in peacetime, and the threat of rape never enters the story outside of battle, except in the matter of arranged marriages (there are many jokes, often cynical, about the fate of young women engaged to older men; but again it is treated as a sad fact of life and not eroticized). The overall effect of this is to build a very believable image of a world with completely different morals to our own, and it’s also clear that there are strong environmental pressures forcing some of this brutality: like lions, these primitive peoples don’t have the security of food supply that enables them to show mercy to the vanquished, and they have to murder their enemies if they want to guarantee that they can feed themselves. But regardless of realism and believability, it’s not nice, and if you like to read stories for escapism and fun, a book full of the constant threat of wartime rape may not be your idea of a good way to spend a few hours of downtime.

    The other problem with this process is that, like most writers, Cornwell can’t extend this barbarity to his two main characters. In fact Arthur is very much a man out of time and place: he doesn’t believe in the gods, he sees all religious dispute as wasted energy, and he just wants peace. He is a renaissance man in the dark ages, and although this is as good an impetus as any for him to rise above the petty chieftains who surround him, it’s also kind of unbelievable. The same applies to Derfel: Arthur and Derfel are probably the only men in all of Britain who never raped a vanquished enemy’s wife and kids, and never cheated on their partners. Isn’t that convenient? After all, if they behaved like some of the men around them we probably wouldn’t want them to succeed, and wouldn’t be able to understand why they risked what they risked. I’m happy with this from a narrative perspective, but from the point of view of realism it’s a bit of a cop-out. When, oh when, will someone tackle a realistic depiction of Arthur himself, in which he behaves worse than all his petty chieftains precisely because he’s an egomaniacal tyrant who united Britain only because he wanted to rule as many people as possible? When hell freezes over, I guess, is the answer to that rhetorical question: no one would want to read about such an objectionable and unpleasant chap. When I think about things in this light I then start thinking that, well, if you could exempt our main characters from brutality, maybe you could have sanitized the rest of the world without losing anything…? In this case the world is so well constructed, and the brutality and bigotry and the hard-scrabble poverty serve so well to establish the backdrop to the superstition and the political chicaneries of the novel, that I am happy to read it and glad that Cornwell had the decency to elevate the narrator and Arthur above it – I consider it to be careful crafting. But the same process in the hands of a lesser author would lead to a novel that was simultaneously horrible to read, with completely unbelievable and fantastic lead characters. Well done Bernard Cornwell.

    Another excellent aspect of Cornwell’s creation of this brutal and backward world is that, although he makes it clear that women are subjugated in this world, he doesn’t write them as second-rate characters and he doesn’t rest on simple gendered caricatures when assigning women roles in his story. He even rescues Guinevere from the petty “Yoko Ono broke up the band” type of sexist depiction that is so tempting in this type of story, by grounding her reasons for her behavior in the sexism of her time. Sure, she’s the devious snake behind the throne, but we’re given to understand why she did this in the context of the choices available to a woman of her time, and to understand that actually her devious snake-iness would have worked for Britain if Arthur had not been so stupidly obsessed with his “man’s oath” to Uther. This sort of complex characterization enables us to enter a world of violence and misogyny and, on the one hand, to view the actions and choices of the people of the time from a modern perspective, without condoning them or interfering in the tapestry of the story; and on the other hand, to retain the classic elements of the story without investing them with their classic sexist theme. In his depiction of Guinevere, Nimue, Morgan, Ceinwyn and even Igraine (a bit part at best) we see women set firmly within the social confines of their time and place, but portrayed for the reader in a way that enables us to understand them through our modern sensitivities; the fabric of the story is not disrupted, but we’re able to interpret it with respect to our own morals as well as those of the time. This in my view is some very, very fine story crafting and saves the books from being a two-dimensional neckbeard’s imagination of how great it would be if all women were subjugated.

    I have two other minor quibbles about this book. Sometimes Cornwell’s writing is a little clumsy, particularly his description of things and places, which can follow this kind of pattern: “he went to the [insert place], which was a [insert description] over the [insert second place] that swept down towards the [geographical feature] where [description of animals doing something] by the tower that was white.” A little pause in there somewhere would be nice! But in other parts of the novel (especially battles and conversations) the writing is really good, so it’s fine. My second quibble is an epidemiologist’s pedantry: Cornwell repeatedly falls for the fallacy of believing that a life expectancy in the 40s means that people aged over 40 are rare, i.e. he confuses life expectancy with life span. A low life expectancy is usually driven by high infant mortality, and anyone who survived past their 5th birthday could be fairly confident of reaching their 60th, so people over 40 would not have been uncommon (some crude arithmetic below makes the point). But to his credit, Cornwell makes a point of mentioning infant and maternal mortality and working it into the background of the world, so that we fully understand how harsh the world is for women both environmentally and socially. Truly, when you enter his world you are submerged in it, and it is developed to such a fine level of detail that you really do understand the pressures and challenges of the 5th century setting.
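
    To see why, try some deliberately crude numbers: if 30% of children die in their first year and everyone who survives lives to 60, life expectancy at birth is roughly 0.3*1 + 0.7*60 = 42.3 years – even though every adult in that population reaches 60.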

    Overall this series of books is very impressive: well written, beautifully crafted, tense, exciting and action packed, and accessible at multiple levels of interpretation. A must read for anyone who’s into low fantasy, Arthurian legend, or gritty fantasy, and an excellent introduction to Cornwell – I certainly aim to read a lot more of his work.