• Tell ’em they’re dreamin’!

    We have an election on in Australia, the cradle of democracy, and as always in federal elections an enormous number of fringe political parties have crawled out from under their rocks. We have the Rotating Leadership Party, which is running on a platform of giving every person in Australia the chance to be prime minister for a day; the anti-Maritime party, which believes that floating on water is a satanic act and is opposed to all forms of shipping; and the Sex Party, which actually has pretty good policies. Who could be opposed to more sex? But in amongst these fringe parties we also have some single-issue groups, and in my opinion the most single issue of the lot is the Bullet Train for Australia Party. Their policies are reviewed here, and can be summarized very simply as: bullet train. This is pure science fiction at its best. Their slogan might confuse non-Australians, since it appears to advocate voter fraud:

    Vote Bullet Train! Then vote as you normally would …

    This is not because of special Australian laws giving nerdy train-spotters two votes each, but because of our complicated preference system, which is itself a work of science fiction and impossible for ordinary mortals to understand. But I think the Bullet Train for Australia Party has summarized their preference policy very nicely in that slogan. I also like the way their website has Australianized the bullet train by getting some pictures of Japanese bullet trains and sticking a kangaroo on them. Who could possibly hate kangaroos? And how can any technological or industrial advance be alien to Australia if it has a kangaroo on it?

    The reason I think that this is basically science fiction is that there is no way a bullet train will ever be a profitable enterprise in Australia. We have 22 million people spread out over an area the size of the Magellanic Cloud, living in little clusters of “civilization” separated by vast expanses of nothing. By way of comparison, Japan has 120 million people living in an area the size of Japan, with cities, not too far apart, whose populations are the same size as Australia’s. That’s why they can run a train between those cities at light speed every 15 minutes, at something resembling a profit. But even then, catching a bullet train in Japan is no cheaper than flying – just enormously more convenient and comfortable. If you cut out all the in-between stops (because no decent towns exist), doubled the distance between cities and then reduced the eligible population by a factor of 6 or 10, would it still be cheaper than flying? Especially given the electricity demands? And would it still be 8 times more efficient than flying? And would you use a bullet train to get from Sydney to Adelaide? That’s a 21-hour bus trip at 120 km an hour, so probably a 7-hour bullet train trip. Or a 1-hour flight. Hmm, which would you choose? The only way that a bullet train would become a viable proposition in Australia is if the Rotating Leadership Party were to seriously act on its on-again off-again “Big Australia” ideas, and double Australia’s population. Then, if the extra people settled in the right places, maybe it would work out.

    Good luck with that.

    As an aside, I am intrigued by the modern opposition to high speed rail in the UK, where it might actually be a viable investment. Apparently the HSR will cost 80 billion pounds to build, and this is a ludicrous amount of money that no modern government can afford. I haven’t done the numbers but I have a strong suspicion that the Japanese shinkansen would have cost a significantly larger portion of GDP when it started in 1958 than HSR would cost in the UK now. Had the Japanese adopted modern craven attitudes towards government spending, they would never have got the bullet train. Yet they have the bullet train, and somehow their society seems to have survived the massive fiscal impost. Could it be that sometimes massive government investment is a good idea? Which isn’t to say that the HSR is the best use of 80 billion pounds of British money, but “it’s a lot of money” doesn’t seem to me to be the best argument against it either …

    Anyway, the Bullet Train for Australia Party are definitely pursuing a crazy science fiction policy, though it would be a pretty cool sight to see a bullet train heading through the desert – on the run from Darwin to Adelaide I imagine it would be able to get up to some pretty phenomenal speeds in the open spaces around Uluru. You could even build a tunnel through Uluru so it doesn’t have to deviate – then instead of climbing the rock, people can say they sped through it in a microsecond. Or you could lay the track nearby, and take iconic pictures of the bullet train shooting past the rock – contrast of old and new, etc. Japanese railways love the picture of a train running through rice paddies with hills in the background; this could be the Australian equivalent. Except that there would be only one person in the train, and enough energy to power the entire city of Darwin being used to propel it.

    I think there’ll be a maglev on Mars before there is a bullet train from Adelaide to Darwin. But at least the Bullet Train for Australia Party have cornered the train-spotting vote!

  • Not enough to save you from castration

    I’ve been reading Antony Beevor’s The Second World War, and I have been very disappointed by its handling of cryptography. Overall the book is an interesting and fun read, not as engrossing or powerful as Stalingrad or Berlin but retaining his trademark narrative flow and mix of military and personal history, leavened with analysis of the broader political currents flowing through the war. It also doesn’t ignore colonial history the way earlier generations’ stories did, and it is willing to present a relatively unvarnished view of Allied commanders and atrocities. The book has many small flaws, and I don’t think it’s as good as his previous work. In particular the writing style is not as polished and the tone is slightly breathless, occasionally a little adolescent. I’m suspicious that his grasp of the Pacific war is not as strong as his grasp of Europe, and that he may fall back on national stereotypes in place of detailed scholarship, though I have seen no evidence of that yet. But the main problem the book has is just that the war is too big to fit into one person’s scholarship or one book, and so he glosses over in a couple of sentences what might otherwise have formed a whole chapter. This was particularly striking with the Nanking Massacre, which gets a paragraph or less in this book. That, for those who aren’t sure of it, is about the same amount of coverage it gets in a Japanese middle school history textbook – which also has to cover the whole of World War 2. Interesting coincidence that …

    Anyway, as a result of this a great many things that might be important are given very little description. For example, the famous technologies of the war – the Spitfire, the Messerschmitt, the Zero – are introduced without explanation or elucidation, and though constantly referred to by their proper names we don’t know what their strong or weak points are – it’s as if Beevor assumed we were going to check it ourselves on Wikipedia. I was a little disappointed when I realized that Beevor had decided to treat the decryption/encryption technologies of the war – and the resulting intelligence race – in this way. So at some point early in the Battle of the Atlantic he starts referring to “Ultra decrypts,” as if they were simply another technology.

    This is disappointing because Ultra decrypts aren’t just another technology. There was an ongoing battle between the mathematicians and engineers on both sides of the war to produce updated encryption technologies and to break the enemy’s, and the capture and utilization of intelligence related to encryption methods was essential to this effort. The people who participated in this battle were heroes in their own right, though they never had to face a bullet, and their efforts were hugely important. Basically every description of every major engagement in the African campaign includes the phrase “fortunately, due to Ultra decrypts, the Allies knew that …”[1]; the battle of Midway was won entirely because of the use of decryption; and much of the battle of the Atlantic depended on it too. These men, though they never fired a shot in anger, saved hundreds of thousands of tons of Allied materiel, tens of thousands of lives, and huge tracts of land and ocean from conquest. Yet they aren’t even mentioned by name, let alone given even a couple of sentences to describe what they did and how they worked. This is particularly disappointing given that Alan Turing, who was hugely important to this effort, was cruelly mistreated by the British government after the war and ended up committing suicide. It’s also disappointing because cryptography was an area where many unnamed women made a vital contribution to the war effort. In one earlier sentence during the Battle of Britain Beevor refers to “Land Girls,” the famous women who farmed England while the men were at war. It would be nice to also see a reference to “the Calculators,” young women who crunched numbers before computers were invented.

    I find this aspect of Beevor’s book disappointing, and I’m sure that there are similar oversights in reporting the contribution of other “back office” types. Maybe it’s reflective of the modern idea that only “frontline workers” count, and only their stories are important. Or maybe it’s a reflection of a culture in which the contribution of nerds and scientists is always devalued relative to the contribution of adventurers, sportspeople and soldiers. It’s a very disappointing missed opportunity to tell an important and often under-reported story about the huge contribution that science makes to advancing human freedom.

    fn1: And usually also includes the phrase “Unfortunately, [insert British leader] was too [timid/stupid/slow/arrogant] to respond and thus …”

  • In the lee of the seawall

    Recently I again visited Minamisoma on business, and while I was there I was taken to visit another area damaged by the tsunami. Last time I visited an area near the inner exclusion zone around the nuclear power plant, where I saw how nature is reclaiming the tsunami-ravaged coastline. This time I visited a different area that is slowly being cleaned up, but is still quite radioactive (perhaps 4 times the level of background radiation in Tokyo). First I returned to a place I visited 18 months ago, which has been cleaned up but left to nature, and found a quite beautiful wetland full of a diversity of plant and animal life. Eighteen months ago it was a devastated wasteland of twisted metal and mud, but now life has returned. This area was obviously heavily affected by the tsunami, but I was shocked to discover that there were other areas a bit further south that were much worse off. When we arrived at this area I was greeted by the most remarkable sight: tetrapods strewn across the landscape perhaps a kilometre inland from the sea, picked up by the tsunami and scattered in a rough line as if they were mere baubles.

    This is a tetrapod
    A line of tetrapods in the middle distance

    These tetrapods were on the outside of the sea wall (which is itself perhaps 3-5m high). They had been carried over (or in one case through) the sea wall and dumped inland at this distance. It’s hard to imagine the ferocity of a wave that can do that. But our imagination was assisted when we passed over a nearby rise, and found the second storey of a house lodged amongst trees at the top of the knoll, a sofa still trapped in the room.

    The room on the knoll

    The knoll was situated above a raised section of road, which was itself some metres above the surrounding landscape. The wave, having deposited this piece of house on the knoll top, then flowed over the road and gutted a house on the lower section of land beyond the road. Here we were perhaps 1 km inland; looking inland at this gutted house, it was possible to guess the height of the wave at this point to be about 10m. What can anyone do against such intense and savage fury?

    The view inland from the base of the knoll
  • A new form of disaster tourism is born …

    I don’t really think it’s possible to make a movie of World War Z, which is basically a kind of public policy review document. I also don’t think that the new movie is a particularly good first attempt, but it is a lot of fun. As an adaptation of the book it has so many obvious problems – not least of them that nothing that happens in the book is actually in the movie – that it clearly stinks. It changes the personality and job description of the main character, who is now some kind of crack investigator for the UN with experience in investigating troublespots (do these people even exist in the UN structure?); it changes the origin of the zombiepocalypse from a Chinese dam to a US military base in Korea; it has Israel collapsing near the beginning of the epidemic; and it presents a completely different resolution to the whole problem, one that is much, much less cynical than the horrible tragedy that unfolds in the book. It also doesn’t present a series of accounts from different protagonists collected after the fact, and the best we can do is pinch ourselves and pretend that this movie is a kind of prequel to the book, the story of what the book’s (unnamed?) narrator did during the first horrible days of the apocalypse. From memory we never find this out in the book, and indeed the narrator seems to have emerged from the zombiepocalypse largely untouched by it, unlike any of his interviewees.

    It is in this, however, that the movie is most faithful to the book: where the book is a kind of disaster tourism, traveling from trouble spot to trouble spot and zeroing in (mostly) on people whose suffering was genuinely terrible, in the movie we travel from troublespot to troublespot and watch Brad Pitt somehow survive while all around him goes to hell. Everyone in Jerusalem gets eaten alive, but Brad is on the last plane out of there by the most extreme strokes of luck you can conceive of. Sure, he cops a beating and so do those with him (the few who survive, anyway); but compared to what’s going down as he runs away he’s veritably blessed.

    And “Jerry” does do a lot of running for a crack investigator, not that you can really blame him given the (literally overwhelming) odds he faces in every circumstance. The movie has a very good pace, from the first encounter with the zombies to the last (slightly jarring) creepy encounter. The pace and frenetic efforts of the survivors are enhanced by slightly beefing up the zombies compared to the book: these zombies don’t shuffle, but run in chaotic gangs and attack with suicidal intent. They keep the horde behaviour described in the book, and in the movie they can behave like ants, forming self-organizing bridges to get at prey sources and overwhelming almost any defence with their weight and collective aggression. Street scenes with people running and panicking are great because you can’t tell who is what, and in amongst the chaos people and monsters are flying in every direction, getting up, being broken, giving up, fleeing and dying. The movie also focuses on those first few days when society is failing, rather than (as often happens in zombie movies) picking up once the damage has been done and the survivors are on the run. That’s very much what we saw in the book too, and gives a sense of coherence with the book even when every individual aspect of the story is completely different.

    The movie also completely changes the “ending” of the zombiepocalypse, coming up with a different solution to the problem and straying widely from the cynicism of the story. I guess the solution makes sense in a narrative and figurative (if not scientific) sense, and though it didn’t satisfy me, I accept it was necessary – you can’t put the original solution into a movie easily because it was by nature a systemic and policy solution, not a magic bullet, and those don’t fit into a two hour movie.

    Which brings me to a final point about this genre in general: modern television has killed the zombie movie. Specifically, The Walking Dead has shown that the best medium for zombie stories is television, not cinema. This is because zombie stories are primarily about the small desperation of ordinary people, gangs of survivors, not about big special effects, and the dramas unfold slowly over long periods, as people starve and get alienated and fight and die. You can’t show this stuff easily in cinema, but you can unravel a group of desperate no-hopers over 12 brutal hours on television very nicely. Similarly, you could do a very nice version of World War Z on television, with a different account each week building to a broad story arc about the original disaster, its causes and its solutions, and even about the rebuilding process. You can’t do that at the movies, which is why this movie is a completely faithless rendition of the book.

    Still, it’s a really fun movie. There are some clips on YouTube (including an illegal 8 minute clip of the Jerusalem scenes) which should help to show the tension and pace of the movie. If you’re into zombie movies and don’t care about a great book being completely corrupted for cinema, then I recommend this movie. If you are one of those fanboys who gets irate if even the smallest detail of your much-loved canon is corrupted, then steer clear, because this one will make you pop a gasket!

  • I’m working on a couple of complex multi-level models at the moment, using Stata, and I’ve run into some big problems getting them to work. In the spirit of helping others to solve problems I’ve spent a lot of time attacking, in this post I’m going to describe the problems I encountered and my solutions. In searching the internet for these problems I haven’t found many solutions, so hopefully other people can approach the problems more easily with the help of this post.

    The two main problems I have encountered are the initial values not feasible error, and the sheer amount of time it takes to run the multi-level models. Solving the first problem raises another interesting and related problem that I think is a Stata bug, and for which I have found a workaround. Solving the second one might be related to the first, but largely involves utilizing some properties of binomial or Poisson distributions. I have also encountered a third problem, in which different multi-level modeling functions give different results, but I haven’t found a solution to that one. I will mention it here for completeness.

    Obviously if you aren’t a statistician, everything that goes on from here down will seem like satanism, and you should probably stop reading.

    The models

    I am currently working on three unrelated and complex multi-level models, all of which use binomial- or Poisson-distributed responses and all of which have very large data sets. I’m not going to give details about the research projects, but the three models are listed here.

    1. An analysis of Demographic and Health Survey (DHS) data from multiple countries and years, with a binomial outcome and about 800,000 records. This model should have individuals clustered within households clustered within clusters within years within countries, but currently I’m not using any household level variables and I only have 1-4 years per country (mostly 1) so I’m ignoring the serial dependence in years. Even so, running a simple model in xtmelogit takes a fairly long time (perhaps 8 hours) on a 12-core PowerPC with Stata/MP 12 and 64GB of RAM. Because I’m combining surveys across countries and years, I don’t think survey weights are valid, so I’m ignoring probability sampling effects (thank god!).
    2. An analysis of DHS data from a single country with more than a million records, with a binomial outcome. This model has individuals within households clustered within regions within states, so should be a four-level model. It also has a lot of variables (it is examining socioeconomic determinants of health outcomes). Running a single model in xtmelogit with the Laplacian approximation (the fastest maximization method) on an 8-core Dell Precision platform with 32GB of RAM takes … well, days, mostly. And currently I’m ignoring one level (region, I think) for simplicity.
    3. An analysis of NHS hospital admission data with about 5 million records, Poisson outcomes, and only two levels: individuals within regions. Fortunately it only has a couple of variables, though it has a massive interaction term, so it runs in a couple of hours. I should probably have crossed levels (hospital X region) but I think this would fry anything it touched (as well as my brain) and probably be unidentifiable (because of lots of regions sending single individuals to big hospitals). This is also running on the 8-core Dell.

    I’ve tried running model 2 in GLLAMM, but models 1 and 3 so far have used only xtmelogit and xtmepoisson respectively. Obviously I don’t have infinite time for trouble-shooting, since each model takes at least one night to run; and I can’t run tests on random sub-samples, because the problems often only occur in very large data sets, or they will appear too often in poorly-chosen random sub-samples. In all the models I also have to fiddle with categorical variables to ensure that some combinations of predictors don’t become too rare (and thus non-identifiable) in some cluster/country/time combinations. Also, as a final note, the hierarchical structure is essential. Results are completely different without it, and/or some key variables are estimated at regional level rather than individual level.

    Problem 1: Non feasible initial values

    The first problem I have encountered is that xtmelogit crashes as soon as it begins initial parameter selection, and returns the following error:

    initial values not feasible

    This is really frustrating and really unusual in statistical software: usually the starting values don’t matter that much. As an initial solution to this problem I tried using simplified models just to see if I could get it running, but I found I had to simplify them so much they became meaningless. I dug around on the internet and eventually found this kind of solution, which advocates using the -from()- option to load up your own initial values. The option suggested there is to run a non-hierarchical model, extract the coefficients from that, and feed them to xtmelogit using the from option. If you have k levels, Stata will be expecting k additional coefficients, but apparently it can handle this automatically, so you just give it the output from logit and off it goes.

    Unfortunately it doesn’t work. For example, if I run this code:

    xi: logit outcome i.year i.education i.wealthIndex
    mat a=e(b)
    xi: xtmelogit outcome i.year i.education i.wealthIndex || country: || cluster:,laplace from(a)

    I get the following error:

    extra parameter outcome:_Iyear_2001 found
    specify skip option if necessary

    and again the model fails. I’ve never seen this error before. It basically stops me from adding the coefficients from logit into the xtmelogit command. The frustrating thing though is that if you display the matrices of coefficients from the two models, they are ordered the same way and have the same labels for variables, so there is no reason why Stata should find an “extra” parameter. What is happening? I searched for this online, and all I found was this useless advice, or (after much digging) a link I can’t find again, which suggested that the model might be “non-identifiable.” This problem is not arising through non-identifiability or complexity: I know this because if I can find the right starting values (see below) I can get the model to run just fine. Furthermore, this error doesn’t sound like a mathematical problem. It sounds like a programming problem.

    In fact, the solution to this problem is remarkably simple: you simply add the “copy” option to the -from()- option. Without this option, the -from()- option tells xtmelogit to insert values from the matrix of coefficients from logit into their corresponding place in the xtmelogit procedure based on variable labels. When it runs out of variable labels it is then supposed to make up additional starting values using defaults. However, this doesn’t work, because for some reason xtmelogit doesn’t understand the matrix output from logit. If you use the copy option instead, xtmelogit inserts the coefficients from the matrix based only on their position in the matrix. This means you need to supply the k extra starting values for the error terms of the random effects, but otherwise you’re good to go. You can supply these by guessing them, giving zeros (I don’t know if this is a good idea!) or running an intercept-only hierarchical model and taking them from the results of that. The full code (with me supplying zeros in this case) is then:

    xi: logit outcome i.year i.education i.wealthIndex
    mat a=e(b)
    mat a1=(a,0,0)
    xi: xtmelogit outcome i.year i.education i.wealthIndex || country: || cluster:,laplace from(a1, copy)

    Do this and you won’t run into the extra parameter problem. But note that supplying starting values from logit isn’t always a good idea – they can be radically different to the true final coefficients, even differing in sign, and they can again lead to the initial values not feasible problem.

    In this case the only solution I could find was to run a much simpler multi-level model and then extract the values from that. In fact, I found an intercept-only model was sufficient to provide functioning starting parameters for the full xtmelogit. So if you can’t get your logit starting values to work, try just running a simple intercept-only model with all the necessary levels, and supplying those starting values to xtmelogit.
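    In concrete terms (using the same hypothetical variable names as the earlier example; the matrix positions for the random-effect parameters are an assumption about a two-level model, so check them against -mat list- output for your own model), the approach looks something like this:

    * Step 1: a cheap intercept-only model with the full random-effects structure
    xtmelogit outcome || country: || cluster:,laplace
    mat b0=e(b)
    * with two random-effect levels, the last two columns of e(b) are the
    * random-effect parameters (column 1 is the intercept, which we discard)
    mat re=b0[1,2..3]
    * Step 2: fixed-effect starting values from an ordinary logit
    xi: logit outcome i.year i.education i.wealthIndex
    mat fe=e(b)
    * Step 3: splice them together and pass the lot with the copy option
    mat a1=(fe,re)
    xi: xtmelogit outcome i.year i.education i.wealthIndex || country: || cluster:,laplace from(a1, copy)

    This is the same splicing trick as supplying zeros, except that the two trailing values now come from a model that has actually seen your random-effects structure.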

    This bothers me for two reasons: first of all, the extra parameter error is obviously a programming error; and secondly, if supplying the results of an intercept-only model is enough to make the full model run, this suggests pretty extreme sensitivity to initial values. Is there maybe a more stable starting process for these maximization algorithms? It takes humans days to select good starting values, and if you don’t stumble on them immediately you have to do a search through a range of possibilities – isn’t it faster for the computer to do this? What starting values is it using?

    Problem 2: Optimization time is just too long

    As I mentioned in the introduction, these models can take a long time to run – between 6 hours and a couple of days depending on the model. I had hoped that finding new and better initial values would solve this problem at least partially, but it doesn’t help much, and the Stata manual admits that spending a long time looking for good initial values will have little impact on the time it takes. So what to do? The number of levels is a huge determinant of the time it takes to run (processor time scales with something like 2^k, I think), but if you can’t drop your levels, you’re in big trouble. Fortunately you can use a simple workaround (in some cases) to solve this problem. Because xtmelogit works on binomial data, you can reduce the data set in size by calculating summary data at the lowest level: you collapse the data at this level into a data set of events and trials. Not all Stata logistic regression procedures accept the events/trials framework, but xtmelogit does. If you’re dealing with, e.g., a school with classrooms, each classroom will have only two ages and two sexes. So you may be able to reduce each classroom to just 4 records, containing the count of the number of students in each age/sex combination, and the number of events. I tried this with model 1, and managed to reduce the data set to about 100,000 records and the processor time by a factor of about 8 or maybe more, and get exactly the same results. Of course, if you have a household level above the individual, this method will be largely impossible, but if you are working with larger low-level clusters it will work a treat. Note also that it doesn’t work where you have a genuinely continuous variable, or a lot of diversity in predictors. But it’s worth trying if you have a lot of reasonably-sized clusters, especially if you are hoping to get a more accurate estimate than the Laplace method provides.
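    As a sketch of the collapse (again with the hypothetical variable names from the earlier examples; the grouping variables in by() are assumptions you would adapt to your own data):

    * count events and trials within each cluster/covariate-pattern cell
    gen one=1
    collapse (sum) events=outcome trials=one, by(country cluster year education wealthIndex)
    * xtmelogit accepts grouped events/trials data via the binomial() option
    xi: xtmelogit events i.year i.education i.wealthIndex || country: || cluster:,binomial(trials) laplace

    The likelihood is the same as for the individual-level data, which is why the estimates come out identical; all you have changed is how the binomial trials are grouped.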

    Problem 3: Different results in GLLAMM and xtmelogit

    I’ve noticed as well that in some cases GLLAMM and xtmelogit produce remarkably different results for the same model. On about page 450 of Rabe-Hesketh’s text she mentions this problem but puts it down to choice of integration points: it appears to me that this isn’t the whole story. The Stata list seems to also think this. I have yet to work out the details of this, so will come back to it when I have a better idea. Right now I’m suspicious that GLLAMM and xtmelogit are doing … well, not quite the same thing.
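    One way to probe the integration-point sensitivity for your own model is simply to refit it with increasing numbers of integration points and compare the stored estimates – a sketch, using the hypothetical variables from the earlier examples (and noting that each refit costs you another long run):

    * refit with increasing numbers of integration points and compare
    foreach ip in 1 3 5 7 {
        xi: xtmelogit outcome i.year i.education i.wealthIndex || country: || cluster:,intpoints(`ip')
        estimates store ip`ip'
    }
    estimates table ip1 ip3 ip5 ip7, b(%9.4f) se

    If the coefficients and standard errors have stopped moving as the number of points increases, the quadrature is probably stable at that setting.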

    A note on software comparisons

    Note that this problem doesn’t just exist in Stata. I found PROC GLIMMIX in SAS to be horribly unstable, and apparently glmer in R uses the Laplace approximation as its default optimization method, and doesn’t allow any other where there are more than two levels! Multi-level modeling with non-normal responses is one of those situations where you really have to be aware of the underlying engineering of the software, and cautious about any results until you have checked every aspect of the defaults. This can be very dangerous if you’re under pressure to produce results and each model is taking a couple of days. Indeed Rabe-Hesketh recommends choosing multiple different optimization procedures (primarily choices of numbers of integration points) to select a stable one before presenting a final model. That’s really pushing your luck if one model takes a couple of days to run, and you have to go through 12 different integration point choices. I wonder if there are any methods for identifying likely optimization complexity from the data itself, without having to run extremely time-consuming guesstimates?

    Probably not.

    Conclusion

    There are a couple of solutions to the initial value problem, and they don’t just involve reducing the number of variables. One of these, choosing initial values from a standard generalized linear model, produces an error based entirely on what appears to be a programming bug, but you can work around it. You can also work around the time constraints put on you by big multi-level models by collapsing data rather than removing or simplifying variables. But be careful about the choice of integration points, and the possibility that GLLAMM might give you a different result …

    Good luck!

  • Horses have never really liked me … this one has just caught on.

    Last week I was invited by collaborators to attend the Nomaoi horse festival in Minamisoma, Fukushima. This festival dates back 1000 years, to the warring states (sengoku) period, and appears to have arisen from some kind of training ritual. It was cancelled in the year of the Great East Japan Earthquake but has otherwise been held every year, even during the war (as far as I know). It is a big event for the towns of Soma and Minamisoma, and other collaborators and I were invited as guests of our local project collaborator. He arranged excellent seats for all the events, souvenirs and a formal dinner for us, so overall it was an excellent experience. It’s a major tourism event, but it’s also clearly of huge importance to the town itself, with (I think) 504 horses and riders participating this year, and probably an equal number of footmen.

    Summoning the beasts

    The ceremony lasts three days, but I only saw the second day. This day starts with a parade through town by the samurai, all mounted on their horses and wearing their ceremonial armour. They are arranged in groups according to their sponsors: the most important sponsors are the three shrines that are the focus of the day, but other groups – suburbs, companies, etc. – can also sponsor a squad. The squads are arranged in the style of the armies of old, with a general, colonels, etc. Higher ranks wear flags on their backs, and ride ornately decorated horses. They stop at regular intervals along the parade to announce their purpose, and occasional small dramas of military life are played out (with comedic overtones) during these moments.

    A peasant’s last sight

    This parade is surprising for the amount of activity it involves – in addition to generals’ conferences, there are occasionally lieutenants charging up and down the line, drummers announcing the arrival of a new squad, announcements of names and faces over a loud-speaker, and occasional tumbles – I saw one man thrown from his horse, and the people opposite me nearly got run down. The riders are of all ages, sexes and classes – I saw one of my collaborators (an internal surgeon) on horseback, followed soon after by a heavily made-up girl who would probably be judged to be pretty low-class by the locals (I’m not a good judge of these things). Very elderly men rode by on plodding draught horses, followed by children on ponies. The trappings were largely traditional, with the stirrups, saddles and girth all apparently modeled on the ancient fashion. We’ll come back to that …

    After the parade we returned, with military precision, to our base camp for a 10 minute rest, and then headed to the racecourse. Here, the braver warriors gathered to race each other around a 1000m circuit as a huge crowd watched. This racecourse would also be the venue for the final battle, so I was to spend several hours here in our covered tent, enjoying my obento lunchbox and my free beer, and watching warriors try to kill themselves.

    The battleground and warriors in transit

    I say “kill themselves” because the races were incredibly dangerous. I watched 6 races, with 6 participants per race, and out of the 36 participants I identified the following events:

    • 3 fallen riders
    • 2 hospitalized riders
    • 4 escaped horses
    • 1 injured horse

    Fun for all the family! The riders fell because they were hurtling around a tight track on horses without proper stirrups, with massive flags strapped to their backs. The horse fell because it tripped over its rider. No one was wearing a helmet. This is the most dangerous festival I have ever seen in Japan, by a long shot, and with an injury rate of 1 per 12 participants it would have to be one of the most injury-prone sports I have ever seen. It was at times quite hideous to watch.

    Finally, after the races were (mercifully) finished, we got to enjoy the final battle. This battle is a mad scramble to catch flags falling from the sky, in which all the (surviving) samurai gather in the centre of the racecourse and charge after the flags. The flags are, of course, hurled aloft by fireworks, shot out of a kind of mortar, that explode with a huge roar high above the gathered horses. Standing on the hillside, I could look behind me at some of the resting horses and see how they panicked when the fireworks cracked. Horses and fireworks mix so well – why not start a battle with a massive explosion? And then do it 10 times? The warriors compete for 40 flags, fired into the air over 10 bouts. I left after 4 bouts, and in that time I saw two warriors fall from their horses – and when they landed they were still wrestling over the flag they had caught. Now that’s commitment …

    Capture the flag, samurai style

    This festival is a thoroughly engaging and entertaining event, well worth taking the opportunity to view. It’s edgy, exciting and historical, and everyone gathered there is really involved. I strongly recommend, if you’re in Japan at the end of July, making a trip to Minamisoma to experience this unique Japanese event. Just don’t participate if you value your life!


  • … And we’ll be rich by Christmas!

    On Sunday afternoon I had my first ever experience of playing Fiasco, a “story-based” role-playing system by Bully Pulpit Games. The basic idea of the game is to build up a cooperatively-generated narrative that follows the pattern of movies like A Simple Plan: a group of friends/acquaintances/family/colleagues hatch a scheme to pull off some criminal enterprise, and as the scheme falls apart the conflicting pressures in the group drive the situation out of control.

    We observed this in spades.

    The game works pretty simply. There is no GM, so everything is done cooperatively by the players. You set up a scene, build pairs of relationships between the characters, and then generate at least one need (an urgent demand that is placed on one of the characters), one location, and one object. You then role-play eight “scenes,” brief interactions between the characters, the outcomes of which are represented as black or white dice that you accumulate. Then you roll up the “tilt,” which is a set of conditions that arise to drive the characters’ plans awry. Another eight scenes are played, and at the end of this you roll up the outcome for each character. Then, once the outcomes are determined, you run through a fast and entertaining “aftermath” in which the unresolved details of the final scene are played out and the characters’ fates are described. For most of the characters the game will end very badly, but if you’re really lucky you can make it out rich and famous.

    Our setup

    We chose the pre-packaged scenario “The Ice,” which is set in Antarctica. The four characters were:

    • June Kimura, a research scientist who has received funding from an oil company to research the cancer-curing properties of penguin vomit
    • Michael Jackson, her (estranged) husband who came to work with her on the ice but is really hating it. His job primarily involves farming penguin vomit, and he doesn’t like it at all
    • James P.J. Sinistret III (my character), an ecological terrorist who has come to the ice to destroy the research project and free the penguins
    • Scott Fielding, a considerably-less-committed ecological terrorist who is really just an easily-impressed stoner, and who was originally Kimura’s research assistant before he turned against her penguin-vomit program

    For additional relationships, we chose that James and Michael were “the ones who found the body”. The other details are below.

    • Need 1: Michael Jackson needs to “get out … of responsibility for the accident”
    • Need 2: June Kimura needs to “find out the truth … about the accident”
    • Location: The world’s largest Adelie penguin colony on Ross Island Glacier
    • Object: a crashed helicopter on the road to Ross Island

    The location essentially drove the whole story, from the setup (penguin-vomit-based cancer cures) to the finale, and supplied Michael Jackson’s main motivation. The accident was the opening for the adventure and used the object: Michael Jackson, shooting at a penguin, missed it and instead killed the pilot of the helicopter James and Scott were flying in. They survived the accident but Scott was unconscious; when Michael came to help, James told him that he would be taking the pilot’s identity, and that if Michael didn’t want to get done for murder he would help bury the body. They buried the body amongst the penguins of the penguin-vomit farm – a big mistake, as we will see.

    Things progressed from there: Scott wavered between wanting to free the penguins and wanting to find an administrative solution to the ecological problem of penguin vomit; June became increasingly suspicious about James (whose pseudonym, Juan, didn’t quite suit his pure Aryan looks, and whose stated reason for being on the ice – climate denialist research for the US blog Powerline – was obviously baloney); and Michael Jackson got increasingly desperate to warn everyone about what was happening among the penguins.

    Because, it turns out, the chemicals being pumped into the penguins to induce their vomit and extract the curative bile were slowly turning the penguins crazy, causing some kind of contagious craziness disease – and by burying the body amongst them we had turned them into man-eaters.

    The Tilt and the Ending

    For the “tilt” we rolled up “someone panics” and “dangerous creatures get loose.” These scenes were pretty fun to work through. First an oil company executive turned up to put pressure on Kimura about the progress of her project, but he was met by Michael Jackson, who was raving about the penguins being crazy. His best line: “The Penguins are going crazy. No wait! Hear me out!” The executive was not impressed, and sent Kimura an email on his phone to tell her to put Michael Jackson at the top of the agenda, then started bullying Michael Jackson and being very rude.

    The first tilt came: Michael Jackson panicked and shot the oil exec. Of course he had to bury the body – in amongst the penguins. Then he drove to the research lab where Kimura, of course, confronted him – she had received an email but the exec didn’t turn up with Michael, and by now she was already very suspicious about his involvement in the accident. He continued his panic, knocking her out and dragging her to the only remaining escape route from the area, a big snow buggy. Here Scott confronted him, demanding to see the oil exec so he could put his case about closing the project; Kimura woke up at this point and Scott realized that Michael Jackson was trying to run away. They then all saw James driving his snow buggy down to the penguin farm, and followed him to stop him releasing the penguins. There was a confrontation, Michael’s gun ran out of ammo, James released the penguins, and everyone had to flee.

    We rolled for aftermath and found that Scott and Michael both escaped and Scott even managed to make some money and fame; James was seriously injured and permanently damaged, and Kimura’s research career was destroyed, her life ruined. To play this out, Scott dragged my injured body away from the penguins and stuffed it into the snow buggy, but I saw Michael Jackson dragging Kimura the same way. I put my head out of the window to yell at him to leave her behind, but a group of mad penguins leapt up and ate my face. Then Jackson and Kimura flung themselves into the buggy and we fled, leaving behind the enraged penguins.

    The aftermath then featured our futures: Scott was professor of penguin microbiology at Harvard, his theory of penguin reverse hypoallergenic reactivity having been conclusively proven, and pretty students were lined up at his office door to “get his autograph;” Michael Jackson was a famous conspiracy theorist with a movie deal; Kimura had been ruined and was living on the streets, forced to watch buses pass by featuring adverts for her ex-husband’s conspiracy movies; and James was living as a cripple in Chile, gutting fish by day, drinking at cheap bars every night, and going to the beach to kill and eat penguins on the weekend. He was infected by whatever virus caused the rage in the penguins, and in the final scene of the aftermath he charged drunk out of one of the beachside bars, attacked a passing stranger, and started biting his face …

    Conclusion

    Fiasco is an entertaining and fun little game of murder and mayhem. It’s easy to set up, learn and run, and it’s easy to make it pan out. The rules are very simple, but it does have some drawbacks. First, it is heavy on role-playing of the acting kind, actually going through conversations and little scenes: many players don’t like this style of role-playing and may find the game uncomfortable. Second, although the rules provide a framework for the unravelling plan and conflict between people with “poor impulse control,” they don’t actually force this to happen, and if you don’t have a clear sense of how these kinds of things should unravel I think it’s possible you could fail to make the game come out as intended. The gamebook, however, is written in such a way as to really draw out the intended atmosphere, and to give the proper feeling of how it should run, so I think most groups of players would easily make it work. A player who is skeptical about story games, not into the style of role-playing it demands, or really unfamiliar with this genre of thriller could probably derail the game or lead it into a boring ending, but overall I think the risk of this is low and the game probably resolves successfully (and violently) on most occasions. It took us four hours to play having never played before, so I think once you’re familiar with the rules and the style it is probably a 2-3 hour game – a good way to enjoy an afternoon with your gaming friends, and an easy, low-preparation thing to do when your standard gaming group can’t muster enough numbers to run or is on hiatus. It’s well worth giving a go, especially if you like a bit of chaos and madness in your gaming!

  • A few weeks ago Australia’s first female Prime Minister, Julia Gillard, was replaced in a leadership challenge by her arch nemesis, Kevin Rudd. She had previously overthrown him in 2010. Gillard and Rudd are leaders of Australia’s “left wing” party, the Australian Labor Party (ALP), and since her replacement there has been a bit of a frisson of excitement amongst lefty Australians because a) Kevin Rudd is much more likely to win the upcoming election and keep the ALP in power and b) many leftists saw Gillard as a right wing stooge and can’t forgive her for the “knife in the back” when she overthrew Rudd in 2010. This lack of forgiveness and view of her as a right-wing stooge has been particularly evident in the left-wing criticism of her mining tax policy and their uncritical acceptance of the superiority of Rudd’s. I am planning a bigger post on the mining tax for the near future, to try and work through my opinion of the two policy options the ALP has presented on the issue, but first I thought I would write some words in praise of Gillard, whose legacy I think is going to be big, and who will be seen in the long-term as a great Labor leader; I also want to say a few things about the mistakes that the Australian left repeatedly makes in its complex relationship with the ALP.

    For my non-Australian reader(s), a few explanatory notes: 1) Australia’s “conservative” party is called the Liberal party; 2) roughly speaking in Australian politics the prime minister (PM) is the leader of the party in the house of representatives that has the majority and is the leader of the country but not the head of state; 3) in Australia you don’t vote for the PM and the party can change your PM at any time by changing its leader; 4) this is a particularly common event in the ALP and is done with savagery and extreme prejudice when it happens. The ALP may be the mainstream party of the left in Australia and its individual members may be great people but one must never ever make the mistake of thinking that the ALP as an institution has anything resembling a soul or a shred of decency. The leadership shenanigans over the last 3 years have been a sordid and sorry tale that I don’t intend to rehash here but for this post I do need to make some judgments about why Gillard replaced Rudd in 2010, and I am going to assume that the current official story is true: that Rudd was a terrible party leader who couldn’t consult, made policy on the run, was a bully and didn’t know how to run a cabinet. I may write something about this below.

    With that said, on to my praise for Julia Gillard. Upfront I should state that I am not a Labor voter but extremely supportive of the Labor project and of trade unionism, and I think that the ALP – as Australia’s longest extant party – has had a huge role in shaping Australia as a nation and making it the great place to live and work that it is today.

    To me, Julia Gillard epitomizes the personal history of a good Labor leader: coming from humble beginnings, she achieved a good education and career prospects through hard work, perseverance and good luck, and then put her qualifications to work in the service of working people. For Gillard this meant going to work in Australia’s most famous pro-worker industrial relations firm, Slater and Gordon, where she worked to represent unions and ordinary working people in their legal battles. If anyone doubts the sincerity of Gillard’s commitment to working people I would urge them to watch any footage of her talking about her work there. Particularly, in her hour-long press conference answering questions about the “AWU affair,” she regularly talks about her work at Slater and Gordon and it is clear that she is proud of both their history of representing workers, and her personal efforts there. I think personally she shares much in common with Bob Hawke, another famous ALP leader, and it’s no coincidence that he has been very supportive of her career. This puts her in stark contrast to other recent Labor leaders like Kevin Rudd (a career diplomat) and Mark Latham (career politician). Australian politics generally is narrowing the scope of candidates as more and more are drawn from political careers and less and less from ordinary life, and I think this is a bad situation for Australia. Julia Gillard was not part of this trend, and I think her real experience of representing workers shows in her political outlook.

    Like Bob Hawke, Gillard showed an ability to achieve compromise and consensus in politics which enabled her to make policy – and good policy, at that – while managing a hung parliament and facing a completely obstructive opposition. Under her three-year leadership the ALP introduced a resource tax that actually works (though not very well); a carbon trading policy that appears to have already had some success in lowering CO2 emissions; a major reform to disability insurance that will be of significant benefit to carers and the disabled; and was on the cusp of completing a major education reform that it appears the Liberals will largely support if the government changes. She also negotiated a major environmental policy to restore the health of the Murray-Darling river system, something which won her widespread praise and has been long overdue, and I think she also made major gains in trade and political arrangements with China and India. Some of these achievements – like the mining tax and the disability insurance scheme – required negotiation with hostile partners such as the mining industry and Liberal state governments, and some (such as earlier education reforms) required confrontation with unions. In my view this is the mark of a good Labor leader: the ability to negotiate genuine political reforms in the interests of the country, even where they may be against the interest of your primary supporters or may require compromise with political opponents. Bob Hawke was the master of this, and Gillard is obviously also very capable. In contrast, Rudd failed to introduce a carbon trading system: despite calling it “the moral challenge of our times” he first tried to stitch up a weak policy with the Liberals (about 50% of whom are probably denialists) and then, when that was torpedoed, he walked away from the challenge rather than negotiate with the Greens, who at that time held the balance of power in the Senate and could have passed it. Rudd also fluffed the mining tax, introducing – without any consultation with the industry or his colleagues – a tax that would never satisfy the mining industry, and inviting a huge mining industry campaign against him at the coming election. Of course, one could argue (and leftists have, I think, in connection with Rudd’s mining tax) that Australia’s PM shouldn’t have to negotiate with any sectional interest group to pass policy in the national interest; it’s also obviously reprehensible that the mining companies were planning to wage a major campaign against the sitting government, especially given some of those mining companies are foreign-owned. But the reality of Australian political life is that policy is not made without consultation, and a good leader would never have put their party in the position where they were staring down the barrel of a $100 million advertising campaign against a policy. And particularly, despite Liberal fantasies of the ALP as a party of radical union wreckers, the ALP actually has a long history of consensus government, and of shaping Australia through agreement and bringing everyone forward, not through confrontation.

    It is this ALP history of consensus building that also, I think, informs some of the other policy areas in which Gillard disappointed the Australian left. She is opposed to gay marriage, possibly out of personal conviction but possibly also because she understands that the broad community needs to be drawn forward together, not have radical policy foisted on them. Her attitude to welfare is drawn from a long conservative working class tradition of refusing to countenance handouts, which means that she is not well disposed to the unemployed and her welfare policies can be more punitive than some on the left would like. And she is willing to compromise on secondary environmental goals, in order to keep sectional interest groups satisfied as she draws them forward slowly, together, on the path towards a broader environmental consensus. This is how the ALP has always worked, and in this regard Gillard is disappointing precisely because she understands and respects ALP tradition, not because she is running counter to it. Rudd, on the other hand, was wont to grandstand on social and environmental issues but unwilling to do the hard work required to bring the community into agreement with them: this made him a poster-boy of the non-industrial left but didn’t win him any friends within the ALP.

    Finally, Gillard also seems to have won the respect of everyone she works with. She had to negotiate extensively with the independent politicians in parliament and they all seem to have a great deal of personal respect for her and for her integrity. They obviously enjoyed making policy with her and appreciated the policy that came out of it. It seems that she was well-regarded by those who had to deal with her, and she certainly seems to have been much loved by her cabinet. In contrast, after Rudd won back the leadership a slew of senior cabinet figures resigned from cabinet and from parliament rather than work with him again. Rudd was also the man who refused to meet with Bob Brown, leader of the Australian Greens, for the entire time Rudd was PM, even though Brown held the balance of power in the Senate. That is the mark of a man who doesn’t play well with others, and a worrying sign for the future of the ALP and the government. In my view, Gillard has acted with integrity in her period as PM, she has achieved a lot for Australia, and as leader she did what ALP leaders are expected to do: faced up to big challenges and dealt with them sensibly and in collaboration with all the sectional interest groups that were affected by them. I guess that’s not wonderful praise to go on someone’s political epitaph – “she made good policy well in difficult circumstances” – but in my opinion it’s the best praise a PM can expect to get in peacetime, and it’s certainly better than “didn’t play well with others and flubbed the great moral challenge of our times.” So, mark my words: Gillard’s legacy will be assessed much more positively than the media or the Australian electorate assessed her at the time.

    She will certainly be assessed better by historians than she is currently viewed by a large portion of the Australian left, and I think this is because the Australian (non-industrial) left has a very weak understanding of what the ALP is and how it works. The ALP is the political representative of the industrial left, expressed exclusively in Australia through the trade unions. This means that the ALP has two goals: to advance Australia’s interests and to protect the rights and living conditions of Australian workers. It is not the best vehicle for achieving radical left-wing or social liberal goals, though since the war it has been the primary means by which the radical left and social liberals have achieved their goals. In the breach, the ALP will always first and foremost stand up for the interest of its working class constituency, and for the industries that employ its union members. This is why successive state and federal ALP governments have failed to pass a comprehensive policy protecting Australian old growth forest: because they have to protect the forestry industry that employs members of the Construction, Forestry, Mining and Energy Union (CFMEU). As an example of just how beholden the ALP is to these two sectoral interests, when Mark Latham in opposition announced a forestry policy (without consultation) that would genuinely have protected old growth forests he was heckled by his own union at public meetings in Tasmania. Similarly, Australia has a three mines policy for Uranium mining because the nuclear disarmament movement and the anti-nuclear movement, while they have some sway in the ALP, have less influence than the CFMEU and the mining industry, whose mutual interests the ALP has to support. It’s natural – because of the interwoven nature of leftist politics – that the ALP will always be sympathetic to environmental, social liberal, feminist and Aboriginal rights movements, and to social liberals and the radical left generally. But those movements are not the ALP’s primary constituency and to get change through the ALP they will always be struggling against the social conservatism and economic interests of the industrial left. This means, for example, that Julia Gillard will talk proudly of the work she has done to represent a migrant piece-worker in the garment industry, and will pass laws to protect that woman; but will simultaneously pass draconian policy against migrant workers coming to this country. People on the radical left who expect her to pass laws protecting migrant piece-workers and encouraging the movement of migrant workers are misunderstanding the nature of ALP political goals. They might be able to make a case for both, but they shouldn’t expect it. When the left fails to recognize the limits of industrial unionism and organized labour as a vehicle for radical political change, it will always be disappointed by politicians from the ALP who genuinely understand the political history and culture of the ALP. Instead they will be attracted to and supportive of policy light-weights like Rudd, who are happy to grandstand for social liberal ideals but unwilling to put in the hard work to bring their core constituents along with them.

    I have a great deal of respect for anyone who can balance the competing corporate, union and social liberal interests making demands of the ALP and who can produce good policy from that complex mess. Bob Hawke could do it and Julia Gillard did it, and for this she deserves praise and respect. I am disappointed but not surprised that she was deposed when it looked like she would lose the election, but I am especially disappointed that Rudd got back. I think he will probably win the election and worse still, unless the Liberals make a rapid decision to ditch their current leader, Tony Abbott, Rudd will destroy them again at the polls. This will lead to a new ascendancy of a man who, fundamentally, doesn’t understand ALP culture, doesn’t know how to make good policy, and is only interested in social liberal goals as pin-up slogans to attract popularity. In the short term it will be good for the ALP but I worry that in the long-term it will be bad for the country and, by extension, very bad for the ALP. People will look back on Gillard’s era as a lost opportunity for another golden age of Labourism and ALP-led reforms, and I think the non-industrial left is going to regret the harshness with which it judged this supposedly right-wing PM. She was a good PM in difficult times, and she left a legacy that will be well respected in the future.

  • Look here for that which you do not seek…

    When Japanese people want to succeed at life’s challenges, they will often visit a shrine and pray to the God therein for aid and succor. Most shrines have a designated purpose, with the God ensconced there tasked with aiding childbirth, or the accumulation of wealth, knowledge, or success in love, for example. Those who live in Japan are familiar with the culture of the shrine visit: washing our hands at the little water trough near the front, passing into the silence and calm of the shrine precinct, the sound of supplicants clapping and ringing the shrine bell, and then one’s own pause for contemplation as one bows one’s head before the God. Shrines are everywhere in Japan, from the huge rambling complexes of Fushimi Inari jinja in Kyoto to tiny shrines at roadsides and car parks throughout the nation. There is even one on the edge of Shinjuku’s Kabukicho night life district, surrounded on all sides by skyscrapers and major roads, a little oasis of silence in one of the busiest places in the world – its only concession to the underworld it is situated amidst being the wire cage over the water trough. Most of us who live here come to appreciate these calm moments of contemplation in the midst of the urban bustle, and also understand the importance of countryside shrines as central markers of a traditional way of Japanese life that, despite the frenetic pace of urban existence, refuses to fade away.

    But there is another side to the challenges of Japanese life that is draining and exhausting, and that those little prayers at the shrine serve to support and perpetuate: the constant obligations of a life lived communally. Foreigners in Japan can escape many of them, but Japanese people face a constant stream of obligations great and small. From the demands of everyday good manners to the struggle of working as a corporate samurai, from the scourge of wedding parties that afflicts people in their mid-20s to the tedium of workplace drinking parties and even the obligation to visit a shrine on New Year’s Day, Japanese social life is full of obligations. Most of these obligations can be embarked upon in a spirit of good will and reciprocity: for example, when one finds oneself working back yet again to make things go smoothly, one can also understand that the reason the postal system is so wonderful is that someone there is also working back to make things go smoothly. People are also mutually understanding of the burden of these obligations, and do all they can to lighten the load and to be understanding of the burden of mutuality. And, of course, one can always visit a shrine, throw in 5 yen and pray for a bit of strength in the hard times.

    But sometimes, those obligations carry with them a weight that cannot be wished away at a shrine, and sometimes they come with an additional, crushing challenge: the challenge of having to succeed at them even when one does not want to. The culture of ganbaru, of struggling to do one’s best no matter the difficulty, is an important part of Japanese social and work life, and worse still is the need to show that one is fighting on, even where one wants to fail. This raises the terrible prospect that one can be saddled with an obligation one does not want to carry out, but also forced to do one’s utmost to achieve it, lest one be seen to be shirking or – worse still – embarrass one’s family and colleagues by being openly seen not to want to succeed in a great goal.

    It is easy to imagine these onerous tasks, because they arise often. Perhaps a man is forced to apply for a job that he knows will come with a huge workload and boring tasks, which he needs to take for reasons of prestige and money but really doesn’t want: in Australia he would simply fluff the interview, but here in Japan, if he was recommended to the job by his superior, then he will humiliate his superior if he does anything less than try his best to get the job. Perhaps a woman has had an introduction arranged with a rich and handsome future husband, and is required to do her utmost to please him and his family, but she is secretly conducting a love affair with the Korean migrant son of a pachinko parlour owner – she needs to appear as if she wants the marriage and is trying hard, to avoid embarrassing her family and the introduction agent and to keep the peace, but she really needs this introduction to fail. Perhaps a soldier has been tasked with fabricating an international incident at the Marco Polo bridge, and failure to do his utmost to carry out the incident will lead to his punishment and possibly execution – but he knows that if he succeeds his nation will be dragged into a war that will destroy it.

    It is at times like these that no amount of prayers at any shrine will save you. You need to appear to be doing your best, but you need to fail. And you know that if you do your best, you won’t fail. You are trapped.

    It is at times like this that you need to visit one of the shrines of the God that Failed. You take with you a rotten mandarin, a cup of the cheapest, nastiest sake you can find, and the letter offering you an interview for the job you don’t like. Place the offerings at the shrine, burn the letter, and promise the God that Failed that you will do your utmost to succeed in this honourable task.

    Then, failure is guaranteed.

    The shrines to the God that Failed are not the usual calm, cheerful devotional spots, established long ago and broadcast to the world through their red torii gates and colourful roofs. Rather, they are themselves monuments to failure, hidden in plain sight, just as is the failure that their supplicants seek. There are many such shrines, but they can be very difficult to find: the dirty toilet of the convenience store in Japan’s poorest and most crime-ridden suburb, perhaps; or an abandoned shrine just around the corner from Japan’s lowest-ranked and lowest-achieving community college; or a little waving cat statue in a broken self-storage unit, owned by a failed spam-forwarding start-up company. These shrines are never far from anyone, but finding them is in itself something of a pilgrimage, a combination of internet searching, rumour-hunting, and then following one’s own innate spiritual sense for discerning failure and sadness. Also, perhaps, one has to be careful in one’s observances to ensure one is not discovered or seen: not only do prayers to the God that Failed have to be conducted in the strictest secrecy, but the shrine must be known only to those that seek it – discovery of religious observance by those who do not seek failure can bring down a terrible curse upon the supplicant.

    Many argue that the God that Failed is Japan’s weakest and most aberrant god, but it is possible that it is actually the most powerful. Not only are its shrines everywhere, but it has many followers. While supplication of the God that Failed is always difficult, ordinary daily worship is easy, in a way that the other Gods of Japan do not allow. Ordinary Japanese Gods do not allow one to pray to them from home, or just anywhere, but the God that Failed is aware that many of its best worshippers do not seek anything in life, and it takes their devotion where it can. To worship the God that Failed it is enough to drop out of work and school, and stay home all day on the internet in chat rooms and on 2-channel. It is sufficient to waste one’s money and time in a pachinko parlour, where the whir of the machines serves as a devotional hymn to the God that Failed, your soul slowly leaking through the storm of pinballs into its possession. Worshippers of the God that Failed may not even realize they are in its thrall, but they are everywhere: pachinko junkies, NEETs, that ageing Host who has to work just that little bit longer every morning to make ends meet, that English teacher in the second-rate company who works nights doing Skype lessons for sad shift workers … all the silent failures in life who struggle to succeed in a task that everyone else knows is already a lost cause. They flock to the God that Failed, though they don’t realize they are its worshippers, and through the repetitive daily rituals of failure they slowly lose their souls to it. And with the power of these lost souls, it grants the wish of failure to those who are otherwise successful, guaranteeing continued happiness to the successful by robbing the souls of the already-lost.

    If the God that Failed has a grand design outside of the ordinary spiritual precincts of Shintoism, no one knows it. It seems reasonable that the very nature of this God would preclude any greater purpose than to leech on second-rate souls. But it is possible that in its army of followers and its calm, epochal dedication to the cultivation of failure, it actually has a deeper and more evil purpose. Who is to say what grand movements in Japanese history and culture are due to its meddling? Who is to say that it does not have power in the halls of the high and mighty as well as the low and feeble? Were the spiritually aware to notice its designs, perhaps they would uncover a great and evil pattern, that only the very brave and courageous would dare to unravel …

    This deity would be suitable as an adversary or an ally in a Shadowrun- or Feng Shui-style campaign, or perhaps a far Eastern version of a Cthulhuesque horror. It might be a subtle adversary, its influence underlying more obvious criminal and spiritual cliques that use less subtle techniques. Its efforts might manifest through hacking, accidents, suicides and economic ruin, and establishing the pattern of its attacks would thus be very difficult. It is also a perfect excuse for that GM who needs to destroy the plans of an overly powerful group, but hasn’t figured out a detailed storyline for why they came apart. Missed money transfers, transport plans that fail, contacts who don’t show, NPCs who commit suicide just when you most need them – look in those plot moments for the sinister movements of the God that Failed. Should you be afraid, or scornful? Only time will tell …

  • Introduction

    Every year the Arctic Research Consortium of the US runs a competition to predict the mean arctic sea ice extent in September, and this year I have decided to enter. I have been an avid reader of Neven’s Arctic Sea Ice Blog for the last year, and they host predictions there too. The general idea is that in June, July and August deadlines are set for submission of predictions of the mean September sea ice extent, using any method available. One can enter as an individual or a team, professionally or personally. I thought I would put my modeling skills to the test, and see what I can do.

    As background, arctic sea ice melts from May(ish) to September every year, reaching a minimum sometime in September, before the sun loses its strength and the whole area freezes up again. Over the past 20 years the melt has been strengthening, and recently extent and area have been in freefall. Records were broken in 2007 and then, spectacularly, again in 2012. Activity on Neven’s blog was frantic that year as the sea ice watchers tried to understand the enormity of the drop, and this year you can see again a whole bunch of very professional arctic observers watching the minutiae of the melting process. It’s fascinating because many of them are real experts in their field, and you can watch the joy of scientists learning new things about the world in real time, and see the enormous creativity they put into understanding the processes they are observing.

    More seriously, arctic sea ice melt is expected to have significant effects on northern hemisphere weather, and understanding its accelerating destruction is important to understanding what is going to happen to northern hemisphere weather over the next 10-20 years. So, predictive modeling is not just a fun exercise, but a potentially useful tool to understand where the ice is going.

    Method

    [This is a relatively (for a stats methods section) tech-free methods section, so you should be able to understand the gist without any prior education in statistics]

    I used data on arctic sea ice extent and area from the National Snow and Ice Data Center (NSIDC), and northern hemisphere land-sea surface temperatures from the Goddard Institute for Space Studies (GISS, commonly called GISSTemp). I used the following variables to predict sea ice extent:

    • Sea ice extent from the previous September
    • May extent, area and northern hemisphere snow cover
    • June extent, area and northern hemisphere snow cover
    • April and May surface temperature

    June surface temperature was not available. Snow cover and surface temperatures are expressed as anomalies – the latter from the 1951-1980 baseline, the former from some baseline I can’t remember. I also used year in the model, since it’s reasonable to assume a trend over time.

    I put all these variables into a Prais-Winsten regression model in Stata/MP 12. Prais-Winsten regression fits a multiple regression of a single outcome on multiple predictors under the assumption that the residuals are auto-correlated at lag 1, a common assumption needed to adjust for the serial dependence inherent in time series. Since my main interest in this task is the point estimate (mean) of sea ice extent, I could have used a simple linear regression, but that would have given overly narrow confidence intervals. I could have looked for other modeling methods, but Prais-Winsten is trivially easy in Stata, and I am lazy.
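    For readers without Stata, the Prais-Winsten idea can be sketched in a few lines of numpy: fit OLS, estimate the lag-1 autocorrelation of the residuals, quasi-difference the data, and re-fit, iterating. This is a minimal illustration on synthetic data, not Stata’s implementation – the function name and the test data are my own inventions.

```python
import numpy as np

def prais_winsten(X, y, n_iter=10):
    """Fit y = a + X*b with AR(1) errors via iterated Prais-Winsten.

    X : (n, k) predictor matrix (no intercept column; one is added).
    Returns (beta, rho), where beta[0] is the intercept and rho is the
    estimated lag-1 autocorrelation of the residuals.
    """
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # start from OLS
    rho = 0.0
    for _ in range(n_iter):
        resid = y - Xd @ beta
        # lag-1 autocorrelation of the current residuals
        rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
        # quasi-difference, keeping the first observation scaled by
        # sqrt(1 - rho^2) instead of dropping it (the Prais-Winsten twist)
        w = np.sqrt(1.0 - rho**2)
        ys = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
        Xs = np.vstack([w * Xd[0], Xd[1:] - rho * Xd[:-1]])
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta, rho

# quick check on synthetic data with a known AR(1) error process
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=(n, 1))
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.5)
y = 2.0 + 3.0 * x[:, 0] + e
beta, rho = prais_winsten(x, y)
```

    On data like this the recovered slope and rho land close to the true values (3.0 and 0.6), while plain OLS would give the same point estimates but understate the standard errors.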

    I didn’t use any model-building philosophy, just kept all variables in the model regardless of significance. I could have tried a couple of different models in competition, or done some best-subset or backwards stepwise fitting, but given the amount of data I had (extent, area, snow cover and temperature readings for every month of the year) there was a big risk of over-fitting: unless I crafted a careful ensemble model-fitting approach, I risked producing a model that could explain everything and predict nothing. I have a day job, folks. So I just ran the one model. I may come back to the ensemble issue for the August estimate.

    I first ran the model for the period 1979-2012 to check its fit and get parameter estimates. I then ran the model for the period 1979-2007, and obtained predicted values for 2008-2012. I did this to see if the model could accurately estimate the 2012 crash having only one prior major crash in the training data set. For the sake of interest, I then ran the model to 2011 and re-checked its predictive power for 2012. Both are plotted in this report. Finally, I ran the model to 2012 and used it to predict the mean September extent in 2013, with a 95% prediction interval.
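    The train-then-predict scheme can be sketched like this, using a simple trend-only regression as a stand-in for the full model and a synthetic series in place of the real NSIDC data (both are my own stand-ins, purely for illustration); the 95% prediction interval combines parameter uncertainty with residual noise.

```python
import numpy as np

# Synthetic stand-in for the September extent series, 1979-2012:
# a declining trend of about -0.07 Mkm^2/yr plus noise.
rng = np.random.default_rng(42)
years = np.arange(1979, 2013)
extent = 7.5 - 0.07 * (years - 1979) + rng.normal(scale=0.4, size=years.size)

# Fit on "1979-2007" only, then predict "2008-2012" out of sample.
train = years <= 2007
X = np.column_stack([np.ones(years.size), years - 1979])
beta, *_ = np.linalg.lstsq(X[train], extent[train], rcond=None)

resid = extent[train] - X[train] @ beta
n_obs, k = X[train].shape
s2 = resid @ resid / (n_obs - k)                 # residual variance
XtX_inv = np.linalg.inv(X[train].T @ X[train])

preds, halfwidths = [], []
for x0 in X[~train]:
    pred = x0 @ beta
    # prediction SE: parameter uncertainty plus new-observation noise
    se = np.sqrt(s2 * (1.0 + x0 @ XtX_inv @ x0))
    preds.append(pred)
    halfwidths.append(1.96 * se)
```

    The out-of-sample intervals are wider than the within-sample ones because the `x0 @ XtX_inv @ x0` term grows as the prediction year moves away from the centre of the training data.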

    Results

    The model converged and for the 1979-2012 period had an R-squared of 0.9303, indicating it explained 93% of the variance in the data. That’s quite ridiculous and highly suggestive of over-fitting. Table 1 contains the parameter estimates from this model.

    Table 1: Parameter estimates from the full model (1979-2012)

    Variable                Coefficient   Std. Error       t   P value   Lower CI   Upper CI
    Lagged Minimum Extent          0.01         0.11    0.06      0.96      -0.22       0.24
    May Extent                     0.56         0.33    1.67      0.11      -0.13       1.25
    June Extent                   -1.04         0.41   -2.51      0.02      -1.89      -0.18
    May Area                      -0.91         0.46   -1.98      0.06      -1.85       0.04
    June Area                      1.85         0.32    5.75     <0.01       1.18       2.51
    Year                          -0.07         0.02   -2.63      0.02      -0.12      -0.01
    June Snow Anomaly              0.12         0.04    2.76      0.01       0.03       0.21
    April Temp                    -1.05         0.37   -2.85      0.01      -1.81      -0.29
    May Temp                       1.49         0.70    2.13      0.04       0.04       2.93
    Intercept                    136.22        51.41    2.65      0.01      29.88     242.57

    These coefficients can be interpreted as the amount by which the September mean extent varies, in millions of square kilometres, for a unit change in the given variable. So, for example, every degree increase in April temperatures reduces the September extent by just over a million square kilometres, and there is a 70,000 square kilometre decline every year. Note the conflicting signs on the extent and area terms, and the strange protective effect of high temperatures in May. This could be a sign of a model that is ignorant of physics and just fits numbers to get the best fit. We probably shouldn’t try to use these coefficients to understand the physics of sea ice loss!

    Figure 1 shows the predictive ability of the model run from 1979-2007. All values within this time frame are “within-sample” predictions, generally with low standard error and expected to be close to the true values. Values from 2008-2012 are “out of sample” predictions, with wider confidence intervals and greater risk of departure from the true values.

    Figure 1: Predictive fit for 2008-2012 based on 1979-2007 model run

    As can be seen, the model predicts the 2007 crash very well, but doesn’t handle the 2012 crash particularly brilliantly. It does predict a new record for 2012 though, guessing at a value of 4.13 million square kilometres, just below its 2007 estimate of 4.25. This is half a million square kilometres off the true value (3.63 million square kilometres).

    Figure 2 shows the same fit for the same time periods when the model is run up to 2011. In this case only the year 2012 is an out-of-sample prediction.

    Figure 2: Predictive fit for 2012 based on 1979-2011 model run

    This predictive fit is very good, estimating a value for 2012 of 3.90 million square kilometres – just 270,000 square kilometres off. Note that the 2012 true value is within the 95% confidence intervals for both predictive fits.

    Using the model built for Figure 2, I estimated the 2013 mean sea ice extent to be 4.69 million square kilometres, with a 95% prediction interval of 4.06-5.32 million square kilometres.

    Conclusion

    My final estimate for sea ice extent in September 2013 is 4.69 million square kilometres (95% prediction interval: 4.06-5.32 million square kilometres). This is a huge recovery from September 2012 – just over 1 million square kilometres – but still a historically very low value. Given what I have read in the updates at Neven’s sea ice blog I find it hard to believe that this recovery could occur, but I also note that a lot of people are impressed by the slow early collapse of the ice, and think that unless the high summer is very unusual, melting will be slower than last year. My model has successfully predicted the previous two crashes when run to one year before them, and doesn’t do a bad job of predicting crashes even five years out. And in noisy series, data points don’t tend to stay below the trend for very long, so it’s about time for a correction. However, there is some concern that the persistent cyclone in May really destroyed a lot of ice and has prepared the arctic for a catastrophic summer.

    Let’s hope my model is right!