We are now eight months into the coronavirus pandemic with little sign that most countries will be able to get it under control without a vaccine, which means that many countries are now attempting to return to normal while managing the virus. For most countries I predict this is going to be disastrous, and even countries that have not yet fully reopened – like France and the UK – are seeing a resurgence in cases with the potential for a return of a major epidemic. But some of these countries are planning to reopen schools and universities in the Autumn, despite the risks, on the assumption that personal protective measures can contain those risks. I have expressed before my discomfort with personal protective measures, which will never be as effective at containing an infectious disease as good policy and robust treatment access, but this seems to be the dangerous path most countries have chosen to take. Given this, many universities are now trying to figure out how to return to in-person classes in Autumn, and many professors seem to want to do this. However, after a full semester of teaching entirely online I am unsure why there is so much pressure to return to in-person teaching and supervision. If we are going to move to a new normal I think we should consider the possibility that for some (many?) classes online is better than in-person, and here I would like to outline some of the benefits of online teaching and supervision.

Brief background

I teach classes in basic statistics, basic statistical programming, and some advanced statistics courses, to graduate students who are primarily mature-age students working in health and studying part time. Here in Japan the first semester starts in April, and in February I pushed for us to go entirely online, because I was working with Chinese colleagues on the coronavirus response in China and I knew how bad it was going to get. Our university already had a partially online component of teaching, to enable working people to take classes – basically students can choose to take an online or physical class for all of our required and many of our elective classes, and those who take the online component get to view recordings of the lectures, along with pre-recorded slides and a slide set translated into Japanese. We have an online forum for asking questions, and students taking the online component can also join the physical class if they are able to find free time (this doesn’t happen much). Given that our university already had this experience with online teaching it was very easy to switch entirely online; the faculty agreed, and we had about 6 weeks to prepare. This was a very good decision: many of our students are clinicians and some work directly in covid-19 treatment and care, so having them gather physically in a room is extremely high risk.

I originally planned to just switch the physical classes to the online component, upload last year’s recordings and use the lectures as a Q&A, but students don’t always have time for this, so I started teaching the classes in zoom (using slide sharing and so on), and I have found many aspects of lecturing in zoom to be superior to physical lecturing. I also reconfigured the statistical programming class to be done in zoom using breakout rooms. The statistical programming class was traditionally taught entirely physically, with me and two teaching assistants (TAs) running around the class answering questions and then reproducing errors on the teacher’s computer to explain specific problems that are relevant to everyone’s education. I could not physically do this this year anyway, because I dislocated my kneecap in mid-February and had surgery in mid-April, but even if I had been able to, I found ways to make this work better in zoom. My students this year are learning more and better than last year, using zoom.

Benefits of online teaching

In my experience of first semester there are many aspects of holding classes online that are superior to holding them physically. In no particular order, here they are.

Reduced commuting: Some of my students join the lecture from their workplace, or from locations that vary weekly depending on their schedule. They don’t have to commute, so physically it’s much easier for them. Commuting in Japan is obviously high-risk for coronavirus, but not having to bounce from work to school to home also reduces pressure on students. I think surveys in Japan have shown an overwhelming desire among ordinary workers to continue working from home, and commuting is part of the reason for this.

Better quality lecture materials: Nobody has to squint from the back of the room, or worry about audibility, or any of that stuff. They can see the slides clearly when I share them and can hear my voice clearly, and they can control the audio when they need to. The lecture recordings are also better quality: instead of recording me standing against a white screen in a dark room with dubious audio, the students can clearly see the high quality of the slides and hear my voice directly in the microphone. This is especially useful for the programming class, because it was very hard for students to read the Stata code on the lecture screen, but in the zoom lectures it’s very clear.

Disability friendly: We have one student who has mobility issues and would find getting into class very exhausting and time consuming, but none of this is a problem for them with zoom. Students also don’t have to suffer a one-size-fits-all computer arrangement for the programming class, and can use whatever ergonomic keyboard or weird screen setup they want. They can also learn in their native operating system and now I can teach in both – I have a mac and one of my TAs has a PC, so we can share screens to show differences (plus we can share students’ screens so we can learn how to work in their setup).

Full computer access: In the past I taught on a shared work laptop in a lecture theatre, or on the bodgy old PC in the computer room, with no access to my own full suite of materials. But now I have my entire setup available, so I can dig back through old files to show code I wrote years ago, or data examples that respond directly to a question rather than being prepared ahead of time. Obviously I could do this if I brought my laptop to the class, but it’s so much more convenient to do it in my own office with all my stuff already set up (and it also means I can access external hard drives connected to my office desktop, etc). Students, too, can share the data they’re working with for their projects if they need to.

Shy and quiet students win: Asian students are generally shy and retiring and don’t like to ask questions, but it is much easier for them if their face is not shown or they can ask in a chat window. Questions asked in chat can also be shelved and returned to later (since they’re written down where they can’t be forgotten), or answered by TAs in chat or by other students – in the programming class, if someone asks a question we aren’t sure about, one of the TAs can google the solution (or dig around in help files) and post the answer in chat while I continue managing the class. I think this makes Q&A better, and also encourages more class involvement by shy or quiet students. In my main stats class this isn’t a huge problem (since it’s just straight lectures), but even there being able to hide your face and/or voice helps shy, insecure, uncertain or scared students, all of whom can be found in a stats class. Also note that in a more interactive class a lecturer could strictly control students’ speaking time using the mute button, and I think some systems can monitor how much each student has spoken, so the lecturer can see directly whether they’re allowing one student to dominate the class.

Convenience: Students can eat while they watch the lecture, can drink things other than water, can use their own bathroom when they want to, and can even sleep if they need to, knowing they won’t be caught out, won’t be embarrassing themselves in front of peers or lecturers, and won’t miss the class, since it’s recorded. Students are in general more comfortable in their own home or study or in the environment they chose for study, than in a lecture theatre with students they don’t know.

Recorded classes: My older students in particular find the recording of the programming classes very helpful. They have told me they review the same sections over and over while they try to figure out what to do for certain problems and tasks. For the mathematics, too, they can simply rewind and play it again, which is a huge benefit for the slower or less confident students. I think the security of knowing they can’t miss anything makes it easier for students to take in the class, especially since it’s in their second language.

Overseas and traveling students can participate: Three of our students were unable to enter Japan because the borders slammed shut the week before they were scheduled to arrive, and one more only just slipped through. Given that most of our students are basically self-quarantining to avoid infection, two of our students are eager to return to their home country early so they can take these protective measures in a better environment. Online classes enable these students to continue studying even though they’re overseas. They enable us to maintain a diverse class even though we have pandemic border closures, and potentially in future to extend our classes to students who cannot get a scholarship and cannot afford to study in Japan. This is good!

Given these benefits, I’m not sure why people are eager to return to in-person teaching.

Online supervision and anti-harassment countermeasures

For me, supervising students usually involves working through statistical problems, often on a computer in my office. Last year I investigated ways to set up a shared, easily-accessible screen in my office so that we didn’t have to hunker around a laptop and more than two people could see a person’s work at a time, but the administrative details made me give up. This year of course that’s not a problem – it’s easy for me to supervise groups of students and share screens between them if I want. Nonetheless I still find in-person supervision preferable to online – visual and body-language cues are helpful for understanding whether someone understands what you’re saying, and somehow I feel something is missing in online supervision that I don’t feel in online teaching. Also, in-person supervision can mean having a student down the hall who drops in and pesters you with the next stage of a problem on the regular, which can be a very convenient way to get through difficult parts of a project quickly, but you can’t do this so well online. (You could, of course, just leave a zoom session open from 9am with your students logged in and working quietly and use it when you need to, like a shared office – but we haven’t got there yet). So I still somehow prefer in-person supervision. However, there is one way in which I think online supervision is going to radically change the way professor/student and professor/staff relationships work, and that is its use in preventing harassment.

There are many forms of harassment in universities but one of the commonest is power harassment (pawahara in Japanese), in which a senior figure uses their power and authority to ruin the lives of students and junior staff. This is done through straightforward bullying – yelling, threats, insults and the like – as well as through things like taking authorship, demanding excessive work, refusing to share connections, giving unfair assessments, and so on. Things like sharing connections are the sorts of subtle power relations that can never be fought effectively, but the bullying aspects of power harassment take on a very different tone when all meetings need to be conducted online. I was myself bullied by a boss for years, and when I made a formal complaint against him a big problem I had was that much of his behavior – the threats to sack me, the unreasonable demands, the unfair statements about my work and personality, the threats towards my students – was verbal and not recorded, so in the formal complaint this became a case of my word against his. I won that complaint but it was a long slog and the outcome was not as good as I had hoped because the entire part of my complaint about his manners and inter-personal behavior could not be confirmed. This isn’t a problem when your relationships are done through zoom, and it will completely change the balance of power, for the following reasons.

The bully cannot get the same pleasure online: Bullies do what they do for personal pleasure and to bolster their own fragile personalities, so they need a reaction. Sure, they do a lot of stuff that has no visible response – threatening emails, yelling over the phone, bitching about you to others – but none of this means anything to them if they can’t also hurt you visibly, and viscerally enjoy the pleasure of watching you collapse. This pleasure is obviously going to be reduced if it’s done through a camera, but worse still, on zoom you can turn off your own camera and mute yourself and they simply cannot get any pleasure from their words at all. They can try to force you to turn your camera and mic on, but you are the one who controls your computer’s settings, and they cannot enjoy bullying as much. If it doesn’t make them feel better they’ll still do it – bullies are bullies after all – but they will have less personal incentive to do it and maybe, just maybe, as a result they won’t do it as much. Also, obviously, the bully cannot do the physical things bullies love – throwing small office objects, throwing paper at you, pushing you or touching you.

Bullies hate to be recorded: This is the real killer for a bully. Bullies always know how power works and are very aware of the risks of power being used against them. This is why the threats and insults are much more commonly and forcefully delivered in person, away from witnesses and not in writing. If you can record your meetings with your boss then he or she is going to have to be super careful about what he or she says, and even if the bully can stop you from recording the zoom session itself they cannot stop you putting your phone next to the speaker and hitting record. The threats to sack me always happened in unplanned ad hoc meetings where I did not have time to surreptitiously bring in my phone and hit record, and in any case it is hard to surreptitiously record people when they can see what you’re doing. But online they cannot guarantee they aren’t being recorded, and this means they will have to be careful. Furthermore, one of the responses a university might consider to bullying is to have a witness present at meetings, but the university cannot do this for ad hoc meetings, hallway interactions and the like. But zoom eliminates those meetings – all meetings need to be scheduled and can be recorded. So you can simply request during mediation to have all meetings recorded, and you already have your bully on a leash. It’s worth noting too that universities are going to be much, much more careful about dismissing bullying claims if they are aware that the recordings of the situation they determined was “not bullying” could end up going viral on twitter. I am aware for example of one famous economist who has a terrible reputation, but no one has ever recorded his rants. Good luck to him supervising online!

Witnesses: One of the great things about zoom is that you don’t know what’s going on on the other side of the computer. Even if the video is on and mute is off, a quiet witness can sit on the other side of the computer listening to the behavior of your bully, and stand as a witness in a complaint. Bullies often gaslight their victims, making sure they say derogatory things in private and then either denying them or saying that they didn’t mean it that way or that you misinterpreted their tone. They can’t get away with that if someone you trust is listening in and can tell what they really meant, and give you feedback later. This is a protection for strict or unreasonable senior staff who are not bullies, because that witness will potentially tell their subordinate that the behavior is unpleasant or unreasonable but not bullying. But for bullies this is a disaster. They can’t break your confidence in your own judgment if there are witnesses to dispute their gaslighting, and they can’t even know the witnesses are there. Also it’s much easier for a victim to strike back verbally if they have a person there offering emotional support, even silently – especially if the conversation is muted and the camera off so that the victim can consult with the witness about what to say. And of course you can have that witness occasionally drift by in the background, so that the bully suddenly discovers that the last 30 minutes of bad behavior may have been heard by an outsider.

Bullies love chaos and unstructured interactions: One thing my boss was fond of doing was barging into my office and yelling at me, or calling me into an impromptu meeting and demanding answers to things I hadn’t prepared for, or catching me after group meetings with unreasonable and unrealistic requests plus insults. Bullies love to have everyone on edge, never sure when they’re going to make demands or suddenly turn foul. Of course they can be erratic and chaotic in zoom meetings, but they cannot just barge into your work and yell at you over zoom – they need to schedule appointments by email, and that means telling you what the meeting is about so you can prepare, or at least leaving a paper trail showing they refused to tell you. Also, when meetings are organized like this you can try to rope in co-supervisors, colleagues and collaborators to defuse the aggression – and of course you can schedule a witness to hover behind your computer.

Given these reasons I think online supervision actually takes a lot of power away from senior staff and puts it in the hands of their victims. With tele-working and home-based teaching and research becoming the new normal, I think there is a strong chance that even after the pandemic people will be able to shape the new normal to allow more meetings and supervision to happen online, and to gain greater control over the environment in which bullying happens. If you are being bullied by your supervisor now, I recommend finding ways to turn the zoom meetings and lack of physical meetings into a tool to collect evidence of your mistreatment, and gathering support from partners and friends to help weather it. A couple of recorded zoom sessions with a powerful bully could transform a workplace harassment case, and the implied threat of viral attention in particular will really serve to focus the minds of campus administrators on what to do about bullying senior staff. It is my hope that online supervision and telework in the new normal will revolutionize the way academics work, and in particular will enable students and junior staff to better manage the misbehavior of unruly and unpleasant senior faculty.

Online conferences and virtual meetings

One thing I really hate about academia is the conference world. I think it’s a scam that was developed by a previous era of academics to enable free international travel, and for a while it was great – people could go to exotic locations and take a break on the government’s money. But now that administrators have become aware of the scam and grant money is getting more competitive, conferences are a drag. Even very senior staff now are not allowed to fly business, are required to turn up the day of or the day before a conference, are not allowed to take time off before they fly home, and often have to present certificates of attendance or reports. I find conference attendance exhausting and distracting, and I don’t think it enhances my academic life at all. Shlepping halfway across the world to give a 5 minute presentation at a conference where 90% of the material isn’t relevant to my work, then going straight from the final day to the airport to shlep all the way back, arriving the day after I left and having to go back to work the next day – it’s just an exhausting and tedious waste of time. The fact that it is relevant to our careers – that junior staff have to take time out from all the other stuff they’re doing to faff about on the other side of the world, without any pleasurable side benefits, in order to pad their CV – is incredibly infuriating. And on so many occasions it is completely unproductive – if you’re not the keynote speaker at an international conference you’re likely to be presenting a 5 minute speech in a windowless room to 5 or 10 other people (3 of whom are from your workplace anyway) who won’t have any questions and may not even care about your work (5 of them are the other presenters!). It’s very rare that there is any significant interaction or that anything productive arises from it. What a waste of time!

Online conferences, on the other hand, are great! You only have to attend the presentations that are interesting, you can do it as part of your day job, and because nobody needs to blow half their grant money on a plane ticket many more people will attend. My Chinese colleague recently attended one where she presented her work to 300 people, rather than the 10 people she would expect at a physical conference – and she did it from her bedroom! This means that way more people see your work, there is much more interaction as a result, time limits can be strictly adhered to, people without grant money or from poorer universities can attend, students can attend … it’s a huge win. I hope that in the new normal physical conferences will become a thing of the past, and will be recognized as the wasteful scam that they were. Let’s make all our conferences online and save physical work travel for actually meaningful trips to do real work!

Conclusion: Online teaching is great

I was raised to think of online learning as a scam, a way for unscrupulous universities to fleece low-quality students for second-rate degrees. But in the modern world of high connectivity and good quality shared work apps, I think we can move past this and begin to see a way to improve our teaching using the online tools available to us. We can make our classes more inclusive, more interactive and more engaging, and we can find new ways to teach hard topics. We can also change the nature of workplace meetings and hopefully even begin to make real progress on eliminating bullying. And we can finally do away with the ludicrous scam of physical conferences, which will enable us to use our grant money more effectively and get our work out to a wider range of people than we have in the past. Let’s embrace this new normal and use it to make our teaching genuinely inclusive and higher quality!

And let me tell you something
Before you go taking a walk in my world,
…you better take a look at the real world
Cause this ain’t no Mr. Rogers Neighborhood
Can you say “feel like shit?”
Yea maybe sometimes I do feel like shit
I ain’t happy about it, but I’d rather feel like shit
…than be full of shit!

 

There are times in life when it’s necessary to turn to the original gurus of self-righteous self-inspiration, Suicidal Tendencies. Life getting you down, you feel you can’t keep going? Crank up ST and when the boys ask you “Are you feelin’ suicidal?” yell back “I’m suicidal!” and you’ll be back on track in no time. Been meandering through some shit, making mistakes you know are your own dumb fault, and need to kick yourself back onto the straight and narrow? Gotta kill Captain Stupid is what you need. Getting played by conmen who play on your better nature, maybe take you for a ride using your religious impulses? Then you can crank up Send Me Your Money and be reminded that “Here comes another con hiding behind a collar / His only God is the almighty dollar / He ain’t no prophet, he ain’t no healer / He’s just a two bit goddamn money stealer.” That’ll get your cynical radar working again! But the Suicidals’ most useful refrain, the one that applies most often and most powerfully in this shit-stained and terrible world, is the imprecation at the beginning of the second half of their skate power classic, You Can’t Bring Me Down:

Just cause you don’t understand what’s going on
…don’t mean it don’t make no sense
And just cause you don’t like it,
…don’t mean it ain’t no good

This pure reminder of the power of bullshit over mortal men came to me today when I began to delve into the background of the latest Sokal Hoax that has been visited on the social sciences. I’d like to explore this hoax, consider how it would have panned out in other disciplines, make a few criticisms, and discuss the implications of some of their supposedly preposterous papers. So as Mikey would say – bring it on home, brother doc!

The Latest Hoax

The latest hoax comes with its own report, a massive online screed that describes what they did, why they did it, how they did it and what happened. Basically they spent a year preparing a bunch of papers that they submitted to a wide range of social studies journals in a field they refer to as “grievance studies”, which they define by saying

we have come to call these fields “grievance studies” in shorthand because of their common goal of problematizing aspects of culture in minute detail in order to attempt diagnoses of power imbalances and oppression rooted in identity.

This definition of the field is easily the vaguest and most hand-wavy way to select a broad set of targets I have ever seen, and it’s also obviously intended to be pejorative. In fact their whole project could perhaps be described as having the “common goal of problematizing aspects of culture in minute detail” – starting with their definition of the culture.

The authors admit that they’re not experts in the field, but they spent a year studying the content, methods and style of the field, then wrote papers that they submitted to journals under fake names (one real professor gave them permission to use his name) from fake institutions. They submitted 20 papers over the year, writing one every 9 days, and got 7 published, one with a commendation; the other 13 were repeatedly rejected or still under review when somehow their cover was blown and they had to reveal the hoax.

The basic problem with the hoax

The papers they submitted are listed at the website and are pretty hilarious, and some of the papers that were published were obviously terrible (though they may have been interesting reading). Two of the papers they submitted – one on dog parks and one on immersive pornography – used fake data, i.e. academic misconduct, and two were plagiarized parts of Mein Kampf, with some words replaced to reverse them into a feminist meaning of some kind (I guess by replacing “Jew” with “men” or something).

Submitting an article based on fraudulent data is, let’s be clear, academic misconduct, and it is also extremely difficult for peer reviewers to catch. Sure, it’s easy in retrospect to say “that data was fake”, but when peer reviewers get an article they don’t get the raw data; they have to judge based on the summaries in the paper. This is how the Wakefield paper that led to the collapse in MMR vaccination got published in the Lancet – Wakefield made up his data, and it was impossible for the peer reviewers to know that. The STAP controversy in Japan – which led to several scientists being disgraced and one suicide – involved doctored images that were only discovered when a research assistant blew the whistle. Medicine is full of these controversies, in which data is faked or manipulated and only discovered after a huge amount of detective work, or after a junior staff member destroys their career blowing the whistle. Submitting fraudulent work to peer review – a process which at heart depends on good-faith assumptions all around – is guaranteed to be successful, so succeeding at it is not an indictment of anyone.

Submitting a word-replaced Mein Kampf is incredibly tacky, tasteless and juvenile. Most academics don’t read Mein Kampf, and it’s not a necessary text for most sociological disciplines. If the journal doesn’t use plagiarism software or the peer reviewers don’t, then this is undoubtedly going to slide through, and while much of Mein Kampf is pernicious nonsense, a lot of it is actually pretty straightforward description of political strategies and contemporary events. Indeed the chapter they used (chapter 12 of volume 1) is really about organizing and political vision[1], with only passing references to Jewish perfidy – it’s the kind of thing that could be rendered pretty bland with a word replace. But from the description in their report one might think they had successfully published an exterminationist screed. I’m sure the hoaxers thought they were being super clever doing this, but they weren’t. Detecting plagiarism is a journal’s responsibility more than a peer reviewer’s, and not all journals are able to do it. It’s not even clear that the plagiarized text would have been easily detected by google searches of fragments if there was a sufficient level of word replacement.

So several of their hoax papers were highlighting problems with the peer review process in general, not with anything specific to social studies. Of the remainder, some were substantially rewritten during review, and a lot were rejected or sent back for major revision. While people on twitter are claiming that “many papers” were accepted, in fact the most obviously problematic ones were rejected. For example, the paper that recommended mistreating white students, ignoring their work and dismissing their efforts, to teach them about white privilege was rejected three times, yet people on twitter are claiming that the treatment of this paper shows some kind of problematic morality on the part of the peer reviewers.

The next problem with the hoax is that the authors have misrepresented good-spirited, kind-hearted attempts to take their work seriously as uncritical acceptance of it. Consider this peer review that they report[2] on a paper about whether men commit sexual violence by masturbating to fantasies of real women (more on this below):

I was also trying to think through examples of how this theoretical argument has implications in romantic consensual relationships. Through the paper, I was thinking about the rise of sexting and consensual pornographic selfies between couples, and how to situate it in your argument. I think this is interesting because you could argue that even if these pictures are shared and contained within a consensual private relationship, the pictures themselves are a reaction to the idea that the man may be thinking about another woman while masturbating. The entire industry of boudoir photography, where women sometimes have erotic pictures taken for their significant other before deploying overseas in the military for example, is implicitly a way of saying, “if you’re going to masturbate, it might as well be to me.” Essentially, even in consensual monogamous relationships, masturbatory fantasies might create some level of coercion for women. You mention this theme on page 21 in terms of the consumption of non-consensual digital media as metasexual-rape, but I think it is interesting to think through these potentially more subtle consensual but coercive elements as well

This is a genuine, good-faith effort to engage with the authors’ argument and to work out its implications. But this peer reviewer, who has clearly devoted considerable time to engaging with and attempting to improve this paper, now discovers that he or she was being punked the whole time, and that the authors were laughing at his or her naivete for thinking their idea should be taken seriously. He or she did this work for free, as part of an industry where we all give freely of our time to help others improve their ideas, but actually this good-faith effort was just being manipulated and used as part of a cheap publicity stunt by some people who have an axe to grind with an entire, vaguely-defined branch of academia. And note also that after all this peer reviewer’s work, the paper was still rejected – but the hoaxers are using it as ammunition for their claim that “grievance studies” takes preposterous ideas seriously. Is that fair, or reasonable? And is it ethical to conduct experiments on other academics without consent?

I would be interested to know, incidentally, if their little prank was submitted to institutional review before they did it. If I tried to pull this shitty little move in my field, without putting it through an IRB, I think my career would be toast.

But there is another problem with this hoax, which I want to dwell on in a little more detail: some of the papers actually covered interesting topics of relevance in their field, and the fact that the hoaxers think their theories were preposterous doesn’t mean they were actually preposterous. It’s at this point that the Suicidals’ most powerful rule applies: Just because you don’t understand what’s going on, don’t mean it don’t make sense.

The theoretical value of some of the hoax papers

Why don’t men use dildos for masturbation?

Let us consider first the paper the authors refer to as “Dildos”, actual title Going in Through the Back Door: Challenging Straight Male Homohysteria and Transphobia through Receptive Penetrative Sex Toy Use. In this paper the hoaxers ask why men don’t use dildos for masturbation, and suggest it is out of a fear of homosexuality and transphobia. The hoaxers say that they wrote this paper

To see if journals will accept ludicrous arguments if they support (unfalsifiable) claims that common (and harmless) sexual choices made by straight men are actually homophobic, transphobic, and anti-feminist

But is this argument ludicrous? Why don’t men use dildos more? After all, we know that men can obtain sexual pleasure from anal insertion, through prostate stimulation. There is a genre of porn in which this happens (for both cis men and transgender women), and it is a specialty service provided by sex workers, but it is not commonly practiced in heterosexual intercourse or male masturbation. Why? Men can be pretty bloody-minded about sexual pleasure, so why don’t they do this more? There could be many reasons, such as that it’s impractical, or it’s dirty, or (for couple sex) that women have a problem with penetrating men, or because men see sex toys as fundamentally feminized objects – but it could also be out of a residual homophobia, right? This seems prima facie an interesting theory that could be explored. For example, the only mainstream movie I can think of where a woman penetrates a man is Deadpool, so it should be fairly easy to study reactions to that movie and analyze them for homophobia (reddit should be pretty good for this, or MRA websites). Understanding the reasons for this might offer new ways for men to enjoy sex, and a new diversity of sex roles for women, which one presumes is a good thing. So why is this argument ludicrous?

Why do men visit Hooters?

Another article that was published was referred to by the hoaxers as “Hooters”, actual title An Ethnography of Breastaurant Masculinity: Themes of Objectification, Sexual Conquest, Male Control, and Masculine Toughness in a Sexually Objectifying Restaurant. The article argues that men visit “breastaurants” to assert male dominance and enjoy a particular form of “authentic masculinity,” presumably in contrast to the simpler motive of wanting to be able to look at tits. The authors say they wrote this article to

see if journals will publish papers that seek to problematize heterosexual men’s attraction to women and will accept very shoddy qualitative methodology and ideologically-motivated interpretations which support this

But again, this is basically an interesting question. Why do men go to restaurants with scantily-clad women? They could eat at a normal restaurant and then watch porn, or just read Playboy while they eat. Or they could eat and then go to a strip club. So why do they need to be served in restaurants by breasty girls? And why are some men completely uninterested in these environments, even though they’re seriously into tits? The answer that this is something about performing a type of masculinity, and needing women as props for some kind of expression of dominance, makes sense intuitively (which doesn’t mean it’s right). It’s particularly interesting that this article is being presented as preposterous by the hoaxers now, just as debate is raging about why Brett Kavanaugh insisted on sharing his non-consensual sexual encounters with other men, while Bill Cosby did his on the down-low. It’s almost as if Bill and Brett had different forms of masculine dominance to express! Forms of masculine dominance that need to be explored and understood! By academics in social studies, for example!

Also note here that the tone of the hoaxers’ explanation suggests they think the idea that visiting breasty restaurants is problematic is obviously wrong, and that everyone agrees with them about this. In fact, many Americans of good faith from many different backgrounds don’t consider visiting Hooters to be a particularly savoury activity, and you probably won’t convince your girlfriend you’re not an arsehole by telling her she’s wrong to “problematize heterosexual men’s attraction to women” in the context of your having blown your weekly entertainment budget on a trip to Hooters. Understanding why she has problematized this behavior might help you to get laid the following week!

Do men do violence to women when they fantasize about them?

The hoaxers wrote an article that they refer to as “Masturbation”, real title Rubbing One Out: Defining Metasexual Violence of Objectification Through Nonconsensual Masturbation, which was ultimately rejected from Sociological Theory after peer review. I think this was the most interesting of their fake articles, covering a really interesting topic with real ethical implications. The basic idea here is that when men fantasize about women without the women’s consent (for example when masturbating) they’re committing a kind of sexual violence, even though the woman in question doesn’t know about it. They say they wrote this article

To see if the definition of sexual violence can be expanded into thought crimes

But this way of presenting their argument (“thought crimes”), and the suggestion that the definition of sexual violence hasn’t already been expanded to thought crimes, are deeply dangerous and stupid. To deal with the second point first: in many jurisdictions anime or manga that depicts sex with children is banned, even though nobody has been harmed in making these comics. So yes, sexual violence has been extended to include thought crimes. And if we don’t expand the definition of sexual violence into thought crimes we run into some very serious legal and ethical problems. Consider the crime of upskirting, in which men take secret videos up women’s skirts and put them onto porn sites for other men to masturbate to. In general the upskirted woman has no clue she’s been filmed, and the video usually doesn’t show her face, so it’s not possible for her to be identified. It is, essentially, a victimless crime. Yet we treat upskirting as a far more serious crime than just surreptitiously taking photos of people, which we consider to be rude but not criminal. This is because we consider upskirting to be a kind of sexual violence exactly equivalent to the topic of this article! This is also true for revenge porn, which is often a public shaming of a woman that destroys her career, but doesn’t have to be. If you share videos of your ex-girlfriend naked with some other men, and she never finds out about it, and your friends don’t publicize those pictures, so she is not affected in any way, everyone would agree that you have still done a terrible thing to her, and that it constitutes sexual violence of some kind. I’ve no doubt that in many jurisdictions this kind of revenge porn is a crime even though the woman targeted has not suffered in any way. Indeed, even if a man just shows his friend a video of a one night stand, and the friend doesn’t know the woman, will never meet her, and has no way to harm her, this is still considered to be a disgusting act. So the fundamental principle involved here is completely sound. This is why porn is made – because the women are being paid to allow strangers to watch them have sex. When people sext each other they are clearly giving explicit permission to the recipient to use the photo for sexual gratification (this is why it is called sexting), and couples usually don’t sext each other until they trust each other, precisely because they don’t want the pictures shared so that people they don’t know can masturbate to them without their consent. We also typically treat men who steal women’s underwear differently to men who steal other men’s socks at the coin laundry – I think the reason for this is obvious! So the basic principle at the heart of this paper is solid. Yet the hoaxers treat the idea underlying much of our modern understanding of revenge porn and illicit sexual photography as a joke.

I think the basic problem here is that while the hoaxers have mimicked the style of the field, and understand which theoretical questions to target and write about, they fundamentally don’t understand the field, and so things they consider to be ludicrous are actually important and real questions in the topic, with important and real consequences. They don’t understand it, but it actually makes sense. And now they’ve created this circus of people sneering at how bad the papers were, when actually they were addressing decent topics and real questions.

How would this have happened in other fields?

So if we treat these three papers as covering serious topics, recognizing that two of them were published, and then discount the paper with fraudulent data (dog park) and the paper that was plagiarized (feminist Mein Kampf), we are left with just three published papers that might be genuinely bullshit, out of 20. That’s 15%, or 22% if you drop the plagiarized and fraudulent papers from the denominator. Sounds bad, right? But this brings us to the next big problem with this hoax: there was no control group. If I submitted 20 papers with dodgy methods and shonky reasoning to public health journals, I think I could get 15% published. Just a week or two ago I reported on a major paper in the Lancet that I think has shonky methods and reasoning, as well as poorly-gathered data, but it got major publicity and will probably adversely affect alcohol policy in future. I have repeatedly on this blog attacked papers published in the National Bureau of Economic Research (NBER) archives, which use terrible methods, poor quality data, bad reasoning and poor scientific design. Are 15% of NBER papers bullshit? I would suggest the figure is likely much higher. But we can’t compare, because the authors didn’t try to hoax these fields, and as far as I know no one has ever tried to hoax them. This despite the clear and certain knowledge that the Reinhart-Rogoff (R&R) paper in economics was based on a flawed model and bad reasoning, but was used to inform fiscal policy in several countries, and its basic conclusions are still believed even though it has been roundly debunked.

The absence of hoaxes (or even proper critical commentary) targeting other fields means that those fields can maintain an air of unassailability, while social studies and feminist theory are repeatedly criticized for their methods and the quality of their research and peer review. This is a political project, not a scientific project, and these hoaxers have gone to great lengths to produce a salable, PR-ready attack on a field they don’t like, using a method that is itself poorly reasoned, with shonky methodology and a lack of detailed understanding of the academic goals of the field they’re punking. They have also, it should be remembered, acted very unethically. I think the beam is in their own eye, or as the Suicidals would say:

Ah, damn, we got a lot of stupid people
Doing a lot of stupid things
Thinking a lot of stupid thoughts
And if you want to see one
Just look in the mirror

Conclusion

This hoax shouldn’t be taken seriously, and it doesn’t say much about the quality of research or academic editing in the fields it criticizes. Certainly on the face of it some of the papers that were published seem pretty damning, but some of them covered real topics of genuine interest, and the hoaxers’ interpretation of the theoretical value of the work is deeply flawed. This is a PR stunt, nothing more, and it does nothing to address whatever real issues sociology and women’s studies face. Until people start genuinely developing a model for properly assessing the quality of academic work in multiple fields, with control groups and proper adjustment for confounders, in a cross-disciplinary team that fully understands the fields being critiqued, these kinds of hoaxes will remain stupid stunts that play on the goodwill of peer reviewers and academics for the short-term political and publicity benefit of the hoaxers, with no longer-term benefit to the community being punked, and at the risk of considerable harm. Until a proper assessment of the quality of all disciplines is conducted, we should not waste our time punking others, but think harder about how we can improve our own.

 


fn1: I won’t link, because a lot of online texts of Mein Kampf are on super dubious websites – look it up yourself if you wish to see what the punking text was.

fn2: Revealing peer reviews is generally considered unethical, btw

News continues to trickle out concerning the latest bullying scandal in American academia, on which I reported briefly in a previous post. Through the Lawyers, Guns and Money blog I found a link to this excellent Twitter thread on the damage done to the humanities by celebrity academics like Ronell. These celebrity academics don’t just exist in the humanities, and not just in the “literary theory” cul-de-sac of humanities. They also exist in the physical sciences (think of people like Dawkins and Davies), and they are also a thing in public and global health. In public and global health they are typically characterised by the following traits:

  • They build large teams of staff, who are dependent upon the celebrity academic for their positions
  • They have a flagship project or area of research that they completely dominate, making it hard for junior academics outside of their institution to make progress on that topic
  • They attract very large amounts of grant money, a lot of it “soft” money accrued through relationships with NGOs and non-academic institutions like the Bill and Melinda Gates Foundation, the Wellcome Trust, AXA, the World Health Organization, and similar bodies
  • They have cozy relationships with editorial boards and chief editors, so that they get preferential treatment in journals like The Lancet, New England Journal of Medicine, JAMA, etc
  • They attract a lot of applications from students and post-doctoral fellows, who often bring in their own funding in the form of scholarships and prestigious fellowships
  • They often have a media presence, writing commentary articles or having semi-regular invitational positions on local and national newspapers, in medical journals and on certain websites
  • They are on all the boards

This means that these celebrity academics are able to drive large amounts of research work in their field of expertise, which they often parlay into articles in journals that have high impact through friendly relationships with their colleagues on those journals, and they also often get invited into non-academic activities such as reports, inquiries, special seminars and workshops, and so on. Even where these celebrity academics are not bullies, and are known to treat their staff well and with respect, and to be good teachers and supervisors, this kind of celebrity academia has many negative effects on public health. Some of these include:

  • Their preeminence and grip on grant funding means that they effectively stifle the establishment of new voices in their chosen topic, which risks preventing new methods of doing things from being established, or allows shoddy and poorly developed work to become the mainstream
  • Their preferential treatment in major journals pushes other, higher quality work from unknown authors out of those journals, which both reduces the impact of better or newer work, and also prevents those authors from establishing a strong academic presence
  • Their preferential treatment in major journals enables them to avoid thorough peer review, enabling them to publish flawed work that really should be substantially revised or not published at all
  • The scale and dominance of the institution they build around themselves means that young academics working in the same topic inevitably learn to do things the way the celebrity academic does them, and when they move on to other institutions they bring those methods to those other institutions, slowly establishing methods, work practices, and professional behaviors that are not necessarily the best throughout academia
  • Their media presence enables them to launder and protect the reputation of their own work, and their involvement in academic boards and networks gives them a gatekeeper role that is disproportionate to that of other academics
  • Their importance protects them from criticism and safeguards them against institutional intrusion in their behavior, which is particularly bad if they are abusive or bullying, since junior staff cannot protest or complain

This is exactly what we are now learning happened to Reitman from his lawsuit – he tried to transfer his supervision to Yale but discovered the admissions officer there was a friend of his supervisor, he tried to complain to a provost who also turned out to be a friend of his supervisor, and he could not complain while a PhD student because of fear that his supervisor would destroy his job opportunities through her networks. We also see that Ronell (and friends of hers like Butler) have a disproportionate academic influence, which ensures that they maintain a cozy protection against any intrusion into their little literary theory bubble. Ronell’s books are reviewed (positively) by Butler, who then writes a letter defending Ronell from institutional consequences of her own poor behavior, which no doubt Butler knew about. There’s a video going around of a lecture in which Ronell’s weird behavior is basically an open joke, and in signing the letter some of the signatories basically admit that they knew Ronell’s behavior crossed a line but they saw it as acceptable (it was just her “style”). We even have one shameful theorist complaining that if she is punished, academics in this area will be restricted to behaving as “technocratic pedagogues”, because it is simply impossible for them to teach effectively without this kind of transgressive and bullying behavior.

One of the best ways to prevent this kind of thing is to prevent or limit the ascendance of the celebrity academic. But to do so will require a concerted effort across the institutions of academia, not just within a single university like NYU. Some things that need to happen to prevent celebrity academics getting too big for their boots:

  • Large national funding programs need to be restricted so that single academics cannot grab multiple pools of money and seize funding disproportionate to their role. This already happens in Japan, where the national grants from the Ministry of Education are restricted so that an academic can only have one or two
  • Private and government funds, such as Ministry funding and funding from organizations like the Bill and Melinda Gates Foundation, need to be more transparently accessible from outside the academy, and also more objective and transparent in assessment – you shouldn’t be able to work up a large amount of money for your research group just by being able to go to the right cocktail party / hostess bar / art gallery – basically at every level, as much as possible, grant funding should be competitive and not based on who you know or how much money you’ve already got
  • Journals – and in particular senior journal editors – should stay at arm’s length from academics, and journal processes should remain transparent, competitive and anonymous. It simply should not be possible – as often happens in the Lancet, for example – to stitch up a publication by sending an email to a senior editor who you had a chat with at an event a few weeks ago. No matter how many times you have published in a journal before, your next submission to that journal should be treated in substance and spirit as if it were your first ever submission
  • Journals need to make more space for critical responses to articles, rather than making stupid and restrictive rules on who and what can be published in response to an article. I have certainly experienced having a critical response to an article rejected on flimsy grounds that I’m pretty sure were based on a kneejerk response to criticism of a celebrity, and it’s very hard to publish critical responses at all in some journals. A better approach is that pioneered by the BMJ, which treats critical responses as a kind of comment thread and elevates the best ones to the status of published Letters to the Editor – this ensures more voices get to criticize the work, and everyone can see whose critiques were ignored
  • Institutions need to make their complaint processes much more transparent and easy to work with. Often it is the case that serious harassment cases – physical or sexual – are easy for students to complain about, but the smaller and more common problems, like academic misconduct and bullying, are much more difficult to complain about. I think it is generally true that if an academic is disciplined early in their career for small infractions of basic rules on misconduct and bullying, they will be much, much less likely to risk major misbehavior later
  • Student complaints need to be handled in a timely manner that ensures that they are able to see resolution before their thesis defense or graduation, so they can change supervisors if necessary
  • Academic advisors should never be able to sit on their own student’s dissertation committee, or on the committees of their close friend and co-author’s students, since this gives them undue influence over the student’s graduation prospects and kills dead any chance of a complaint (I can’t believe this happens in some universities!)
  • The academic advisor’s permission should never be a requirement for submission. At the very least, if your relationship with your advisor goes pear-shaped, you should always be able to just tell them to fuck off, go off and do the work by yourself, and submit it to an independent committee for assessment

I think if these kinds of rules are followed it will be much harder for academics to become celebrities, and much harder for their celebrity status to become overpowering or to enable them to stifle other students’ careers. But a lot of these changes require action by editorial boards, trustees of non-profits and NGOs, and government bodies connected to specific topics (such as ministries of health, or departments responsible for art and culture). Until we see wholesale changes in the way that academics interact with editorial boards, grant committees, private organizations and government agencies, we will not see any reduction in the power and influence of celebrity academics. In the short term this influence can be fatal for students and junior academics, but in the long term – as we have seen in literary theory, it appears – it can also drag down the diversity and quality of work in the whole discipline, as a couple of bullies and pigs come to dominate it, ensuring that no one deviates from their own line of work and no one ever criticizes their increasingly weak and low quality work. Academia as a whole benefits from genuine competition, diversity of funders and fund recipients, spreading grant money widely and fairly, and maintaining rigorous standards of independence and academic objectivity in assessing work for publication. Celebrity academics weaken all of those processes, and bring the entire academy down.

A final note: I cannot believe that academics invite students alone to their houses, or (as in this case) invite themselves to their student’s houses. There is no legit reason to do this. Every university should tell its academics, from day one: if you invite a student alone to your house and they lodge a sexual harassment complaint against you, you’re on your own – we will believe them every time. Just don’t do it, under any circumstances. And they should tell students from day one: if your supervisor (or any academic) invites you alone to their house, report it immediately. It’s simply terrible behavior, and no good will ever come of it. Reading the report that this student lodged against his supervisor, it’s simply impossible to believe that she wasn’t up to no good, and simply impossible to accept that the university did not uphold his complaint of sexual harassment. He has now launched a lawsuit, so we can now see all the details of what happened to him and how he dealt with it, and it looks like a complete disaster for NYU and for the professor in question. If the university had disciplined this woman much earlier in her career for much lighter infractions; if it had a clear rule forbidding these one-on-one home-based “supervision” arrangements, or at least making clear that they are a sexual harassment death zone for profs; and if the university gave its senior academics a clear sense that they are not protected from such complaints, then this situation would never have arisen. There is no excuse for this kind of unprofessional behavior except “I knew I could get away with it.” And the academic world needs to work to ensure no professor can ever know they can get away with it, no matter how famous and special they are or think they are.

… in Japanese, for my work. Yesterday a group of 40 first year high school students came to my department from Soma City, a town in the tsunami-affected region of Tohoku. I’m not sure why, perhaps as a quid pro quo for research we’re doing up there, but they were brought down for the afternoon and as part of the day’s events we organized a two-hour workshop on Global Health Policy for them. How do you do this for a bunch of bored 16 year olds? My department’s students, being very much closer in time to bored 16 year olds than me, managed to come up with a cunning scheme. After an initial greeting, they divided the students into eight countries, and set them a role-playing task based on public health.

The task: the students had to imagine they were representatives of their country at the UN. A new disease, “Disease X”, has been identified and declared an international emergency, and they have to decide what their country is going to do about it. Each group was assigned a “policy advisor” from the country in question – i.e. one of the students or staff – and where necessary a Japanese graduate student to help translate. They were given background information on all the countries in the room, including a few salient details about the country that might be relevant to the disease. Then the properties of the disease were explained. Disease X was in fact tuberculosis, so the basic properties were:

  • One third of the world’s population is infected
  • Treatment takes 6 – 9 months
  • Vaccines are only effective in children
  • It’s potentially fatal
  • It is transmitted by coughing and sneezing

Because there weren’t enough grad students to go around, my student from Hong Kong (whose Japanese is very good) and I were given our groups without a single translator – the grad student who organized the session was nearby and could come over if we had any trouble. Our task was to guide our students to a plan for what to do, in 20 minutes, including time to write up the intervention on a shared presentation (conferenced through google).

The Plan: The background for Australia gave the students the salient numbers about Disease X (low incidence, low prevalence, low death rate) and the key aspects of Australia’s health challenges, which were high migrant inflows, inequality in health between Aborigines and non-Aborigines, and inequality in health between urban and rural areas. In fact, I had downloaded an article from the Australian and New Zealand Journal of Public Health that makes these differences pretty clear: incidence in Australia is 5.4 per 100,000, but in native Australians[1] it is 0.9, and in new migrants and Aborigines 6.6. Also in some parts of Australia it is even higher amongst Aborigines, as high as 13 times the rate for non-Aboriginal Australians.

My students didn’t have these detailed figures, only the bullet points highlighting Australian health challenges, and they immediately fixed on migration as a possible key driver of the disease. I had already told them about the three possible levels at which they could intervene (regional, national, international) and so, when they settled on migration as the challenge, I asked them whether they would do national or international-level interventions. After a bit of debate they decided that there’s no point in trying to better control it at the border if the disease is going gangbusters overseas, so they decided to focus on development work in countries with high rates. They then started scrabbling through the country descriptions, comparing incidence and prevalence, and found the two countries with the highest incidence. Once they had identified which one had higher immigration rates to Australia (Bangladesh, made up by me on the spot – I guess the immigration rate is higher than Nigeria but I really don’t know), they examined the challenges written on the Bangladesh country sheet. One of the key ones was lack of access to healthcare amongst the poor, so they decided to send doctors and medicine to Bangladesh, in collaboration with local doctors (I had to point out this detail).

They actually decided on Bangladesh because (in their words) there’s no value to Australia in providing aid to a country it has no migration connection with, so it’s better to spend the money in a country where the aid will benefit both countries. This may seem harsh, but it means they recognized a basic principle of tackling inequality (whether global or local) that I try to focus on in my work: with infectious diseases, there is a significant benefit to the community as a whole from reducing inequality by targeting those worse off, since the people with the highest disease incidence are also the ones who will drive the epidemic. By recognizing this they had identified a key difference between targeting those easiest to reach (who usually have the least problems) and those hardest to reach (where intervention has the most benefit, both in that group and in the community as a whole).

Once they had done this I told them the statistics on incidence amongst Aborigines, and pointed out that they didn’t necessarily need to look to Bangladesh to target a group that might be vectors for the disease. But actually rates of TB are much, much higher in Bangladesh than in Aboriginal Australians, so they probably ultimately made the right choice.

So, 20 minutes of group work, largely free of railroading by me, and my students had managed to come up with a fairly reasonable intervention plan that might even have some chance of working, and mostly through their own efforts to analyze the data in front of them – and this was their first ever experience of thinking about public health. It wasn’t entirely sandbox-y, but close enough – you can’t run a completely open session in 20 minutes. All but one of the other tables completed their work on time, and I like to think that this is at least partly because in our planning session the day before I gave a few basic pointers to the grad students about how to GM. I didn’t tell them they were GMing, of course, but that’s what they were basically learning how to do.

The denouement: Once the groups had all presented their results, one of the grad students gave a 10 minute presentation on what disease X really is – TB – and the important role Japan has played in developing prevention strategies. He then gave an overview of international health and our role in it, and one of the high school students gave a very cute bouncy speech – in English! – thanking us for the experience. It was all very cute and effective, and the students seemed genuinely happy to have solved the world’s problems in 20 minutes.

Reforming the WHO: Now, many people might have criticisms of the WHO, and might have expected that if our High School students were genuinely going to role-play a WHO experience, they would all sit down and refuse to compromise, and ultimately come up with a wishy washy motherhood statement that enabled every student to go home and make empty promises to their families[2]. They didn’t do this! So this leads to three possible suggestions for ways to reform the WHO:

  1. Send the students from Soma City to the WHO and give them 20 minutes to solve the world’s problems
  2. Send the grad students from my department, whose boundless energy is truly a wonder to behold, and whose ability to ignore the magnitude of actual barriers to implementing a plan, and just do it anyway, is quite amazing
  3. Teach the current representatives at the WHO how to role-play, so they can come to solutions more efficiently

Which would be most successful? I’m guessing suggestion 1…

A well-rounded Graduate Education: I’m sad to report that the students of my department, though great in many ways, lack all the fundamental principles of a well-rounded classical education. None of them have watched Star Wars or Aliens, they don’t even know what role-playing is, and the primary texts necessary for a good understanding of public health – Lord of the Rings, Bladerunner, Conan – are not in their curriculum. How can they assess a problem if they haven’t been taught the critical skills outlined in the clash between good and evil in Star Wars? How can they be qualified to research women’s health without the basic grounding in feminism provided by Aliens? How shallow is one’s understanding of the human condition if one hasn’t been led to consider one’s basic humanity through the eyes of Deckard in Bladerunner, and indeed – how can they properly comprehend the real social and political impact of shortened life expectancy if they haven’t heard Roy Batty’s final speech? My god, at the end of the presentation they were reduced to quoting from a completely peripheral text by Jeffrey D Sachs. But I like to hope that yesterday they learnt a little bit about how to GM, so I’ve gone one small step towards laying the groundwork for a proper classical education. We’ll see if I can get them through the other texts by the end of their degree.

In fact, it’s essential, since they won’t understand my jokes until they have watched those movies …

fn1: since the early 80s, Australia has had a principle of not recording race on census and hospital forms. Instead, we record country of birth, so anyone who is a second generation Australian is recorded as “Australian.” We also record language spoken at home, and Aboriginality, but when we talk about “Australians” we don’t identify race. Eventually, one hopes, Aboriginality will also be able to be dropped from hospital records, but that’s a long time coming.

fn2: This is unduly harsh on the WHO. I know bashing international institutions is like shooting fish in a barrel, but actually the WHO does some pretty good work, in e.g. polio eradication, disaster response, handling outbreaks like SARS, etc. They may not be the best model or the best institution, but given their circumstances they’re doing an okay job, I think

We continue our series on John Dower’s War Without Mercy with a discussion of the role of social scientists in the construction of propaganda. We have already seen that Japan’s social scientists were working on the question of how to construct a new social order for the Pacific under a Japanese empire, but their role by no means ended there, and nor was this kind of distasteful theorizing limited to Japanese scholars. In fact the work we saw in our previous post was largely conducted in secret, and served less to construct propaganda than to draw on existing racial ideology to develop practical plans. And in this we see the nub of a fascinating problem. By the time Japan had spent 10 or more years at war in the Pacific her propaganda had become so entrenched that the social scientists’ work had itself been infected by the kind of foolish ideologies that so much effort had previously been put into convincing the population to believe.

The same can be observed of Allied war planners before the war. Based on the theories of racial and social scientists, Britain’s military planners really believed that the Japanese would make bad pilots and couldn’t win aerial warfare – they had been told by their scientists that the way Japanese women carry their infants affects their inner ear and makes them unsuited for aerial manoeuvres. They also believed the Japanese to be short-sighted and timid, and had been told that their lack of initiative would make them predictable and uncreative war planners. Even at Iwo Jima, when the Japanese defence used coordinated heavy artillery, the Allies decided the Japanese must have had German support; they made the same assumption after Japan’s initial victories in the Pacific, because their racial theories didn’t allow non-white races to win.

These fallacies in the support of propaganda were not accidental, either. Sometimes considerable effort would be put into research and justifications for certain political views. Social scientists played a key role here, presenting both academic and popularized descriptions of Japanese culture that supported the views being presented by government propagandists. Extensive effort was put into proving that the Japanese as a race were trapped in a childlike mental state, with the preferred theory appearing to be that Japanese toilet training techniques were so horrific that they arrested the development of the Japanese psyche, rendering them both vicious-tempered and subservient to authority figures. That’s right, a whole race’s psychology traced to its choice of toilet paper, and entire theories of wartime conduct developed on this basis.

I don’t think it’s a coincidence that a whole bunch of social scientists spent a large amount of time working on a complex set of theories that ultimately ended up agreeing very closely with the base propaganda of the US government and Leatherneck magazine, any more than that a previous generation of scientists labored to prove that blacks were inferior to whites; or archaeologists managed to prove that the white race settled India. It’s a salient lesson to all of us – especially those of us in or near academia – that the much-vaunted intellectual freedom and independence of academia always ends up telling us what we want to hear. This shouldn’t seem so surprising, given human nature and the way society works, but the history of academia’s service to unpleasant ideas should stop us being too self congratulatory about how free-thinking we really are in our ivory towers. My own field of statistics prides itself, I think, on being quite independent and free-thinking[1], but it’s worth remembering the somewhat unpleasant eugenics of Fisher, and the role of demographers and population planners in the Nazi occupation of eastern Europe – all very good examples of academics supporting the status quo when, in retrospect, the status quo was obviously wrong and in many ways evil.

Maybe things have improved since world war 2, but maybe also they have just become more sophisticated, or the stakes have been lowered. We’ve seen plenty of social science in support of foreign intervention (e.g. the domino effect) and dictatorship (some of our more morally bankrupt economists on Chile, and a wide smattering of pre-70s leftists on Eastern Europe), and the history of population planning hasn’t been free of controversy in the post-war era. So it’s worth remembering that quite often scientists are working as hard to reflect perceived wisdom as they are to uncover genuinely new ideas. Where the propaganda is needed the academics seem to be able to find a basis for it; and where it has already taken hold they are as likely to perpetuate it (or just lend it a little nuanced sophistication) as they are to challenge it. And you certainly can’t rely on us to bear the load of intellectual honesty when the stakes are high. So next time a scientist tells you they have stunning proof of a commonly-held prejudice, you should probably just smile and back away politely. Who knows where their work will end – it could be a population planning document whose contents have long since passed into preposterous fantasy; or it could be a firestorm in Tokyo. But like as not, their work isn’t going to get you to any profound truths – or at least, that is the lesson we can learn from the involvement of academics in the development of the theory underlying propaganda and race hate in world war 2.

fn1: though maybe this field is better characterized as a bunch of ratbag leftists, at least in my experience

He skipped Cultural Studies for Bicycle Tech Class

Continuing this week’s zombie theme, Grey has raised in comments to my last post the possibility that our modern specialization and lack of basic survival skills – farming, hunting, that sort of thing – would be a major problem in surviving the zombie apocalypse. The obvious implication of this is that your average media studies graduate, pasty white-faced urban public servant, is meat hanging on a hook once the gates of hell open up. Now, every time I watch a zombie movie I’m thrown into something of a reverie of thought about this – how would I survive, what would I do, what skills make one a valuable team member? And I’m forced to conclude that the skills of urban man aren’t actually so useless in your classic urban zombie apocalypse. In fact, I think the classic survival skills that one associates with a man[1] of a previous, simpler, less specialized era wouldn’t actually be anywhere near as useful either in the short term or the long term as one might initially think. This post is my classically long-winded attempt to work out why, but first let’s consider two examples of modern urban humans – one “real” and one not – in a short term and long term zombie survival scenario.

The Short Term Survival Skill of Greatest Importance: Media Studies and Jim from 28 Days Later

In the classic post-apocalypse scenario that everyone is familiar with, Jim wakes up from a coma in hospital. We know Jim is a bicycle courier, and he is in a modern (post-2000) world where the infected have taken over the streets. He emerges into the light of day and in a series of classic scenes stumbles through an empty London looking for clues as to what happened. He enters a church and takes altogether too long to figure out what’s going on, and ends up having to flee the scene with a bunch of infected chasing him, until a pair of survivors turn up with a few molotovs and save his bacon.

What was the key skill Jim needed here? He needed to have attended those early morning media studies classes, so that he could understand the narrative signs of a zombie apocalypse. No amount of gun-toting, pig-farming, deer-hunting experience was going to get him out of this one. What he needed was to know that in a deserted London with signs up at Piccadilly Circus looking for lost loved ones who have fled to the country, going into an abandoned church is a bad plan. Similarly, the people who rescued him had a key skill they learnt at too many black bloc demonstrations – throwing molotov cocktails. And when he started to have his freak out, the woman in the group knew enough about medicine and nutrition to make him aware that he was suffering from his sugar-rich diet.

These aren’t skills or adaptation tactics one learns on the farm.

A Statistician in the Wilderness: Experimental Design and Community Survival in a Long-term post-Apocalyptic Scenario

Suppose that a gang of survivors that includes your humble blogger finds itself needing to carve out a long-term existence in the wilderness, having identified that there is no chance of society as we know it re-forming. Obviously we need to start farming at some point, because while survival hunting might be useful in the short term, it’s unlikely to provide sufficient food for a growing community and anyway, there are zombies out there. So, this community needs an efficient way of learning what farming methods are best within a few seasons, based on what knowledge we have between us. A statistician with training in experimental design is very useful for this sort of enterprise – a single season with a few crop yields will be sufficient to identify the best growth techniques in a well-designed trial, and this is very important for protecting a community long-term against crop failure and the destabilizing effects of famine. It’s also essential to enable community growth. So even a skill as apparently useless as statistics can be put to work in the long-term interests of a post-apocalyptic community.
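Since I’ve invoked experimental design, here’s a throwaway sketch of what that first-season trial might look like. The farming methods, yields and plot counts are all invented for illustration: the point is just that a handful of randomized plots and a one-way ANOVA (or even a glance at the group means) is enough to tell you where to put your effort next season.

```python
# Toy post-apocalyptic crop trial: a few farming methods randomized across
# plots, one season of (invented) yields, and a one-way ANOVA to check
# whether the differences are more than noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical true mean yields (tonnes/hectare) for each candidate method
methods = {"broadcast sowing": 2.0, "raised beds": 2.6, "crop rotation": 3.1}

# Eight randomized plots per method, one season of yields
yields = {name: rng.normal(mean, 0.4, size=8) for name, mean in methods.items()}

f_stat, p_value = stats.f_oneway(*yields.values())
best = max(yields, key=lambda name: yields[name].mean())

print(f"F = {f_stat:.1f}, p = {p_value:.3g}; best observed method: {best}")
```

In a real settlement you’d block by field and soil type rather than rely on pure randomization, but the principle stands: one season of properly randomized plots beats years of trial and error.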

The Importance of Education for Adaptation

These examples are both facetious but they show that there is a key skill in surviving a zombie apocalypse – adaptation. And adaptation is facilitated by a wide and advanced education, popular cultural knowledge, exposure to media, and the coherent exchange of specialist skills in a community. In the short term the ability to farm or hunt is irrelevant to survival in a collapsing urban environment – key skills are adaptability, brutality, and knowledge of the urban environment[2]. In the long term survival is best facilitated not by the ability to hunt or grow food, but by the ability to research, learn and adapt.

If an early group of pre-moderns survived a zombie apocalypse[3] and escaped to the wilderness, they might find themselves at a deserted abbey full of books on farming, the origin of zombies, good herbs to cure disease, local hazards, and the quickest and safest way to the coast, but their illiteracy would render all this information meaningless. Finding good mushrooms would be a process of trial and error, as would building a decent roof. It strikes me that my long-term survival strategy would be:

  • Find a pharmacist
  • Loot a library
  • Start a community based around a source of power, a pharmaceutical manufactory, and a farm

You can’t do this with a bow and a good knowledge of how to grow potatoes. In adapting to a new world, common sense is nowhere near as useful, I suspect, as the ability to synthesize new information and turn it to advantage, and this is very much a feature of the modern urban world. Why, even looting a library is not an easy job if you have to do it in a very short period of time before the zombies come – that takes organization, planning, and knowledge of how libraries work and how knowledge is accumulated.

Also, some skills that seem ubiquitous in zombie movies are actually extremely rare and probably more likely to be learnt anew than randomly occur in any group of survivors. The one that springs to mind most readily when watching US movies is gunplay. Not only is this skill extremely rare in the rest of the developed world, but getting guns is difficult and requires research and the ability to move large distances through hostile urban territory to find them. In fact, finding alternatives to guns is probably a much more viable option, and that – again – relies on adaptation. Not to mention that most people’s actual training in gunplay doesn’t extend to “using it safely in the presence of your comrades while exploring a deserted warehouse.”[4]

The Huge Range of Neglected Skills in Modern Life

I think it’s fashionable in the modern world to suppose that many of our jobs and skills are useless and really just represent the icing on the cake of civilization. I don’t think that’s necessarily the case. Suppose, for example, that you end up in a gang of survivors composed of a weekend warrior paintballer, a retired cop, a housewife and a history teacher – these are hardly the sorts of people who’re going to build the new world, are they? But these people all have skills you might not expect. The weekend warrior might actually be very good at shooting, which is handy; the retired cop would have first aid skills; the housewife might previously have been an urban planner, with knowledge of the sewage system and how to move through the city safely underground; and the history teacher could be the local organizer for the teacher’s union, with a lot of experience of getting disparate groups of people to work together in a common cause. Someone in the group may have studied agriculture at university; the history teacher may know the location of the city’s key commercial food warehouses, which would be an extremely valuable piece of knowledge.

The Importance of Social Connection

In fact that last example is probably the most important of all, because the history of zombie attacks tells us that the single most important survival skill is the ability to play well with others, and to make judicious rules about how a group of people is to work together. This is the pre-eminent achievement of the modern urban world – advanced skills in group dynamics, planning, and getting shit done. Surviving in the zombie world is about fast collective decision-making and coordinated action, not individual prowess with knife, stick or gun. In the short term the ability to coordinate a raid on a supermarket to maximize your useful acquisitions in the minimum time, while guarding the exits and maintaining clear communication, is vastly more important than how many zombies you can kill or whether you can catch fish. If you have no-one in that supermarket who can quickly tell the difference between antibiotics and antidepressants at the pharmacy counter (or if you send them to the clothing department to get padded jackets instead), you’re fucked – and having a good supply of antibiotics and machetes and nutritious tinned food is probably going to keep your group alive longer than a gun, 7000 rounds of ammo and a fishing line. Anyone who has spent time in a modern company knows how to function as a cog in a larger machine, what part to play and how to play it, and it’s likely that most modern urban dwellers if forced could come up with a decent group response to their plight.

Conclusion

Never fear, telephone sanitizers and personal shopping assistants of the world, you have more to fear from the global financial crisis than you do from a zombie apocalypse! Especially if you have done enough team-building exercises with your fellow survivors!

fn1: And I think the classic survivalist scenario always assigns these skills to a man, not a woman

fn2: All well evidenced on any Friday night in the centre of London!

fn3: Which seems like an excellent campaign idea!

fn4: In fact, in this scenario would 15 years’ training in a shooting range be even 10% as effective as 3 weeks playing Time Cop at the local arcade?

On the weekend I spent an hour in a bathtub with a dyed-in-the-wool conservative[1], discussing the merits of various solutions to the world’s problems – not a very fruitful discussion, since we disagree on many things, but we are easily able to agree on the horrible situation the UK faces, and during the discussion I mentioned my plans for a blog post on the Tory education policy, so here it is. The particular question I’m interested in is “will the Tory education reforms reduce inequality?” I don’t want to address the wider question of whether they’re any good, because I don’t know much about the education sector in the UK. It seems prima facie the case that cutting funding to a largely government-maintained sector by 25% (or is it 20?) isn’t going to be good for that sector, at least in the short term, and my impression in the UK was that the sector is generally in pretty poor shape – but I don’t know enough about it to be sure, so I’ll leave my opinions out and focus on the question of inequality.

This post is a question rather than an answer. I’m phrasing my opinions from here on in as definite statements of fact (using words like “is” rather than “appears to be”) but I’m not sure I’m right or wrong on this topic (it’s out of my usual area of concern, that’s for sure!) so I welcome comments with more information or different views.

Also note that I’m writing this post on the assumption that both the previous Labour government and the Tory government care about inequality, and that the policies they enact aren’t just window-dressing. Some people think that such a claim about Labour is pretty dubious (and I tend to agree); others think such a claim about the Tories is ridiculous. I actually believe that at least some Tories (i.e. the Bullingdon club) do care about inequality, but it’s my belief that in general their policies are going to be a disaster for this aspect of British society. However, ineffective policy and lack of policy commitment are different issues, so I’m not going to address claims that the Tories don’t care about inequality.

Graduate Tax Education Schemes

The Tory education reforms are, in essence, that they will widen the scope of universities to charge fees to undergraduate students – i.e. they’ll increase the cost of a university degree, basically – and in some cases they will allow universities to charge really shocking amounts, but in exchange they will put in place a bunch of additional measures to ensure access to university for poor students. The policy is really just an extension of the previous policy (detailed here), which was in turn a rip-off of the Australian policy, which has now been around for about 20 years, and which I studied under. The basic way such policies work is:

  • Universities charge all or a portion of the total cost of education to the students
  • Students take out a loan from the government for the cost
  • Students repay the loan after graduation
  • Typically loan repayments are through the tax system, and commence only above a certain wage
  • The loan is usually at lower interest than the market rate

Typically the loan only increases with inflation, rather than charging real interest. This type of policy can be characterized as free education with a graduate tax, which is applied for varying lengths of time depending on the course you undertook. When I went through university (in Australia) the price of the course was only a small portion of its real cost, and the government paid a basic wage equivalent in value to welfare, which was essentially a grant, to all students from poor backgrounds or above a certain age. Since then the fees have increased as a percentage of the cost of the degree, but the previous conservative government (under John Howard) loosened up the rules on that basic wage, so it was more accessible to students. In the UK it appears that students can take a loan for their living expenses[2], which they pay back in a similar fashion to the fees. I think this is the key problem with the system as it stood in the UK – coming from a poor background and having to take a loan for 4 years of education plus fees seems like a pretty big imposition, though I think such concerns can be overrated, since they don’t take into account the anti-intellectualism of the lower working class, which I’ll come back to at the end of this post.
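To make the mechanics concrete, here’s a minimal sketch of how an income-contingent loan of this kind plays out over a career. The 21,000 pound repayment threshold and the 30-year write-off match the Tory proposals described in the next section; the 9% repayment rate and the inflation and wage-growth figures are purely illustrative assumptions, not the actual rules of any scheme.

```python
# Minimal sketch of an income-contingent student loan: the balance is
# indexed to inflation only (no real interest), and repayments are a fixed
# share of income above a threshold, collected through the tax system.
# Parameter values are illustrative, not the actual UK or Australian rules.

def years_to_repay(balance, income, threshold=21_000, repay_rate=0.09,
                   inflation=0.03, income_growth=0.03, write_off=30):
    """Years until the loan is cleared, or None if it is written off first."""
    for year in range(1, write_off + 1):
        balance *= 1 + inflation                     # indexation, no real interest
        balance -= max(0.0, income - threshold) * repay_rate
        if balance <= 0:
            return year
        income *= 1 + income_growth                  # assume steady wage growth
    return None                                      # unpaid remainder written off

# e.g. 20,000 pounds of fees on a 30,000-pound starting salary, versus fees
# plus a living-costs loan on the same salary
print(years_to_repay(balance=20_000, income=30_000))
print(years_to_repay(balance=37_000, income=30_000))
```

The point of the structure is that the loan behaves much more like a time-limited extra tax than like a mortgage: what you pay in any year depends on your income, not on the size of the debt.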

The Tory Reforms

The Tory reforms are outlined on their website, and basically involve the following:

  • Double the current cap on fees the universities can charge students, from 3000 to 6000 pounds, and allow fees of 9000 pounds in exceptional circumstances
  • Where universities charge above 6000, require them to provide scholarships to poor students to access the university
  • The threshold for repayment of the loan will increase to 21000 (so you have to earn more than 21000 pounds before you need to repay the loan)
  • The loan will be written off after 30 years
  • The loan will be extended to part-time students
  • The government will increase the current living expenses grant for poor students and raise the threshold above which it cuts out
  • Loans for living expenses will be available regardless of income
  • The government will introduce a new 150 million pound scholarship system for low-income students
  • The government will “consult” on ways to prevent rich students from paying off their loan up front and getting out of the progressive repayment system

This policy seems to only contain one bad point – the massive increase in the cost of fees. If playstations tripled in price tomorrow the nerds would be rioting in the streets, so I can understand student anger at this. But it’s a loan, not an upfront cost, so it doesn’t really matter what the government charges – this is the attitude I took with my education, anyway, and it’s paid off in spades (we’ll get back to this).

In fact, I think there are key points in this that reduce inequality in access to education significantly. These are (presented in no particular order):

  • Requiring scholarships from top universities: everyone knows it will be the top universities that charge the higher amounts, and requiring them to provide scholarships will mean that potentially more students from poor backgrounds can afford their fees. Access to the top universities in the UK is as close to a guarantee of a good job as you can get in this world, and along with removing the last vestiges of class barriers to entry to these universities (such as interviews) in recent times, these changes will force the universities to admit more poor students
  • Changing the repayment thresholds: Worries about how crippling the debt repayments will be are certainly important factors in the decision to go to university, and setting these repayment rates so they’re affordable but enable students to pay off their loans in a reasonable time is important. It’s also important that the debt doesn’t skyrocket before you can pay it off (as happens in New Zealand) and doesn’t kick in when you are earning too little to afford extra tax. These changes make the repayment rates more progressive
  • Getting rid of early payment benefits: The thing that shits me most about the Australian system is that paying your fees upfront gets you a huge discount (currently 25%, I think). While I understand there are economic reasons for doing this (about reducing risk, etc.) it basically means that people with a cool 10000 pounds to spare get their education for 25% less than people with no capital. This is a classic case of “free to those who can afford it” and an example of one of the main ways by which poor people stay poor and rich people stay rich. When you don’t have the spare capital to invest in stuff, you end up paying more – reducing your ability to save up that same capital. It’s an evil poverty trap, and the benefits (guaranteed immediate income for the government) are not worth the inequality effects. Governments can afford to bear risk – that’s why we have governments! – and in this case the deferred income is more than made up for by the inequality avoided. If the Tories do find ways to get around this problem – they were even discussing an early payment penalty recently – then they’ve made significant inroads into killing a huge financial benefit provided to the already-rich.
  • Extending living expenses: For me, a poor student with no capital (I had $250 when I arrived in Adelaide to go to University, enough for the student union fees and nothing else) and no job and no parental support (my parents contributed $0 to my education and living expenses from the age of 16), the single biggest deterrent to going to university was finding a way to finance my living expenses. I had a pretty burdensome degree (physics) and I didn’t want to work while I was studying, but even if I did, I would have been unable to earn much – or guarantee a job, in 90s Adelaide. Fortunately the Australian government provides a maintenance grant, which, though not exactly sustainable in the long term, is sufficient to get you through university. Knowing this, decisions about going to university were easy – I decided to go, and if I couldn’t get a job I’d have the grant. This concern must be a real killer in the UK, where the cost of living is outrageous and the best universities are in rural towns with very little available work. For people from poor backgrounds like me who don’t care about the size of the loan but really worry about how we can pay for food and rent, a good maintenance grant is essential. The new Tory policy seems to provide this.

For me the extension of maintenance grants is the key to enabling access to poor students, especially for universities outside of London where part-time and casual work sufficient to support 4 years of study may be unavailable. I don’t think anyone I studied physics with held down a part-time job after 2nd year due to the enormous amounts of study time involved (we had 6 assignments a week, and Classical Field Theory assignments alone took 12 – 15 hours of our week!) I know that engineers and medicine students had even more work than I did, and couldn’t juggle it the way the humanities kids did, so they weren’t able to easily find work. In Australia this isn’t such an issue because students don’t move away from their home town to study – they mostly live with their parents – but in the UK it’s a significant problem. UK students can take a loan but taking a loan for living expenses and fees leaves you saddled with a huge debt that wealthier kids, or kids who could stay at home, didn’t need to incur. This is a major inequality problem.

Overall I think that the Tory policy contains the four key ingredients needed to make university access more equitable in a graduate tax scheme, and crucially it attacks the two key causes of inequality in education access – it extends maintenance grants and removes the early payment benefits of previous systems. I suspect a side-effect of this will be more mobility for poorer students, enabling the most talented poor students to take up courses far from home – either specialized courses or courses in better universities – that they might previously not have taken due to fears over the cost of living and the risk of taking a huge loan to cover living expenses. This will be good for the UK overall, since better talent accessing more suitable courses means a better workforce.

A side note on anti-intellectualism in the working and lower middle classes

A common complaint about graduate tax schemes is that they saddle poor kids with huge debts that they won’t want to bear, and that poor people are afraid of debt or, having a lower income to start with, see debt of a given size as more prohibitive than wealthier people do. I think this is, within reasonable limits, bullshit. England is going through a massive housing crisis, at least one small part of which is due to people taking out huge house loans they can’t afford, in the hopes of making short term gain on “the property ladder.” Though I don’t believe they were the cause of the crisis, poor people seem to have been just as willing to take these risks as their wealthier compatriots, for no more reason than the possibility of making a 10% profit in a few years. Poor people are quite happy to take a risk on a large loan – in fact, on a loan way larger than those for a uni degree, with much higher repayment rates – and while it could be argued that yes, these people (usually!) have jobs, they don’t get any deferred repayment options or reduced interest, so I think the size should be more rather than less relevant in their case.

Given that it is well established that the single best investment in future income that anyone can make is a university education, the idea that poor people will be discouraged from university by a total debt of a mere 12-24000 pounds is pretty shonky, unless poor people don’t realize that an education is the best future investment possible. If their parents were willing to take a 150,000 pound loan for a high-risk short-term profit opportunity, why should their children be perturbed by a 24000 pound, low-risk guaranteed medium term profit opportunity? The only possible explanation is that poor kids don’t realize that a university degree is the best guarantee of future earnings. And who, largely, is responsible for this misperception? Their parents. The lumpen proletariat, working and lower-middle classes in the UK are strongly anti-intellectual, and value economic risk for material gain over economic risk for intellectual gain. To a lesser extent this is true in Australia too, in my experience, but it’s more noticeable in the UK. If poor people want to help themselves they need to shake this attitude, and time and again you see the same phenomenon – poor kids who went to university return to their communities and find they are no longer understood or respected because they’ve become “posh.” While I think state schools have a role to play in countering this bias[3], ultimately parents and family are the key determinants of these things and poor communities just aren’t interested[4].

Given this, a graduate tax scheme shouldn’t in and of itself be seen as a barrier to poorer communities accessing university, though obviously saddling kids with a huge loan for living expenses – in the new scheme it will possibly total more than 35000 pounds – will be a genuine discouragement. But the basic loan sizes are far smaller than poor families were willing to risk in the housing market, with far more benefit. So a combination of maintenance grants, costs deferred through low-interest loans, and scholarships should not be considered a disincentive for poor people to attend uni, unless those loans are really really high.

Conclusion

I think the Tory education reforms are a significant improvement on the Labour policy, and go some way towards reducing inequality in access to education in the UK.

fn1: who claims he isn’t, so regardless of the worthlessness of terms like “conservative” and “right wing” in describing actual people, I aim to apply this word to him egregiously

fn2: which the administering body will cock up delivery of

fn3: For example, I discovered what University was at the age of 16 through my high school careers counsellor – my parents were thoroughly uninterested in my actually using my obvious mathematical and language skills, so even though I’d been saying for years that I wanted to be a scientist they never actually even looked into how I could go about doing this. Science, they seemed to think, was for rich kids.

fn4: For this I also blame the unions, who in the last 10-20 years have retreated from their role as broad enablers of community achievement, instead focussing more and more narrowly on workplace issues[5]

fn5: Not to mention, of course, labour parties, who are the key force for cultural and political change in poor areas, and have given that responsibility away

So I’m still struggling through the introduction of the PhD thesis I promised to read: understandable since the introduction is still going at page 50. In between my last post and this one I’ve had to wade through some sleep-inducing academic wank, but now I’ve got to the outline in the introduction of the importance of race, and its fluidity in cyberpunk.

The first thing to note, mentioned quite a bit in this article, is that Gibson had never been to Japan when he wrote Neuromancer, which was written in 1982. So here we have a North American in 1982 writing a book redolent with themes from a country he has never visited, during an era when North America was afire with fear of what the Japanese were going to do in America (this was the bubble era and Japan had just, apparently, become the largest creditor nation in the world – they were supposedly buying up American businesses and land). This, I think it’s easy to see, is a situation ripe with potential for cultural stereotypes to eclipse nuanced thinking.

It’s worth noting before we go on – and for the rest of any posts I get around to writing about this – that the author of this thesis I’m studying makes it clear at this point that his goal is not “reading cultural representation for their positive or negative (authentic or inauthentic) portrayals”, but that he is interested in examining the ways that these representations “function to reiterate, challenge, transform and/or create cultural norms”. His interest is the relationship between existing stereotypes of Japan, the way the cyberpunk texts interpret them, and how these interpretations serve to create new images (at least, that’s what I assume this means). I know a lot of (both of) my readers are eager to find examples of transparent whining leftism, so please relax – this chap is trying to do something a little more interesting than that.

So what does the introduction tell us about how race will be handled in the thesis? For a start, in the 4 pages covering “The Fluidity of Race” we don’t see the word “multiculturalism” once, even though Gibson himself states that “I’ve always lived in Vancouver … a Pacific Rim city with a lot of interaction with Japan.” Vancouver, the world’s most multicultural city, in a country with a policy of multiculturalism… it seems that this might have influenced Gibson’s views on race and his power to interpret race, or to imagine multi-racial societies. Also, isn’t Vancouver in … Canada? But the classic interpretation of cyberpunk is as an American urban myth. So for example we find this description of the relationship between America and Japan at the time:

the now obligatory Japanese reference also marks the obsession with the great Other, who is perhaps our own future rather than our past, the putative winner of the coming struggle – whom we therefore compulsively imitate, hoping that thereby the inner mind-set of the victorious other will be transformed to us along with the externals

[this is actually a quote from Jameson, a key post-modernist writer influencing our author’s text]. But is it right to apply this to Gibson? If he lived in Vancouver most of his life, is this relevant? Canada is a resource exporting country, and such countries are never threatened by manufacturing countries the way that another manufacturing country (e.g., America) might be – the manufacturing countries need us so long as we have stuff in the ground. The quote as written certainly sounds like something that could be said about Philip K Dick, or about Allied war propaganda from world war 2, but is it applicable to the mindset of a man who has “always lived” in a multicultural city as relaxed and easy to live in as Vancouver, in a resource-exporting country? I think it might be a little overwrought. And Jameson seems to be saying this about Bladerunner as much as about Gibson’s work.

This part of the introduction concludes with the statement that

in an era of globalisation, Asian Americans are becoming ubiquitous in American popular culture both as producers and consumers. Globalisation … has been accompanied by intensified transnational cultural practices and cultural hybridities in societies around the world. Thus “race and its cultural meanings remain at the core of globalizing media flows and their local receptions”

This leads to the discussion of the other big issue in cyberpunk, globalisation, but it doesn’t seem to me to put the race issue to bed. Is the representation of race in cyberpunk related to globalisation or to the triumph of multiculturalism as a cultural model, if not for everyone in the west, at least for young people from a certain cultural elite? And what does that tell us about the kinds of stereotypes that will enter the work of a man who had never visited Japan when he wrote the book? Will they be stereotypes based on outdated cultural models of Japan, or will they be a combination of the various Oriental things he saw in multicultural Vancouver (including shops, Asian cinema, visits to Chinatown, art exhibitions etc.) and the hugely influential Bladerunner? If so, the stereotypes Gibson is building are drawn not only from a distant, imagined Orient, but from an Orient which has plonked itself on his doorstep, modified itself to suit a relaxed, multicultural, very Western city, and presented itself to him full of late 70s and early 80s vigour.

If so, what we’re seeing here is the production of stereotypes in a very different way to that envisaged by Said in Orientalism. We’re also seeing, perhaps, the production of images of the Orient in a sub-cultural genre that may not actually be influenced very strongly by the insecurities and biases of that great producer of modern popular culture, America. Perfect material for the development of a theory of post-modern Orientalism. But our author hasn’t mentioned multiculturalism or paid much attention to Gibson’s Canadian heritage – so is he going to miss this chance when he approaches the topic in more detail?

Only time will tell…

Over at Terra Nova there is news of the release of a study conducted with the help of Sony, which is essentially a large survey of MMO users’ role-playing style, their attitude towards the game, mental health and degree of social exclusion. It’s an interesting attempt to characterise the qualities of MMO players by their degree of interest in role-playing and their sociodemographic and personal profile, and the first study of its kind to use data from the underlying game database. I have some problems with the statistics (outlined after my rant, below) which maybe will be clarified when the final version of the paper is released, but I have bigger problems with the interpretation of the results, and the view that the researchers at Terra Nova are taking of role-players as compared to the “non” role players in the survey.

Specifically, in the summary of the paper, the first author Dmitri Williams states that

Role players come much more often from offline marginalized groups, suggesting that some may engage in the practice to find acceptance or a safe outlet for their identity.

Role players engage in the practice for a number of reasons, but the standout one tended to be for creativity. Escapism was present, but was rarely the main reason.

which suggests a reasonably balanced view of gamers’ reasons for playing in the second paragraph (escapism is rarely the main reason) but a very blunt and anachronistic explanation in the first paragraph. It seems to assume that there is a higher level of escapism in these marginalised groups, which is supported only by a tautological hypothesis. The authors argue that marginalized groups would be more likely to role-play than the non-marginalised, because role-playing is a form of escapism, or a safe outlet for their identity. Having found this statistical difference, they conclude that escapism must be the reason for this higher representation. But the original hypothesis is untested. I see no realistic or reasonable link between marginalization and greater role-playing. It’s not like you get to be gay in an MMO, or your blackness becomes more acceptable, or your non-christian religion. You get to be an elf, or a magician. That there should be a relationship between taking another role in a computer world and being dissatisfied with your role in the real world is a highly dubious claim. The truth of this claim needs to be established before the next postulate can be finalised.

However, the claims get a little more disturbing in a subsequent piece on RMT (Real Money Transactions) by a non-author of the paper, Castronova, who states that this paper

shows pretty clearly that players who desire strong refuge from reality, the sincere role-players, are a distinct minority. My arguments were delivered with a background assumption that very large numbers of people were scrambling over themselves to get out of the real world. Not so. That doesn’t make the arguments wrong, it just indicates that any plea for the right to live in a deep fantasy is less socially resonant than I thought… I’m an advocate for a minority, a somewhat disturbed one at that according to Williams, Kennedy, and Moore.

So Castronova’s assumption is that role-playing is about escapism, plain and simple – people want to “get out of the real world”. Note in this paragraph Castronova doesn’t change his view that role-playing is about escapism, he just discovers that most people in MMOs don’t role-play much and therefore aren’t doing it for escapism. He goes on to use the loaded language of the claim that they are a “disturbed [minority] at that.” Judging the loonies is always a good look in academia, I find.

My problem with this is that, as far as I can tell, all media are a form of escapism. You can’t run around claiming that only 5% of people who watch movies do it for escapism – they all do! So what’s different about MMOs? Why should it only be some select group of extreme role-players who are doing it for the escapism? Couldn’t it be that everyone is playing the game as a type of escapism, and role-players just have a different style? A style more suited to minorities, apparently, but so what? The assumption underlying the paper and Castronova’s further comments is that those people at the “low” end of the role-playing spectrum, grinding out the levels and the monsters, are not doing it for escapism. I’m sorry, but no matter what style of play you have, when you pay by the month to engage for hours in a computer game where you play an elf, orc or rogue, you’re in it for the escapism. The rest of it is just about style.

So no, role-players are not a “disturbed” minority (at that!) who want to escape reality. They are a small subgroup of a large number of people who play a game as a form of escapism, and do it with a particular slightly pretentious style.

Problems with the statistics of the paper are:

  • They claim the survey is a “stratified random sample” taken on 4 strata (4 different servers) but there is no evidence in the analysis that the stratified random sample has been taken into account
  • They don’t report a response rate for the overall survey or the servers. Maybe “marginalized” heavy role-players were more likely to answer the survey than the non-marginalized heavy role-players?
  • The differences in the groups are in some instances very small and only significant due to the large numbers in the survey, and Cohen’s d statistics don’t really give any additional weight to the results (there are significant problems with the use of these kinds of stats in my experience). Consider the loneliness scale: high role-players differ from the low ones by 2 points on a scale of 4 to 80 (about 2.5%), which is not a big difference no matter how significant it might be. It appears that there was only 1 woman in the high RP group (out of 300 or so people!) but the gender difference between this group and the medium RP group was statistically significant! These are large-sample anomalies.
  • There is no multiple regression analysis, so no adjustment for confounders. Given the supposedly significant demographic differences between groups, it might be wise to have done this. Particularly, adjustment for the 7 categories of education, and for social marginalisation, might have removed the mental health differences between groups
  • Mental health appears to be estimated by a form of self-report. This is always a dubious measure.

So the stats could probably have been better explored…
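To put some flesh on the Cohen’s d point, here’s a toy calculation (all numbers invented, not taken from the paper) showing how a 2-point difference on a long loneliness scale can come out highly “significant” in a big sample while the standardized effect size stays trivial:

```python
# Toy illustration of the large-sample problem: a 2-point difference on a
# 4-80 loneliness scale is easily "significant" with thousands of
# respondents, but the effect size (Cohen's d) remains small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sd = 12.0                                   # assumed scale SD (hypothetical)
low_rp = rng.normal(40, sd, size=2500)      # low role-players
high_rp = rng.normal(42, sd, size=2500)     # high role-players, 2 points "lonelier"

t_stat, p_value = stats.ttest_ind(high_rp, low_rp)
pooled_sd = np.sqrt((low_rp.var(ddof=1) + high_rp.var(ddof=1)) / 2)
cohens_d = (high_rp.mean() - low_rp.mean()) / pooled_sd

print(f"p = {p_value:.2g}, Cohen's d = {cohens_d:.2f}")
```

Which is the real problem with leaning on p-values in a survey of thousands of players: almost any difference clears the significance bar, so the effect sizes (and the unreported response rate) are what actually matter.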

In my previous post I mentioned stumbling across an analysis of cyberpunk and orientalism, which interests me for a lot of reasons, and I’ve subsequently decided that since I’m living in the shadow of the zaibatsu without a job, maybe it’s time I embarked on a shady criminal information-hacking project, so I’m going to try and read through the thesis I found and draw together some themes or conclusions from the tangled mess that is postmodern critique.

… So to start with I thought I’d do a survey of what is already available on the internet about cyberpunk and postmodernism. According to this (awesomely brief) description,

markers of postmodernism recurring in cyberpunk include: the commodification of culture, the invasive development of information technology, a decentering and fragmentation of the “individual”; and a blurring of the boundaries between “high” and “popular” culture.

which maybe helps to pin down why cyberpunk is considered to have such strong links to postmodernism, and also to nihilism – which, incidentally, I didn’t realise had a whole branch of academic theory devoted to it, primarily stemming from the work of Baudrillard. I don’t want to pursue the discussion of nihilism too far though because I find it seems to get incomprehensible very rapidly. Interestingly though, the intersection of cyberpunk, nihilism – which posits an absence of external morality – and postmodernism, with its reputed objection to “truth”[1], draws in a lot of young christians. For example, this blog describes some common misconceptions about postmodernism held by its christian critics, and maybe helps to show what postmodernism is not. Obviously, those whose religion is based on a single text are going to have some big issues with postmodernism, which is all about criticising the relationship between “the text”[2] and “truth”.

Modern feminism has also found an interest in cyberpunk, as a fictional representation of the liberating effect of technology for modern women. This is briefly discussed here, with again some reference to the Cyborg Manifesto by Donna Haraway.  This could be interesting if it led me back to Haraway, whose work I struggled with many years ago with the help of a friend. I hope it doesn’t, though, because I’m largely not up to dealing with her language… But I don’t think I’ll be pursuing any further feminist involvement in cyberpunk in and of itself (though I may stumble across some in time), because I only have limited time and my main concern is the Orientalist part[4].

The thesis I have started reading states its perspective on the importance of cyberpunk for postmodernism in the introduction:

Cyberpunk’s postmodern scene, the flow of people, goods, information and power across international boundaries, is theorized in Fredric Jameson’s work on postmodernism as the cultural logic of late or third stage multinational capitalism, fully explicated in Postmodernism, or, the Cultural Logic of Late Capitalism (1991). Importantly, Jameson finds cyberpunk to be a significant manifestation of this, the “supreme literary expression if not of postmodernism, then of late capitalism itself” (419). … Moreover, this postmodern scene, a global array of disjunctive flows, specifically encompasses Japan: the multinationals, for example, are depicted as Japanese zaibatsu.

I’m inclined to agree with most of this position, though I’m going to skip over the supreme importance bit and see what our resident theorist has to say about Gibson’s view of Japan from the perspective of Orientalism. He goes on to say that he will try to

“get beyond the reified polarities of East versus West” and in a “concrete way attempt to understand the heterogeneous and often odd developments” (Culture and Imperialism 41). By exploring a number of particular theoretical positions and terminologies, my intention is to work toward highlighting the dynamic of reflexivity inherent in postmodern orientalism.

(The quotations here are from Said.) This paragraph is more easily understood in the context of the abstract, in which our resident theorist explains that his view of “postmodern orientalism” describes

uneven, paradoxical, interconnected and mutually implicated cultural transactions at the threshold of East-West relations. The thesis explores this by first examining cyberpunk’s unremarked relationship with countercultural formations (rock music), practices (drugs) and manifestations of Oriental otherness in popular culture.

This distinguishes the modern cyberpunk narrative of the Orient from that of previous centuries, described by Said, in which the imaginative process is entirely one-way: Western writers and academics took the parts of the Orient that appealed to them to form their own pastiche of cultural and aesthetic ideals of the Orient that suited their stereotypes, and then used these to bolster a definition of the West in opposition to an imagined Orient. In the cyberpunk world, characterised by postmodern orientalism, the Orient is actively engaging with, challenging or subverting the images that Western writers and academics form of the East, and importing its own distorted images of the West, in a form of postmodern cultural exchange.

This cultural exchange is very interesting to me, and has been a topic of rumination on my other blog ever since I came to Japan. It’s clear that the West “dreams” the Orient[5], not seeing much of what is really happening here; but at the same time the Orient has its own fantasies of the West, which have become increasingly influential in the West as the power of Japanese and Chinese media enables them to project their own images of the West back to it[6]. Both parts of the world also have their dreams of their own identity, and these definitions are often constructed at least partially in contrast to their dual opposite; but recently, with increased cultural exchange, it’s possible to see these identities becoming more diverse (at least in the Orient) as the “Other” hemisphere becomes less alien and the distinction between “Eastern” and “Western” blurs. I am interested to see whether this phenomenon is sufficiently identifiable to be described by a theory of postmodern orientalism, and that’s why I’m reading this thesis…

So, that’s the outline of what we’re aiming for. Strap yourselves in kids. We’ve taken the Blue pill…

[1] I think this is a misreading of postmodernist theory, which mainly seems to argue that the way we interpret truth is coloured by our cultural and linguistic assumptions. There’s an excellent example of this in the paper “The Egg and the Sperm: How Science Has Constructed a Romance Based on Stereotypical Male-Female Roles”, Emily Martin, Signs (1991) 16(3): 485-501.

[2] “the text” is a classic postmodern bullshit-bingo cliché, but I actually think it’s a really useful word for catching the broad sense of what post-modernists[3] talk about when they do their critical analyses.

[3] I’m really quite certain that I routinely confuse post-modernists and deconstructuralists (deconstructionists?), but I don’t care, because it’s their fault, not mine. Nobody confuses a statistician with a mathematician, do they?

[4] Though actually I doubt one would have to google very far to find that Orientalism as a concept would have been significantly boosted by better consideration of gender relations…

[5] mostly, in the case of Japan, through a series of wet dreams or nightmares, but still…

[6] Consider, for example, the West as presented to the West by Miyazaki in Kiki’s Delivery Service, or in Fullmetal Alchemist[7].

[7] I just want to point out here that if I were going to be a proper academic wanker like Said I would present these names in untranslated Japanese, on the assumption that you, dear reader, can just read everything, or that if you can’t you’re a worthless loser who doesn’t deserve to know what I’m talking about. Aren’t I nice?