Prophetic, even…

I don’t usually open up this blog to political debate, but my only commenter has been challenging me over the “incompetence” of the Australian Home Insulation Program recently, so I thought I’d try my statistical skills at investigating it, given that I’ve already used them so effectively to prove that all British people are ignorant. In this post I’m going to analyse the rate of fires occurring in houses before the advent of the Home Insulation Program, and after, and show that under a wide range of assumptions (some realistic, some unrealistic), the Insulation Program probably led to a reduction in the rate of house fires following newly-installed insulation, relative to the period before its implementation. I will also attempt to give some explanations for this. This builds heavily on the work of Possum at Crikey, but with the addition of a time-dependent element to the analysis, a wider range of assumptions (within which Possum’s are special cases), and a bit of risk analysis. This isn’t to say Possum can’t do such things, but he/she didn’t, and since the linked analysis the Coalition have released new figures showing that the program is “even worse” than previously believed.

Introduction (skip if you’re Australian)

For my foreign reader(s), it may be a little puzzling that I’m diverting from discussion of Double Cross 3 to a relatively trivial statistical analysis of something as tedious as home insulation in Australia. In 2007, after 11 years, the Australian government changed to become a Labor government (left wing by standard definitions), and its response to the Global Financial Crisis (GFC) was to introduce a bunch of Keynesian pump-priming, including the Home Insulation Program (HIP). The then government became the opposition Coalition, and ran a heavy campaign against the pump-priming under its ludicrously maniacal leader, a failed monk called (appropriately) Tony Abbott. Their campaign relies heavily on accusations of wasteful spending and inefficiency, and they have attacked all aspects of the government’s programs.

The HIP was intended to provide householders with money to install insulation in their homes, generally through the use of contractors, whose numbers exploded overnight. The Coalition quickly realised that newly-insulated homes have a heightened fire risk, and started making hay out of the fact that there were lots of insulation-related fires. They just didn’t mention that there have always been insulation-related house fires in Australia, and Possum’s analysis above was the first I’m aware of that compares pre- and post-HIP rates of fires. The central Coalition claim – that the government endangered householders through its poorly-run program – depends on the assumption that the rate of house fires went up: these insulation installations were a choice people made, so if the rate stayed the same there is no argument[1].

Method

Numbers of fires before the HIP, and numbers of installations per year before and after the HIP, along with total numbers of houses already with insulation installed, were obtained from the ABS and the Federal Government via the above-linked Possum post. The number of post-HIP fires was helpfully provided by a Coalition press release today[2]. Details of the length of time the HIP was running and some other minor figures were obtained from the Department Secretary’s statement linked to by Possum at http://blogs.crikey.com.au/pollytics/files/2010/02/Secretarys-opening-statement-220210.pdf.

The number of fires was converted into a rate of fires per insulated house-year. That is, a single house that was insulated for a full year was considered to contribute 1 insulated house-year (IHY) to the risk pool, and the rate was presented as a rate of fires per 1000 IHY. The number of IHY was calculated under a set of analysis cases.

Case 1: All fires were caused by insulation installations in the period in which the fire was recorded, and all such installations happened on the first day of that period, so each insulated house is in the risk pool for the whole period (a full year pre-HIP, 200 days post-HIP).

Case 2: All fires were caused by insulation installations in the year that the fire was recorded, but the installations occurred smoothly over the year. If there were N installations in the period, then 1/365 of these occurred on the first day of the year, 1/365 on the second, and so on. This means that N/365 installations contributed 1 IHY to the risk pool, N/365 contributed 364/365 IHY to the risk pool, and so on. For the post-HIP rate, only installations and exposure falling within the 200-day post-HIP window are counted.
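
For anyone who wants to check the arithmetic, here is a minimal sketch of this triangular exposure calculation. It’s my own reconstruction in Python (not part of the original analysis), and the function name and exact day-counting convention are mine, so the totals land within a few IHY of the figures quoted in the results below.

```python
def uniform_install_ihy(installs_per_year, days_observed, days_in_year=365):
    """Insulated house-years (IHY) accrued when installations arrive at a
    constant rate of installs_per_year / days_in_year per day, counting only
    the first days_observed days of the period."""
    daily_installs = installs_per_year / days_in_year
    # A house installed on day k is at risk for (days_observed - k + 1) days.
    return sum(daily_installs * (days_observed - k + 1) / days_in_year
               for k in range(1, days_observed + 1))

print(uniform_install_ihy(70_000, 365))      # pre-HIP: ~35,096 IHY
print(uniform_install_ihy(1_100_000, 200))   # post-HIP, 200-day window: ~165,964 IHY
```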

Case 3: All installations from before the HIP were assumed to have an equal risk of a fire, so that every insulated house in Australia before the HIP was in the risk pool for one full year; these houses were also in the risk pool for the post-HIP period.

Case 4: An exponential decay of risk was assumed over the years, so that the risk of a fire decayed by a factor of exp(alpha) for every year since the installation. So in year 0, all houses contributed fully to the risk pool; in year 1, each house contributed a fraction exp(alpha); and so on. alpha was chosen for this case so that a house’s contribution drops by about 25% in the year after installation (but we will also present some sensitivity analysis).
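
Here is a rough sketch of how this decaying risk pool is built up (my own Python reconstruction, not from the original analysis), using the 46 years of roughly 70,000 installations per year assumed in the case 4 results below; the constant names and values are taken from those assumptions:

```python
import math

ALPHA = -0.3                # assumed decay rate: a cohort's contribution falls by exp(ALPHA) per year
INSTALLS_PER_YEAR = 70_000  # assumed pre-HIP installation rate (see case 4 below)
YEARS_OF_STOCK = 46         # years of pre-HIP installations assumed in case 4 below

# A house insulated k years ago contributes exp(ALPHA * k) of a full house-year.
pre_hip_pool = sum(INSTALLS_PER_YEAR * math.exp(ALPHA * k) for k in range(YEARS_OF_STOCK))
print(f"{pre_hip_pool:,.0f} IHY")  # ~270,080 IHY, matching the case 4 figure below
```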

Case 5: Information from the Secretary’s letter was used to identify the total pool of risk post-HIP, and compared to the year before the HIP under the conditions of case 1 (a full year’s risk). Under this case, the post-HIP period was assumed to be 15 months long. 176,000 homes had insulation installed in November 2009, 3,300 in March 2009, and the remainder were assumed to have been insulated between these dates, at an arbitrary point taken to be September 2009.

As an additional note: Cases 1 to 4 were calculated based on a silly piece of rhetoric from the Coalition, which claimed that “reported house fires from her [Julia Gillard’s] program [are] still running at around one a day.” There were 191 fires post-HIP in the same press release, so the Coalition seem to think the post-HIP period was only 200 days, when in fact it’s 15 months. However, assuming the 200-day period benefits the Coalition in this analysis, since a shorter post-HIP period means a smaller risk pool and thus a higher rate of fires. This is conservative statistics at its best (literally!).

Headline figures

The headline figures used here are:

Pre-HIP fires: 85

Pre-HIP installations per year: 70,000

Houses insulated pre-HIP: 3,183,625

Post-HIP fires: 189

Houses insulated post-HIP: 1,100,000 (1.1 million)

Post-HIP period: 200 days (15 months in case 5).

Results

Case 1: Assuming fires occur due to installations in the year of the fire only, and all installations on the first day of the year

This gives us 85 fires in 70,000 IHY pre-HIP, and 189 fires among 1.1 million houses at risk for the 200-day post-HIP period (roughly 602,740 IHY).

Rate of fires pre-HIP: 1.21 per 1000 IHY

Rate of fires post-HIP: 0.31 per 1000 IHY

Relative risk of a house fire post-HIP vs. pre-HIP: 0.26
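
As a sanity check, here is a minimal sketch (my own Python, not from the original post) reproducing these case 1 numbers, with the 1.1 million post-HIP houses at risk for the 200-day period:

```python
def rate_per_1000_ihy(fires, ihy):
    """Fires per 1,000 insulated house-years."""
    return 1000 * fires / ihy

pre_ihy = 70_000                  # pre-HIP installations, each at risk for a full year
post_ihy = 1_100_000 * 200 / 365  # post-HIP installations, each at risk for 200 days

pre_rate = rate_per_1000_ihy(85, pre_ihy)     # ~1.21
post_rate = rate_per_1000_ihy(189, post_ihy)  # ~0.31
print(round(pre_rate, 2), round(post_rate, 2), round(post_rate / pre_rate, 2))  # 1.21 0.31 0.26
```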

Case 2: Assuming fires occur due to installations in the year of the fire only, but installations are evenly distributed over the year

This gives 85 fires in 35,095 IHY pre-HIP, and 189 fires in 165,959 IHY post-HIP.

Rate of fires pre-HIP: 2.42 per 1000 IHY

Rate of fires post-HIP: 1.14 per 1000 IHY

Relative risk of fire post-HIP vs. pre-HIP: 0.47

Case 3: Fires in a given year are due to any house ever insulated up until that point; all post-HIP insulations occurred at the start of the year

This gives 85 fires in 3,183,625 IHY pre-HIP, and 189 fires in 3,786,364 IHY post-HIP.

Rate of fires pre-HIP: 0.027 per 1000 IHY

Rate of fires post-HIP: 0.050 per 1000 IHY

Relative risk of fire post-HIP vs. pre-HIP: 1.87

Case 4: Assume exponential decay of risk, all installations post-HIP at the start of the period

Assuming the risk decays by a factor of exp(-0.3) per year, this gives 85 fires in 270,080 IHY pre-HIP, and 189 fires in 802,820 IHY post-HIP. In this model we assume 70,000 houses a year were insulated over the 46 years before the start of the HIP period, when 1.1 million more were insulated in 200 days.

Rate of fires pre-HIP: 0.31 per 1000 IHY

Rate of fires post-HIP: 0.24 per 1000 IHY

Relative risk of fire post-HIP vs. pre-HIP: 0.74

This case can be modified to incorporate the assumptions of case 2 or case 5 about the distribution of installations post-HIP (smooth over the period or end-loaded), but it likely won’t make much difference, since in this case a large share of the risk pool comes from previous years’ installations, which are the same for both the pre-HIP and post-HIP calculations.

Case 5: Using the department’s figures to approximate the risk pool post-HIP

We can do this using the assumptions of Case 1 or Case 2 for the pre-HIP risk pool. Case 1 is more favourable to the Coalition, so we use that one.

This gives 85 fires in 70,000 IHY pre-HIP, and 189 fires in 829,792 IHY post-HIP.

Rate of fires pre-HIP: 1.21 per 1000 IHY

Rate of fires post-HIP: 0.23 per 1000 IHY

Relative risk of fire post-HIP vs. pre-HIP: 0.19

Sensitivity analysis of the exponentially decaying risk

The analysis that is most consistent with any kind of modern frailty or risk analysis is case 4, where the most at-risk houses are assumed to go up soonest. That is, the bodgiest ones burn first. The model I have used above assumes that, effectively, the risk of a fire decays by a factor of exp(alpha) for every year since installation, so in the year of its installation a house contributes 100% to the risk pool, in the next year it contributes only 100*exp(alpha)%, then 100*exp(2*alpha)%, and so on. We can change this rate by fiddling with alpha. I’ve fixed alpha in the assumption at -0.3, which means the year after installation a house contributes about 74% to the risk pool, then about 55%, and so on. We can fiddle with these figures to estimate the decay rate of risk at which the pre-HIP and post-HIP rates of fire would be equal. It’s actually alpha=-0.175, which corresponds to about 84% of the risk remaining one year after installation, 70% after two years, and 17% after ten years. Note that case 3, where all houses are assumed to contribute equally to the risk pool no matter when they were insulated, corresponds to alpha=0, and represents the maximum relative risk of a fire that could be obtained under any assumptions for the post-HIP period.

I don’t think it’s reasonable to assume that houses insulated 10 years ago are still significantly contributing to the risk of fires today; in fact, I think a decay to almost no contribution over 3 to 5 years is more plausible, hence my choice of -0.3 for alpha. I think everyone would agree that alpha is likely to be between -0.3 and 0, but the 0 assumption is silly. If we restrict alpha to between -0.3 and -0.15, the highest relative risk of fires for the post-HIP period vs. the pre-HIP period is 1.07. This is a meaningless increase in risk, but it corresponds to houses insulated 10 years ago still contributing 22% to the risk pool[3].
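
For anyone who wants to fiddle with alpha themselves, here is a rough sketch of the sensitivity calculation. It’s my reconstruction of the model described above (46 years of 70,000 pre-HIP installations per year, the 1.1 million post-HIP installations at risk for the 200-day window, and the existing stock decayed by one further year in the post-HIP period); small differences from the figures quoted elsewhere in the post come down to rounding and day-counting conventions.

```python
import math

def relative_risk(alpha, pre_fires=85, post_fires=189,
                  installs_per_year=70_000, years_of_stock=46,
                  hip_installs=1_100_000, post_days=200):
    """Post-HIP vs pre-HIP fire rate ratio under an exponential decay of risk."""
    # Pre-HIP pool: yearly cohorts whose contribution decays by exp(alpha) per year since installation.
    pre_pool = sum(installs_per_year * math.exp(alpha * k) for k in range(years_of_stock))
    # Post-HIP pool: HIP installations at risk for the post-HIP window,
    # plus the existing stock decayed by one further year.
    post_pool = hip_installs * post_days / 365 + pre_pool * math.exp(alpha)
    return (post_fires / post_pool) / (pre_fires / pre_pool)

for alpha in (-0.3, -0.175, -0.15, 0.0):
    print(f"alpha = {alpha:+.3f}  relative risk = {relative_risk(alpha):.2f}")
# roughly 0.75 at alpha = -0.3, 1.00 at the break-even alpha = -0.175,
# 1.08 at alpha = -0.15, and 1.87 (the case 3 result) at alpha = 0
```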

This sensitivity analysis suggests to me that there is no sense in which the HIP has increased the risk of house fires; in fact, it appears to have decreased it. I should note, though, that I’m no expert on risk analysis (though I’m good at survival/frailty analysis), so someone else could probably handle this better.

Discussion

It’s easy to imagine that an increase in the risk of house fires is inevitable when you expand a program that, at the level of the individual installation, carries a risk of house fires. But a reduction is not actually a contradictory finding when considered in light of other types of risk we are familiar with in our lives. It is often the case that the more an activity is performed, the more accurately and efficiently it is performed. Contrary to the claims of the Coalition that the HIP has unleashed an army of “cowboy contractors” risking the lives of ordinary Australians, what may actually have happened is a reduction in three of the main risk factors for fires, specifically:

  1. Homeowners are less likely to do it themselves, and this is probably the single biggest risk factor for insulation-related fires
  2. Where previously insulation was installed by general builders on an occasional basis, we now have an army of dedicated installers. Though their initial efforts may have been bodgy, the scale of their work – repeatedly doing the same installations for months – may have led to a significant improvement in the quality of installations. We see this with hospitals, where error rates reduce significantly as the number of operations performed increases, and transport, where professional drivers have much lower rates of accidents due to experience. Specialisation is a key way of reducing error-rates, and the HIP may have led to a massive increase in the specialist workforce[4]
  3. If it’s true that this program is “throwing money” at these contractors, with all the associated inefficiency and waste, presumably their profit margins are much higher than used to be the case for insulation installers. So with higher profit margins, maybe there is actually an increased incentive for them to use higher-quality materials, not cut corners, and actually do the job better – particularly if a quality job leads to referrals, and easier business. In this case these people, in addition to becoming very efficient at the work they do, might actually be doing it to a higher standard of care than was previously the case[5]

My money is on 2) as the cause, in this case, of a possibly quite significant reduction in the risk of fires due to insulation installations in Australia.

Conclusion

This report has found that under a wide range of conditions, including a general model of risk relating existing and new installations of insulation, the HIP likely led to a reduction in the rate of house fires in Australia. The relative risk of house fire after the HIP compared to before was probably about 0.75, though it may have been as low as 0.2. The highest relative risk that can be realistically obtained under any set of assumptions appears to be about 1.07, which represents a level of risk broadly similar to that existing before the HIP was introduced. The findings of reduced risk apply even when using the Coalition’s stated estimate of the post-HIP period as 200 days, which approximately doubles the post-HIP rate of fires.

In addition to reducing recipients’ electricity costs, the HIP has reduced the risk of fires in most homes compared to insulation installations done under the pre-HIP program. The most likely explanation for this reduction in risk is the increased specialisation of the installers and the scale of their work; but there may also be a contribution from a reduction in poorly-installed DIY jobs, and the purported high profit margins of the work may have encouraged the use of higher-quality materials. Regardless of the explanation, the statistics do not appear to support Coalition claims of reckless endangerment of human life due to the HIP.

fn1: of course this Coalition campaign flounders a bit on the fact that it’s private contractors doing the work, and they love encouraging private contractors, so if the contractors did lead to an increase in fires, there is a bit of a credibility problem for subsequent arguments in favour of private sector contractors doing government services cheaper than the state can.

fn2: This is High Science we’re doing here, kiddies

fn3: Finally, note that the three scenarios assumed by Possum in his/her modelling fit into the risk model presented here. Scenario 3 (90% of fires from existing stock) corresponds to a value of alpha=-0.1, while Scenario 1 (10% of fires from existing stock) corresponds to alpha=-2.2. Both of these values are, in my view, outside the reasonable range of values we can assign to the relative mix of risks from existing and new stock, but obviously this is just a matter of opinion.

fn4: This could have negative ramifications for the employment rate when the scheme stops and a bunch of insulation specialists have to find new work, I suppose

fn5: I remember a hospital I once worked in did the whole “lowest cost bid” thing for some wiring, and employed a bunch of unqualified building contractors to lay down the ethernet cables. The result was fire and electricity risks, and a network that didn’t work. A year or two later, when a state government renewal project was launched in our area, the project managers visited our hospital and were appalled at the quality of work. They told me that in the early dotcom boom lots of building contractors switched to computer infrastructure jobs like this, and offered bodgy jobs done dirt cheap to people who didn’t know any better. It’s not necessarily the case that a process aimed at driving down bidding prices and ruthless competition will increase quality, especially in a newly-growing industry where the standards aren’t well understood and the job is being commissioned by non-experts


6 responses to “Did the Australian Home Insulation Program reduce the rate of insulation-related fires?”

  1. Paul

    OK, given that I’m mentioned by inference if not by name I suppose I should wade into the murky waters that are debating a mathematician despite getting a solid 50% in my last university maths subject [1].

    To start this off, can I request the context of where I accused the Rudd government over the “incompetence” of the Australian Home Insulation Program? I make lots of claims that the Rudd government sucked, but the context may allow me to point out I wasn’t saying the fire risk was the sole measure of the suckiness of Rudd’s government [2].

    To kick off:
    “since these insulation installations were a choice people made, so if the rate stayed the same there is no argument”
    So, just to check, if the government funded cigarettes and the smoking and cancer occurrence rose at the same rate then that wouldn’t be the government’s fault? I think what you mean to say is that: If the government funds something with a low level of risk then the government can’t be blamed for the resulting higher occurrence (not rate) of the risk. Am I right?

    Or do you want me to assume that you’re now in favour of any idiocy around as long as the government doesn’t increase the risk profile? “Free heroin for all, as long as you use clean needles!”

    You mention a 15 month timeline a couple of times, where did you get that from? We can assume the install rate followed some sort of skewed and truncated bell shape curve due to the abrupt nature of the end of the program (which was axed with minimal warning and no discussions other than a broken promise [3] to get back to protesters) and the installation high point being in November.

    Case 1 doesn’t address the question of how many installs cause fire more than a year after being installed. We don’t have the stats on how many of the 85 were from installs greater than a year old. We also don’t have confirmation that the 189 fires to July 2nd were all caused by the insulation associated with the scheme, though I’d hope that the people using the stats at least managed to eliminate that.

    Case 2: We know the HIP installs are not an even distribution. They’re a skewed bell curve with an abrupt cut off in Feb 2010 when the program was suspended.

    Case 4: The 200 days refers (I believe) to the time since the end of the HIP program (from Feb 2010). The HIP installations ran from March 2009 to Feb 19 2010. It’s probably two to four weeks off a full year. How does changing the 200 days to a year change it?

    Case 5: Where is it? Has this been redacted by the US military/industrial complex? By some vast right wing conspiracy? By Birthers and Truthers working in concert? The public demands to know!

    Case 6: Oh. Ignore the comments re: Case 5. I’ll assume it’s just the mathematician we’ve got doing the analysis sometimes struggles with counting. Also, am I meant to trust your other figures? 😉

    And you still haven’t shown any stats on electrocution deaths. Well, except the one about four people being fatally electrocuted during the 12 months the program ran. I did find one article (http://www.thefifthestate.com.au/archives/9996) which said “There are no statistics on whether installers were electrocuted before the program started, but it has always been dangerous work.” I’ve written to the organisation that made that comment and asked for further stats.

    Point 2: “Specialisation is a key way of reducing error-rates, and the HIP may have led to a massive increase in the specialist workforce[4]”
    Well, until the government stopped the program, promised to restart it, then didn’t.

    Point 3: “If it’s true that this program is “throwing money” at these contractors, with all the associated inefficiency and waste, presumably their profit margins are much higher than used to be the case for insulation installers. So with higher profit margins, maybe there is actually an increased incentive for them to use higher-quality materials, not cut corners, and actually do the job better – particularly if a quality job leads to referrals, and easier business. In this case these people, in addition to becoming very efficient at the work they do, might actually be doing it to a higher standard of care than was previously the case[5] ”
    I think you can safely discard this idea. There would be an incentive to do the worst job you could so you could move to the next house and give them a “free” installation then pocket your government issued cheque. I don’t think you can argue that the subsidy makes them better, though your point around experience (point two) is a valid consideration. Of course if they were always doing it wrong without finding the error they may just become faster at doing a terrible job…

    “The findings of reduced risk apply even when using the Coalition’s stated estimate of the post-HIP period as 200 days, which approximately doubles the post-HIP rate of fires.”
    While I think the best idea is to get better stats (and I’ve written some requests based on that), can you tell me if we assume the 1.1 million installs were over the period Feb 2009 to Feb 2010 (which they were) then the 189 fires were Feb 2010 to July 2 (which is dodgier as I don’t know if any fires were prior to that period), then what is the risk rate? You’d need to extrapolate a fire rate of 189 for 200 days out to a year, which predicts around 345 fires for the full year after the one year program wrapped up.

    Conclusion:
    Based on your analysis I’m starting to lean towards the assumption that the relative risk went down under the HIP. It would be nice if some of the points I made above could be clarified and if we had better stats to work from, then I could be thoroughly convinced on the relative risk.

    On the other hand that’s hardly the check associated with the accusation of incompetence. I may need to retract any former claim that the incompetence was demonstrated in the relative rate of fires (assuming you can find the part where I said that) and instead stand with you in insisting that the incompetence was demonstrated by the Rudd government’s failure to defend this program and their abrupt and poorly managed cancellation of it. I’m glad we can stand united on this.

    Now chant with me: “Hey. Hey. Ho. Ho. This Labour government has got to go!”

    No? 😉

    BTW I’m referring to the government as the Rudd government as I’d like to give Julia Gillard a “fair shake of the sauce bottle” [4]

    [1] Which I took as a “I’ll pass you if you promise not to come back”. Everybody wins! Except the value of my degree!

    [2] Do you want to discuss if he was incompetent overall? The manner of his dismissal gives me some amusing ammunition in favour of incompetence. Inciting your own deputy to knife you after she’s just pledged loyalty is a new low for Australian politics.

    [3] By the way, breaking promises to protesters after you’ve pretended to care about the impact of your policy on them isn’t a very nice approach. I’d say assuming it was incompetence that prevented or lost the response is probably the kind interpretation. The other option being that the ALP voluntarily chose a power mad two-faced jumped-up bureaucrat as their leader. Of course, the ALP’s ditching of him may suggest they agree with that description of him, but allow me to be kind to the PM in question and just describe him as incompetent [see 2 again].

    [4] Please don’t take that as a pass at her, it’s just I think that the expression is ridiculous and should be reused as often as possible to enshrine Rudd’s place in Australia’s history.

  2. faustusnotes

    haha, how did you know I was talking about you!? I’m meant to be working, so I can’t go into this in huge depth right now but here’s some basic answers to your points.

    I think what you mean to say is that: If the government funds something with a low level of risk then the government can’t be blamed for the resulting higher occurrence (not rate) of the risk. Am I right?

    Yes, provided that it’s recognised that the “something” here is assumed to have a benefit judged greater than the risk, which in my opinion means that heroin doesn’t count; but certainly I support the prescribing of methadone, which does have a risk of death in the first two weeks but a longer-term reduction in death risk and crime risk which makes it worthwhile. And, in general, people “choose” to take methadone, so if they’re adequately informed it’s not the government’s fault if they die in the first two weeks[1].

    The 15 month timeframe comes from the assumption that the program started in March 2009 (the Secretary’s letter states this) and that it finishes when the last death was announced, i.e. July 2010. Oh, if only I could draw diagrams here… so you’ve got, in the year leading up to March 2009, 85 fires; then afterwards you have a 15 month period with 189 fires. The “timeframe” here is the time period within which I calculate the risk; your “skewed and truncated bell curve”[2] is estimated in case 5, very roughly. There would be better ways to do this, but they’d be less favourable to the opposition than my case 1 or 2, because the 200 day time frame doubles the post-HIP rate of fires relative to a 15 month time frame, under the same distribution of installation rates.

    Case 1 doesn’t address the question of how many installs cause fire more than a year after being installed

    Indeed. Possum attempted to do something about this by modelling expected fires, but he ran into the same problem. That’s why he had the three scenarios, and I introduced the risk distribution of case 4. The only way around this is to stop installations altogether and wait a year, because then we’ll know how many derive from existing housing stock (though we won’t know which years’ installations); then we can make a guess at fire rates from existing stock. I think since the program stopped in Feb 2010 it might be possible to say that all the subsequent fires (100?) were due to existing installations, but that doesn’t help because some proportion of them were post-HIP existing installations. What we really need is a good bit of information on the rate of fires post-installation in some kind of research study, that we can sum into a Poisson distribution of fire rates, and then we can estimate the expected number of fires in a given year. But I haven’t looked for that info and I doubt there is any.

    Case 2: We know the HIP installs are not an even distribution. They’re a skewed bell curve with an abrupt cut off in Feb 2010 when the program was suspended.

    The skewed bell curve with an abrupt cut-off in Feb 2010 would not be as favourable to the Coalition’s case as case 2, because Case 2 assigns all the house years of risk over just 200 days, whereas the skewed bell curve assigns it over 9 months (270 days), followed by about 120 days where all the installations were in the risk pool. This increases the number of IHYs relative to the same number of fires, thus reducing the rate post-HIP. But the pre-HIP rate remains as it is here, distributed evenly across the year. I’ve made a rough attempt at the skewed bell curve in case 5, and again it doesn’t favour the opposition.

    Case 4: The 200 days refers (I believe) to the time since the end of the HIP program (from Feb 2010). The HIP installations ran from March 2009 to Feb 19 2010. It’s probably two to four weeks off a full year. How does changing the 200 days to a year change it?

    see above, mostly. If this is the 200 days to which the Coalition refers, then their piece of rhetoric (a fire a day) is plain wrong, since it’s actually a fire every 2 days (189 fires over 400 or so days). Wankers. Changing the 200 days to a year will simply add 165/365 years *1.1 million to the pool of post-HIP risk, reducing the post-HIP fire rate and further weakening the case for the Coalition (I think I mention this below in more detail). The 200 day timeframe I favoured in this analysis massively favours the Coalition in all analyses, except the ludicrous case 3.

    Also, am I meant to trust your other figures?

    If you’re feeling brave…

    can you tell me if we assume the 1.1 million installs were over the period Feb 2009 to Feb 2010 (which they were) then the 189 fires were Feb 2010 to July 2 (which is dodgier as I don’t know if any fires were prior to that period), then what is the risk rate? You’d need to extrapolate a fire rate of 189 for 200 days out to a year, which predicts around 345 fires for the full year after the one year program wrapped up.

    I think in this case, the 93 fires that Possum used would be the number of fires from Feb 2009 to Feb 2010, and then we would be able to say that there were a total of 272 from Feb 2009 to July 2 2010. That would mean that we would be looking at 50% more fires over a period twice as long as the 200 days I used in most cases here, so it wouldn’t help the Coalition. But I think actually the Coalition is presenting figures for the whole period (out to July 2). But I agree it’s hard to say definitively. I am happy to run with the idea that the 200 days statement was a stupid rhetorical flourish, but the 189 fires is a hard figure. Note it’s consistent with Possum’s earlier calculations – he had 93 fires in February 2010, so now we’ve gone out to about 191 in another 6 months. That 6 months represents a definite 550,000 IHY of risk, because all the installations had been completed by then; in the year before the scheme started there was a maximum of 70,000 IHY of risk (if you assume the installations all happened on the first day of the year). So if you assume all the fires in a year are due to new installations in that year, the most favourable assumption for the coalition, the fire rate in the 6 months since the scheme stopped is 100/550000 – this is a definite figure – while the fire rate in the year before the scheme is a minimum of 85/70000. That is, a definite 0.19 per 1000 IHY vs. a minimum of 1.21. That’s a maximum relative risk of 0.15. The more likely relative risk is calculated from dividing our rate of 0.19 by the pre-HIP rate from case 2 (which I think is the closest and best estimate to the truth of the pre-HIP rate, unless there is a seasonal pattern in installations), which was 2.42. Then the relative risk becomes about 0.08. That’s a massive drop in risk for a supposedly reckless program expansion. Even if you use the most favourable (to the coalition) possible number of fires, the 375 you mention, this relative risk goes up to 0.16. Still a great improvement on the pre-HIP rates!

    That’s an interesting finding actually, because if you could implement a scheme at a workplace which reduced the risk of fires by a factor of 6, simply by throwing money at dodgy contractors, you’d have achieved a miracle.

    Regarding electrocution deaths, they’d be much easier to compare because we wouldn’t need to worry about the problem of historical insulations or IHY of risk. We’d simply compare rates per 1000 installations. In order for the HIP to have had higher workplace death rates than the pre-HIP period, there need to have been more than 1100000/70000=16 times as many deaths. Note I say more than, because 16 times as many deaths by itself won’t be a significant change, it’s a break-even rate. If there was 1 electrocution death in 2008, you’d need 16 in Feb 2009 to Feb 2010 just to conclude the rates were similar[3]. I reckon that many didn’t happen, because if they did the Coalition would have been bandying those figures about as much as they could – on the surface, 16 deaths looks really bad.

    I think the Coalition’s rhetoric is really disingenuous and/or stupid here. Note that Tony Abbott was the health minister; he should understand basic comparisons of risks. Peter Garrett I’m willing to give a fair shake of the sauce bottle[4], because he’s a rock star, so the only comparison of risk he should reasonably be expected to be able to do would be “which nostril did I use last?”[5]

    Finally, I’ve been to just enough demos to never, ever again chant “hey hey, ho ho!…” unless it ends “this stupid chant has got to go!”

    fn1: though obviously notions of choice in connection with drug abuse are sometimes dodgy, and it is the government’s fault if the program is implemented in a way that is definitively dodgy.
    fn2: i.e. not a bell-curve at all, really
    fn3: although in this case statistical comparison is hard because of the small number of pre-HIP deaths
    fn4: nudge-nudge wink-wink. Note that Rudd also said “gotta zip” at the end of his press conferences. The man really was very nerdy.
    fn5: That’s really unfair because back in his CND days Peter Garrett proved himself very smart, and his performance with the ALP has been disappointing simply because he has been neither as hard-nosed nor as bright as I expected. It’s like the ALP casts a shroud of decency over everyone it touches.

  3. […] of evil. Having proven statistically that British people are idiots and the Australian government didn’t burn the house down, I’m a little disappointed at this mixed result. I’m sure no priest of Sigmar would […]

  4. […] of how one has to consider the size of the risk pool in an analysis. I made this point about the HIP scheme, and it applies here too. There are many reasons to oppose the Afghanistan war, but there are […]

  5. […] it seems I should also have an obligatory election post here, since I commented on Australian politics recently, and also on the British coalition […]

  6. […] is a follow-up to an old post on reduced fire risk under the Australian Home Insulation Program (HIP). Blathering critics of that program have […]
