Today’s New England Journal of Medicine has a perspective piece arguing that an HIV vaccine remains an essential medical research goal. This might seem a strange question even to be asking, but in the era of test-and-treat strategies it is possible that HIV could be eliminated without resort to a vaccine. It’s a little early in the evolution of these strategies to be sure, but within five years or so we may know whether test-and-treat strategies alone will be sufficient to eliminate HIV. If so, should private and public organizations be pouring money into vaccine research, when other pressing challenges – such as a malaria vaccine – could offer a more effective research investment?

The argument that the NEJM is addressing is a variant of one of the criticisms that Greenpeace has leveled against Golden Rice – that the research money is better spent elsewhere, or on existing interventions that have been shown to work. Regardless of one’s opinion of Golden Rice specifically, this argument is a challenging one for health policy-makers, because if correctly applied it suggests that policy-makers need to incorporate the risk of research failure into investment decisions, and consider the possibility of diverting money from the ideal to the good. In an era where treatment roll-out is judged on the basis of cost-effectiveness analysis this is particularly important, for two reasons: by the time a new treatment is developed, other interventions may have reduced the problem to the point where it is no longer cost-effective to implement the new technique; and a high research cost may render the final product cost-ineffective, a judgment that is not easy to make at the beginning of the research process. In this post I will consider these problems as they apply to both the HIV vaccine and Golden Rice, discuss the difficulties of coordinating medical research policy when most of the agents involved are private, and propose a possible method for encouraging more rational decision-making without authoritarian intervention in private medical research.

The cost-effectiveness of HIV vaccines

The first example, HIV vaccination, is a topical issue, and the suggestion that we shouldn’t even try to make such a vaccine is highly controversial. Upfront, I would like to state that I think HIV vaccine development is essential and should continue, for three reasons: vaccine development shouldn’t be based only on cost-effectiveness; test-and-treat strategies are not going to be as effective as their proponents claim (in my opinion – and I am a proponent of these strategies!); and we already fund other completely cost-ineffective programs (such as the endgame of polio eradication) on the grounds that elimination is a moral good. But I also think that the decision to continue with HIV vaccine development should be made in the full realization that we are spending huge research funds on something that may ultimately prove unnecessary, that may be hugely expensive by the time it is developed, and that will deliver huge profits to pharmaceutical companies despite its potential unimportance.

HIV vaccination may prove to be unnecessary, or at least cost-ineffective, if test-and-treat strategies work. These strategies work by identifying people with undiagnosed HIV and getting them into treatment immediately. Treatment prolongs their life and reduces the infectiousness of HIV by about 95%, so they essentially become non-infectious even if they are still engaging in high-risk behavior. The most optimistic estimates of the effectiveness of test-and-treat suggest that elimination can begin within 10 years of widespread scale-up of such a program, and be achieved within a generation. I think here “elimination” means the prevention of new infections: because there is no cure for HIV, treatment keeps people alive, so the pool of prevalent cases shrinks only slowly even after new infections stop, and until the last case dies there will be prevalent HIV even though transmission is officially considered “eliminated.” Note that because these treatments are essential to keep people alive, huge amounts of treatment will need to be rolled out across Africa regardless of whether a vaccine is invented, so a vaccine strategy cannot be implemented in place of treatment. If Africa were to become rich tomorrow, for example, and implement effective universal health coverage (UHC) and test-and-treat strategies, then everyone would get treatment, the HIV epidemic would essentially stall, and vaccination programs would be almost completely irrelevant to preventing the further spread of HIV. And because a vaccine could not be implemented instead of treatment, it would be a huge cost on top of the existing strategies rather than a cost-effective alternative.

Furthermore, the longer the vaccine takes to develop, the less effective it will ultimately be in the face of effective test-and-treat strategies. HIV prevalence in sub-Saharan Africa now ranges from about 1% to about 30% (in Swaziland). As treatment-based elimination bites, the new infections driven by this prevalence will accrue more slowly, and the number of future cases that an HIV vaccine could prevent will shrink. The number of future cases prevented is essential for deriving the cost-effectiveness of an intervention (expressed here as cases prevented per dollar spent), so the longer we wait, the lower the effectiveness and thus the less cost-effective the vaccine becomes. This is a double whammy, too: the longer it takes to develop the vaccine, the more expensive the research becomes and thus the more expensive the final product becomes. So the numerator of the calculation drops as the denominator rises. Eventually the intervention will cross a threshold where it is no longer cost-effective compared to existing programs. Given that the majority of HIV is in countries – like Swaziland – that cannot afford the treatment themselves (and whose ability to pay for such treatments is directly impeded by the economic damage caused by HIV), it is international donors who will have to make decisions about deploying any vaccine there. Surely at some point they should be saying to the drug companies that it would be better to refocus that money on developing cheaper treatments and tests, or to divert that research money to a more useful vaccine (such as against malaria) and leave the management of HIV to test-and-treat strategies?
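To see how the double whammy plays out, here is a toy calculation; every number in it is an invented assumption, not an estimate of real vaccine costs or epidemic dynamics:

```python
# Illustrative only: how delay erodes the cost-effectiveness of a hypothetical vaccine.
# Every number here is an invented assumption, not an estimate.

def cases_prevented(years_delay, baseline=1_000_000, annual_decline=0.08):
    """Future cases the vaccine could avert, shrinking as test-and-treat
    drives incidence down by `annual_decline` per year of delay."""
    return baseline * (1 - annual_decline) ** years_delay

def total_cost(years_delay, rollout_cost=2e9, annual_research_cost=3e8):
    """Rollout plus research cost, growing with every extra year of development."""
    return rollout_cost + annual_research_cost * years_delay

for delay in (0, 5, 10, 20):
    cost_per_case = total_cost(delay) / cases_prevented(delay)
    print(f"{delay:2d} years of delay: ~${cost_per_case:,.0f} per case prevented")
```

With these made-up numbers the cost per case prevented rises from about $2,000 with no delay to over $40,000 after 20 years. The figures themselves are meaningless; the direction of the effect is the point.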

This decision seems especially relevant since the financial stakes are huge. At the moment I think every drug company knows that a vaccine will be funded no matter the cost (within certain crazy boundaries), so the company that gets a functioning vaccine first has basically produced a license to print money. The developers will get a Nobel prize, and the company that patents it will be basically guaranteed an income for two or three generations as international donors flood Africa with cheap or free vaccines. The Global Fund and the Bill and Melinda Gates Foundation would sink enormous sums into an HIV vaccine. So in a sense the companies developing a vaccine are not too concerned by the long-term research costs of the drug – unlike a vaccine against a disease that hasn’t become such a bugbear, their work is unlikely to be subjected to strict cost-effectiveness guidelines.

Which leads to the simple cost-benefit question: could that money be better spent? And if not now, then in 10 years’ time would it be worth telling these companies either a) to stop, or b) that from now on we will only pay for HIV vaccine distribution if it is cost-effective? Is there a way to encourage pharma companies to invest in more useful vaccines at the point when this decision needs to be made? And should we be predicting the future risks of research (in terms of failure and final cost)?

The usefulness of Golden Rice

Golden Rice is presented as an intervention to reduce Vitamin A Deficiency. There are many reasons why I think Golden Rice will be ineffective at preventing Vitamin A Deficiency on a large scale, and is a research boondoggle that consumes resources better spent elsewhere:

  • Vitamin A Deficiency (VAD) is the least important of the nutrition deficiencies, and countries with VAD usually also have high levels of protein-energy and iron deficiency, which are much more serious
  • Protein-energy deficiency is usually caused by food insecurity and inequality, compounded by a high burden of diarrheal disease. Golden Rice is a food product and will be subject to the same distribution and insecurity problems that cripple existing food systems in countries with high levels of VAD. As a result it is unlikely to reach the people who need it, and will be vastly less effective than laboratory trials promise
  • Cheap and effective interventions against VAD exist and are in place, so Golden Rice’s cost-effectiveness needs to be assessed in comparison with these interventions
  • VAD has been declining rapidly in many countries, and so (just as with HIV) the longer it takes to implement Golden Rice, the less cost-effective it can be
  • Golden Rice won’t work in countries where rice is not a staple, so the research effort is targeting only a limited number of countries, the largest of which (China) is making rapid gains in reducing VAD

Given these problems, it seems obvious to me that research money being sunk into Golden Rice projects (whether public or private) is money being spent on a condition that is a low priority for the affected countries, in an area of nutrition deficiency that already has cheap, proven remedies. It also won’t address the fundamental cause of VAD – food insecurity and inequality – and projects based on Golden Rice may even be hampered by these same problems. Given this, would the money be better spent on simply scaling up existing interventions? Or on strengthening interventions that target the full range of deficiencies, i.e. food security interventions?

Can we coordinate research policy?

Of course the big problem here – to a lesser extent for Golden Rice, but certainly for HIV vaccine development – is that medical interventions are developed by private companies, and we cannot tell them what to research. In essence, most countries with UHC influence private medical research through two primary means: decisions about what treatments to fund, and the direction of public investment in basic science. Decisions about what treatments to fund are made through a variety of mechanisms, including assessment of cost-effectiveness when treatments come to market, the exercise of bulk buying power, and decisions about which broad intervention strategies to fund (e.g. universal testing vs. condom promotion). Public investment in basic science is usually not driven by such “objective” rationales, but by what governments and research leaders consider important. That investment serves as a guide to where private companies might choose to focus, and benefits private research through new discoveries, but the primary driver of private research will always be the corporate sense of where money can be made. So the simplest way to steer corporate research is through the UHC system.

This means that pharma companies face significant risks in their research efforts: they can spend huge amounts bringing a drug to market, only to find that major UHC systems (like Australia, the UK and Japan) won’t fund it because it is cost-ineffective, or will drive very hard bargains over price. In the UK, NICE determines what the NHS will fund, and is renowned for making decisions that seem perverse on moral grounds but make perfect rational economic sense; in Australia, the PBAC will refuse to recommend a drug for subsidy if it thinks it is overpriced, and because the PBAC effectively determines what the country will pay for, it has huge price-bargaining power. Drug companies work with these organizations to ensure they don’t waste money on drugs that will never be cost-effective, but if I were an exec at a big pharma company I would be mighty peeved if after 30 years of development work – pouring millions down the drain – the big governments told me that they were no longer interested in funding any HIV vaccine because the crisis was past. This seems like a very one-sided relationship, and unless one subscribes to the simplistic notion of big Pharma as Agents of Pure Evil, it doesn’t seem very fair.

But conversely, if major international donors and national aid agencies are going to sink billions of dollars of taxpayers’ and donors’ money into funding a newly developed HIV vaccine that is largely irrelevant to the long-term course of the pandemic but represents a massive financial windfall for big Pharma, surely taxpayers have the right at some point to say: no, we no longer want to fund such ineffective interventions, no matter how much we encouraged you to research them 10 years ago. How could we manage these conflicting demands? And specifically, how could we intervene in research plans before the final product is developed, to guide research towards the areas where it is most needed?

Research credit systems

One possible method for more directly involving social priorities in private research decisions might be to treat drug development in the same way as mineral resource projects, and regulate it through something like the Resource Super Profits Tax (RSPT) proposed by the Australian government a few years ago. Under this scheme the government becomes a kind of minority partner in development projects, giving tax concessions when projects suffer losses and drawing extra tax when they make large profits. I don’t think the RSPT was good politics in the mining sector, and I don’t think it would play out any better in the pharmaceutical industry. It would open the government to large losses, and it would create an obvious conflict between the regulatory arm of government (which wants to stop unsafe or ineffective drugs) and the taxation arm (which would stand to make money from them), especially in countries like America where the government doesn’t fund most healthcare but is responsible for approving the products health insurers buy[1].

Another option could be a system of cost-effectiveness credits offered in return for research realignments. That is, when a company invests extra research money in a potentially highly cost-effective intervention, it earns a credit that can then be used to raise the cost-effectiveness threshold applied to its other drugs. So, for example, a company could invest in research into a cheap and safe substance to add to water supplies to reduce VAD, and in exchange would be granted a slight increase in the cost-effectiveness threshold used when its drugs are assessed for the domestic market. Thus the more it invests in cost-effective interventions, the lower the risk that other medicines it develops will be rejected at the financial approval stage. This is a little similar to the proposal for Global Health Cap-and-Trade schemes that I discussed on this blog a while back. It would mean that healthcare costs would rise slightly (which is what happens when cost-effectiveness rules are weakened), but it would also lead to greater investment in interventions that are either very cheap or very effective, so the net benefit of the rising costs should be positive. Careful tweaking of the relative value of the credits could lead to an overall improvement in health research efficiency (or, alternatively, to a long run of cock-ups).
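As a sketch of how such a credit might work mechanically – with the base threshold, credit rate and cap all invented for illustration – the ledger below simply converts qualifying research spending into a capped uplift on the approval threshold:

```python
# Illustrative sketch of a cost-effectiveness credit ledger.
# All rates, caps and thresholds are invented assumptions, not proposals.

class CreditLedger:
    def __init__(self, base_threshold=50_000, credit_rate=0.01, max_uplift=0.25):
        self.base_threshold = base_threshold  # e.g. $ per QALY normally accepted
        self.credit_rate = credit_rate        # uplift earned per $1m of qualifying research
        self.max_uplift = max_uplift          # cap so the threshold cannot be inflated indefinitely
        self.credits = 0.0

    def earn(self, qualifying_research_millions):
        """Record investment in pre-agreed, highly cost-effective research."""
        self.credits += qualifying_research_millions * self.credit_rate

    def threshold_for_next_drug(self):
        """Approval threshold after applying (capped) earned credits."""
        uplift = min(self.credits, self.max_uplift)
        return self.base_threshold * (1 + uplift)

ledger = CreditLedger()
ledger.earn(15)  # invest $15m in, say, a water-supply VAD intervention
print(f"${ledger.threshold_for_next_drug():,.0f} per QALY")  # modestly above $50,000
```

The cap is the important design choice: without it, a company with deep enough pockets could buy its way past any cost-effectiveness test.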

A third option could be a system of tradable research guarantees, perhaps similar to a kind of bond, in which pharmaceutical companies invest in existing interventions, or in research into highly cost-effective and important interventions, and in return receive guarantees from the government to finance some other drug of theirs that is still under development. This guarantee could take the form of a promise to fund regardless of cost, an increase in the maximum price the government was willing to pay, or an increase in the cost-effectiveness threshold it was willing to consider. Greater investment in some research paths might earn even larger increases in this cap. So, for example, a pharmaceutical company could offer to double its investment in malaria vaccine research in exchange for a bond from the government promising to fund HIV vaccines even if they prove marginally cost-ineffective (an ICER of between 3 and 5 times per-capita GDP, say, in the case of a deal struck with NICE). This could be extended to allow the bond to be purchased through investment in existing interventions: for example, a company could purchase a guarantee that a new cervical cancer treatment will be funded when it is finally developed, if in exchange it financed a large expansion in access to HPV vaccines now (e.g. lowering the cost enough to make it cost-effective to give to men[2]). The benefit of such a system would be greater investment by drug companies in the actual processes by which health development and health systems function, which would in turn give them a greater interest in supporting regulatory and cost-effectiveness monitoring systems.
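A similarly hypothetical sketch of how a regulator might check whether such a bond covers a marginally cost-ineffective drug (the GDP-multiple band here is an assumption for illustration, not actual NICE or WHO policy):

```python
# Illustrative check of whether a research-guarantee "bond" covers a new drug.
# The band is expressed as assumed multiples of per-capita GDP, purely for illustration.

def bond_covers(icer, per_capita_gdp, lower_multiple=3, upper_multiple=5):
    """True if the drug is marginally cost-ineffective (normally rejected)
    but falls inside the relaxed band the bond guarantees."""
    return lower_multiple * per_capita_gdp < icer <= upper_multiple * per_capita_gdp

print(bond_covers(icer=160_000, per_capita_gdp=45_000))  # True: between 3x and 5x
print(bond_covers(icer=250_000, per_capita_gdp=45_000))  # False: beyond even the relaxed band
```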

All of these systems – and any other system that relies on trading off future research risks against current research priorities – require a method for assessing the level and nature of project risk in major pharmaceutical research projects. I’m not sure such a method currently exists in any quantifiable form. For example, have drug companies ever given a credible assessment of whether they can develop an HIV vaccine, or of how long it will take them? One often reads that “a vaccine will be available within X years,” but I don’t think those claims are more than guesses. We might not want to encourage big Pharma to develop such a system – they might give up on a whole swathe of essential research if they applied it – but it seems like an essential first step in rational research planning. I also think we should avoid reducing research planning to a simple case of going for the cheapest and most effective option, since it needs to be remembered that every serious illness involves a great deal of suffering, and that suffering cannot be relieved if treatments haven’t been developed. It may also be the case that a chaotic approach to research planning, with multiple lines of research pursued at the same time, is necessary because success depends on luck and the interaction of scientific findings from many different areas – successful development of an HIV vaccine, for example, may prove extremely important for developing better preventive methods against other retroviruses, or against a wide range of other incurable viruses generally.

Nonetheless, it does seem to me that some kind of rationalization of medical research is necessary. If it’s going to take 50 years to develop an HIV vaccine, maybe it’s better to focus all that research money on a disease that doesn’t yet have a fully effective prevention method, such as malaria, and leave eradication and control of HIV to well-understood but not yet fully-implemented strategies that we think would work if done better. But who is going to make such a judgment? Certainly not me!

fn1: In the case of the mining industry I think this system would create an obvious conflict between the environmental safety and tax-collecting roles of government, with potentially disastrous consequences.

fn2: This raises an interesting side problem: by agreeing to lower costs to support an expanded vaccine program, the company reduces the profitability of all future cervical cancer treatments (since there will be fewer cases). So such arrangements might be impossible where they concern interventions targeting different stages of the same disease process.