On the Value of Investment in Science - An Essay in Two Parts
Part 1: Fixing peer-reviewed scientific funding in Australia
(Originally published in full on the Royal Society of Victoria Newsletter March 2022 issue, page 2)
To talk about “investment” is to talk about an activity that most people consider straightforward: you allocate money to an asset or activity in the hope that it will give you a desired return in the future. In reality, however, investing involves a complex set of presuppositions, values, attitudes, and appetites for risk that, although usually overlooked, nonetheless affect the level of return you get, because they shape not only how an investor allocates her resources (when, and how much, she invests in something), but also how the economy operates at large (how markets and the collective of investors respond to non-monetary decisions). Throughout this letter I will argue that these “non-intuitive” investments are as important as, or more important than, the financial investments we make in science – some of which we currently may not be able to quantify – and that we should take the time to consider “investing” in them. The list presented here is, understandably, not exhaustive.
The CSIRO recently reported that every $1 invested in research and development (R&D) in Australia creates $3.50 in economy-wide benefits in today’s dollars, with a 10% average annual return even under very conservative estimates and assumptions.[1] However, R&D – and science more broadly – currently operates very inefficiently. It can therefore be argued that the returns and benefits listed by the CSIRO, although impressive in their own right, are a severe underestimation of the potential benefits that science holds if it is conducted under the right conditions – conditions that we should also invest in improving. What are some of these inefficient conditions? I would point, first, to the way we allocate scientific funding.
One of the main systems we have for allocating taxpayer money to research is peer-reviewed research funding (PRRF) in which scientists – often teams of scientists – write a project proposal hoping it gets selected for funding. The system, however, is simply inefficient – and by efficiency here I mean the appropriate or optimal distribution of resources (not only money but time and scientific brainpower as well) that maximises scientific productivity and innovation while minimising resource misallocation.
Our current PRRF system channels too much time and brainpower away from productivity: a study by Danielle L. Herbert and colleagues estimated that the 2012 NHMRC round saw 550 working years of researchers’ time spent just on preparing proposals for submission, translating into AUD $66 million in annual salary costs.[2] All this for a funding rate of 21% of valid submissions – which means that 79% of those resources brought no immediate benefit whatsoever, and a substantial portion of the unfunded proposals most likely never made it through any later resubmission round either, consuming yet more resources in the attempt. That was 2012, however. Nowadays the success rate is circa 10%, and while I do not know how much time was spent in the latest rounds, putting together the explosive growth in the number of PhDs Australia has awarded in the last 10 years,[3] the decreasing funding for science over the same period,[4] and the current lower success rates, I can safely hypothesise that things have not improved, and have most likely gotten worse, even accounting for the many new PhDs who move to other industries or other countries.
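To make the scale of the misallocation concrete, here is a rough back-of-the-envelope calculation using the figures above. It assumes preparation effort was spread evenly across funded and unfunded proposals, which is a simplification:

```python
# Back-of-the-envelope estimate of resources spent on unfunded proposals,
# using the figures reported for the 2012 NHMRC round.
prep_cost_aud = 66_000_000   # estimated annual salary cost of proposal preparation
prep_years = 550             # working years spent preparing proposals
success_rate = 0.21          # fraction of valid submissions funded

unfunded_fraction = 1 - success_rate          # 0.79
wasted_cost = prep_cost_aud * unfunded_fraction
wasted_years = prep_years * unfunded_fraction

print(f"~${wasted_cost / 1e6:.0f}M and ~{wasted_years:.0f} working years "
      f"went into proposals that were not funded")
```

On these numbers, roughly $52 million and over 430 working years of effort went into proposals that brought no funding at all in that round.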
This appraisal of the inefficiency of our current PRRF system is compounded – dare I say, made worse – by several factors. The first is that the hyper-competition begotten by the scarcity of financial resources spills into all aspects of scientists’ lives, further hindering their productivity. Danielle L. Herbert and her team, once more, report from a sample of 215 researchers[5] that the vast majority prioritised grant-writing over other work (97%) and personal (87%) commitments, became stressed by the workload (93%), and restricted their holidays during the grant-writing season (88%). I personally know many who are sick and tired of having to choose between putting all the extra hours into grant writing and enjoying time with their children and family, haunted by the fear that by forgoing grant writing for a couple of weeks they might be outcompeted by people who don’t face the same pressures and can afford to be more competitive. Needless to say, a stressed, burnt-out and overworked scientist is hardly one who will have the mental space to be productive and innovative, especially if we compound this with the gendered issue of household workload. This misallocated time and brainpower could be much better spent doing more science (because writing proposals begging for money to do science is not doing science), supervising and helping often-neglected students and collaborations, or even simply enjoying time with family and friends and recharging mental batteries. All of these would invariably boost the productivity not only of the scientists themselves, but of their scientific teams as well. After all, as stated elsewhere, “the value of the science that researchers forgo while preparing proposals can approach or exceed the value of the science that the funding program supports.”[6]
The second factor compounding the PRRF problem is that PRRF is not immune to economic pressures – that is, to the changes in human behaviour that simple supply and demand begets. The hyper-competition in PRRF, derived from the scarcity of funding – and the unquestionable importance of this funding to scientific activity, promotions, and careers everywhere in the world – translates into practices that some have qualified as “questionable”[7] (while others might say they are outright wrong), practices intended simply to maximise the chances of the grant getting funded, sometimes at the cost of other important values. Chief among them is that the success of a grant often rides not on the value of the scientific idea behind it, but on what’s known as “grantsmanship”, which refers to “the art of writing successful funding applications, but is typically used to single out those aspects of the application that are not scientific but rather formal, stylistic and rhetorical.”[8] Peer reviewers are human and subject to natural biases, such as being swayed by linguistic style; we are not as objective as we claim to be. This plays out in many other scenarios where selling the idea matters more than the idea or the research itself, such as the famous 3-minute thesis competitions. The current PRRF system undervalues brilliant scientists who simply don’t know how to “play the game.” A second issue worth mentioning here is that in many instances authorship can be flagrantly violated: if, for example, the author of the idea is not competitive or experienced enough to justify being the chief and lead investigator, the idea has better chances of being funded if it is “authored” by a more senior investigator.
The young author usually has little room to complain if she depends on that grant to fund and kickstart her career, while a more senior author can draw from other grants or university funding and therefore does not face the same pressure to submit the grant. With a bit of imagination, one can safely hypothesise about all the power imbalances and abuses that these situations can create. This is even worse if a particular grant scheme prevents an author from drawing a salary from the grant if she is successful.
Third and finally, some have argued that the ranking system is poorly predictive of grant productivity.[9] This is not to say that grantees are not productive, but that PRRF does a poor job of predicting and awarding the grants that will be the most productive of the round. This is because there are not enough reviewers per application to provide statistical precision, and because there is substantial disagreement in scores for the same application depending on how much or how little the reviewer knows about the subject at hand; reviewers who are intellectually “closer” to the grant are better equipped to be critical of the proposal, and to better understand its value, implications, innovation, and future projections.
Many have proposed changes to the PRRF system, but one proposal stuck with me for its potential to solve not only PRRF-related issues but many other problems that escape the domain of science (see below): the case for a modified lottery.[10],[11]
A modified lottery, in its purest form, allocates grant money randomly among the scientific proposals that clear a previously agreed, realistic threshold of quality and feasibility – the modification is that the lottery runs only over the proposals that clear the threshold, not over every proposal submitted. One might ask, “How is this any different from the current PRRF system?” The answer: because the realistic threshold for scientific quality and feasibility is much, much lower than one would infer from assuming that only the awardees of a current grant round cleared it. The latter are simply the most heavily groomed proposals of the round, which feeds into some of the problems I mentioned earlier – it solidifies inefficiency into the system while many feasible and valuable proposals are left unfunded. Peer reviewers are already good at weeding out unfeasible, low-quality, and questionable proposals; that is a testament to their expertise. Past this threshold, however, it is unclear why one proposal should take priority over another, especially considering the potential of scientific endeavour to create benefits for wider society that are often unrelated to the scientific topic at hand – artificial limbs, insulin pumps, and shock absorbers for buildings, for example, were all brought about by space exploration.[12]
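For the curious, the mechanism is simple enough to sketch in a few lines of Python. The proposal pool, the made-up “quality” scores, and the 0.3 pass mark below are purely illustrative stand-ins for a real peer-review filter:

```python
import random

def modified_lottery(proposals, passes_threshold, n_awards, seed=None):
    """Peer review acts only as a pass/fail filter; awards are then
    drawn uniformly at random from the eligible pool."""
    rng = random.Random(seed)
    eligible = [p for p in proposals if passes_threshold(p)]
    if len(eligible) <= n_awards:
        return eligible  # every feasible proposal gets funded
    return rng.sample(eligible, n_awards)

# Illustrative round: 100 proposals with a made-up quality score.
gen = random.Random(0)
proposals = [{"id": i, "quality": gen.random()} for i in range(100)]
awards = modified_lottery(proposals,
                          passes_threshold=lambda p: p["quality"] > 0.3,
                          n_awards=10, seed=42)
print(len(awards))  # 10 awards, drawn at random from all that cleared the bar
```

Note that the pass/fail predicate is the only place expert judgement enters; everything past that point is deliberately left to chance.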
A modified lottery solves many PRRF-related problems, and brings about a system that is fairer, more equitable and, importantly, much more efficient than the current PRRF system we have:
· It liberates scientific brainpower and time, both contestants’ and peer reviewers’. Since the threshold to make it into the lottery is much lower than the one artificially created by hyper-groomed proposals, it releases the pressure on contestants to invest countless hours incessantly reviewing and grooming the proposal to the detriment of their other professional, social, and family commitments. Proposals just need to be feasible and valuable by more realistic expectations (for example, having the ability to carry out the research, rather than having the most extensive track record of all submissions or the most convincing and carefully curated preliminary data). Since peer reviewers are scientists too – and sometimes grant contestants at the same time – this also liberates them from having to review proposals over and over to decide on scoring, because a pass/fail decision is much easier and more intuitive to make than assigning a score.
· It bypasses current biases in the system. Many have decried cronyism, sexism, racism, and other biases in our current PRRF system. Whether these criticisms are justified or not, a modified lottery, by design, bypasses these biases, especially if applications are blinded by name and gender at submission. As the realistic threshold for feasibility and quality is naturally lower, a modified lottery removes the pressure to have the most extensive track record to outcompete everyone (which puts women who take time off to raise children at a disadvantage) and, if blinded, it removes the ability to rule for or against an application based on the gender, race, or surname (or even friendship) of an applicant. Considering the current distributions of male and female researchers in many scientific fields,[13] a modified lottery can bring about more balanced gender outcomes in funding, or at least outcomes more reflective of the composition of the field at hand, without having to push for even more discriminatory practices like affirmative action: the funding would be randomised among all worthy submissions, regardless of their demographics and regardless of their competitiveness past a reasonable cut-off point. Additionally, the modified lottery counters one of the most pervasive but less spoken-about biases in most human systems: the Matthew effect, whereby already successful scientists run at an ever-increasing advantage over less successful ones regardless of the quality of their proposals.
· It bypasses conservatism and risk-aversion and thus boosts innovation. By selecting the most feasible, most groomed applications from scientists with the longest or best track records who best play the grant-writing game, PRRF in Australia reflects a trend that holds true for the wider Australian population: we are a risk-averse, highly compliant, and conservative nation. This is not just my appraisal: a recent report by the CSIRO and the Business Council of Australia[14] also highlights this cultural aversion to risk-taking (see also the second instalment of this letter). A quick look at the Australian response to COVID should make this point painfully evident. A modified lottery pushes the system to embrace the risks necessary for innovation, highlighted in this and other reports, by taking away the power of PRRF to allocate funding based on risk-aversion and conservatism.
I am confident others will see benefits beyond the ones I have listed, and risks that I am positive will never outweigh the benefits of the proposal. It is now just a matter of momentum. With the right push for this “non-intuitive” investment – that is, for policy change in the PRRF department – alongside a most-deserved increase in government funding, science in Australia can set course for the innovative and efficient future it requires to thrive in the highly disruptive waters of the 21st century and beyond. Science can then bring the rest of Australia along with it to the high-reward future that seemingly small but nonetheless significant “non-intuitive” investments can bring about.
[1] CSIRO Futures (2021) Quantifying Australia’s returns to innovation. CSIRO, Canberra
[2] Herbert DL, Barnett AG, Clarke P, et al, On the time spent preparing grant proposals: an observational study of Australian researchers. BMJ Open 2013;3:e002800. doi: 10.1136/bmjopen-2013-002800
[3] Algorithm Editorial Team, 29 May 2019, Where are Australia’s PhD students? CSIRO, visited 28 January 2022, <https://algorithm.data61.csiro.au/where-are-australias-phd-students/>
[4] CSIRO Futures (2021) Quantifying Australia’s returns to innovation. CSIRO, Canberra
[5] Herbert DL, Coveney J, Clarke P, et al, The impact of funding deadlines on personal workloads, stress and family relationships: a qualitative study of Australian researchers, BMJ Open 2014;4:e004462. doi: 10.1136/bmjopen-2013-004462
[6] Gross K, Bergstrom CT (2019) Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biol 17(1): e3000065. https://doi.org/10.1371/journal.pbio.3000065
[7] Conix S, De Block A and Vaesen K. Grant writing and grant peer review as questionable research practices [version 2; peer review: 2 approved]. F1000Research 2021, 10:1126. doi: 10.12688/f1000research.73893.2
[8] Conix S, De Block A and Vaesen K. Grant writing and grant peer review as questionable research practices [version 2; peer review: 2 approved]. F1000Research 2021, 10:1126. doi: 10.12688/f1000research.73893.2
[9] Fang FC, Bowen A, Casadevall A. NIH peer review percentile scores are poorly predictive of grant productivity. Elife. 2016 Feb 16;5:e13323. doi: 10.7554/eLife.13323.
[10] Fang FC, Casadevall A. Research Funding: the Case for a Modified Lottery. mBio. 2016 Apr 12;7(2):e00422-16. doi: 10.1128/mBio.00422-16. Erratum in: MBio. 2016;7(3). pii: e00694-16. doi: 10.1128/mBio.00694-16.
[11] Gross K, Bergstrom CT (2019) Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biol 17(1): e3000065. https://doi.org/10.1371/journal.pbio.3000065
[12] Josie Green, 8 July 2019, Inventions we use every day that were actually created for space exploration, USA Today, visited 29 January 2022, <https://www.usatoday.com/story/money/2019/07/08/space-race-inventions-we-use-every-day-were-created-for-space-exploration/39580591/>
[13] Science in Australia Gender Equity (SAGE), 25 May 2021, Gender Equity in Higher Education, Science in Australia Gender Equity (SAGE), visited 29 January 2022, <https://www.sciencegenderequity.org.au/gender-equity-in-stem/>
[14] CSIRO Futures (2021) Unlocking the innovation potential of Australian companies, CSIRO, Canberra