The following post does not represent an official statement by NSF or its Division of Environmental Biology.
The release of initial statistics on the latest round of proposals submitted to the DEB and IOS core programs has understandably catalyzed a new round of discussion on several blogs (and elsewhere). My intent here is not to dig into all the details of those statistics or to represent NSF in any official capacity. I can’t do that here. However, I am hoping to clarify one piece of the broader story that’s leading to some misunderstandings and erroneous assumptions: the comparison of this year’s success rates to years past.
The short story: one should not compare this year's rates so far with past years and try to draw any conclusions about the effects of a review system change. Here's why. So far this fiscal year, programs are operating with only 80% of last year's budget due to the ongoing continuing resolution for FY13. As with all continuing resolutions, that may or may not change before the fiscal year ends on September 30, 2013, but for now, program officers have less money to allocate to new awards.
A key but largely unrecognized piece of this: budget cuts tend to have disproportionately large impacts on success rates in a given year. That's because NSF programs must balance standard awards (funding allocated entirely in a single fiscal year) and continuing awards (funding mortgaged across three or more fiscal years). When budgets decrease, the mortgage still must be paid, so there is less available for new awards. This means the effective reduction in funds available for new awards is almost always greater than the percentage cut to the budget itself.
Here's a purely hypothetical example. Imagine that over the past few years, program ABC had an annual budget of $10M, and that $4M each year was needed to cover continuing increments to past awards. That leaves $6M each year to fund new work. Now, imagine that the program is hit with an unusual and unexpected 20% budget cut, leaving an $8M budget for the year (or at least for the time being, under a CR-style scenario). The $4M owed to past awards must still be paid, leaving $4M for new awards. If everything else stayed flat across this time period (e.g., submission rates and budget sizes), that 20% budget cut would not result in a 20% drop in success rates. Instead, the money available for new awards drops by one-third ($6M to $4M).
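The arithmetic in the hypothetical above can be sketched in a few lines. All figures are the made-up numbers from the example (not real program data), and the variable names are illustrative:

```python
# Hypothetical figures from the example above (not real program data).
total_budget = 10.0           # $M: program ABC's typical annual budget
continuing_commitments = 4.0  # $M: "mortgage" owed to past continuing awards
cut_fraction = 0.20           # 20% budget cut under the CR-style scenario

# Funds left for new awards before and after the cut.
new_funds_before = total_budget - continuing_commitments            # $6M
cut_budget = total_budget * (1 - cut_fraction)                      # $8M
new_funds_after = cut_budget - continuing_commitments               # $4M

# The drop in new-award funds exceeds the headline budget cut.
drop_in_new_awards = 1 - new_funds_after / new_funds_before         # ~33%
print(f"Budget cut: {cut_fraction:.0%}; "
      f"cut to new-award funds: {drop_in_new_awards:.1%}")
```

The general point falls out of the structure: because the mortgage is fixed, any cut is absorbed entirely by the new-award pool, which is smaller than the total budget, so the proportional hit to new awards is always larger than the proportional cut.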
Again, this is a simplified and hypothetical example! Success rates depend not only on the money available for new awards, but on submission rates, budget sizes and various other budget requests and directives that can and do vary across years. But the need to distribute some funds across years means that one cannot extrapolate a budget cut into a change in success rate on a 1:1 basis. Program officers try hard to manage funds in such a way that buffers against budget oscillations (especially cuts), but awarding some continuing grants is often unavoidable, and at times desirable, for all kinds of reasons.
Bottom line: nobody is happy with low success rates, but be wary of attributing a change in success rates to the review system change during a year in which a major budget deviation has occurred (at least to date).
Finally, none of the above implies that the new review system doesn't merit continued discussion and analysis. It does, and as the recent email to DEB PIs noted, that analysis and discussion are coming via a variety of mechanisms. As part of that, we still hope to launch a pilot blog from DEB in the near future, but the necessary review and implementation pieces of that blog are not yet complete.
Hope this helps. Hang in there everyone.