Since at least June, NIH peer review, the process by which the bulk of U.S. federal funding for biomedical research is meted out, has been under intense review by two groups working in parallel. The reason: funding increases haven't kept pace with inflation, and the biomedical research community is feeling the pinch. With slim funding prospects, the approval rate for funding proposals has dropped from a historic 25% to somewhere around 10%. At ASCB, Keith Yamamoto of UCSF, Mary Beckerle of the University of Utah, and Katherine Wilson of Johns Hopkins University talked about some of the radical ideas that have been floating around to totally revamp the process by which investigators compete for funding. While Yamamoto emphasized that these were nothing like final recommendations, he wanted to offer a glimpse of the ideas they've been batting around and invite more feedback from the research community; that feedback process has officially closed, but these three, at least, say they are still open to it. Incidentally, Elias Zerhouni, the head of the NIH, intends to implement changes to the review process in the first quarter of 2008. Read on for some specifics.
The challenges, as Yamamoto and his colleagues presented them, are fourfold and in no particular order:
1. To reaffirm and emphasize core values of review
2. To support new (read: young) investigators
3. To reduce administrative burden
4. To strengthen review leadership and participation through training
While the reasons for these needs are probably apparent to anyone who's participated in NIH review, whether as reviewer or applicant, just how to address them is a tricky matter. Here are six ideas that Yamamoto outlined. They drew a lot of interest from the participants, and if you weren't there to offer your feedback last night, please feel free to do so here.
1. An editorial board structure. This was really the overarching theme of the changes they were discussing: remodeling the traditional study section after an editorial board that is smaller but can request input from outside experts. It slims down study sections to fewer participants and dictates how they respond to proposals. It may even include incentivizing participation in study sections.
2. Shorter applications and processing. The magic number here seemed to be 7 pages. Rather than having researchers give rich detail and background on their work, they'd call for brief distillations of the big ideas, including strong statements as to how a proposed project, if completed, would have an impact on the field. R01s, the primary grants that NIH distributes, would be broken into two categories. Most would be innovation on current trends or themes, but researchers could take a risk and apply for transformative, revolutionary kinds of research. For the latter type of grant (with an estimated 1% award rate), triage would be limited to this single criterion, impact, and if an application failed, the submitter would not be encouraged to reapply.
3. Prebuttals. For the majority of grants, after initial comments come in via email from different reviewers and technical advisers, major concerns would be sent to the submitter for a prebuttal, prior to the study section meeting at which the grant is discussed. This gives the submitter a chance to clear up any misconceptions or miscommunications in the original application and possibly salvage their chances of getting funded.
4. Straight rankings. Yamamoto suggested doing away with the current differential scoring system completely in favor of plain old rankings based on merit.
5. Short reviews on merit only. Rather than providing feedback to submitters on why their grant didn't make it, reviewers would simply give a clear statement that it didn't make the cut. Mentoring of that kind, the argument goes, would be better provided by the submitter's own colleagues.
6. Discussions led by a primary reviewer. Better training for reviewers and section chairs should lead to succinct presentation of proposals to committees. A lot of time was spent on how the study sections should be composed, basically with more generalists and fewer vested interests. With shorter, less detailed proposals and the prebuttal system in place to allay miscommunication, Yamamoto and his colleagues presented these possibilities as a way of delivering a much more streamlined process that would reward meritorious study and close the gap between the scientific haves and have-nots.
It's a deeply truncated list of the wide-ranging topics and possibilities they discussed, but they're obviously leaders in the thought pool. What do you think?