

Jun 14, 2010

No. 33: Grant Clinic: How Can I Preserve My Data When Re-Submitting a Rejected Grant?

Posted by: PIA



GRANT CLINIC

How Can I Preserve My Data When Re-Submitting a Rejected Grant?

Reader Question: My R01 grant was rejected, not because the experiments were flawed but because the topic didn't "click" with some study section reviewers. I will submit it to a different study section, but because the rejected grant is already the A2 version it must be greatly revised and submitted as a new grant. What can I do to "rescue" these experiments?

Expert Comments:

When one receives less than a laudatory score for an application, it is easy to conclude that the panelists “didn’t get it.” In that case, it follows that all one needs to do is move the application to a different review panel or a different funding agency.

This has worked a limited number of times (even for this author, on one occasion), but there are many more times when it does not.

In order to plan your next move, you should read the roster sheet for the panel that reviewed your grant and do a Medline search on the panelists to determine their specific research areas. If none of them has published in your area, then it is possible that they lack the expertise to appreciate the significance of your work.
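If it helps, that Medline check can be partly automated against NCBI's public E-utilities. The short Python sketch below is only illustrative; the author name and topic term in the commented-out example are placeholders, and you would substitute each panelist from the roster.

    import requests  # simple HTTP client for calling NCBI's public esearch endpoint

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(author, topic):
        """Return how many PubMed records match a panelist's name plus a topic term."""
        params = {
            "db": "pubmed",
            "term": f"{author}[Author] AND {topic}[Title/Abstract]",
            "retmode": "json",
        }
        reply = requests.get(EUTILS, params=params, timeout=30)
        reply.raise_for_status()
        return int(reply.json()["esearchresult"]["count"])

    # Hypothetical example: has this panelist published on your topic?
    # print(pubmed_count("Smith JA", "aminoacyl-tRNA synthetase"))

A count of zero across the entire roster is at least a rough signal that the panel may not include anyone close to your area.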

Typically, the study section’s Scientific Review Officer will work hard to ensure that every application gets at least one and perhaps two expert reviewers. If there truly are no experts, and you can identify another study section that has more specific expertise, then specifically requesting that alternative panel when you resubmit might be advantageous.

However, it is also possible that, if your topic didn’t "click" with the reviewers, they might have detailed knowledge of your field but have been unconvinced about the significance of either your central question or hypothesis. This can lead to good scores for approach and investigator but low scores for significance and overall impact.

Read the comments carefully and discuss with your program officer to confirm whether that scenario applies. An analysis of the comments also may help to determine whether the readers understood what you were trying to do.

To “rescue” this project, consider holding on to the experiments and data that you’ve collected. If the data are novel, accurate, and informative, they still might form the basis of a new application. Then go back and challenge all the assumptions that went into the initial question and hypothesis: Are they still appropriate and timely after all the revisions? Has your field moved on since your first submission? Consider significant revisions to your central question and hypotheses, and then use these to re-craft your title and aims (additional experiments might be required).

Float it by a disinterested (but friendly) colleague to see if they find it intriguing. These changes will certainly constitute a new application.

Splitting your original grant into pieces and then submitting each as a new grant brings significant risk of diluted impact. (It might make sense if the reviewers felt your original grant too broad and sweeping.)

At the end of the day, what matters most is building a case that your question and hypothesis are truly at the leading edge of the field: Anything less, and you are likely to fall short of a successful application.

Comments by Christopher Francklyn, PhD, a former study section chair and veteran reviewer for NIH and NSF study sections. He is a professor at the University of Vermont.

This eAlert is brought to you as an informational training tool by the Principal Investigators Association, which is an independent organization. Neither the eAlert nor its contents have any connection with the National Institutes of Health (NIH), nor are they endorsed by this agency. All views expressed are those personally held by the author and are not official government policies or opinions.

Comments (18)
Wrong info
written by OldTechie, June 14, 2010
In an effort to decrease workload, I thought NIH no longer allows one to resubmit an application after its last review (formerly A2, now A1). There may be ways to get special dispensation, but I think this requires having a program officer call CSR. I may be wrong, but if I am not, this advice is wrong and should be corrected.
OldTechie wrong info
written by anonymous, June 14, 2010
It is true that you cannot resubmit more than once. That is not what the questioner was asking. The questioner was asking how to submit a new grant that is sufficiently different from the original rejected grant to be considered a new grant, but that includes the same preliminary data that was included in the original grant.
Adjunct Professor, President, REB Research
written by Robert E. Buxbaum, PhD, June 14, 2010
The standard of research proposal writing is incredibly high. It is not enough to show that your project is interesting, or worthwhile; it must be more interesting and more worthwhile than the others. Similarly, it is not enough to show that your data "isn't flawed": it must be very good, and you must be lucky. That is, your data must support your position better than the data of other researchers. And it must showcase your experimental approach so well that reviewers believe your experiments will succeed more than the others. Proposals are competitions where only the top 10% or so succeed. The average batting average is lower than that of an American League pitcher.
If your topic is uninteresting, or the area is considered unimportant, no amount of new data will help your application, I'm sorry to say. If your topic is interesting and worthwhile, and your current data show your approach is likely to work, you're on much firmer ground; you're now in the region of luck and finesse. You should tweak your proposed experiments to highlight the logic, and you should add more data, if only to show that you have not left the field. Still, the fact that most of your data is old should not be a killer problem. To show that you work well with the intellectual community, you may want to publish the data you showed in your previous proposal, and you may want to include in your article your argument for future work. For the new proposal, I'd present that published data (probably with the reference marked as "in press") and I'd add enough new "preliminary results" to take care of any minor reviewer questions. Good luck, and don't get discouraged. There are few batters with lifetime averages above .300, and even Nobel laureates get their proposals rejected.
...
written by Observer, June 14, 2010
I've been on study section several times. Regretfully, I feel that the process of getting funded is a bit of a crapshoot, and it's going to get worse because of the new review process. I believe this to be true because, although one gets a score from the whole study section, it is very heavily reliant on the louder of the two (primary and secondary) reviewers who do almost all of the reviewing of that one specific grant. The third reviewer rarely makes a contribution that deflects the final score. The remainder of the study section is typically voting on a grant they have not even seen, so they basically go along with the average of the primary and secondary reviewers' scores. My advice: there is possibly nothing wrong with your grant; getting funded is based in large part on luck, assuming your grant is well written and addresses important questions in your field. Amending the grant based on the reviews is not likely to change the outcome of such a process, i.e., you are still entering a crapshoot competition. So why is it going to get worse? The reviewers now have to cut down the length of the part of the review that explains why they didn't like a grant, so as an applicant you will have even less information than before on the reasons it was turned down. Perhaps I'm contradicting myself a little here, but I really don't think there is that much difference between the top 10 to 30% of grants being reviewed; the reviewers are just splitting hairs. The increase in the number of applications is going to further negatively impact the situation. Is there a solution? No. The NIH review system is probably the best there is among nations that invest so heavily in research. I have had well above average success as a grant applicant but don't for one minute believe it wasn't in part based on luck.
PI
written by Maverick, June 14, 2010
Based on the above comments, all submissions that receive a certain designated minimum score should be placed in a pool, and the proposals funded should be chosen at random from that pool. The minimum score should be weighted so that 30-50% of all submitted applications are included in the pool and from 1/2 to 1/5 of all submitted proposals are funded. Proposals are selected from the pool at random until the funds available in that round are depleted. To prevent overfunding of specific proposals, fixed proposal funding limits should be designated and mandatory.
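Purely to make the mechanism Maverick describes concrete (this is a hypothetical sketch, not any agency's procedure), here is a minimal Python rendering of such a score-thresholded funding lottery; the cutoff, budget, and per-award cap are whatever the agency would designate.

    import random

    def lottery_fund(proposals, score_cutoff, budget, award_cap):
        """Fund proposals drawn at random from the pool meeting the score cutoff.

        proposals: list of (proposal_id, score, requested_amount); lower score = better.
        budget: total funds available in the round.
        award_cap: mandatory per-proposal funding limit.
        """
        pool = [p for p in proposals if p[1] <= score_cutoff]
        random.shuffle(pool)  # random draw order = random selection from the pool
        funded, remaining = [], budget
        for pid, _score, requested in pool:
            award = min(requested, award_cap)  # enforce the fixed funding limit
            if award > remaining:
                break  # funds for this round are effectively depleted
            funded.append((pid, award))
            remaining -= award
        return funded

Reading "selected at random until the funds are depleted" as stopping at the first award that no longer fits is the simplest interpretation; other tie-breaking rules would change the details but not the spirit of the proposal.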
Professor
written by deanlet, June 14, 2010
Maverick certainly has a point. The problem is that universities have yet to catch on to the fact that the crapshoot aspect of applying for grants has left many investigators on an endless treadmill of grant seeking that leaves not enough time for actually doing research, writing papers, etc. Were it not for the university administration's belief that the purpose of research is to bring in money, the crush of new applications would perhaps not be quite so large and the paylines not so draconian. As a long-time reviewer, I find it extraordinarily difficult to distinguish among the relative merits of the high quality applications we see these days. However, success at getting grants is still regarded as the measure of a scientist's ability, success, quality of research, etc. There's got to be a limit to this at some point, before all but a few are driven out of academic research altogether.
...
written by Fan of the Observer, June 14, 2010
Unfortunately, after almost 30 years in science, I have to agree with everything the Observer stated. There is, though, one notable exception. I do believe that the process can be reformed. There are at least three recommendations that would, in my opinion, make the system a tiny bit less capricious.
1) Double-blind review. That takes away the advantage of knowing the authors and removes a lot of biases. Let the bureaucrats check the competence level.
2) More reviewers. New technology makes it possible to have 5 or 7 reviewers.
3) Elimination of panels. Leave the job of distributing money to the bureaucrats. They are less competent but much easier to move around.
Once a panel establishes a circle of mutual adoration, changing its dynamics is awfully hard (not to say expensive).
There is a beautiful illusion of competence. In modern science, familiarity with an individual technique is make or break. Very few people on the panels are savvy enough to have a sufficiently broad view.
Many of those are inexperienced young friends of powerful panelists. I have seen people on the panels with less than 5 publications. I have a friend with more than 500 publications routinely not invited to participate in the panels.
The biggest problem is that a lot of proposals are simply not read at all. Involving many more people in the review process over the web would be the solution. Reviewers do not need to be from a particular field. There should be three primary reviewers from a given field, and the rest can be random. Good science will float to the surface no matter what. In the present system, interesting science can be kept out by a few gatekeepers.
Associate
written by Smokeless one, June 14, 2010
I disagree with the random selection suggestion, but believe in the crapshoot hypothesis, and agree that the top quarter or so of applicants have practically indistinguishable scores. I would suggest that peer review nominate high-quality applications, potentially capable of providing taxpayers with good value for their funding largesse. However, I would leave final funding decisions to the agency staff, who should be (and in most cases probably already are) directed to think of their funded grants as a portfolio that needs to be diversified within the boundaries set by their superiors. If the agency staff are not to be so encumbered for some reason, then perhaps some subset of the review panel could contemplate the portfolio problem. It may still feel like random selection from the investigator perspective, but it would be reassuring to know that a large pool of good applications is entering a portfolio management process, and that that process has a bigger picture in mind.
Principal Engineer
written by SoftMoneyResearcher, June 14, 2010
I agree with all of the comments: even a good proposal is not likely to get funded, and an excellent proposal faces the crapshoot method of selection. Thus, one must always submit an excellent proposal (exceedingly clear, concise, relevant, and innovative yet demonstrated to succeed) just to get into the group of fundable grants, but once in that group the rationale for support is often dependent on the distribution of other proposals (and applicants) in that group and is heavily weighted by the primary reviewer's expertise and bias. Most of the problem is a direct consequence of two factors: the large pool of applicants, spurred by research university metrics for tenure and promotion, and the very low paylines at NIH and NSF. Something has to give, and I believe that both attrition and absolute increases in funding for research are going to be part of the transition to sustainability.
I have heard that the UK is considering putting a cap on the number of proposals submitted per year (e.g. 3), and such a method would both accelerate convergence to a sustainable level of researchers and increase research productivity by eliminating the proposal mill. The US is unlikely to accept such approaches (on a cultural basis, if nothing else, since this would be anti-entrepreneurial), but other than increasing funds for research from the tax base, the result of doing nothing is simply prolonging the inevitable decline in the number of applicants while maintaining very low efficiency and productivity in US research and higher education.
Other deleterious consequences of the lack of will to balance grant funding with available resources are the stress on faculty (not to mention non-tenured researchers) and the clear disincentive for students in the US to pursue careers in academia. Thus the increasing proportion of foreign students in our graduate programs, and the flight of US science and engineering students to business, law and finance.
I'm not entirely off-topic here. I plan to stretch the rules as much as possible, submitting excellent applications based on related topics to as many different study sections and funding agencies as possible, as frequently as possible. Do the math: a full-time salary that also supports student RAs and experimental work requires three simultaneous grants, and even assuming 5-year projects, each requiring 2 submissions, with an overall probability of success of 5% (including non-scored applications...let's be realistic), I need to submit an average of one proposal per month. There is no way to survive in the long term unless one does some research and publishes too, so that means stretching the value of each independent idea by multiple simultaneous submissions.
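For readers who want the arithmetic spelled out, here is one way to reproduce that one-per-month figure from the assumptions stated above (three concurrent grants, 5-year awards, roughly 5% per-submission success); the numbers are the commenter's assumptions, not official statistics.

    # Back-of-the-envelope arithmetic using the figures assumed in the comment above.
    grants_needed = 3          # concurrent grants to cover salary, RAs, and experiments
    project_years = 5          # duration of each award
    success_rate = 0.05        # per-submission probability of funding, incl. unscored

    awards_per_year = grants_needed / project_years        # 0.6 new awards needed per year
    submissions_per_year = awards_per_year / success_rate  # 12 submissions per year
    print(submissions_per_year / 12)                        # 1.0 proposal per month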

If one is a fisherman (with a family) and the fishing is lousy, one has to fish a lot ... or find a new career. Those who stay are going to become very competitive indeed.
...
written by Observer, June 14, 2010
I would like to add something else I've seen happening during the previous 4+ years, which I think is a consequence of the funding crunch. Reviewers for 'private foundation' grants are increasing their political grip on the flow of funds: the same groups of people or their associates/collaborators are the ones consistently getting funded. These reviewers can get away with this activity more easily at small private foundation meetings than at an NIH study section, where they risk being branded. Heaven forbid the day that activity takes a grip on study section outcomes (although to some extent I'm sure it happens, but there are far more opportunities for grant applicants at NIH than at private foundations). If such politics become the norm also at NIH, then I will know it's time for a career change.

Watching all this has attracted me to exploring funding opportunities which I had ignored in the past: industry collaborations. I ignored them previously because of the lack of scientific freedom associated with such funding but I'm getting my ducks in a row now for such funding in anticipation of funding problems in the near future. I believe my output in terms of quality and quantity has never been better but the funding future never looked so uncertain.

Without getting too much off topic, advice to the individual with the original question: do not put too much emphasis on the reviews of your grant, as they may not accurately reflect the actual quality of the document you submitted. For practical purposes, repackage the ideas to reflect what you believe is the best science and cross your fingers.
staff
written by indentured servant, June 14, 2010
Very interesting stream of thought.
With respect to both the Observer's and Fan of the Observer's comments, what is the argument against a "transparent" review (where the reviewer is also identified by name), instead of a double-blind review process (which is difficult to really implement in an increasingly specialized world of focused researchers)?

I think that over the long run, it is likely to bring both civility and humility to the review process.
...
written by Observer, June 14, 2010
Given that the vast majority of successful grant applications (e.g. R01s) are those that contain preliminary data (preferably data already published in peer reviewed journals), it's virtually impossible in my opinion to submit a strong grant as an anonymous applicant. One way to do so would be to reference one's prior work as a third party, but what is the good in that? Reviewers should give credit to the current applicant for prior data he/she has published in that field, and the current applicant deserves the credit over their competitors if the key data in the field is their own. I like the idea of a transparent review, but I wonder whether there are too many potential social problems with that. Am I more or less likely to end up disliking a scientist for their professional opinion on my grant application if it is turned down for funding? Am I likely to even want to take the risk of being disliked by my peers by serving on study sections that aren't to some extent anonymous? These are practical issues that can't be easily solved. That's why, despite all the problems with the NIH review system, I think it's one of the best in the world. I agree with Fan of the Observer (I'm flattered, by the way, thanks!) that more reviewers for a single application would reduce the randomness of reviewer scores. This may work well with the new shortened grant applications but definitely would have been difficult with the old 25-page R01s, where it would be hard to serve as primary or secondary reviewer on too many applications. Recruiting reviewers is already difficult (I, for example, turned down the last request) simply because they have to write more grants now to stay afloat (as success rates have gone down). I know my colleagues are in the same boat. I just hope it's not sinking!
staff
written by indentured servant, June 14, 2010
Dr. Observer:
After all, we are all pretty collegial, are we not?

Some might argue that the needs of applicants, especially newly independent applicants, are not being well represented in a single-blind system. If criticisms are well thought out and valid, then why would you or other reviewers be afraid to voice them, and ascribe a numerical score that is associated with the reviews?

An advantage to the applicant/respondent would be that the reviewer's perspective/bias would become evident to the applicant, who could then respond in a more directed manner than trying to guess what the reviewer is asking for.
...
written by dtt, June 14, 2010
In my opinion, the present review process is almost missing the scientific component. One apparent reason for this is that there are no objective scientific criteria in the evaluation. The only solid criterion of scientific productivity known to me is the publication record. It, however, can easily be bypassed by the reviewers and, therefore, does not really work in the present "credit" system, in which one requests funding for future studies. A quite simple solution to this major problem would be to convert the "credit" line of funding into a "reimbursement" one, i.e., one in which the PI "sells" the finished (published) work to the funding agency. The money he/she gains (if any) may then be used for future studies with no specific applications, proposals, etc. Under such a path, the university that hires a new junior faculty member should probably support his/her research for the first 2-4 years, which to my knowledge happens anyway.
the real reason for poor NIH R01 funding lines
written by experienced reviewer, June 15, 2010
I have been a regular ad hoc and permanent study section member for many years now. The most obvious reason for NIH not having enough money to fund all meritorious proposals is the policy to pay most if not all of a PI's salary for investigators in soft money positions. This has led universities and research foundations to grow a large cadre of "faculty" who are actually unpaid entrepreneurs. This allows these universities to milk the cash cow of NIH indirect costs without putting much in the way of real resources into the mix (investigators in these institutions will tell you how little of this money comes back toward research infrastructure). The consequences to NIH budgets have become very obvious in the past few years, as a very large proportion of grant budgets are in the $400,000-plus range, $190,000 of which is just the PI's salary. In contrast, faculty at institutions that have a financial stake in their faculty (i.e., actually pay them and have real tenure) still typically submit modular budgets for $250,000/year or less. The most infuriating part of this is when two grants with similar needs for staff and reagents (and similar priority scores) come up for discussion by the study section in regard to budget. The $450,000/year grant from the investigator at the soft money institution is recommended for full funding, since the PI "needs" to pay themselves $190,000/year (why should the soft money institution care about paying reasonable salaries when it has no financial stake in the process?), while the grant from the hard money institution is cut a module since the perception is that they don't need this much money (and honestly, that hard money PI is unlikely to make such a salary if their employer actually has to pay the investigator).

I have discussed this with NIH program officers, who say that NIH policies tie their hands about this. The budget "needed" to do the work is largely assessed by the study section. Thus, every one of these "soft money" grants buys much less scientific productivity per dollar spent, as well as creating pressure for these investigators to have 2, 3 or more grants each to make the finances work out. While as study section members we are supposed to assess productivity from the prior grant period, with the new guidelines we are not supposed to look at productivity per dollar, just productivity per grant. This just perpetuates the whole problem.

If the system is going to be sustainable long term, NIH needs to start thinking about moving to at least a "partial" NSF model, which limits what percentage of a PI's salary can be charged to grants. This would need to be instituted over several years, of course, to minimize the impact on individual investigators, but the long-term consequence would be to build a more stable funding structure.
"NIH policies tie their hands"
written by Dr. Fred, June 17, 2010
I completely agree with experienced reviewer regarding the impact of cadres of soft money faculty members on NIH support. I began my academic career in 1960 as an assistant professor and filed my first R01 in September of that year. The result was a telephone call from the NIH liaison with helpful comments: (a) suggested alterations in the research plan and (b) a recommendation to carve out a second application from the first for submission to a different Institute. I made the suggested changes and both grants were funded. That was the situation in 1960, partly because Congress was frightened by the success of the Soviet space program, which had put the world's first artificial satellite in orbit, and responded by handsomely supporting American science. But matters are very different today, in an era of a broad variety of other priorities for funds at all levels of government. NIH needs to recognize this and understand that the support of soft money faculty, attendance at scientific meetings in Sicily, etc., may have been reasonable in better funded times, but it has to be curtailed in view of the effects of years of inflation unaccompanied by commensurate levels of increased funding by Congress.
Professor of Biochemistry
written by Chris Francklyn, June 18, 2010
A tip of the hat to all of the folks who posted...there is some good discussion here that will be great material for future columns. In particular: "Is there a better way of organizing study sections and the review process?" At the end of the day, the comments by "experienced reviewer" are particularly salient, because they emphasize that a lot of the current angst is the result of Institutions viewing the NIH Doubling as a Great Opportunity to leverage their research activities. The increased funds during the years of the doubling were over-matched by an increase in the number of applications. And now all those newly minted PIs are trying to re-compete those grants. "Experienced Reviewer" is also correct that the size of the budget, or more accurately, the amount of science/$$$ awarded, is not used in the scoring. Will that persist? We'll see.
PI
written by SoftMoneyResearcher, June 19, 2010
I am a 100% soft money researcher, and though I make considerably less than the $190k suggested by "experienced reviewer" (I make about $110k/12 months, and am 55 yrs old, and feel like tenured faculty around me make more money with less stress) I completely agree with the point that soft money positions are a bad deal for NIH and for the PI, and are unsustainable. In spite of not having a stake in the salary, pay raises for soft money researchers are hard to come by. I am clearly a net cash earner for my state institution: an entrepreneur with no reward for the risks I take. This is both too expensive for the NIH, and unfair to the PI. Getting the large state research university to provide salary support for researchers is an absolute impossibility when states are broke.
We are suffering a very serious crisis across the board in the US, and it is time to re-prioritize our activities as a nation. Until we stop doing exorbitantly expensive and unproductive activities, especially waging war and feeding the military weapons and "national security" industrial complex, we will not have the resources to do the things that really improve quality of life: education, infrastructure, innovation, industry, environment. It will be improbable to see NIH budgets increasing again any time soon, and attrition of soft money investigators is a near-certain outcome. Sad to have missed out on the good old days, but we won't get them back by spending money on war, and the war machine. Talk about expensive!
