Jun 14, 2010 | No. 33
GRANT CLINIC
How Can I Preserve My Data When Re-Submitting a Rejected Grant?
Reader Question: My R01 grant was rejected, not because the experiments were flawed but because the topic didn't "click" with some study section reviewers. I will submit it to a different study section, but because the rejected grant is already the A2 version, it must be greatly revised and submitted as a new grant. What can I do to "rescue" these experiments?
Expert Comments:
When one receives less than a laudatory score for an application, it is easy to conclude that the panelists “didn’t get it.” In that case, it seems to follow that all one needs to do is move the application to a different review panel or a different funding agency.
Although this strategy occasionally works (it did for this author on one occasion), far more often it does not.
In order to plan your next move, you should read the roster sheet for the panel that reviewed your grant and do a Medline search on the panelists to determine their specific research areas. If none has published in your area, then it is possible that they might not have the expertise to appreciate the significance of your work.
Typically, the study section’s Scientific Review Officer will work hard to ensure that every application gets at least one and perhaps two expert reviewers. If there truly are no experts, and you can identify another study section that has more specific expertise, then specifically requesting that alternative panel when you resubmit might be advantageous.
However, it is also possible that, if your topic didn’t "click" with the reviewers, they might have detailed knowledge of your field but have been unconvinced about the significance of either your central question or hypothesis. This can lead to good scores for approach and investigator but low scores for significance and overall impact.
Read the comments carefully and discuss with your program officer to confirm whether that scenario applies. An analysis of the comments also may help to determine whether the readers understood what you were trying to do.
To “rescue” this project, consider holding on to the experiments and data that you’ve collected. If the data are novel, accurate, and informative, they still might form the basis of a new application. Then go back and challenge all the assumptions that went into the initial question and hypothesis: Are they still appropriate and timely? Has your field moved on since your first submission? Consider significant revisions to your central question and hypotheses, and then use these to re-craft your title and aims (additional experiments might be required).
Float it by a disinterested (but friendly) colleague to see if they find it intriguing. These changes will certainly constitute a new application.
Splitting your original grant into pieces and then submitting each as a new grant carries a significant risk of diluting the impact. (It might make sense if the reviewers felt your original grant was too broad and sweeping.)
At the end of the day, what matters most is building a case that your question and hypothesis are truly at the leading edge of the field: Anything less, and you are likely to fall short of a successful application.
Comments by Christopher Francklyn, PhD, a former study section chair and veteran reviewer for NIH and NSF study sections. He is a professor at the University of Vermont.
This eAlert is brought to you as an informational training tool by the Principal Investigators Association, which is an independent organization. Neither the eAlert nor its contents have any connection with the National Institutes of Health (NIH), nor are they endorsed by this agency. All views expressed are those personally held by the author and are not official government policies or opinions.
written by Robert E. Buxbaum, PhD, June 14, 2010
If your topic is uninteresting, or the area is considered unimportant, no amount of new data will help your application, I'm sorry to say. If your topic is interesting and worthwhile, and your current data show your approach is likely to work, you're on much firmer ground; you're now in the realm of luck and finesse. You should tweak your proposed experiments to highlight the logic, and you should add more data, if only to show that you have not left the field. Still, the fact that most of your data is old should not be a killer problem. To show that you work well with the intellectual community, you may want to publish the data you showed in your previous proposal, and you may want to include in that article your argument for future work. For the new proposal, I'd present that published data (probably with the reference marked as "in press"), and I'd add enough new "preliminary results" to take care of any minor reviewer questions. Good luck, and don't get discouraged. Few batters have lifetime averages above .300, and even Nobel laureates get their proposals rejected.
written by Fan of the Observer, June 14, 2010
1) Double-blind review. That takes away the advantage of knowing the authors but removes a lot of biases. Let the bureaucrats check the competence level.
2) More reviewers. New technology makes it possible to have 5 or 7 reviewers.
3) Elimination of panels. Leave the job of distributing money to bureaucrats. They are less competent but much easier to move around.
Once a panel establishes a circle of mutual adoration, changing its dynamics is awfully hard (not to say expensive).
There is a beautiful illusion of competence. In modern science, familiarity with an individual technique is make or break. Very few people on the panels are savvy enough to have a sufficiently broad view.
Many of those are inexperienced young friends of powerful panelists. I have seen people on panels with fewer than 5 publications. I have a friend with more than 500 publications who is routinely not invited to participate in the panels.
The biggest problem is that a lot of proposals are simply not read at all. Involving many more people in the review process over the web would be the solution. Reviewers do not need to be from a particular field: there should be three primary reviewers from the given field, and the rest can be random. Good science will float to the surface no matter what. In the present system, interesting science can be kept out by a few gatekeepers.
written by SoftMoneyResearcher, June 14, 2010
I have heard that the UK is considering putting a cap on the number of proposals submitted per year (e.g. 3) and such a method would both accelerate convergence to the sustainable level of researchers, and increase research productivity by eliminating the proposal mill. The US is unlikely to accept such approaches (on a cultural basis, if nothing else, since this would be anti-entrepreneurial) but other than increasing funds for research from the tax base, the result of doing nothing is simply prolonging the inevitable decline in the number of applicants while maintaining very low efficiency and productivity in US research and higher education.
Other deleterious consequences of the lack of will to balance grant funding with available resources are the stress on faculty (not to mention non-tenured researchers) and the clear disincentive for students in the US to pursue careers in academia. Thus the increasing proportion of foreign students in our graduate programs, and the flight of US science and engineering students to business, law and finance.
I'm not entirely off-topic here. I plan to stretch the rules as much as possible, submitting excellent applications based on related topics to as many different study sections and funding agencies as possible, as frequently as possible. Do the math: a full time salary that also supports student RAs and experimental work requires three simultaneous grants, and even assuming 5 year projects, each requiring 2 submissions, with an overall probability of success of 5% (including non-scored applications...let's be realistic), I need to submit an average of one proposal per month. There is no way to survive in the long term unless one does some research and publishes too, so that means stretching the value of each independent idea by multiple simultaneous submissions.
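For what it's worth, the commenter's back-of-the-envelope arithmetic can be checked in a few lines (the 3-grant, 5-year, and 5% figures are the comment's own assumptions, not official statistics):

```python
# Sketch of the "do the math" estimate above, using the comment's assumed figures.
grants_needed = 3        # simultaneous grants needed to cover salary, RAs, and lab work
project_years = 5        # assumed duration of each award
p_success = 0.05         # assumed overall probability that any one submission is funded

# To keep 3 grants running, one must land a new award every 5/3 years on average.
awards_per_year = grants_needed / project_years

# At a 5% hit rate, each award requires ~20 submissions in expectation.
submissions_per_year = awards_per_year / p_success

print(round(submissions_per_year))  # -> 12, i.e., about one proposal per month
```

So the "one proposal per month" figure follows directly from the stated assumptions; tightening any of them (longer awards, higher success rates) lowers the required submission rate proportionally.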
If one is a fisherman (with a family) and the fishing is lousy, one has to fish a lot ... or find a new career. Those who stay are going to become very competitive indeed.
written by Observer, June 14, 2010
Watching all this has attracted me to exploring funding opportunities which I had ignored in the past: industry collaborations. I ignored them previously because of the lack of scientific freedom associated with such funding but I'm getting my ducks in a row now for such funding in anticipation of funding problems in the near future. I believe my output in terms of quality and quantity has never been better but the funding future never looked so uncertain.
Without getting too far off topic, my advice to the individual with the original question: do not put too much emphasis on the reviews of your grant, as they may not accurately reflect the actual quality of the document you submitted. For practical purposes, repackage the ideas to reflect what you believe is the best science and cross your fingers.
written by indentured servant, June 14, 2010
With respect to both the Observer's and Fan of the Observer's comments, what is the argument against a "transparent" review (where the reviewer is also identified by name), instead of a double-blind review process (which is difficult to truly implement in an ever more specialized world of focused researchers)?
I think that over the long run, it is likely to bring both civility and humility to the review process.
written by indentured servant, June 14, 2010
After all, we are all pretty collegial, are we not?
Some might argue that the needs of applicants, especially newly independent applicants, are not well represented in a single-blind system. If criticisms are well thought out and valid, then why would you or other reviewers be afraid to voice them and attach a numerical score to the reviews?
An advantage to the applicant would be that the reviewer's perspective or bias would become evident to the applicant, who could then respond in a more directed manner instead of trying to guess what the reviewer is asking for.
written by experienced reviewer, June 15, 2010
I have discussed this with NIH program officers, who say that NIH policies tie their hands on this. The budget "needed" to do the work is largely assessed by the study section. Thus, every one of these "soft money" grants yields much less scientific productivity for the same financial outlay, as well as pressure for these investigators to hold 2, 3 or more grants each to make the finances work out. While as study section members we are supposed to assess productivity from the prior grant period, under the new guidelines we are not supposed to look at productivity per dollar, just productivity per grant. This only perpetuates the problem.
If the system is going to be sustainable long term, NIH needs to start thinking about moving to at least a "partial" NSF model which limits what percentage of a PI's salary can be charged to grants. This needs to be instituted over several years of course to minimize impact on individual investigators but the long term consequence would be to build a more stable funding structure.
written by SoftMoneyResearcher, June 19, 2010
We are suffering a very serious crisis across the board in the US, and it is time to re-prioritize our activities as a nation. Until we stop doing exorbitantly expensive and unproductive activities, especially waging war and feeding the military weapons and "national security" industrial complex, we will not have the resources to do the things that really improve quality of life: education, infrastructure, innovation, industry, environment. It will be improbable to see NIH budgets increasing again any time soon, and attrition of soft money investigators is a near-certain outcome. Sad to have missed out on the good old days, but we won't get them back by spending money on war, and the war machine. Talk about expensive!