At some point in our youth, we become interested in having a partner. We meet at a party or class, exchange phone numbers, and arrange to go on a date. After a few dates where we get to know each other better, one of us decides that maybe it’s time for us to go our separate ways. The conversation goes:
Me: “Chris, I think we should stop seeing each other.”
Chris: “What?!? Things were going so great. What did I do?”
Me: “No that’s not it, it’s not you, it’s me.”
In interacting with the world to better understand its inner workings, it’s helpful to have alternative hypotheses:
Hypothesis 1: “it’s not you, it’s me”
Hypothesis 2: “it’s not me, it’s you”
In this scenario, dates are opportunities to gather data about this potential partner and see if these data match the image in our heads of the perfect partner. With a sampling of data, we make a decision. Typically, when we are younger, we think the data support hypothesis 2, even though we may say hypothesis 1 as a way to shut down the conversation. Once we have a better understanding of people, we may realize that hypothesis 1 is actually in play. After some self-reflection, we update our expectations about what we want in a partner.
This reminds me of the conversations that I have with people about proposals I write that get rejected. In this case, we can think of two hypotheses:
Hypothesis 1: “I am inarticulate in my writing such that the reviewers don’t understand what I’m proposing”
Hypothesis 2: “I have clearly articulated my ideas in writing but the reviewers don’t have the background to understand what I’m proposing”
While at Entelos, I began to appreciate some of the significant gaps in our understanding of biology, particularly in the context of disease (see [old man] post). One of my motivations to return to academia was to help fill these gaps (see hypocrite post). To do that, I need funding. I typically turn to either the National Science Foundation (NSF) or the National Institutes of Health (NIH). I don’t waste my time with most private foundations, as they only fund people located at institutions on their select list. As if good ideas only come from top-tier schools – whatever. Typically I propose a project that involves both mechanistic mathematical modeling and wet experiments (see [1,2,3] as examples). The two are intimately linked – without the mathematical model, the experimental design may seem excessive; the mathematical model without experiments is just theory. As a proposal is limited in length, these integrated wet/dry proposals have to cover twice the intellectual ground in the same number of pages as a proposal focusing on just experiments. I’ve had some success: a CAREER Award from NSF (and an EAGER follow-on, which is viewed as a reward for productivity on the CAREER) and an NCI R01 that was scored at the 7th percentile when it was funded at NIH.
There is simple theory – the kind you can keep in your head, like evolution or the Michaelis-Menten relation – and complicated theory – which is better articulated using mathematics and, more recently, simulation. The problem with theories that you keep in your head is that they become cartoons. These cartoons are tested qualitatively using pen-and-paper methods developed over 100 years ago, that is, null hypothesis testing. When you use a mathematical model, though, the theory is explicit and can be tested quantitatively using simulation. Using mathematics to describe and predict the behavior of a system is common practice in engineering, where we focus on systems of our own creation, like chemical factories, cars, or computers.
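To make that concrete, here is a minimal sketch – my own illustration, with made-up parameter values, not anything from a proposal – of what it means to make a theory explicit: the Michaelis-Menten “cartoon” written as code and simulated forward to produce a quantitative prediction that an experiment could then test.

```python
# A minimal sketch of turning the Michaelis-Menten "cartoon" into explicit,
# simulatable theory. Parameter values (Vmax, Km, S0) are illustrative only.
import numpy as np

def michaelis_menten_rate(s, vmax=1.0, km=0.5):
    """Reaction velocity v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def simulate_substrate_depletion(s0=2.0, dt=0.01, t_end=10.0):
    """Forward-Euler simulation of dS/dt = -v(S); returns times and S(t)."""
    times = np.arange(0.0, t_end, dt)
    s = np.empty_like(times)
    s[0] = s0
    for i in range(1, len(times)):
        s[i] = s[i - 1] - michaelis_menten_rate(s[i - 1]) * dt
    return times, s

times, substrate = simulate_substrate_depletion()

# An explicit, quantitative prediction that a carefully designed experiment could test:
half_idx = np.argmax(substrate <= 1.0)
print(f"Predicted time for substrate to fall to half its initial value: {times[half_idx]:.2f}")
```

The cartoon version of this theory says “the enzyme saturates”; the explicit version predicts a number you can go measure.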
Towards this aim, there have been repeated calls for integrating theory in biology [4-7]. With the advent of technology to measure cellular behavior at increasing resolution, these calls are increasingly motivated by a desire to make sense of the deluge of “big” data. Though, in a nod to Ernest Rutherford, having a bigger magnifying glass doesn’t change the fact that it’s still stamp collecting. By incorporating theory into designing experiments, the focus shifts from “big” to “small” data, where “small” implies carefully designed experiments to test predictions derived from the mathematical model that represents current understanding. So with all of these voices calling for more efforts at the theory-experiment interface, my proposals should be a slam dunk.
In theory, yes, but practice is often different. In the beginning, when my proposals were rejected, I thought it was me because that was what everyone else was telling me. If they were from a funding agency, it was a way to shut down the conversation, because it couldn’t possibly be them. But after improving my grantsmanship by writing many proposals, asking peers for feedback, receiving spurious reviews, and reading hundreds of proposals at study sections, I’m skeptical that it’s all me. I can’t help but wonder: why are these voices repeatedly calling for change? Well, let’s talk about proposal review and academic culture.
Depending on the agency, proposals for funding are grouped together based on similar topics, like cancer immunology, and reviewed by a panel of academics who have related expertise. If you decide to apply to the NSF, thanks to Vannevar Bush, the proposal needs to focus on fundamental questions and not be directly related to human health. For instance, funding divisions at NSF may support quantitative studies of plant and insect systems or of cells that produce things of commercial importance.
At the NIH, there are over 200 standing panels devoted to specific topics. The number and distribution of these panels are influenced by the number of proposals that they receive. For instance, I was a member of the Cancer Immunopathology and Immunotherapy study section, which focuses on studies that evaluate immunotherapies in a dish, in mouse models, and in humans. Due to a recent increase in the volume of proposals in this area, it was merged with another study section and the collective reviewers were split into three standing panels. Now I’m on the Therapeutic Immune Regulation study panel, which focuses more on identifying strategies to overcome resistance to immunotherapies. If instead you propose a study that focuses more strongly on using mechanistic mathematical modeling and simulation to analyze a biomedical system and send it to NIH, there is one panel covering this topic across all types of biomedical systems. The sheer difference in the number of study sections suggests how rare it is to find individuals who work at this interface. It’s good not to be in a crowded field, but it’s also bad because very few individuals understand what you propose.
No matter where you submit, each proposal gets evaluated by nominally three reviewers drawn from academia – that is, your “peers”. You don’t know who the particular reviewers are. You might think that having one study section devoted to modeling systems across the biological spectrum would be a good thing. I’d have to disagree because, in this context, a sample of three reviewers is an unreliable predictor. While each reviewer may be a world expert in their application of modeling and simulation to a biological system, each has their own take on which biological questions are important and which quantitative methods are most appropriate. It wouldn’t be a stretch to say that we all have a preferential bias towards the systems and approaches that we work on. In contrast, a study section focused on some aspect of biology – like Therapeutic Immune Regulation – can largely agree on what the key questions are in the field. Unfortunately, these biology-focused study sections also have a bias against mechanistic mathematical modeling and simulation. Many students choose biology because they have an aversion to math. At least at NIH, a poor score from even one reviewer can torpedo your application for funding consideration.
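A toy simulation makes the sampling problem plain. The numbers below are assumptions of mine (not real review data): suppose some fraction of potential reviewers simply lack the background, or inclination, to score interdisciplinary work well, and ask how often a randomly drawn panel of three contains at least one of them.

```python
# A toy simulation (assumed score distributions, not real review data) of how often
# a sample of three reviewers "torpedoes" a proposal that most of the field would rate well.
import random

random.seed(42)

def fraction_torpedoed(n_reviewers=3, p_unsympathetic=0.3, n_trials=100_000):
    """Fraction of trials in which at least one sampled reviewer gives a poor score.

    p_unsympathetic: assumed probability that a randomly drawn reviewer lacks the
    background (or inclination) to score interdisciplinary work favorably.
    """
    torpedoed = 0
    for _ in range(n_trials):
        if any(random.random() < p_unsympathetic for _ in range(n_reviewers)):
            torpedoed += 1
    return torpedoed / n_trials

# Even if only 30% of potential reviewers are unsympathetic, about two-thirds of
# three-reviewer panels contain at least one: 1 - (1 - 0.3)**3 = 0.657.
print(f"Fraction of 3-reviewer panels with at least one poor score: {fraction_torpedoed():.3f}")
```

Under those assumed numbers, a proposal that 70% of the field would review favorably still draws a potentially fatal score most of the time – which is what an unreliable predictor looks like.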
Ok so what about academia?
Academia is strange (see butterflies/rainbows post) – it’s the only place where we funnel people into these artificial pipelines called disciplines. Disciplines don’t exist in K-12 and don’t exist in the workplace. I understand their existence – it’s a way to shape students into a marketable product for potential employers. Once in, though, academics typically never leave these pipelines. They become part of the institution and help reinforce the culture (qualifying exam, anyone?).
Speaking of culture, each academic discipline has different norms of behavior. Within a discipline, these norms are expressed in how disciples approach problems: what techniques they use, what standards of evidence are used to support claims, what problems they think are important, and what venues for disseminating their findings are considered impactful. Disciples learn these norms by interacting with their disciplinary communities, and mentors shepherd trainees in navigating them. Implicitly, then, organizers of peer review select disciples from academia who help reinforce community norms. In peer review, interdisciplinary work is naturally de-emphasized because the proposed work falls outside the norms associated with each discipline.
So then, how do communities evolve to remain relevant and innovate? They need to incorporate perspectives, approaches, and techniques that are initially marginalized into the mainstream – that is, they transport ideas across disciplines [8]. More diverse voices at the table help speed innovation. This is one of the benefits of the increased push to improve diversity, equity, and inclusion. Recognizing the challenges with funding interdisciplinary work, funding agencies in Denmark, New Zealand, Germany, and Switzerland have incorporated aspects of a lottery system into awarding grants [9].
I guess the funding agencies in the US don’t think this is a problem, though the data suggest otherwise. For instance, just 1% of NIH-funded investigators get 11% of the grant dollars devoted to research projects, and 40% of the money goes to 10% of funded investigators [10]. The author of this study also noted that the sweet spot for peak lab productivity was around $400K/year per investigator, with productivity falling off at the upper and lower ends. Over 50% of cumulative funding goes to labs that receive greater than $800K/year per investigator, suggesting that the majority of public funds directed towards NIH are not being used effectively. Interestingly, the author proposes that limiting labs to $800K/year per investigator would free up enough money to support more than 10,000 additional unfunded investigators at the productivity sweet spot of $400K/year. As my lab currently falls into the unfunded category, that would be awesome, but I’m not holding my breath for that to happen. The cynic in me advises junior faculty not to do interdisciplinary work. The optimist in me says I have a proposal to write. Who knows – maybe explaining Bayesian parameter inference for age-structured models using Markov Chain Monte Carlo methods to mechanistic modeling and simulation agnostics will lead to a book contract with the “For Dummies” series.
References
4. Bray D. “Reasoning for results” Nature (2001) 412:863.
7. Phillips R. “Theory in Biology: Figure 1 or Figure 7?” Trends in Cell Biology (2015) 25:723-729.
8. Wu L, Wang D, Evans JA. “Large teams develop and small teams disrupt science and technology” Nature (2019) 566(7744):378-382.
9. Mesa N. “Q&A: A Randomized Approach to Awarding Grants” The Scientist (2022)
10. Wahls WP. “The NIH must reduce disparities in funding to maximize its return on investments from taxpayers.” eLife. (2018) 7: e34965.