The State of Academic Research on Nicotine, Part 3
Some thoughts on how to improve things
This is a multi-part series on content that I first presented in my Michael Russell Oration at the Global Forum on Nicotine conference in Warsaw, June 2025 (full video). This is a high-level and wide-ranging synopsis of: (Part 1) the pervasive, but often preventable, flaws in research on nicotine and tobacco; (Part 2) how problematic incentive structures in academia have contributed to the state of the research; and (Part 3, here) what we can do about it. This is my own perspective, based on ~10 years in academia and ~5 years as a consultant to industry.
In Part 1 of this series, I described how academic research on nicotine/tobacco is both highly polarized and hostile, and plagued by pervasive (but, to an extent, preventable) flaws. In Part 2, I discussed how the research got to this state from the perspective of academic research incentives. Here in Part 3, I wrap up with some ideas for how to improve things.
Academic Funding In Flux
This year in the US, there have been severe actual or threatened cuts not only to the number of NIH grants, but to the rate of indirects. I talked about indirect costs in Part 2; if you missed it, indirect costs are essentially an extra percentage tacked onto a research grant, paid by the funding agency directly to the university with the intention of covering overhead and operational costs. My basic thesis in Part 2 was that NIH's singularly high indirect rates (relative to other funders in public health and behavioral research) have incentivized researchers to conform to NIH's funding priorities for their survival in academia, and that this is the main driver of low-quality and anti-tobacco harm reduction (THR) research.
Setting aside the current political fights over this topic: on one hand, some academic research is important and should continue to be funded. On the other hand, a lot of these grant dollars aren't being well spent, given the abundance of flawed research in this field (see Part 1), and I do think the incentive structures in academia have to change. In my understanding of the system, reducing indirects in grants would be the #1 thing to change, but there are important caveats that I'll go into below. It would have to be done carefully, and would only be successful if other conditions are met (namely, the existence of other funders with different priorities and viewpoints).
How will universities respond?
Clearly there’s a lot of uncertainty right now, both on NIH’s side (especially given the legal challenges to recent actions) and in the response of universities. Assuming substantial and permanent cuts either to the number of NIH grants or to the indirect rate NIH is willing to pay universities, I’ll outline a pessimistic and an optimistic scenario:
Pessimistically, I can see universities responding by pressuring their faculty even harder to go after a shrinking pool of money, especially at first. Eventually they would have to face the reality that the money they’ve come to depend on to support high-paid administrator salaries and sustain the grant-applying workforce will not be there; they would then raise already sky-high tuition rates on students and rely more on low-paid adjunct positions to teach courses. (I have complete respect and admiration for adjunct instructors, because it’s a lot of work for little pay and low job security.) Pessimistically, since the high-paid administrators are in charge, they will preserve their own salaries with these actions, without any real change to the system.
Optimistically, the pessimistic route would eventually become unsustainable, and might force universities to reconsider their expectations that faculty members secure grant funding, and what counts as an acceptable funding source. If the chances of NIH funding are so low that administrators are forced to recognize it’s unreasonable to expect every single academic researcher to get a grant, or if more equalized indirect rates lessen the unique appeal of NIH grants for institutions, then universities could eventually become less selective about the kinds of research grants they expect or require faculty members to get. After all, most research grants offset at least a percentage of the research team’s salaries, which (depending on the university) lowers the university’s costs. Ultimately, if a variety of funders were acceptable, that could broaden scientific inquiry and result in a wider variety of what’s considered an acceptable research topic.
For the Optimistic Scenario to Happen:
The big missing piece preventing my optimistic scenario from happening is the limited set of available big funders of academic research. There are other funders of academic research, but unfortunately they all have the same stance on THR.
So even if NIH grants are cut and universities re-calibrate their funding expectations to equally value grants from all sources, the current options would not improve the state of the science. They would all continue to incentivize research that focuses on the harm — rather than the harm reduction — of lower-risk nicotine products.
Ultimately what we need is a range of funders with diversity of thought. This might open up a role for grants or contracts from industry, but that’s an enormous barrier to overcome given the hostility towards industry in this field (see Part 1). Still, if universities are forced to be less selective about research funding, and, even more optimistically, if grants or contracts from industry become more normalized in this field (just as grants from the pharmaceutical industry are common in medical research), might this help to mend the ostracization of the noncombustible nicotine industry? I acknowledge there are problems with industry being too dominant in research (in many fields), but there are established ways of handling this (e.g. full disclosure of financial and non-financial conflicts of interest). An environment that normalizes researchers of different viewpoints interacting with each other seems preferable to the current hostile and polarized silos, which are a detriment both to the state of the research and to public health.
Communication beyond Academic Articles
One more anecdote from academia. Here’s the top of my Google Scholar profile, with my papers listed in descending order of citations:
By far, my most commonly cited paper (and therefore, arguably, my highest-impact one), by a factor of ~20 (see yellow highlight above), is a technical guide to performing a very niche statistical procedure: calculating a specific kind of effect size in a specific kind of statistical model:
The strict boundaries of academic publication
There’s an interesting story behind this paper: I almost didn’t publish it, or if I had, it would have been buried in the methods or supplement of a different (and far less cited) paper. The other “main” paper focused on risk factors for adolescent smoking, and I found myself needing to calculate a very specific kind of effect size. (For anyone curious, this effect size compares different predictor variables in the same model, which were a mix of categorical and continuous variables measured on different scales, and determines their relative contribution to the model.) I couldn’t find an existing software procedure to calculate this, so I pieced together different parts of the process, wrote it up informally, and passed it to my postdoc advisor and collaborators to make sure I wasn’t getting anything wrong.
My postdoc advisor (who was a wonderful mentor) suggested I write this up as its own paper. I hadn’t thought of that because I didn’t invent anything new in this process — I only pieced together different parts of what other people had done from different sources — and I was painfully aware by then how hard it was to get anything published in a traditional journal that wasn’t “novel.”
So, I decided to go open-access. A brief explanation about open-access vs. traditional print journals: “traditional” journals make money through expensive subscriptions that are usually paid by university libraries. It’s free for authors to publish in these journals, but the content itself is usually paywalled (unless the research is supported by NIH; in a positive move for open science, NIH announced in June 2025 that all NIH-funded articles must be made publicly available immediately, replacing the prior 2008 policy allowing journals a 12-month embargo before the content went open-access). Since traditional journals come from a tradition of physical printing, they are more choosy about articles because they have physical space limitations and prefer “novel” or exciting studies. Open-access journals, on the other hand, make their money through hefty publication fees (paid by the authors, often through a grant) while making the eventual content freely available. Because open-access journals are usually online-only and don’t have the physical space limitations (not to mention the financial motive to collect more publication fees), they are less choosy about articles being “novel” as long as the study is scientifically sound.
Open-access was the right move for this paper because, again, I wasn’t inventing a new effect size or statistical procedure; I was just assembling different pieces of the process in one place and providing example code. The success of this paper speaks for itself: published as a standalone statistics paper rather than folded into the gory details of a content-specific paper, it spread far more widely across many different research fields.
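To give a flavor of what this kind of calculation involves, here is a simplified sketch of one common local effect size, Cohen's f², computed by comparing the full model's R² to the R² of a model with one predictor removed. (This is an ordinary-regression illustration with made-up variable names, not the exact procedure from the paper, which involved a more complex model.)

```python
import numpy as np

def r_squared(X, y):
    # Least-squares fit with an intercept; return the coefficient of determination.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def cohens_f2(X, y):
    # Local effect size f2 for each predictor: (R2_full - R2_without_j) / (1 - R2_full).
    # This puts predictors on a common footing regardless of their original scales.
    r2_full = r_squared(X, y)
    return {j: (r2_full - r_squared(np.delete(X, j, axis=1), y)) / (1 - r2_full)
            for j in range(X.shape[1])}

# Simulated data: one continuous and one binary (categorical) predictor,
# measured on different scales -- hypothetical names for illustration only.
rng = np.random.default_rng(0)
n = 500
age = rng.normal(16, 1.5, n)                       # continuous
peer_use = rng.binomial(1, 0.3, n).astype(float)   # binary 0/1
y = 0.5 * age + 2.0 * peer_use + rng.normal(0, 2, n)
X = np.column_stack([age, peer_use])
print(cohens_f2(X, y))
```

Because each predictor's contribution is expressed relative to the model's unexplained variance, the resulting f² values can be compared directly even though the raw coefficients are on incomparable scales.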
How did the academic system respond to the success of this paper? (Note: I’m not talking about my postdoc advisor, who was amazing, I’m talking about when for some reason this paper came up in my annual evaluation years later and at a different institution). I was essentially slapped on the wrist:
This article doesn’t count. Don’t publish in open access journals; they’re predatory.
How can my most successful article have hurt my performance evaluation? Thankfully, this evaluation didn’t have any lasting negative impact on my career, but it did get put into writing as an area for improvement (i.e. not to publish in “predatory” journals). Predatory journals are a problem — collecting the hefty open-access publication fees from authors for minimal or no peer review — but not all open-access journals are predatory. The journal I published in had already passed a few essential quality checks to be a reputable journal (indexed in PubMed and a decent impact factor). As I said earlier, if this paper wasn’t published in an open access journal, it wouldn’t have been its own paper at all. Ironically, I think folding this into a much lower-impact paper that only tobacco researchers would read might have actually been more favorable to my performance evaluation.
Broadening Communication of Academic Articles
These strict rules around what “counts” as proper academic communication are a detriment to the impact of academic research. When even open-access scientific journals are not considered “pure” enough, academia has a communication problem. This may be changing (see Künzli et al. 2025 for a defense of open-access journals) but in my experience academia has firm believers that only traditional print journals are valid outlets.
But academic communication needs to go beyond dialogue with other like-minded academics. Even dialogue with other academics who are not like-minded would be an enormous improvement because that would help to check the rampant confirmation bias and interpreting ambiguous results in support of preconceived beliefs (I covered an example here). U Penn’s Adversarial Collaboration Project is a promising initiative towards this goal, and I am trying to pursue one myself.
However, more communication in and of itself isn’t always better. In theory, more public dissemination of science should be a good idea, but many times in practice it makes things worse. A recent case in point (link):
I don’t have space here to debunk this article, but just about every phrase in the title is wrong. It was from a study not only not-yet-published, but not even completed yet; it looked at short-term transient physiological responses, not clinically-relevant diseases; and it was nowhere near the “first ever study” of its type.
Unfortunately this alarmism in the media is common, since alarmism sells and many researchers are happy to self-promote. The limitations and caveats of the study — which are necessary for scientific publication — didn’t make it into the media. Gal Cohen covers another such case here. So maybe it’s not more communication per se that we need, it’s greater impact. This includes strengthening the quality of the research itself.
Some Ways to Improve Things
For academics: I didn’t realize until I left academia how indecipherable academic language is to other audiences. We write to other academics in formal scientific language with a lot of field-specific terminology. As a recovering academic, I’ve had to learn different communication styles to speak with clients, regulators, and lay people. It is a learned skill and something I actively continue to work on. It’s difficult to simplify descriptions of your research while telling an accurate and complete (including with limitations) story.
Communication outside of academia is a skill that can be learned, and academics should more often acquire this skill. If communication could be improved — e.g. explaining the study without overselling it or omitting key limitations — that could improve the low-quality media coverage of research.
Media: This is partly interrelated to the above point — that researchers need to get better at communicating their research and its limitations in plain language — but media bears responsibility too. Journalists need to get better at asking challenging questions, and above all understand the harms they are causing with alarmist coverage, i.e. scaring people away from reduced-risk nicotine products and keeping them using tobacco in its most harmful form.
For universities & journals: These are some of the important leverage points for research quality. There is a slow move towards open science (including providing data and analysis code with every publication), and if journals require it, researchers will have to comply. Many times when I suspect or identify flaws in a published article, I don’t truly know how bad things are until I start trying to re-analyze the data (if it’s publicly available). Similarly, preregistration is also important for scientific integrity because it should cut down on “fishing expedition” papers, where someone takes a large dataset such as NHANES and goes down the long list of possible health outcomes to see what’s significantly associated with e-cigarette use. (The problem is that with multiple such analyses, the chance of finding a false-positive result increases drastically; and none of the non-significant findings get published, so what makes it into the literature is biased.)
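To put a number on that multiple-comparisons problem: if each of m independent tests is run at the conventional 5% significance level, the chance of at least one false positive is 1 - (1 - 0.05)^m, which grows quickly with the number of outcomes tested:

```python
# Family-wise false-positive risk when testing m independent outcomes,
# each at the conventional alpha = 0.05 significance level.
alpha = 0.05
for m in (1, 10, 30, 100):
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(f"{m:>3} outcomes tested -> {p_any_false_positive:.0%} chance of >=1 false positive")
```

With just 10 outcomes tested, the chance of at least one spurious “significant” association is already about 40%; with 100, it is a near certainty. That is exactly why preregistering which outcomes will be tested matters.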
It would also be simple for universities to incorporate these science-integrity practices into annual evaluations. Universities could also encourage adversarial collaborations to reduce the silos and confirmation bias in this field.
Consumer voices. The call “nothing about us without us” has been accepted in other research and public health fields, but it has not yet entered the mainstream of nicotine research. Since a majority of academic papers conclude with a recommendation for some tobacco control policy or intervention, shouldn’t we be talking to the people who are affected? Frankly, there’s a lot of paternalism towards people who use nicotine or tobacco, treating them as if they are passive victims of marketing who need strong regulations to protect them.
Consumers and advocates are sharing their powerful stories on social media and it’s starting to make a difference (e.g. having a presence at more conferences). Facts can change some people’s minds, but emotional personal stories are more effective. Getting to know some of the consumers on Twitter/X has been the great unexpected joy of leaving academia for consulting. We researchers have a lot to learn from them. For example, if actual consumers were involved in research, maybe there wouldn’t be so many flawed toxicology studies using “dry puff” conditions that no human would ever tolerate in the real world (see Roberto Sussman’s post on this).
Conclusions
Part 1: Nicotine/tobacco research is highly polarized and fraught with pervasive, but to some extent preventable, flaws. Currently, papers skeptical or hostile to tobacco harm reduction dominate.
Part 2: I believe these problems in the research stem from academic incentive structures. It started with a vicious cycle of PhD overproduction which led to hypercompetitiveness for grants. As a result, academic researchers’ survival in academia depends on aligning with the funding agencies’ priorities, particularly NIH because of their singularly high (in this field) rate of indirects which are lucrative to universities.
Part 3: Changing academic research incentives, including norms around research practices, is necessary in my opinion to improve the quality of the research. Starting from easiest to most difficult: universities & journals can encourage or require best practices such as 1) preregistration of studies and secondary data analyses and 2) transparency of data and analysis code. Universities can also encourage adversarial collaborations. Adding these as expectations for annual reviews should be simple and cost-free. Researchers need to get better at communicating outside of academic audiences, including about the limitations and caveats of their research. Consumers should keep sharing their stories and arguing for their rights to access reduced-risk products and for inclusion in the research sphere. The media needs to understand that alarmist coverage can cause real harm by scaring people away from moving down the continuum of harm. Finally, there needs to be a broader range of research funders with diverse viewpoints, and universities will need to come to terms with more limited research funding that can no longer sustain the current level of administrators and grant support staff.
To close on an optimistic note, consider the saying commonly attributed to the philosopher Arthur Schopenhauer:
All truth passes through three stages: first, it is ridiculed; second, it is violently opposed; and third, it is accepted as self-evident.
I do think we’re seeing signs of entering the 3rd stage of truth. At the Society for Research on Nicotine and Tobacco (SRNT) conference just 2-3 years ago, in my experience it was controversial to say “people who smoke can reduce their harm by switching completely to e-cigarettes,” but now that seems to be taken for granted (and the opposition has retreated to dual use). There are also a few notable papers that recognize the harm-reduction potential of non-combustible nicotine products, published by research groups that have historically been critical of these products (e.g. Miech et al. 2025; Harlow et al. 2025).