Speaker 1:
Welcome to TD Cowen Insights, a space that brings leading thinkers together to share insights and ideas shaping the world around us. Join us as we converse with the top minds who are influencing our global sectors.
Dan Brennan:
Welcome back to another episode of The Future Is Now, TD Cowen’s Tool/Dx Conference recap podcast. Hi, I’m Dan Brennan. I’m joined by my colleague, Brendan Smith. We’re the two healthcare research analysts here at TD Cowen who cover the life science and diagnostic tool space. For those who are new to our podcast, the point of this series is for Brendan and me to discuss the key sector insights going across each of our investor conferences throughout the year and highlight the ramifications for companies and investors across the space. Today we’ll be discussing the top takeaways from TD Cowen’s 14th Annual MedTools Conference hosted in Boston, which featured a lineup of key opinion leaders across four areas of high interest amongst investors. We’ll start with some high level thoughts on the state of the space, walk through the four panel discussions and hopefully debate a few ideas that stood out to us throughout the day. Brendan?
Brendan Smith:
Yeah. Thanks, Dan. That’s right. We hosted 10 different experts across our panel sessions with nearly a hundred investors on site to discuss our four main topics, which included the AI era of tools/Dx, the future of bioprocessing, pharma R&D spending trends, and multi-cancer early detection or MCED. Across the different sessions, we spoke with our experts about where this sector is today, trying to contextualize 2025, expectations for 2026 and beyond, and how innovation across the space is disrupting long-established tools spending paradigms.
Dan, maybe we can start with a question for you. It’s October 2025. Folks are gearing up for Q3 earnings, thinking about their numbers for 2026. From a 1,000-foot view, how did you find overall investor sentiment at the conference, and maybe what are some broad themes people are really latching onto right now?
Dan Brennan:
Yeah. Brendan, the key debate in the hallways was how to think about the impact on spending and tool stocks from the MFN deal between Pfizer and the White House, and also trying to quantify and time whether the roughly $400 billion or so of onshore CapEx build-outs that have been announced by the large global biopharma companies will be felt by the bioprocessing vendors. I mean, beyond that, oncology Dx, particularly for MRD, continues to draw a lot of attention. What about you? Were there any areas or segments within tools and diagnostics that people are sounding more bullish on?
Brendan Smith:
Yeah, for sure. I think to no surprise, AI came up on just about every panel that we hosted. It can sometimes feel like there’s so much noise around AI, the different use cases and applications, all the money flowing towards these tools. But one of our panelists actually described broader adoption of AI as a “critical survival tool for US biotech,” especially as more and more Western brands are competing with China’s biotech market. So it’s definitely not going away anytime soon. I think we also heard a lot about the importance of automation, especially as it relates to new factory build-outs in the US in the years ahead, really with a huge focus on efficiency of wet lab bioproduction and basic R&D outputs.
Dan Brennan:
That’s great, Brendan. I do think it sets the stage well for us to run through some of these panel takeaways. So maybe let’s start first with the AI era of tools and diagnostics. As you already noted, most of the panelists across all of our sessions brought up the use of AI across workflows in each of their respective spaces, but I think this panel really focused specifically on the different applications within drug discovery and diagnostics for some of these tools. How would you sum up the most important takeaways from this discussion?
Brendan Smith:
Yeah, for sure. For this session, we hosted two leading AI/ML academic researchers. This is Dr. Shantanu Singh, who’s a principal investigator and senior group leader at the Broad Institute of MIT and Harvard, as well as Dr. Michael Hughes, who is an assistant professor in computer science at Tufts. Both KOLs have a strong background in imaging capabilities and machine learning applications within drug discovery and diagnostics. We did some live polling of investors in the room on a few key topics as well, the first being which area within healthcare is most ripe for AI-driven disruption. Our panelists agreed with surveyed investors that drug discovery is the top choice, though diagnostics and hospital admin operations were close behind in our live survey.
Clicking into drug discovery, our KOLs really emphasized that using AI to improve the toxicity profile of drugs in the clinic is a high priority for biopharma right now. They both expect bio-simulation software to predict on- and off-target toxicity and virtual animal models will be critical to bending the curve here. As a key catalyst to watch for, they expect that within two years we need to see more concrete evidence and likely publications comparing animal testing data with non-animal bio-simulation data as a real gauge of the quality of these models. They suspect this is also a big driver of the recent pharma AI collaborations, with pharma really opening their doors to the treasure troves of data that they’ve got, as Dr. Singh feels that data sharing is now just absolutely essential to getting the most out of these tools.
We also asked who he sees as some of the top innovators in AI drug discovery, and he was quick to point out J&J, Roche Genentech, Recursion and insitro as really being on the cutting edge. He also noted actually that his experience with agentic coding and analysis using Anthropic Claude 2 has dramatically surpassed his expectations and really impressed him. He mentioned that a couple of times on stage. Dr. Hughes also confirmed that recent initiatives coming out of HHS are already making their way into their workflows, namely that NIH is now including stipulations within their grant funding that non-proprietary data generated by grant recipients actually needs to be made publicly available for model training within the agency and public domain.
One last point here is something that investors ask us all the time, and it has to do with the impact of broader AI adoption on traditional tool spending. We surveyed investors in the audience, who had very mixed thoughts on the net impact but ultimately expect about 1% to 10% less spending on traditional tools will be the result over the next couple of years. Our panelists actually felt differently and expect this could be an incremental tailwind for traditional tool spending. Dr. Singh pointed out that wet lab experiments are still going to be required to validate AI predictions and that, if anything, he actually expects more automated wet lab tools and experiments that produce high quantities of usable data could see a meaningful tailwind from broader AI adoption. He specifically called out proteomics and was notably bullish on Perturb-seq as especially ripe for these kinds of applications. So he thinks AI can really help unlock traditional barriers to the large-scale deep data sets that are really necessary for rapid innovation within sequencing. Dan, did you hear anything else in this panel that stood out to you?
Dan Brennan:
The one thing was really Dr. Singh’s focus on cell painting and how he feels AI is expected to find useful biologic signal in the massive volume of data that’s being generated. That certainly resonated with me given the focus on the same topic that we hear a lot with regards to spatial transcriptomics platforms, where 10x Genomics is the leader and then Bruker, through the NanoString acquisition, also participates. I think Dr. Singh, when we inquired, felt a key hurdle for these platforms was their cost and speed, so it was encouraging to hear his optimism about the likely utility of AI in spatial data.
Brendan Smith:
Yeah. It sounds great, Dan. So now maybe we can turn to our next panel on the future of bioprocessing. I’ve got a few points that stood out to me for our coverage, but maybe I’ll ask you first. What were some of the biggest takeaways on your side from the conversation we had on bioprocessing?
Dan Brennan:
Yeah. For our bioprocessing panel, we hosted Claus Tollnick, who runs BIOperate Consulting. He’s also a former biopharma operations executive at Sanofi, at Ferring and also Recipharm. Then joining Claus on stage was Brandon Razooky, who is a senior manager at Resilience. The panel covered a number of topics relevant to monoclonal antibodies, cell and gene therapy and other modalities. From a top-down basis, a key insight we tried to glean was the health of the market for tools demand heading into ’26, and how it relates to the recent Pfizer MFN deal plus all the ongoing on-shoring CapEx announcements. Tollnick’s view is that customers remain reticent to spend CapEx and that despite the recent policy progress, clients continue to see a lot of uncertainty.
Now, the audience poll on this was more bullish than the consultant. Nearly 50% of the audience expected call it 10% growth in bioprocess equipment revenues in ’26, and 80% overall expected a positive impact on equipment revenues, whether in ’26 or ’27. With regard to the potential for a CapEx pickup from reshoring, 95% of the audience expected a benefit for tools, though most saw it coming in ’27 as opposed to sooner. Then on a consumables basis, the experts felt ’26 would see slightly better growth rates than ’25. This was echoed, I think, by the audience of investors and industry folks, who saw Danaher’s bioprocess business as a proxy growing call it 8% in ’26.
Another key topic was workflow, and we explored intensification, namely processes like N-minus-one that enable greater efficiency to drive greater yields. While our experts clearly see the merits of this, there’s a significant hurdle in convincing manufacturers to change their traditional fed-batch process that works very well, hence uptake for intensification was expected to reach just about 10% to 15% of upstream processes in the next few years. For downstream, the uptake is expected to be more modest, though this is actually where the KOLs saw a bigger benefit if implemented, namely in reducing the use of Protein A resin. Sartorius and Danaher were viewed as the share gainers amongst the bioprocess vendors. What about you, Brendan? What stood out for you during the bioprocess conversation?
Brendan Smith:
Yeah, definitely. I mean, as you mentioned, we talked a lot about the potential timing for bioprocessing tailwinds related to the on-shoring of manufacturing that pharma announced this year. I mean, our panelists both emphasized, like you pointed out, how slow some of the major bioproduction changes can be to roll out. They think probably a three to four year time horizon is most likely to see really the full impact of some of this manifest. They also both agreed that QC products are especially ripe for replacement and innovation just given how outdated a lot of that infrastructure is. I think to this end, our KOLs think that tools leveraging next gen automation and even AI capabilities have a real opportunity to cement themselves in this next wave of bioproduction installments, noting that any machine learning tools that can help predict usability and reduce variability would be pretty transformative in both upstream and downstream processes.
We also asked our panelists and investors in the room about which drug modalities besides monoclonal antibodies are likely to be the biggest growth driver for the segment, and the clear consensus in the room, probably to no surprise, was antibody drug conjugates, or ADCs, though our KOLs did note that cell therapies are still going strong and expanding, especially within the Chinese biotech market. But as for ADCs, the thinking there is really that this technology is not brand new and fairly de-risked in that sense, with a pretty straightforward manufacturing process that’s fairly well understood. ADC clinical trials are growing, as are use cases across different therapeutic spaces, all of which should keep the class as a key driver for bioprocessing over the near term.
With that, maybe we can turn to our next panel on pharma R&D spending trends. Again, I’ve got a few points that stood out to me for our coverage, but maybe, Dan, I’ll let you kick it off here. What were some of the biggest takeaways on your side from the conversation we had on pharma R&D spending?
Dan Brennan:
Yeah. Thanks, Brendan. For the pharma R&D panel, we hosted Rajat Gupta, who’s director of proteomics at Arena BioWorks. Rajat also previously held a senior proteomics position at Pfizer. Then we had Zhinan Xia, who’s the founder and CEO of Abimmune, which is an AI drug discovery company. Zhinan was also formerly a senior proteomics scientist at Pfizer. So we split the discussion between top-down topics like budgets and spending and bottom-up areas, namely technology and research trends. I think there were a bunch of takeaways here. First, similar to the bioprocess panel, we sought insight from the KOLs regarding how MFN tariffs have potentially slowed spending and if there had been any improvement as deals are being struck. Our experts did believe that costs from tariffs have dampened spending, along with concerns over drug pricing and funding pressure on younger biotech companies.
Based on the Pfizer MFN deal, they believe this is good news for the drug industry as it mitigates downside risk, and although pricing comes down, stronger volumes will help. Interestingly, they felt larger biopharma companies will look to cut R&D a few percent in response to the drug price pressure. Our audience also expected R&D to drop 2% on average as a result of the MFN deals. Second, while R&D is expected to drop a bit, the audience expected a modest bump to tool spending from pharma in ’26, on the order of about 50 basis points, given the clarity from getting a deal done. I think our KOLs were more wary that tool spending would see a bump, which is certainly different from the market’s view given how positively the stocks have reacted.
Third, we also dug into which technologies are best positioned to benefit from stronger spending related to proteomics research. Mass spec has been and remains the workhorse and significant improvements in speed and throughput are enabling greater usage. Gupta felt single cell spatial tools would benefit significantly. He was also pretty positive on pharma uptake for affinity tools. Thermo and Bruker dominate in discovery mass spec and they’re expected to continue to do so. Our KOL was more biased towards Bruker given their superior service. Then finally, the audience and our experts felt the biggest share gainers of budgets for proteomics will be mass spec and AI. What about you, Brendan? What stood out to you during the pharma R&D conversation?
Brendan Smith:
Yeah. I mean, to your last point here, look, this was another panel where both KOLs unprompted brought up AI/ML as really one of the most transformative tools impacting pharma R&D. Here we surveyed the audience on what percent of total pharma R&D budgets will be shifting to AI tools over the next year. Dan, you and I ran our biopharma R&D survey late last year that showed about 9% to 10% of total R&D spend went to AI biosimulation tools in 2024. The live investor survey we took at this conference suggested upwards of 15% to 16% of total R&D budgets will go to AI by next year, which our panelists absolutely agreed with. I drilled a bit more into this, though, just to try to understand where exactly that money is actually going.
They were both clear that this is not a big AI personnel onboarding trend, and in fact, they actually expect a pretty meaningful reduction in more junior lab members and labor expenses as AI ramps up. Instead, they both expect that biosimulation software licenses like those provided by Certara and Simulations Plus will absolutely see expanded uptake within drug discovery workflows. They also clarified that they don’t necessarily expect wet lab tools to dry up because of AI. Instead, they framed the shift as spending smarter within traditional tools budgets as companies focus increasingly on tools, excuse me, that can provide the most amount of high quality data in the least amount of time, really noting that basic consumables will still be required to validate AI findings regardless.
With that, I think we’re on to our final panel on multi-cancer early detection or MCED, which I know is an increasingly hot area of focus for your team and tools/Dx investors. Dan, I guess how would you sum up maybe the most important takeaways from this discussion?
Dan Brennan:
Yeah. Thanks, Brendan. I think there were a bunch. This was a great panel, as were the others. For the MCED panel, we hosted Kevann Simms from Ochsner Health; they’re the largest hospital system in Louisiana. We had Anil Saldanha from Rush Health in Chicago, and then we had Scott Ramsey from the Fred Hutch Cancer Center, which is actually running the NCI’s Vanguard MCED feasibility study. There were numerous insights I think we gleaned from the discussion. First, as it relates to the early uptake of Grail’s Galleri MCED test, the health systems moved ahead offering Galleri in large part given the community interest in the test rather than really a thorough risk-benefit analysis.
Since Ochsner is only offering Galleri to concierge cash-pay patients, the decision to roll out the test was not predicated on test performance, and Ochsner is really not tracking performance. The system didn’t want the local populace going to other locations that were offering Galleri, and that prompted their decision to offer the test. Rush evaluated the test for a period of time before deciding to launch it, and the decision was based upon the expected benefit of screening for such a large number of cancers, most of which don’t have other options, but performance and cost didn’t seem to be key considerations here.
We found the lack of a rigorous performance and benefit-to-cost analysis quite surprising, something investors also highlighted. Second, in terms of the path forward to see a broader rollout of MCED tests, there were numerous factors cited. These included establishing strong clinical utility. The NHS study seeking to show Galleri can shift the detection of a significant number of stage four cancers to stage three is viewed as a much more meaningful study compared to the PATHFINDER 2 study, where PPV is the focus. That said, a shift to stage three alone likely won’t be enough for broader uptake, and there was strong interest in seeing a solid detection rate for stage one and stage two cancers.
On average, the audience saw 50% early-stage cancer sensitivity as needed for an MCED test, hence overall better performing tests are going to be needed to move the field forward. Lower price was also viewed as important, in the $200 or so range. Tumor-of-origin performance was also critical. A clear workflow for testing and follow-up, along with patient and doctor education, were also going to be key. Finally, the experts see room for multiple tests, from large panel tests like Galleri to more targeted, say three-plus cancer tests like Shield. Two of our KOLs expected Galleri to remain the leading MCED test in the future, with the audience targeting 700,000-plus MCED tests getting done by 2028.
Brendan Smith:
Yeah. That’s great, Dan. I think, look, we’ve covered a lot of good ground on this episode. There are obviously a lot of moving parts in this entire process and in the broader discussion to be had, with a lot more to come at our Diagnosing Tomorrow event in New York in December, our healthcare conference back in Boston in March, and then Tools/Dx Revolution again in Southern California in June. We’ll have plenty more to say very soon. Looking forward to continuing the conversation as things progress and the space continues to evolve.
Speaker 1:
Thanks for joining us. Stay tuned for the next episode of TD Cowen Insights.