World HUPO 2025: Proteomics Beyond Mass Spec to Ecosystem Impact

World HUPO 2025

When HUPO picked Toronto for its 24th World Congress, I imagined crisp fall air by the lake and the usual conference hustle. I did not expect fat, storybook snowflakes drifting past the lobby windows in mid-November.

But somehow, the snow fit. Inside, “One Health Powered by Proteomics” framed four days of talks, posters, and hallway debates about humans, animals, microbes, and environments as one interconnected system, echoing the One Health idea that human, animal and environmental wellbeing are inseparable. Outside, the city was wrapped in white, and the congress itself felt surprisingly cozy for a truly global meeting.

Breakfast and lunch sessions were packed. The exhibit hall spilled into the corridors; serious conversations about protein isoforms and funding strategies took place in corners, by the elevators, and anywhere else two proteomics people could stand still for long enough.

HUPO meetings have historically leaned heavily toward mass spectrometry. This year still had plenty of MS talk, but the center of gravity had shifted.

Affinity-based platforms, next-generation sequencing (NGS) readouts, single-molecule approaches, single-cell and spatial proteomics, massive cohorts, heavy-duty bioinformatics, and AI all jostled for space, from academic developments to company-led advances. It felt like proteomics in the plural: a kind of single-cell–resolved city of interconnected neighborhoods rather than a gathering dominated by one flagship technology.

One recurring theme was that no single technology gets to “own” proteomics.

Proteomics for all

Yuling Luo, PhD, founder and CEO of Alamar Biosciences, laid out what has become a kind of unofficial axis system for the field: breadth and depth. On one axis, content is exploding. Mass spec technologies are improving peptide and protein coverage across a wide range of sample types, particularly plasma. Affinity-based technologies are also rapidly increasing the number of proteins they can measure in a single experiment.

On the other axis, sensitivity is being pushed into attomolar territory to reach the low-abundance part of the proteome that likely hides valuable early disease markers. That framing (coverage and sensitivity, discovery and translation, mass spec and non–mass spec) resonated across talks and side conversations.

At the Buck Institute for Research on Aging, Birgit Schilling, PhD, has one foot firmly in classic mass spec and the other in emerging single-molecule platforms. She spoke about mapping senescence signatures across organs, including the brain, bone, and spinal cord, and about how the same aging markers keep resurfacing as risk flags for diseases like ALS and Alzheimer’s.

What really animated her at HUPO, though, was a non–mass spectrometry platform: Nautilus Biotechnology. Her lab is among the first outside the company to run its single-molecule, multi–affinity-probe system.

“We’re looking at proteoforms of tau phosphorylation and splice variants,” Schilling said. “I think this could potentially revolutionize how we think about proteomics, not just looking at peptides, but the protein itself and its many proteoforms.”

One visible sign that HUPO is broadening: this was the first time Illumina had a full, visible presence at a HUPO World Congress. Their sponsored lunch session “Competition to complementarity: Bringing Illumina Protein Prep into a mass spectrometry-focused environment,” with Stuart Cordwell, PhD, from the University of Sydney, was packed.

Illumina Protein Prep uses SOMAmer reagents to capture roughly 9,500 proteins from plasma or serum, with readout via NGS rather than mass spectrometry. For MS-heavy facilities like Sydney Mass Spectrometry, Cordwell, the director, framed it less as a rival and more as a translator: a way to let clinicians already fluent in sequencing “speak proteomics” without climbing the full MS learning curve.

In my conversation with Dalia Daujotytė, PhD, from Illumina, she underscored accessibility as a hard requirement if proteomics is ever going to match genomics in impact.

“We need to make this type of experiment accessible to the broad community,” she said. “Once we bring proteomics as a more accessible tool for everyone, then the next step is data.”

Next step: data

That “next step is data” comment is classic Illumina and exactly where proteomics finds itself in 2025. High-content tools exist. The choke points are cost, workflow complexity, and the availability of robust analysis pipelines that work for non-specialists.

Daujotytė stressed automation and fit-for-purpose workflows: “If we want to make experiments accessible to a broader community, they have to be automated, accessible within existing budgets, and accommodate a range of workflows to address all these needs that different labs and different type of customers may have because one scientific lab may have a different need than clinical lab.”

Illumina is still squarely in the research-use-only camp for proteomics, but she pointed out that a lot of multi-omic work already happens in clinical settings with patient samples. The company sees its role as rounding out a multi-omics portfolio spanning genomics, transcriptomics, proteomics, and epigenomics, while respecting that it sits in a broader ecosystem of tools, not on top of a conquered hill.

As the dominant sequencing vendor, Illumina also carries a kind of educational responsibility. Several people at HUPO, including Adam Lewandowski, DPhil, from UK Biobank and Jennifer Van Eyk, PhD, raised the same point in different ways: if proteomics is going to get the funding and policy attention it deserves, big players have to help educate decision-makers, not just sell instruments. Illumina’s move into HUPO’s orbit is a step in that direction. Other major players, including Olink (now part of Thermo Fisher Scientific), also carry significant responsibility.

If there was a single project that embodied the “scale” side of proteomics, it was UK Biobank.

Lewandowski, who helps steer the resource’s proteomics program, described how Olink proteomics is being extended from the initial 55,000-participant Pharma Proteomics Project to the full 500,000-participant cohort, using the 5,400-protein Olink Explore HT platform.

Government match funding and a consortium of pharma partners are supporting around 600,000 samples in total, including roughly 100,000 repeat samples for longitudinal analysis. In parallel, the SomaScan platform will be used to cover about 55,000 participants, providing another affinity-based view of the circulating proteome.

Scaling proteomics

Lewandowski was very clear that “complementary” is the operative word. Olink and SomaScan bring standardization and scalability. Mass spec remains crucial for characterizing protein-protein and protein-metabolite interactions, antibody isotypes, isoforms, and post-translational modifications, including increasingly detailed glycoproteomic patterns, and for exploring corners of the proteome not well covered by panels.

As Paola Picotti, PhD, put it, “Classical proteomics, or bottom-up MS, typically tries to measure protein abundances and how they change across conditions, while limited proteolysis MS is a structural proteomics tool that tries to detect proteins that undergo structural alterations across conditions as a readout for functional alterations.” In that sense, interaction- and structure-focused workflows, from Picotti’s limited proteolysis approaches to native and glyco-focused mass spectrometry in groups like that of Albert Heck, PhD, extend what panel-based proteomics can see. And there are still gaps, especially low-abundance and modified markers for neurodegeneration (think p-Tau181 and p-Tau217) that current platforms struggle to quantify at population scale.

That is where UK Biobank is eyeing technologies like Alamar’s ultra-sensitive CNS panel to improve coverage of low-abundance brain-derived proteins in plasma.

Lewandowski also emphasized that large-scale proteomics is as much a training and documentation challenge as a technological one. One of the major tasks, he said, is “creating the appropriate documentation but also the appropriate training materials that can go with that to make sure that researchers understand how best to use those data types. The real big advantage of where we started with that is that we had this close collaboration with the Pharma Proteomics project.”

When I asked about funding, he was frank. Proteomics is still the newer kid in the omics family, and many funders simply know genomics better.

“The reality is proteomics is a much newer field because of the challenges in the past of scalability of proteomics, genomics is ahead in many of those ways and there’s more awareness as to what the potential value of that data are,” he said.

“The opportunity as we scale up the proteomics programs, not just with Olink, but with other areas of proteomics, we’re going to start to see more and more development of those opportunities…and then translation to testing that within clinical settings and testing out what the value could be for enhancing downstream patient care.”

That awareness gap came up again and again in Toronto. Lewandowski said one reason he engages with HUPO is precisely to tap its collective expertise when making the case to funders and policymakers about why scaling proteomics matters.

Even questions from the floor underscored how quickly this is evolving. Stephen Williams, MD, PhD, from Standard BioTools, for example, raised the issue of causality in the era of AI. If we want AI models trained on resources like UK Biobank to emulate real interventions, he argued, they need to rely as much as possible on causal, not merely correlated, features.

One major HUPO headline announcement was the publication of version 25 of the Swedish Human Protein Atlas. The release was timed to the congress and quickly became a reference point in multiple talks and side meetings.

Version 25 is big in both the literal and scientific sense. It now includes more than 10 million manually annotated bioimages and data for over 6 billion assay measurements, from around 300,000 biological samples, covering all human protein-coding genes across nine major resources. A new Human Disease Blood resource integrates Olink Explore HT and SomaScan data across 32 cohorts, spanning 71 diseases (cancer, autoimmune, infectious, neurological, cardiovascular) plus healthy cohorts for childhood development, aging, and pregnancy, providing pan-disease blood profiling at unprecedented breadth.

Moving forward

At the end of the week, as the snow outside turned to slush and people started comparing flight delays, HUPO flashed its customary “see you next year” slide. Next stop: Singapore, under the theme “Proteomics Plus: Transforming Lives.”

If Toronto was about One Health, Singapore feels poised to showcase proteomics as a driver of precision medicine in Asia. Precision Health Research, Singapore (PRECISE) is already running the SG100K initiative: 100,000 deeply phenotyped participants, with large-scale plasma proteomics using Standard BioTools’ SomaScan 11K assay alongside other multi-omics readouts for biomarker discovery in a multi-ethnic population.

Further north, China’s π-HuB (Proteomic Navigator of the Human Body) project has emerged as another pillar in the global proteomics landscape. It is envisioned as a long-term, large-scale effort to map human proteomes across cell types, organs, life stages, diets, and environments, and to build large-scale computational models of human biology that can support disease prediction and, eventually, clinically useful protein biomarker panels. Only a handful of academic centers worldwide, such as the large-scale clinical proteomics efforts led by Matthias Mann, PhD, currently have the infrastructure to run mass spectrometry at anything close to population scale. Projects like π-HuB will therefore have to work hand in hand with companies that can execute industrial-scale workflows and ultimately bring new assays and products to market.

Human Protein Atlas director Mathias Uhlén, PhD, has been explicit, in public comments, that initiatives like π-HuB, AlphaFold-inspired structure resources, and SciLifeLab’s own Alpha Cell project are all part of a global race, not in a zero-sum sense, but in the sense that the world is simultaneously building different “virtual cells” and proteome-centric maps of human biology.

Proteomics is carving out its own space in precision medicine and big-science infrastructure. Most importantly, the field is starting to think in terms of responsibility: to patients, to public-health systems, to global collaborations, and to the young generation of scientists who will inherit these enormous datasets.

Next year in Singapore, under the banner of “Proteomics Plus,” I’ll be looking less for proof that proteomics can transform lives, and more for concrete examples of where it already has, and where the gaps still are.