Living in a Spatial World Q&A, Part 1
We recently presented a webinar together with GEN Publishing entitled “Living in a Spatial World: Quantitative and reproducible characterization of multiplexed protein expression from clinical samples”. Following the presentation, an interactive Q&A session covered a range of topics around high-plex mIF assays and AI-driven image analysis.
Access the full recording here.
Angela and Lorcan's answers to viewers' questions about multiplex assays are summarized here. Also check out part 2, which focuses on image analysis and how AI deep learning enables digital pathology: ultivue.com/living-in-a-spatial-world-part-2.
October 26, 2022
GEN: If I were to design my own panel for immunofluorescence, what are the main considerations, given everything you’ve suggested during the webinar for validation?
Angela: There are several things to think about when you build a new panel for multiplexed immunofluorescence. You need to consider the abundance of each target so that it can be paired with the right fluorescence channel, for example matching lower-abundance targets to brighter channels. Further, the use of good-quality primary antibodies is especially important, and their performance should first be assessed in chromogenic DAB, which is still the gold standard for a single-plex assay, before moving on to building a multiplexed panel.
With InSituPlex® (ISP) technology there are fewer things to consider in terms of optimization, and with that comes less variability, because the workflow is quite simple. For example, the end user doesn't have to work out the optimal concentration and combination of each antibody. There is no secondary antibody in the workflow, so that is something researchers don't need to think about either. And on top of that, because we deliberately use fluorescence channels that are spectrally distinct, you also don't need to balance your signal. Another benefit of our technology is that we can amplify all the targets of interest at the same time, which is especially important for weak signals.
GEN: Can you comment on what the best controls are for a multiplexed assay? Most companies use a tonsil sample as the positive control, but is this really optimal?
Angela: Well, I would say it all depends on the type of targets you want to develop. If the target is well expressed in tonsil, that is a good starting point. At Ultivue, we always like to test for the best conditions during the development of an assay, so we typically use control samples such as tonsil and, where appropriate, the actual tissue types that the study is going to be run on. That being said, when you are developing a target, it might not necessarily be expressed in tonsil. So ultimately what is important is to use an appropriate tissue sample that expresses the target of interest at a sufficient level, so that you can really identify the conditions that will pick up low expressors as well as high expressors of that target.
GEN: Is your H&E staining and multiplex IF used on the same slide?
Angela: The short answer is yes. Usually when we perform multiplex IF, we work on paraffin-embedded tissue blocks. As a matter of routine practice, once that's done, you will always have an additional H&E-stained section from the block, but because it comes from a different section, that H&E may not be truly representative of the actual tissue. In our case, it's good practice to have an H&E from the block just to assess quality in terms of fixation, but bear in mind that it's critical to use the same section, for the reasons Lorcan explained, for subsequent image analysis. So, what we do at Ultivue is perform the multiplex IF as a first step and then perform the H&E. This is not a virtual H&E, but the classically used H&E on the same exact tissue section.
GEN: Did you mention what kind of tissue preprocessing is required? Paraffin or OCT embedding?
Angela: We work with paraffin-embedded samples. This doesn't mean the technology couldn't be applied to frozen samples; it's just something we haven't fully tested at this time, because we focus mainly on clinical trials, where paraffin-embedded samples are most commonly used. With some additional fixation steps, the technology could be applicable to frozen (OCT-embedded) samples, but again, we haven't tested that internally.
GEN: Angela, you mentioned there is 20% variability in the inter- and intra-day staining on the entire tissue section. While this may not be a large difference for known cell types, what about this variation with respect to rare, unknown cell-type combinations? Will this not impact further analysis downstream?
Angela: I would say that would be a little more difficult to evaluate on extremely rare phenotypes, and I think this is also the reason we use positive controls to evaluate the assay itself. Now, biological variability is different from the intrinsic variability of the assay, and when we look at the variability of the multiplex IF, we already take into consideration that maximum of 20% variability. There is also the component that comes from the instruments, for example the autostainer, as well as the contribution of variance that comes from the image analysis. So, all of it is intricately linked, and ultimately it is hard to evaluate the true variability of any single component or reagent associated with multiplex IF.
Now that said, I think we have a lot to learn about those rare phenotypes. With the tools we have for quantitative analysis, I think we are better placed today to identify even quite rare phenotypes because of the unbiased way we look at the images. We are not pre-defining known phenotypes when we run the analysis; we really look at the single-positive markers and then we combine those single positives to see how many cells express combinations of the targets that are in the assay.
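To make that combination step concrete, here is a minimal sketch, not Ultivue's or any vendor's actual pipeline, of how per-cell single-marker positivity calls might be combined into phenotype counts. The marker names, table layout, and phenotype definitions are hypothetical and chosen only for illustration.

```python
import pandas as pd

# Hypothetical per-cell table exported from image analysis:
# one row per segmented cell, one boolean column per single-positive marker call.
cells = pd.DataFrame({
    "CD3":   [True,  True,  False, True,  False],
    "CD8":   [True,  False, False, True,  False],
    "PanCK": [False, False, True,  False, True],
    "PDL1":  [False, False, True,  False, False],
})

# Combine the single-positive calls into example phenotypes of interest.
phenotypes = {
    "CD3+CD8+ T cell":       cells["CD3"] & cells["CD8"],
    "CD3+CD8- T cell":       cells["CD3"] & ~cells["CD8"],
    "PD-L1+ tumor (PanCK+)": cells["PanCK"] & cells["PDL1"],
}

for name, mask in phenotypes.items():
    print(f"{name}: {mask.sum()} cells")

# With n markers there are up to 2**n possible marker combinations per cell
# (256 for an 8-plex, 4096 for a 12-plex), which is why the analysis starts
# from single-positive calls rather than a fixed list of pre-defined phenotypes.
```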
Lorcan: Yes, I agree, and I can add to that. When we talk about multiplex assays, we go from 4 to 8 to 12 plex. The number of potential phenotypes that can be generated starts to multiply quite substantially, and when you think about rarer phenotypes, and maybe those that are not so obvious, it can be very difficult to validate and examine the variability for every single individual phenotype in something like an 8-plex.
We’ve worked with Ultivue to address variance across sections that have been stained on different autostainers on different days, so what do we actually quantify? Well, we quantify every single channel of staining and how many positive cells are present for that particular marker within fields of view on the sections. So, for an 8-plex that will be eight different readouts, and that variance will be examined. We will also choose four or five of the main combined phenotypes, e.g. CD3, CD8, tumor PD-L1, count those phenotypes manually, and compare the manual counts with the image analysis readout. All told, this gives us at least some reassurance that the variances are within the required parameters.
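As an illustration of the kind of reproducibility check described here, the following is a minimal sketch, with made-up numbers, of comparing per-marker positive-cell counts across fields of view from runs stained on different days or autostainers and flagging whether the inter-run coefficient of variation stays within a 20% acceptance limit. The data layout, run names, and threshold handling are assumptions for illustration, not the actual validation protocol.

```python
import numpy as np

# Hypothetical positive-cell counts per field of view (FOV) for one marker,
# from serial sections stained on different autostainers and days.
counts_by_run = {
    "day1_stainerA": np.array([412, 388, 455, 430]),
    "day2_stainerB": np.array([395, 401, 470, 418]),
    "day3_stainerA": np.array([440, 379, 462, 409]),
}

# Mean count per run, then inter-run coefficient of variation (%CV).
run_means = np.array([fov_counts.mean() for fov_counts in counts_by_run.values()])
cv_percent = 100 * run_means.std(ddof=1) / run_means.mean()

ACCEPTANCE_CV = 20.0  # illustrative acceptance criterion from the discussion
verdict = "within" if cv_percent <= ACCEPTANCE_CV else "exceeds"
print(f"inter-run %CV = {cv_percent:.1f} ({verdict} the {ACCEPTANCE_CV:.0f}% limit)")
```

In practice the same check would be repeated per marker channel and for a handful of combined phenotypes, with the automated counts cross-checked against manual counts as described above.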
GEN: We have time for one last question. I think this is going to go to Angela. I’m going to combine two questions here, as I think the answers are probably short, but they’re also important. One of them, which I think is an interesting question, says the current spatial profiling is protein based; have you ever considered gene- or RNA-profiling-based algorithms for digital pathology? And in connection with that, another audience member asked about limitations in terms of markers: they want to know the maximum number of markers that can be used in multiplexing.
Angela: I think Lorcan can also help me a little bit on the digital pathology side with RNA, because he has more experience there than I do. While at Ultivue we look at proteins, there are also tools to look at RNA, and in fact we just presented a poster at AACR 2022 in which we combined protein and RNA on the same section. To be as provocative as I can with this question: why would you choose one or the other if you can have both?
And then the second part was the limitation in terms of the number of targets, right? I think it depends on how we look at the field. At Ultivue, we currently offer a maximum of 12 markers, which one can choose from either our fixed off-the-shelf kits or our expansive U-VUE menu. It's really a matter of choice, in the sense that multiple companies work in the digital and translational space, but also in the discovery space. When you are approaching things in a discovery way, you need a lot of markers, because this is where you generate your hypothesis; the more markers you have, the better you can shape that hypothesis. Once you have done that and need to validate your findings, you then want to reduce complexity. You wouldn't want to go for 20-40 markers. In our experience, when you start from 12, or even eight, you have a good number of markers contributing to a very high number of identifiable phenotypes and multiple hypotheses that you can validate at the same time.
We would suggest that researchers initiate their studies with this higher number of targets on a small cohort of samples to identify the targets that provide clinical utility. Then, ideally, move on with only the few targets that matter on a large cohort, so that you can really demonstrate clinical value and validate your findings. Eventually, as I mentioned during the presentation, we will want to use the data we generate with mIF to train AI algorithms so that they can work from the H&E alone.
Lorcan: Yes, I agree with what Angela said there. Maybe just from a digital pathology perspective, protein versus RNA: both HALO and Visiopharm software have some great modules for detecting RNA and protein signals, whether a single spot signal or a clump of spots. These signals or stains are captured in different channels, and then the signal can be co-localized and detected. So certainly, with the image analysis tools available, we are able to do that.
From a target perspective, the number isn't really the issue for image analysis. In a practical sense, though, when you get up to something like 100 targets, you need to think about how you will develop your algorithm and how you'll look at phenotypes in different cells. When you've got that many markers, it becomes quite challenging. Angela is right: the higher numbers of targets are usually associated with earlier discovery studies. There you're doing a "look and see" to find what comes out as interesting that you may not have known about. Then, as you gain confidence in certain biomarker profiles as part of a project, you may start to reduce your panel so you can more robustly build that dataset around particular cell types or cell subtypes. So yes, from an image analysis perspective, these studies are doable.
Something else to think about with 100-plex studies is making sure you have confidence in the section you're generating: is it representative of the tissue, is it reproducible, and so forth. It's certainly a dynamic and interesting area when we discuss the level of plex. As we move towards the clinic, I think focusing on that and triaging down to something that is more reproducible is key.
If you have any questions, or would like more information on some of the topics discussed during this Q&A session, please look through our scientific content available in our resource center, https://ultivue.wpenginepowered.com/resource-center