Transcript
Good afternoon, everyone. It's so good to see so many familiar faces, and welcome to the new ones. And thank you all for the great questions and conversation. Of course, it's also cutting into my presentation time, but that's fine because I can speak pretty fast. Part of the privilege of running Novalis Circle is that I usually get to sit in the audience, not present, and just listen to our users talk about their clinical experience. But I chose to present today predominantly because the next level of automation in indication-specific software that Brainlab is embarking on actually requires your support and participation. So hopefully I can convince you of that.
Before I get going: there was a good question earlier, and we do have an answer, regarding whether there are ongoing trials looking at local recurrence versus regional progression. The trial is NCT02747303. It's ongoing right now and is supposed to close by 2026; as you can imagine, it takes time for these kinds of trials to accrue. But hopefully, with what I'll show you today, I'll convince you that there's a better way to answer some of these questions and maybe improve clinical practice. So, why do we automate? Typically, we automate to treat a patient faster. We automate to increase safety, to try to minimize user errors. And we automate as a surrogate for having to train everybody to be an expert in the technology.
So far, everything that we've done in the last five or six years with our products has been very technical automation, and it is necessary: we see errors everywhere in how technologies are used and how protocols are applied. This was a pretty interesting study, a drug study that didn't meet its endpoint. When we evaluated what went wrong with it, a pretty high percentage of the patients with major complications didn't actually meet the study protocol criteria. And when you really analyze what happened with the treatments that were sent to this study, the people who had very little experience with this particular treatment had the highest number of adverse effects and the worst overall survival and local control numbers.
So the more you become an expert in a certain disease and in a certain product, the better you do. But as you start a program, there's quite a steep learning curve. And that's the challenge with technology: it is always going to evolve much faster than you can ever keep up with. Hence the need for automation, and for really looking at which technical aspects a piece of software can do better.
This applies to us too. Just to be a little mean with some Novalis Circle customers, we asked them to plan a few brain metastases cases however they deemed appropriate. Some used iPlan, some used other techniques, and the cases had increasing complexity in terms of number of tumors and locations. As you might expect, out of the nine or ten people here, most of them did okay, but there were some outliers: one pretty bad, and two with a lot of variability. So even the self-proclaimed experts still have a lot of variability in what they can produce with a technology, for all sorts of reasons. Lack of training may sometimes be the cause, but most of the time it's lack of time.
So treatment planning, no matter how you look at it, is a tradeoff; it's an optimization of sorts. And ultimately, the time that you decide to put into a treatment plan definitely impacts the quality of that treatment plan. So it's worth looking into how we can technologically automate the steps that we do on a day-to-day basis in treatment planning. For us, this meant creating algorithms that are indication-specific. We don't give you a generic tool where you have to figure out, "Well, how do I use it in the brain to treat 50 tumors, and how do I use it for prostate cancer?"
The technology from Brainlab is always indication-specific: we really look at the challenges of each disease. For intracranial metastases, having the ability to do radiosurgery, and do it safely, for any number of tumors was critical. So you get a result that is appropriate for radiosurgery whether you have 3 targets or 15 targets. For primary tumors, again, we wanted to automate disease-specific solutions, where you simply specify the tumor type that you want to treat and the software gives you an automatic plan for a pituitary adenoma, an acoustic neuroma, or a meningioma. And the third product that we have available today is for spine. Here, the automation is aimed at treating any complexity of bone involvement for metastatic disease, and always giving you a safe radiosurgical plan.
Now, these are not easy things to do; there's a lot of technology behind them. I'll focus today on what we've done so far to automate the technical components of brain metastases treatment and on what's new with version 2.0. We already have a few users on 2.0, and we are in the process of upgrading all our existing customers to it, so hopefully you'll really like the ongoing automation. We have also introduced a Monte Carlo algorithm. This will really help you with 3D QA: dynamic conformal arcs can have some calculation issues at low isodose values that you may have seen in QA. If you flip to the Monte Carlo calculation, you won't have those issues anymore.
We have also provided more flexibility to the optimizer in terms of how you can set the prescription. The gradient index is now part of the objective function, very similar to what we utilize for our VMAT technologies. Keep in mind that the brain mets software continues to utilize inversely calculated apertures that are essentially dynamic conformal arcs. You can select the desired heterogeneity and also the freedom you allow in the coverage a tumor receives. So you can choose to prescribe, let's say, to 99%, but give the algorithm a little bit of freedom to go down to maybe 98% for the more challenging cases. Again, this is very similar to what all the other products are trying to do with VMAT-type technologies, but you can do it simply by adjusting how you define your prescription.
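The coverage flexibility just described can be sketched as a simple penalty term. This is a minimal illustration with hypothetical weights and metric definitions, not the actual optimizer internals:

```python
# Illustrative sketch only: a coverage band plus a gradient-index penalty.
# Weights and targets are assumptions, not Brainlab's objective function.

def objective(coverage, gradient_index,
              cov_floor=0.98, gi_target=3.0, w_cov=1.0, w_gi=0.2):
    """coverage: fraction of the target receiving the prescription dose (0..1).
    gradient_index: V50%Rx / V100%Rx; lower means a steeper dose fall-off.
    """
    # No coverage penalty while the plan stays within the allowed band,
    # e.g. prescribing to 99% but permitting the optimizer to drop to 98%.
    cov_penalty = max(0.0, cov_floor - coverage) ** 2
    # Penalize a gradient index above the desired fall-off.
    gi_penalty = max(0.0, gradient_index - gi_target) ** 2
    return w_cov * cov_penalty + w_gi * gi_penalty
```

A plan inside both bands incurs zero penalty; dropping coverage below the floor or loosening the dose fall-off adds cost, which is the tradeoff the prescription settings expose.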
The heterogeneity selection is really there for you to decide what kind of studies you are deriving your clinical expertise from, and you always have to question that. We have a lot of Gamma Knife data, and we have a lot of LINAC radiosurgery data, and the two cannot always be interpreted in the same fashion. You have to consider how the dose was actually prescribed to derive a specific clinical outcome. Within a LINAC-based radiosurgery planning system, you can specify either an extremely homogeneous result or any level of heterogeneity that you desire, including a dose distribution that looks like a Gamma Knife dose distribution.
We also have an option to spare organs at risk. But for the earlier question about how you can better control normal brain dose, that's not the way to go; the algorithm itself has the ability to penalize more toward normal tissue sparing, so you have both functionalities within the new application. More importantly, we have a new optimization algorithm for trajectory selection. The automation will continue to be template-dependent for now, so you define a template that's being used, but you do not have to modify the table positions anymore. The new algorithm does a significantly better job of utilizing what you give it as a template, and provides a dramatic, actually statistically significant, improvement in normal brain dose reduction compared to the previous 1.5 version.
We're also trying to help with larger MLCs. I'm showing you here the Agility support, where you can utilize either the Agility 5-millimeter MLC or the Millennium MLC, as you've seen in Dr. Rahimian's presentation, for radiosurgery. Dynamic jaw tracking really helps with the dosimetry parameters that we always evaluate, things like conformity indices, when you're trying to treat with a larger MLC. So if you do have a tumor where you don't like the CI value, you now have the option of treating that tumor with its own arcs. And that's where collimator optimization plus dynamic jaw tracking really dramatically improves the results.
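For readers following along, the conformity index referred to here is commonly computed as the Paddick CI; a minimal sketch of that standard formula:

```python
def paddick_ci(target_cc, piv_cc, covered_cc):
    """Paddick conformity index: (TV_PIV)^2 / (TV * PIV).

    target_cc:  target volume (cc)
    piv_cc:     prescription isodose volume (cc)
    covered_cc: target volume covered by the prescription isodose (cc)
    A value of 1.0 is perfect conformity; lower is worse.
    """
    return covered_cc ** 2 / (target_cc * piv_cc)
```

For example, a 2 cc target fully covered by a 4 cc prescription isodose scores 0.5: full coverage, but half the prescribed dose spills outside the target.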
Here you have a comparison to what we've used in the last decades as the standard for single-fraction radiosurgery planning, which is the HD120. Yes, these values look different, but look at the amplitude here. Realistically, with jaw tracking, you can make the Agility or the Millennium MLC look very similar in profiles to the HD120 if you do not have access to one. And of course, we are going to continue to automate these kinds of dosimetric endpoints in the next version. So I'll give you a little glimpse into what's coming up next. With the next version of the brain mets software, we'll have the ability for the software to learn from your practice.
We can now create a local database that gets automatically populated the more you treat, all via a DICOM infrastructure. You can then see right away if somebody uses the software and creates a plan that deviates from what you typically do, and adjust that plan to bring it back to what your normal practice should be. Again, these are dosimetric-type improvements; up to this point, none of them are really linked to outcomes, but that's where I'm going. You'll also have better ways to evaluate whether a plan for brain metastases is safe. From a radiosurgical perspective, looking only at global V12 in the setting of multiple targets is typically meaningless. So we're creating a visualization where you can see local V12, V10, whatever you choose to analyze, for a given metastasis.
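The idea of a local, per-metastasis Vx can be sketched roughly as follows. The bounding-box definition of the "local" region and all parameter names are illustrative assumptions, not the actual product algorithm:

```python
import numpy as np

def local_vx(dose, brain_mask, target_mask, threshold_gy, voxel_cc, margin=10):
    """Volume (cc) of normal brain receiving >= threshold_gy near one target.

    dose, brain_mask, target_mask: aligned 3D arrays; masks are boolean.
    margin: voxels of bounding-box expansion around the target, defining
            the 'local' region (an illustrative assumption).
    """
    idx = np.argwhere(target_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, dose.shape)
    region = tuple(slice(a, b) for a, b in zip(lo, hi))
    # Normal brain only: exclude the target itself, since radionecrosis
    # is a concern in normal tissue, not within the tumor.
    normal = brain_mask[region] & ~target_mask[region]
    hot = normal & (dose[region] >= threshold_gy)
    return hot.sum() * voxel_cc
```

Calling this once per metastasis, e.g. `local_vx(dose, brain, met_3, 12.0, voxel_cc=0.001)`, gives a per-target V12 instead of a single global value.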
Radionecrosis is an issue as long as it is in normal brain, not within the target, but it has to be analyzed regionally. All our retrospective data comes from single-tumor information, so you have to apply it regionally and not globally. There are also visualizations for when you have bridging dose between targets, like you see over here, so that you get combined V12 information. So you'll have a different way of analyzing whether a prescription is safe, and information regarding local toxicity. We're also going to introduce a full, think of it as 4π, trajectory optimization. This starts with our new packing algorithm for selecting which tumors are treated with which arcs, but it will also have the ability to automatically adjust the table positions to give you the ideal trajectories. Ultimately, that is the final step that will improve the dosimetry. You can already start to see that the treatment planning aspects, and definitely the automation of tumor plans, have pretty much been solved. We can improve them slightly, but the results are already very good. And you've already heard that if you use this technology judiciously, you get great clinical outcomes.
We also tried to streamline the commissioning process; you're all physicists, so I decided to add some of these slides. We now have an automatic way to load all the data that you can generate from any third-party system, and you can simply upload this information into our Beam Profile Editor. The goal of our physics beam data acquisition work is to provide you with standard or reference data for the majority of the measurements, so you only have to spot-check certain locations if you want, predominantly the small fields, and overall reduce the amount of time that you need to spend to get a system clinical. So, that's the technical world. Are we going to change clinical practice by ensuring that we always have a consistently high-quality dosimetric output? I would challenge, as a wise person once said, that that's not going to be the case.
So realistically, the next generation of automation has to bring outcome measures back into the treatment planning process. How can we make that a reality? First, let's look at the needs: what has really changed in the last decade in the management of brain metastases? A decade ago, we were still using whole brain radiation; we were doing radiosurgery for one to a maximum of three targets; and medical therapy had very little role. Just a few years ago, radiosurgery started being applied to any number of tumors, with medical therapy again of very little value. But as you've already seen in today's presentations, medical therapy is starting to play a much more significant role, and the question is really how it overlaps with other treatments such as radiosurgery.
Also, what's new today is that whole brain radiation therapy is typically never utilized unless you have miliary disease. You have a myriad of medical therapy options out there, immunotherapy and targeted therapy, but the most critical point is that they are not replacing what we do today with stereotactic radiosurgery. This is a great study published at Yale where they clearly show that although these therapies are starting to work, they are not as durable as stereotactic radiosurgery. So the questions really change: how do we overlap the practice of the medical oncologist with the practice of the radiation oncologist and medical physicist?
We also know that it's extremely challenging to determine how long patients will live. This was a great study that asked experts, neurosurgeons and radiation oncologists, to predict how long patients with brain metastases would live; that's the curve you see in blue versus the curve you see in yellow. It's clear that we need better tools to assess when patients are failing therapy, because that is realistically why you see a difference in these curves. And not only does the tumor phenotype matter; the size of the tumor matters too. If you manage to treat a brain met when it's really, really small, you could have 100% local control, and in any world, 100% local control is almost synonymous with cure. If you wait, these patients won't do as well, and it's worth looking into what kinds of patients present with larger tumors. This is always linked to the follow-up imaging that we do for these patients: we tend to image patients with lung cancer more frequently than patients with breast cancer. So I think more aggressive surveillance imaging is necessary across multiple histologies.
If you get a tumor that is too large, this is where you need a multidisciplinary approach. Instead of only doing surgery, it's a lot better for these patients to do a radiosurgical sterilization of the tumor and then resect it; you will dramatically lower the rates of leptomeningeal disease, which in the long run also means you will lower the need for whole brain radiation therapy. Brainlab provides this kind of multidisciplinary integration for the neurosurgeon together with the radiation oncologist, where you can essentially do the neoadjuvant radiosurgery in the morning and resect the tumor in the afternoon.
Okay, so what are the new challenges? We know that we need to treat tumors when they're small and be very aggressive about it. That also means we probably shouldn't add margins when tumors are very small. We know that if tumors are large, we need some multimodal focal therapy, usually a combination of radiosurgery and surgery or medical therapy. Medical therapy plays an increasing role, and we don't always know how it overlaps with radiation, so we need to monitor when we apply medical therapy, to what size tumors, and what the adverse reactions are. And we need better tools to assess when therapy is failing, and to detect that failure earlier.
So let me convince you that the next generation of automation will actually be about bringing outcome measures into the whole treatment planning process. First, we have better ways to assess response to treatment. This is an MR technique, a vascular technique, where simply by repeating an MRI and feeding the two scans into our system, you have the ability to distinguish between tumor and treatment effect. We call this technology contrast clearance analysis. I'll show you an example of a patient with breast cancer who underwent surgery, with a perfusion scan performed at the end of whole brain radiation therapy. So this is post tumor resection, and the disease looks stable. Then the patient undergoes surveillance imaging up to this point, where something that looks like tumor comes back. So the question here is: what is it?
If you have to keep waiting to see whether that's tumor, you change the prognosis for this patient. At this point you can use what you traditionally may have: perfusion doesn't quite show that this is 100% tumor, but the contrast clearance analysis does. So you now have a clear indication that the patient has failed therapy at this point, and you can re-treat. In this case, the patient goes back for surgery, and you can see at the end of surgery that the patient is stable. Okay, so how can we bring in outcome measures? I think registries are really the way forward. We started an initiative four years ago with the AANS, and ASTRO joined as well. Thirty hospitals have been prospectively collecting data, and we currently have over 4,000 patients in this registry. Data comes from both Novalis and Gamma Knife institutions.
We have essentially created an interface to aggregate all surgical data as well as all radiation data and follow-up imaging, including treatment information and clinical parameters. This is a technology that Brainlab is going to offer next year to all Novalis customers who would like to utilize it. Key to making something like this work is, again, automation, because you will not have the time to manually enter data into a database or a registry. So all you really have to do within this new infrastructure is export a treatment plan. You also have an e-Form option, which I'll show you next, to enter the clinical parameters that cannot yet be automatically extracted from EMR-type systems. And then the data is there. The software does a lot of the data enrichment, extraction, and aggregation: everything from extracting dose to any risk organ or any tumor, to cross-referencing data, to automatic tumor detection, which happens either at treatment planning or in follow-up imaging.
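The split between automatically extracted plan data and manually entered e-Form parameters can be sketched as follows; every field name here is a hypothetical illustration, not the actual registry schema:

```python
# Hypothetical sketch of registry auto-population: fields extracted from an
# exported plan vs. clinical fields the e-Form must supply. All field names
# are illustrative assumptions.

PLAN_FIELDS = {"prescription_gy", "fractions", "target_volumes_cc", "oar_doses"}
EFORM_FIELDS = {"primary_histology", "kps", "systemic_therapy"}

def build_record(plan_export, eform):
    """Merge auto-extracted plan data with manually entered clinical data,
    and report which expected fields are still missing."""
    record = {k: plan_export[k] for k in PLAN_FIELDS if k in plan_export}
    record.update({k: eform[k] for k in EFORM_FIELDS if k in eform})
    missing = sorted((PLAN_FIELDS | EFORM_FIELDS) - record.keys())
    return record, missing
```

The point of the design is that the dosimetric half of the record costs the user nothing (it rides along with the plan export), and the e-Form only has to fill the gap the automation cannot reach.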
This now solves one of the challenges that you heard from Dr. Rahimian: as you treat patients with more and more tumors, figuring out which tumor is new and which was treated becomes extremely challenging. So we'll introduce an algorithm to do this automatically for you: when you add a new data set into the system, you will be able to see which targets are new and which were previously treated, and decide how to proceed with therapy. The e-Form infrastructure is the way to add clinical parameters. This is where you first establish a baseline, and then anything that isn't automatically extracted can be entered via this interface, so you can basically collect any parameter that you think is appropriate. Okay, so why should we care about this? Because not everyone's practice is the same. This was the first paper published with registry data, last year. For this particular publication, they only looked at brain metastases. This is the overall survival curve that you see over here, and this is the overall survival curve from the NCI database.
Patients who receive radiosurgical treatments generically in the United States do significantly worse than patients treated in these Gamma Knife and Novalis institutions in the United States. And I would always argue that if you have a 30% rate of death at 3 months, those patients shouldn't have received stereotactic radiosurgery. So monitoring outcomes not only becomes critical, it will also be the way that future practices achieve reimbursement: if you cannot show outcomes similar to a national benchmark, you may not actually get those treatments reimbursed. Again, I think a lot of this is the result of poor patient selection criteria for who should and shouldn't receive radiosurgery.
Within this interface you also get a tool that will completely transform the way tumor boards happen at your institution. You get automatic tumor monitoring: simply by adding data into this interface, scans are automatically fused and tumors are automatically recognized. You now have longitudinal volumetric monitoring of what is happening to your patient, so you can make better decisions about when you need to change therapy and for which tumor. You can also see how an individual patient performs against a similar patient population, either at your local institution, looking only at your own data, or, if you have access to the global registry, you can benchmark your outcome against a global outcome for that specific type of disease.
You can also get answers to better questions. One of you had a really good question earlier about why we still practice based on old data that doesn't necessarily apply today, and I would argue that RTOG-derived data isn't always the best in terms of outcomes. So you can ask this registry infrastructure a question: given the disease presentation, my primary and patient-specific parameters, and so on, how should I prescribe to maximize survival? And you get a number. This number also defines whether you need to add a margin or not. This is a different way of treating the patient, moving away from simply looking at a volume range and an arbitrary RTOG trial.
You also have the ability to aggregate your data to change how you approach systemic therapies. We asked the question of whether the probability of brain metastases proliferating in the brain clusters spatially, and as you can see here, it does. So if you do have to do systemic therapies, rather than targeting the whole brain, you can target the areas that have a higher probability of developing brain metastases. This should really change the way neurocognition is spared; arbitrary hippocampal sparing isn't going to be the way forward. And ultimately, and this is my last slide, you can now also QA your treatment plans clinically. Meaning that once you have a treatment plan, you can upload it into the registry space, and your global toxicities are evaluated in the context of reported adverse reactions.
So basically, what you see in yellow there would be a noncompliant plan, or a range of plans where adverse reactions have been reported, across all normal brain dose values. Why should we look at just V12, or just V10, or V8, or V5, and so on? You want to look at all relevant toxicity profiles in the normal brain and have a better understanding of whether your new plan, which is what you see there in blue, will have a safe profile or not.
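Conceptually, checking a plan's normal-brain profile against the reported-toxicity envelope might look like this; the limit values below are invented placeholders, not published dose constraints:

```python
# Conceptual sketch only: compare a plan's normal-brain V-dose values against
# a registry-derived envelope of reported adverse reactions. The limits are
# invented placeholders, not published constraints.

TOXICITY_ENVELOPE_CC = {5: 80.0, 8: 40.0, 10: 25.0, 12: 15.0}  # Gy -> cc

def flag_unsafe(plan_vx_cc):
    """Return the dose levels (Gy) at which the plan's normal-brain volume
    exceeds the envelope of previously reported toxicity."""
    return [gy for gy, limit in TOXICITY_ENVELOPE_CC.items()
            if plan_vx_cc.get(gy, 0.0) > limit]
```

A plan is evaluated across the whole profile at once, which is the point of the slide: rather than a single pass/fail on V12, every dose level with reported adverse reactions is checked.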
So, thank you for your time.
Before I get going, there was a good question, actually, we do have an answer regarding whether there is some validity or ongoing trials that are looking at local recurrence versus regional crosses. The trial is NCT02747303. It's ongoing right now. It's supposed to close by 2026. So as you can imagine it takes time for these kinds of trials to accrue. But hopefully in what I'll show you today, I'll convince you that there's a better way to answer some of these questions and maybe improve clinical practice. So, why do we automate? Typically, we automate to treat a patient faster. We automate to increase safety, so to try to minimize user errors, and we try to automate really as a surrogate of having to train everybody to be an expert in technology.
So far everything that we've done in the last five, six years with our products have been a very technical automation, and it is necessary, and we see errors in how technologies are being used or how protocols are being applied everywhere. This was a pretty interesting study, was a drug study that didn't make its endpoint. And when we evaluated what went wrong with it, there was a pretty high percentage of patients with major complications that didn't actually quite meet the study protocol criteria. And when you really analyze what has happened with those treatments that were sent to this study, the people that had very little experience with this particular treatment had the highest number of adverse effects and the worst overall survival local control numbers.
So the more you become an expert in a certain disease, the more you become an expert in a certain product, you do better. But as you start on a program, there's quite a steep learning curve. And there's the challenge with technology. Technology is always going to evolve much faster than you can ever keep up with it. Hence the need for automation, and automation really to look at what technical aspects can we have a software do better?
This applies to us too. Just to be a little mean with some Novalis Circle customers, we asked them to plan a few cases with brain metastases however they deem appropriate. Some use iPlan, some use other techniques, and cases with increasing complexity in terms of tumors, number of tumors and locations. And as you can expect, out of 10 people here or 9 people, actually, most of them did okay, but there were some outliers. One pretty bad and two with a lot of variability. So even the so proclaimed experts still have a lot of variability in what they can output using a technology for all sorts of reasons. Lack of training maybe sometimes be the cause, but most of the times, it's the lack of time.
So treatment planning, no matter how you look at it, it's a tradeoff. It's an optimization of some sorts. And ultimately, the time that you decide to put it into a treatment plan definitely impacts the quality of that treatment plan. So it's worth to look into, how can we automate technologically the steps that we do on a day to day basis in treatment planning? And for us, this meant that we have to create algorithms that are indication-specific. We don't give you a generic tool where you have to figure out, "Well, how do I use it in the brain to treat 50 tumors and how do I use it for prostate cancer?"
So the technology from Brainlab is always an indication-specific technology where we really look at what are the challenges for each disease. So for intracranial metastases, having the ability to do radiosurgery and do it safely for any number of tumors was critical. So you get the result that is perfect for radiosurgery, whether you have 3 targets or whether you have 15 targets. For primary tumors, again, we wanted to automate disease-specific solutions where you simply specify the tumor type that you want to treat and the software will give you an automatic plan for a pituitary or an acoustic or a meningioma. And the third product that we have available today is for spine. And again, here, the automation is looking at being able to treat any type of complexity of bone involvement for metastatic disease, and always give you the safe radiosurgical plan.
So these are easy things to do. There's a lot of technology behind. I'll focus today on what we've done so far to automate the technical components of brain metastases and what's also new with version 2.0. We do have already a few users with 2.0. We are in the process of upgrading all our existing customers to 2.0. So hopefully, you really like the ongoing automation. We have introduced a Monte Carlo algorithm as well. This will really help you with 3D QA, dynamic conformal arcs for low isodose values, have some calculation issues that you may have seen in QA. So if you want to flip to the Monte Carlo calculation, you won't have those issues anymore.
We have really provided more flexibility to the optimizer in terms of how you can set the prescription. So now the gradient index is also part of the objective function, very similar to what we utilize for our VMAT technologies. Keep in mind that brain mets continue to utilize inversely calculated apertures that are essentially dynamic conformal arcs. You can actually select the desired heterogeneity and also the desired freedom that you allow a tumor to receive. So you can choose to try to prescribe, let's say, to 99%, but you can give the algorithm a little bit of freedom to maybe go down to 98% for the more challenging cases. And again, this is very similar to what all the other products are trying to do with VMAT-type technologies. But you can do this simply by adjusting how you define your prescription.
The heterogeneity selection realistically is put in place for you to decide from what kind of studies are you deriving your clinical expertise? And you have to always question that. We have a lot of Gamma Knife data, and we have a lot of LINAC radiosurgery data. And the two cannot always be interpreted in a similar fashion. You have to really consider how the dose was really prescribed to derive a specific clinical outcome, and within a LINAC-based radiosurgery planning system, you can specify either an extremely homogeneous result or you can specify any level of heterogeneity that you desire, including a dose distribution that looks like a Gamma Knife dose distribution.
We also have an option to spare risk organs. But for the previous question in terms of how can you better control the normal brain, that's not the way to go. The algorithm itself has the ability to penalize more towards a normal tissue sparing so you have both functionalities within the new application. And more importantly, we have a new optimization algorithm for the trajectory selection. So you will continue to have the automation right now be template dependent, so you define a template that's being used, but you do not have to modify the table positions anymore. The new algorithm will do a significantly better job in utilizing what you give as a template and provide a dramatic improvement, actually statistically significant improvement in normal brain dose reduction to the previous 1.5 version.
We're also trying to help larger MLCs, and I'm showing you here the agility supports where you can indeed either utilize the Agility 5-millimeter MLC or the Millennium, like I've seen in Dr. Rahimian presentation, MLC for radiosurgery. But dynamic jaw tracking really helps with the dosimetry parameters that we always evaluate things like conformity indices and so on when you're trying to treat with a larger MLC. Okay, so if you do have a tumor where you do not like a CI value, you have the option now of treating the tumor with its own arcs. And that's the location where the collimator optimization plus dynamic jaw tracking really dramatically improves the results.
And you have here kind of a comparison to what we've always in the last decades used as the standard for radiosurgery, single fraction radiosurgery planning, which is the HD120. And, yes, these values look like they are different but look at the amplitude here. So realistically, with jaw tracking, you can make the Agility or the Millennium MLC look very, very similar in profiles to the HD120 if you do not have access to an HD120. Of course, we are going to continue to automate these kinds of dosimetric endpoints if you want for the next version. So I'll just give you a glimpse a little bit into what's coming up next. And realistically, with the next version of brain mets, what we'll do is have the ability for the software to learn from your practice.
So we can now create a local database that gets automatically populated the more you treat; all of this happens via a DICOM infrastructure. You can then see right away if somebody uses the software and creates a plan that deviates from what you typically do, and adjust that plan to bring it back in line with your normal practice. Again, these are dosimetric improvements; up to this point none of them are really linked to outcomes, but that's where I'm going. You'll also have better ways to evaluate whether a brain metastases plan is safe. From a radiosurgical perspective, looking only at global V12 in the setting of multiple targets is typically meaningless. So we're creating a visualization where you can see local V12, V10, whatever you choose to analyze, for a given metastasis.
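Purely as an illustration, a per-metastasis "local V12" could be sketched like this. The region definition (a fixed-radius neighborhood around the target centroid), the function name, and all parameters are my assumptions for the sketch, not Brainlab's actual algorithm:

```python
import numpy as np

def local_v12(dose, target_mask, voxel_cc, radius_vox=10.0, threshold=12.0):
    """Volume (cc) of normal tissue near ONE target receiving >= threshold Gy.

    dose        : 3-D dose grid in Gy
    target_mask : boolean mask of this metastasis
    voxel_cc    : volume of one voxel in cc
    radius_vox  : radius of the local neighborhood, in voxels (assumed)
    """
    # Local region: all voxels within radius_vox of the target's centroid.
    centroid = np.array(np.nonzero(target_mask)).mean(axis=1)
    idx = np.indices(dose.shape)
    dist = np.sqrt(((idx - centroid[:, None, None, None]) ** 2).sum(axis=0))
    region = dist <= radius_vox
    # Radionecrosis risk applies to normal brain only, so exclude the target.
    normal = region & ~target_mask
    return np.count_nonzero(normal & (dose >= threshold)) * voxel_cc
```

The same function evaluated with `threshold=10.0` or `threshold=8.0` would give the local V10 or V8 for that metastasis.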
So radionecrosis is an issue only where it occurs in normal brain, not within the target, and it has to be analyzed regionally. All our retrospective data comes from single-tumor information, so you should apply it regionally, not globally. There are also visualizations for when you have bridging dose between targets, like you see over here, so that you get combined V12 information. Okay, so you'll have a different way to analyze whether a prescription is safe, with information on local toxicity. We're also going to introduce a full, think of it as 4π, trajectory optimization. This starts with our new packing algorithm for selecting which tumors are treated with which arcs, but it will also automatically adjust the table positions to give you the ideal trajectories. And ultimately, this is the last, final step that will improve the dosimetry. You can already start to see that the treatment planning aspects, and definitely the automation of tumor plans, have pretty much been solved. We can slightly improve them, but the results are already very good. And you've already heard that if you use this technology judiciously, you get great clinical outcomes.
We also tried to streamline the commissioning process; you're all physicists, so I decided to add some of these slides. Realistically, we now have an automatic way to load the beam data you can generate from any third-party system and simply upload it into our Beam Profile Editor. The goal of our physics beam data acquisition work is to provide you with standard or reference data for the majority of the measurements, so you only have to spot-check certain locations if you want, predominantly the small fields, and overall reduce the time you need to spend to get a system clinical. So, that's the technical world. Are we going to change clinical practice by ensuring we always have a consistently high-quality dosimetric output? As a wise person once said, I would challenge you that that's not going to be the case.
So realistically, the next generation of automation has to bring outcome measures back into the treatment planning process. How can we make that a reality? First, let's look at the needs: what has really changed in the last decade in the management of brain metastases? A decade ago, we were still using whole brain radiation. We were doing radiosurgery for one to a maximum of three targets, and medical therapy had very little role. Just a few years ago, radiosurgery started being applied to any number of tumors, and medical therapy again had very little value. But as you've seen in today's presentations, medical therapy is starting to play a much more significant role, and the question is really how it overlaps with other treatments such as radiosurgery.
Also new today is that whole brain radiation therapy is typically never utilized unless you have miliary disease. You have a myriad of medical therapy options out there, immunotherapy and targeted therapy, but the most critical point is that they are not replacing what we do today with stereotactic radiosurgery. This is a great study published at Yale that clearly shows that although these therapies are starting to work, they are not as durable as stereotactic radiosurgery. And the questions really change: how do we overlap the practice of the medical oncologist with the practice of the radiation oncologist and medical physicist?
We also know that it's extremely challenging to determine how long patients will live. A great study asked experts, neurosurgeons and RadOncs, to predict how long patients with brain metastases would live; that's the curve you see in blue versus the curve you see in yellow. So it's clear that we need better tools to assess when patients are failing therapy, because that's realistically why you see a difference between these curves. The tumor phenotype matters, but the size of the tumor matters, too. If you manage to treat a brain met when it's really, really small, you can have 100% local control, and in any world 100% local control is almost synonymous with cure. If you wait, these patients won't do so well, and it's worth looking into which patients present with larger tumors; this is always linked with the follow-up imaging we do for these patients. We tend to image patients with lung cancer more frequently than patients with breast cancer. So I think more aggressive surveillance imaging is necessary across multiple histologies.
If you get a tumor that is too large, this is where you need a multidisciplinary approach. Instead of surgery alone, it's much better for these patients to do a radiosurgical sterilization of the tumor and then resect it, and you will dramatically lower the rates of leptomeningeal disease, which in the long run also means you will lower the need for whole brain radiation therapy. And Brainlab provides this kind of multi-focal integration for the neurosurgeon together with the radiation oncologist, where you can essentially do the neoadjuvant radiosurgery in the morning and resect the tumor in the afternoon.
Okay, so what are the new challenges? We know that we need to treat tumors when they're small and be very aggressive about it. It also means we probably shouldn't add margins when tumors are very small. We know that if tumors are large, we need some multimodal focal therapy, usually a combination of radiosurgery and surgery or medical therapy. Medical therapy plays an increasing role, and we don't always know how it overlaps with radiation, so we need to monitor when we apply medical therapy, to what size tumors, and what the adverse reactions are. And we need better tools to assess when therapy is failing, and to detect that failure earlier.
So, let me convince you that the next generation of automation will actually be about bringing outcome measures into the whole treatment planning process. First, we have better ways to assess response to treatment. This is an MR technique, a vascular technique, where simply by repeating an MRI and feeding the two scans into our system, you can distinguish between tumor and treatment effect. We call this technology contrast clearance analysis. I'll show you an example of a patient with breast cancer who underwent surgery, with a perfusion scan performed at the end of whole brain radiation therapy. So this is post tumor resection, and the disease looks stable. Then the patient goes under surveillance imaging up to the point where something that looks like tumor comes back. So the question here is: what is it?
And if you have to keep waiting to see whether that's tumor, you change the outlook for this patient. At this point you can use what you traditionally have: perfusion doesn't quite show that this is 100% tumor, but contrast clearance analysis does. So you now have a clear indication that the patient has failed therapy at this point, and you can re-treat. In this case the patient goes back for surgery, and you can see at the end of surgery that the patient is stable. Okay, so how can we bring in outcome measures? I think registries are really the way forward. We started an initiative four years ago with the AANS, and ASTRO joined as well. Thirty hospitals have been prospectively collecting data, and we currently have over 4,000 patients in this registry. Data comes from both Novalis and Gamma Knife institutions.
We have created essentially an interface to aggregate all surgical data as well as all radiation data and follow-up imaging, including treatment information and clinical parameters. This is a technology Brainlab is going to offer next year to all Novalis customers who would like to utilize it. Key to making something like this work is, again, automation, because you will not have the time to manually enter data into a database or a registry. So all you really have to do within this new infrastructure is export a treatment plan. There is also an e-Form option, which I'll show you next, for entering clinical parameters that cannot today be automatically extracted from EMR-type systems. And then the data is there. The software does a lot of the data enrichment, extraction, and aggregation: everything from extracting dose to any organ at risk or tumor, to referencing data together, to automatic tumor detection, which happens either at treatment planning or in follow-up imaging.
And this solves one of the challenges you've heard from Dr. Rahimian: as you treat patients with more and more tumors, figuring out which tumor is new versus previously treated becomes extremely challenging. So we'll introduce an algorithm to do this automatically for you; when you add a new data set into the system, you will be able to see which targets are new versus previously treated, and decide how to proceed with therapy. The e-Form infrastructure is the way to add clinical parameters. You first establish a baseline, and then anything that isn't automatically extracted can be defined via this interface, collecting any parameter you think is appropriate. Okay, so why should we care about this? Because not everyone's practice is the same. The first paper using registry data was published last year, and for that publication they looked only at brain metastases. This is the overall survival curve you see over here, and this is the overall survival curve from the NCI database.
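A toy version of such new-versus-treated matching, assuming lesions have already been co-registered into a common coordinate space. The function name, the centroid inputs, and the 5 mm tolerance are all illustrative assumptions, not the actual algorithm described in the talk:

```python
import numpy as np

def classify_targets(followup_centroids, treated_centroids, tol_mm=5.0):
    """Label each follow-up lesion as 'treated' or 'new' by nearest-centroid
    matching (a simplified stand-in for full image-based matching).

    followup_centroids : list of (x, y, z) in mm for lesions on the new scan
    treated_centroids  : list of (x, y, z) in mm for previously treated lesions
    """
    labels = []
    for c in followup_centroids:
        # A lesion within tol_mm of any treated centroid is assumed to be
        # the same (treated) lesion; otherwise it is flagged as new.
        if treated_centroids and min(
            np.linalg.norm(np.array(c) - np.array(t)) for t in treated_centroids
        ) <= tol_mm:
            labels.append("treated")
        else:
            labels.append("new")
    return labels
```

A real system would match on deformably registered image volumes rather than bare centroids, but the decision it feeds, which lesions need fresh treatment, is the same.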
Patients who receive radiosurgical treatments generically in the United States do significantly worse than patients treated in these Gamma Knife and Novalis institutions in the United States. And I would argue that with a 30% rate of death at 3 months, those patients shouldn't have received stereotactic radiosurgery. So monitoring outcomes not only becomes critical, it will also be the way that future practices achieve reimbursement: if you cannot show outcomes similar to a national benchmark, you may not be able to get those treatments reimbursed. And, again, I think a lot of this is the result of poor patient selection criteria for who should and shouldn't receive radiosurgery.
Within this interface you also get a tool that will completely transform the way tumor boards happen at your institution. You get automatic tumor monitoring simply by adding data into this interface: the data is automatically fused and tumors are automatically recognized. You now have longitudinal volumetric monitoring of what is happening to your patient, so you can make better decisions about when to change therapy and for which tumor. You can also see how an individual patient performs against a similar patient population, either at your local institution, looking only at your own data, or, if you have access to the global registry, benchmarking your outcome against a global outcome for that specific type of disease.
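Longitudinal volumetric monitoring might flag a lesion for tumor-board review with logic along these lines. The 20% nadir-relative threshold is purely illustrative (not a clinical criterion from the talk), as are the function name and inputs:

```python
def flag_for_review(volumes_cc, rel_increase=0.2):
    """Return True when the latest measured lesion volume exceeds the
    prior nadir by rel_increase (assumed illustrative threshold).

    volumes_cc : chronologically ordered lesion volumes in cc
    """
    if len(volumes_cc) < 2:
        return False  # need at least one prior measurement to compare against
    nadir = min(volumes_cc[:-1])
    return volumes_cc[-1] >= nadir * (1.0 + rel_increase)
```

The point is the workflow, not the threshold: once volumes are captured automatically at every follow-up, a rule like this can surface the specific tumor that is driving a change-of-therapy decision.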
You can also get answers to better questions. One of you had a really good question earlier about why we still practice based on old data that doesn't necessarily apply today, and I would argue that RTOG-derived data isn't necessarily the best in terms of outcomes. So you can ask this registry infrastructure: given the disease presentation, the primary, the patient-specific parameters, and so on, how should I prescribe to maximize survival? And you get a number. This number also defines whether you need to add a margin or not. That is a different way of treating this patient, moving away from simply looking at a volume range and an arbitrary RTOG trial.
You also have the ability to aggregate your data to change how you do systemic therapies. We asked whether the probability of brain metastases proliferating in the brain clusters spatially, and as you can see here, it does. So if you do have to treat beyond the individual lesions, rather than targeting the whole brain, you can target the areas that have a higher probability of developing brain metastases. This should really change the way neurocognition is spared; arbitrary hippocampal sparing alone isn't going to be the way forward. And ultimately, and this is my last slide, you can now also clinically QA your treatment plans. Once you have a treatment plan, you can upload it into the registry space, and your global toxicities are evaluated in the context of reported adverse reactions.
Okay, so basically, what you see in yellow there would be a noncompliant plan, or range of plans, where adverse reactions have been reported across all normal brain dose values. Why should we look at just V12, or just V10, V8, or V5? You want to look at all relevant toxicity profiles in the normal brain and have a better understanding of whether your new plan, which is what you see there in blue, will have a safe profile or not.
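The multi-dose-level safety check could be sketched as follows. The dictionary shapes, the function name, and the idea of per-dose-level thresholds derived from registry plans with reported adverse events are my assumptions about how such a comparison might be structured:

```python
def plan_within_band(plan_vx, adverse_band):
    """Check a plan's normal-brain dose-volume profile against an
    adverse-reaction band at every dose level, not just a V12 cutoff.

    plan_vx      : {dose_Gy: normal-brain volume (cc) receiving >= dose_Gy}
    adverse_band : {dose_Gy: volume (cc) above which registry plans have
                   reported adverse reactions} (assumed structure)
    """
    # Acceptable only if the plan stays below the band at EVERY dose level.
    return all(plan_vx[d] < adverse_band[d]
               for d in plan_vx if d in adverse_band)
```

A plan that looks fine at V12 alone can still cross the band at V5 or V10; checking every level is exactly the "all relevant toxicity profiles" idea in the slide.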
So, thank you for your time.