Transcript
Bogdan: Hello, everyone, and welcome to a new Novalis Circle webinar. My name is Bogdan Valcu and I'm the director of Novalis Circle. Today I have joining us Dr. Jason Sheehan from the University of Virginia, and he will cover a topic pertaining to establishing national registries, particularly focusing on a prospective initiative via NeuroPoint Alliance to collect data from institutions in the United States performing stereotactic radiosurgery. He will discuss today what we can accomplish with such large data sets once acquired in the context of brain metastasis. Dr. Sheehan is a professor of neurological surgery at UVA, he's also the chair of the Gamma Knife program there, and we had the pleasure to work with him over the last five years on this initiative via NeuroPoint Alliance on establishing this radiosurgery registry in the United States. As always, we provide credits for our webinars, and for those of you interested in obtaining credits, please email us at info@novaliscircle.org after completion of the webinar and we can provide you with details on how to obtain your credits. And as always, don't forget to sign up for our upcoming events. The next webinar will focus on the utilization of spine SRS with Dr. Alongi from Verona. So, that said, I would like to turn it over to Dr. Sheehan.

Dr. Sheehan: So, I want to thank Novalis Circle and Brainlab for inviting me to speak today on the NeuroPoint Alliance national registry for stereotactic radiosurgery. I'm going to be highlighting the registry, and I'm also going to be speaking about the implications of registry-based research and big data for practice improvement as well as for advancing the medical science around radiosurgery. So, radiosurgery is certainly an established discipline; more than 2 million patients have been treated with stereotactic radiosurgery and stereotactic body radiotherapy throughout the world. It is a big business, with major medical centers purchasing equipment, software, and devices as well as building the infrastructure to house those devices and the personnel required to deliver care for patients treated on radiosurgery and stereotactic body radiotherapy units.

It's very prevalent; there are more than 800 radiosurgical centers throughout the world. There are fellowships and courses and meetings and societies that help to advance the field of radiosurgery. The radiosurgery evidence, though, has largely been built around single-center retrospective studies. Depicted here is a picture of the 35 to 40 years' worth of medical records and imaging files that we have paper copies of, accumulated over the last 35 years at the University of Virginia. Obviously, we've largely switched to an electronic database and a registry-based initiative as we accrue new patients in our radiosurgical practice. But the evidence that we rely upon to guide our treatment algorithms is generally Level III evidence or lower.

As we think a little bit about the history of radiosurgery, I put forth a paper here by a very dear colleague, Dr. Steiner. Dr. Steiner published the first successful report of an arteriovenous malformation treated with radiosurgery using the Gamma Knife in this case report. It was really just one patient, and as we think about case reports in the current era, they're almost relegated to case report journals, and it would be hard to really gather the information that would shape the field as this particular study did. And I don't want to single out Dr. Steiner, so here are two papers that I wrote earlier in my career. One is from my time at the University of Pittsburgh on intracranial hemangiopericytomas, where the number of patients was just 14, and another is on Cushing's disease from my time at the University of Virginia, with 43 patients.

Again, this is hardly strong evidence; these are retrospective studies with clearly limited statistical power. So, in modern-day medicine, we typically advance the field by developing a hypothesis and gathering clinical observations. In some instances, those may be retrospective ones, things that have already happened. We gather and analyze those data against the null hypothesis, testing for statistical significance. We then slowly develop enough data to generate guidelines and, in the case of randomized controlled trials, Class I evidence. It's really our adherence to these guidelines and to the Class I evidence that is oftentimes what we're measured by in terms of our performance as clinicians, that is, our adherence to best practices.

But how do we best define best practices in radiosurgery? Well, we could use a randomized clinical trial. Randomization first appeared in the medical literature thanks to Charles Sanders Peirce and was first used in the field of psychology, and evidence-based practice is now largely built upon randomized controlled trials as the gold standard for investigation. These studies are considered Level I or occasionally Level II evidence. And as I've shown you earlier, some of these works, including the one by Dr. Steiner, would be considered far lower in terms of the level of evidence, although arguably his observations on a case of one have dramatically shaped the field. What are some of the disadvantages to randomized controlled trials?

Well, there are many: clinical equipoise, whether or not one should accrue a patient to a randomized controlled trial, and whether patients even wish to be randomized to a group. Crossover, do we allow for that? And do we analyze based upon the intent to treat or the actual treatment that was given to a patient? Cost is another major disadvantage; RCTs on average cost between $45,000 and $85,000 per patient in the United States. There may be problems with ethics and with time, and we know that all too well in the current COVID era: how do we come up with a randomized controlled trial to test a drug or a vaccine in time to hopefully save thousands of lives for people that may be infected or soon to be infected with the COVID-19 virus? Are RCTs the best way to do that?

Real-world applicability: oftentimes, those bumpers, those guardrails that are in place with a randomized controlled trial are not necessarily reflective of what our patients deal with on a day-to-day basis in their lives. And then there are limits to the applicability of an RCT in a surgical arena, particularly in a neurosurgical arena, where the incidence of some disease states is actually quite low and the time to measure the endpoint is quite long. So, here's an example of the cost of performing drug RCTs by major pharmaceutical companies. And as you can see, they've spent billions of dollars testing, in some instances, one or just a few drugs in an RCT. Is it even feasible in the arena of neurosurgery, radiation oncology, or medical physics to spend billions of dollars on an RCT? For the most part, no, it is not.

So, let's look at a practical case: how would we develop ideal randomized controlled trials to test the different treatment algorithms for patients with brain metastases? Well, we would look at the treatment options, which include surgery, surgery and whole-brain radiation therapy, surgery and radiosurgery, surgery and stereotactic radiotherapy, stereotactic radiotherapy alone, SRS alone, and whole-brain radiation therapy alone; as you can see, there are various permutations. And you've got confounding factors; how do you mitigate the effects of those? And then, as you can see, you've got at least half a dozen major endpoints that one would want to study: survival, quality of life, patient satisfaction, costs, and neurocognitive function, just to name a few.

So, if we just look at that grid of the treatment options and we look at the comparative studies that would be required to define an optimal treatment algorithm, you'd be looking at nearly 72 different comparisons. So, let's look at the practicalities of that. Well, here's one pivotal study published in "The Lancet Oncology" by a good friend, Eric Chang. And you can see that at MD Anderson, it took nearly 7 years to accrue 90 patients who were eligible for this trial, which helped to shape the idea that neurocognitive preservation was better achieved with radiosurgery rather than the addition of whole-brain radiation therapy for patients with limited numbers of brain metastases, 1 to 3. Let's take a look at another example. Here's a European study that took 11 years to accrue 359 patients to look at whether whole-brain radiation therapy should be added, or observation alone used, after radiosurgery or resection for patients with limited numbers of brain metastases. Eleven years and multiple European centers; how many RCTs can one do within a given practice lifetime as a neurosurgeon or radiation oncologist?
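
To get a feel for why the comparison count explodes, here is a rough back-of-the-envelope sketch in Python. The arm and endpoint lists are taken from the talk; the talk's figure of roughly 72 comparisons depends on its own counting convention, so the total printed below is illustrative only.

```python
# Rough combinatorics behind the "dozens of comparisons" problem.
# The exact count depends on how arms and endpoints are paired, so treat
# the total as illustrative rather than a reproduction of the talk's figure.
from itertools import combinations

arms = [
    "surgery",
    "surgery + WBRT",
    "surgery + SRS",
    "surgery + stereotactic RT",
    "stereotactic RT alone",
    "SRS alone",
    "WBRT alone",
]
endpoints = ["survival", "quality of life", "patient satisfaction", "cost", "neurocognitive function"]

pairwise = list(combinations(arms, 2))   # head-to-head arm comparisons
total = len(pairwise) * len(endpoints)   # one analysis per endpoint per pair

print(f"{len(pairwise)} pairwise arm comparisons x {len(endpoints)} endpoints = {total} analyses")
```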

So, as we look at this, the use of randomized controlled trials to build the literature and to help define how the radiosurgical field should evolve is really getting quite complicated as we look at patients with single metastases, 1 to 3, 3 to 10, or greater than 10. What happens when new technologies are put into the arena and may invalidate older technologies that were less robust or less sophisticated? What about permutations, which I didn't go into in that grid, of immunotherapy and targeted therapies? How do we factor those in? How do we study for those? And are we really relegated to the paradigm of randomized controlled trials to advance the clinical field meaningfully? I would submit to you, no. At the same time, as we began to look at registry-based research, the field of big data in neurosurgery began to grow exponentially. Many of the common topics for which big data has been used have been in vascular neurosurgery or neuro-oncology. And you can think about the utility of databases such as the National Inpatient Sample, which has been used to stratify, for instance, patients who have small versus large acoustic neuromas and to study that better.

We don't yet have a specific national neurosurgery database, but the National Inpatient Sample and other registries that are out there do help to accrue some patients who are getting cancer treatments, radiosurgical treatments, and neurosurgical treatments, from which we can glean some information. And as you can see, over the last decade there's been a gradual increase in the amount of big data research published in the "Journal of Neurosurgery," in "Neurosurgery," and in "World Neurosurgery," just to name a few of the leading journals that have seen an influx of this type of publication. So, registry-based research really provides a unique opportunity to study patients in real-world conditions. It's generalizable, applicable, available, and relatively inexpensive, and it can be easily integrated into our practice.

The work by Dr. Yamamoto and colleagues that was published in "Lancet Oncology" was really an observational registry-based platform by which they published a very important paper looking at patients with multiple brain metastases treated with radiosurgery. And there's increasing recognition on the part of national healthcare entities of the importance and value of registry-based research, and payers also have begun to note the value of registry-based research as a way of adding to the evidence and also as a way of improving cost-effectiveness. So, as we think about the sustainability of health care, the Institute of Medicine and others have identified that perhaps as much as 50% of what we do is relatively wasteful spending, or not spending that would, in the long run, help to benefit patients appreciably.

How do we become more data-driven? How do we look at the quality of the evidence and shape our practices in a way that is really based upon real-world evidence? So, one of the things that we helped launch was the Radiosurgical Registry, a national registry that is under the auspices of NeuroPoint Alliance and supported in part through an unrestricted grant from Brainlab. That registry encompasses more than 20 leading centers throughout the country that have begun to accrue patients in a meaningful fashion and prospectively upload those patients into a cloud-based registry platform. As we think about what the goals of the registry were nearly five-plus years ago, we really wanted to gather and collect medical data that would evaluate the effectiveness of treatments and support clinical decisions.

This data would be gathered throughout the U.S. but hopefully could be generalizable to our colleagues in other countries. It could be used to establish national benchmarks for the quality of radiosurgical procedures but also allow individual practices or groups to analyze their own outcomes for radiosurgery treatment. And potentially, it could be used to evaluate the cost-effectiveness of radiosurgical procedures. So, the initiative as originally envisioned was for NeuroPoint Alliance, which is a not-for-profit, registry-based entity of the American Association of Neurological Surgeons, to create a registry with a data dictionary of common terms that would be followed and used across all sites, and to have about 30 participating sites that would broadly accrue patients as a function of time.

And this registry-based platform could be integrated into typical neurosurgical practice, and that data could then, with IRB approval, be analyzed in a fashion that would help to guide the field; individual sites could also have access to their data at any point to help guide their own individual practices. As we think about the structure of this, the American Association of Neurological Surgeons formed the NeuroPoint Alliance, with corporate liaisons and a scientific research committee, and we know that interested parties, including governmental ones as well as payers, are very interested in the data that would be acquired in a registry such as the radiosurgical one.

There is also a board of principal investigators representative of the various sites accruing patients to the registry, and participation can be integrated into practice-based improvement, for which the ABNS would also recognize the participation. Individual neurosurgeons or radiation oncologists upload the data from the various sites, and that data goes through a cloud-based system into the NeuroPoint Alliance registry. In a more graphic representation of this, clinical data is uploaded as a clinical assessment before the radiosurgery is delivered, along with patient outcomes that are standardized for quality of life, clinical, and radiologic endpoints. Imaging data is then uploaded at the time of treatment and at every follow-up point after a patient is accrued.

Some of the data captured in the imaging that can be scrubbed and extracted includes the tumor volumes, the prescription doses, and the various CTVs. We can then integrate the clinical and imaging data into reports that can be uploaded and analyzed by scientific teams, either across the entire registry-based enterprise or for an individual site or even an individual physician's practice. One of the things that's important with a registry is to define your data dictionary: what are the fields that are important? Unlike a randomized controlled trial, where you may have tremendous resources that include multiple clinical research coordinators and others, this was not meant to be an overwhelmingly time-consuming process.

So, collecting the data while minimizing manual data entry was critical to ensuring participation and completeness of data entry. We were really striving, on any given encounter, to have no more than 25 manual data entry fields, with most of the rest automated in the way that I've shown you, in part through the uploading of the actual imaging. It's also critical to make sure that there is reporting uniformity so that the terms and data collected from site to site are all consistent. Here's another example of the power of the registry to depict data as a function of time: to look at, for instance, the changes in tumor volume for a patient, the changes in other outcome metrics that can be displayed, as well as your site's own practice patterns in terms of volumes, responses, and where patients may be hailing from, all of which would allow for practice improvement and, hopefully, broad participation.
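
As a concrete illustration of the data dictionary idea and the target of no more than 25 manual fields per encounter, here is a minimal sketch in Python. The field names, ranges, and auto/manual split are hypothetical, not the actual NeuroPoint Alliance dictionary.

```python
# A minimal sketch of a registry data dictionary. Each field records whether
# it is auto-extracted from the uploaded plan/imaging or entered manually,
# plus an allowed range for range checking. Field names and limits are
# illustrative only.
DATA_DICTIONARY = {
    "karnofsky_performance_status": {"source": "manual", "range": (0, 100)},
    "prescription_dose_gy":         {"source": "auto",   "range": (1.0, 30.0)},
    "tumor_volume_cc":              {"source": "auto",   "range": (0.01, 100.0)},
    "number_of_fractions":          {"source": "auto",   "range": (1, 10)},
    "systemic_disease_controlled":  {"source": "manual", "range": None},
    "eq5d_index":                   {"source": "manual", "range": (-0.5, 1.0)},
}

MAX_MANUAL_FIELDS_PER_ENCOUNTER = 25  # design target mentioned in the talk

manual_fields = [name for name, spec in DATA_DICTIONARY.items() if spec["source"] == "manual"]
assert len(manual_fields) <= MAX_MANUAL_FIELDS_PER_ENCOUNTER
print(f"{len(manual_fields)} manual fields per encounter (limit {MAX_MANUAL_FIELDS_PER_ENCOUNTER})")
```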

So, to date, we've been able to accrue nearly 4,000 individual patients into the Radiosurgical Registry. There have been nearly twice that many events, that is, the number of encounters for those patients. Obviously, patients have an encounter at the time of radiosurgery but then subsequent encounters thereafter as part of follow-up. We also accrue information on surgical resections and whole-brain radiation therapy. So, here's an example across the various sites, with some sites being newer and others older, and you can see the participation across multiple sites in terms of radiosurgical data, resection data, and follow-up data depicted here. By and large, we are collecting across a number of different radiosurgical indications, including patients with metastatic lung cancer, metastatic breast cancer, melanomas, different types of vascular lesions, and other types of metastatic indications.

You can see that, by and large, as you'd expect, brain metastases are the predominant indication for patients treated with radiosurgery who are enrolled in the radiosurgical registry, as I think is typical of the trends in the U.S. for radiosurgical treatments. The platforms have largely been Gamma Knife and LINAC-based, so we're accruing across different devices, which improves the generalizability of the data that we're acquiring. So, let me just summarize a little bit of the descriptive analysis of some of the 411 brain metastases patients that we have meaningful follow-up on in our registry today. You can see that this is a fairly even split in terms of men and women, and that the average age was about 64 at the time of radiosurgery, although this ranged from 27 to 90 years. You can see a broad distribution of age groups, a broad distribution of ethnicities and races, and what you might expect in terms of BMI for patients within the U.S.

In terms of Karnofsky Performance Status, for the most part the brain metastases patients had a KPS of 70 or above; there were a few patients with a somewhat lower KPS, but by and large, that was less than 5%. Most patients had a KPS of 70, 80, 90, or 100. And EQ-5D, which is a very reproducible, very generalizable, and very well-published quality of life endpoint, is being tracked for all these patients in a prospective fashion. We then looked at the baseline tumor characteristics of these radiosurgical patients. For the most part, the patients' predominant type of cancer was lung, followed by breast, and then gastrointestinal, but there was a group of patients that had other types, including melanoma. We also collected cancer subtypes: so, for instance, estrogen, progesterone, and HER2/neu receptor status, data on ALK, EGFR, and KRAS mutations in lung cancer, and BRAF mutations in melanoma.

Really, the locations of the brain metastases were distributed throughout the various neuroanatomical regions classically involved with brain metastases. Based upon the overall size and blood flow distribution, you'd expect them to be prominent in the frontal, occipital, parietal, and temporal regions. And that's exactly what we saw, again, a very representative cohort of patients being accrued into the radiosurgical registry. Very few of the patients had intralesional hemorrhage, and we did note whether or not they had systemic disease control and primary tumor control within the radiosurgical data entry at the time of the initial encounter.

So, as we look at using standard analytic methods for these 411 patients, as you can see, our Kaplan-Meier plot of overall survival from the time of initial radiosurgery is very representative of what you would expect from a randomized controlled trial, with survivals of about 12 to 18 months for some patients with higher performance status and, unfortunately, lower survival for those with poor performance status, as indicated here by the RPA class. You can see that we've, in essence, shown that our data is meaningfully representative of what has been published in the past, and that RPA Class 1 patients survive longer than RPA Class 2 and RPA Class 3 patients; that's depicted in this Kaplan-Meier plot, again achieving statistical significance between the RPA classes.
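
For readers who want to reproduce this kind of analysis on their own registry export, here is a minimal sketch of a Kaplan-Meier analysis stratified by RPA class with a log-rank test, using the lifelines library. The CSV file and column names are hypothetical placeholders, not the registry's actual export schema.

```python
# A minimal survival-analysis sketch: Kaplan-Meier curves by RPA class plus a
# log-rank test across classes, analogous to the plot described in the talk.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("registry_export.csv")  # hypothetical export, one row per patient

kmf = KaplanMeierFitter()
ax = None
for rpa_class, grp in df.groupby("rpa_class"):
    kmf.fit(grp["survival_months"], event_observed=grp["deceased"],
            label=f"RPA class {rpa_class}")
    ax = kmf.plot_survival_function(ax=ax)  # overlay each stratum on one axis

# Test for a survival difference across the RPA strata.
result = multivariate_logrank_test(df["survival_months"], df["rpa_class"], df["deceased"])
print(f"log-rank p-value across RPA classes: {result.p_value:.4f}")
```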

As we look at the predictors of overall survival, systemic disease control was a major and statistically significant predictor. Here, the p-value was 0.004 for well-controlled systemic disease versus poorly controlled. And again, RPA class was a major predictor of survival, with RPA Class 1 patients doing the best in terms of overall survival. So, I'd like to pivot and talk a little bit about some collaboration that we're doing. As I've noted, with every encounter there is some imaging that's uploaded, and the ability to use radiomics to study the data that's been acquired in the registry is not something that neurosurgeons by themselves or radiation oncologists by themselves are accustomed to doing, so we've reached out and formed a collaboration with colleagues in Hamburg.

These three clinicians shown here are helping to build a radiomics platform that will, in essence, be able to analyze the data that we've acquired as part of the radiosurgical registry to look at tissue responses such as pseudoprogression, radiation-induced changes, and radionecrosis. And long term, what we'd like to see is whether we can detect and model which patients might develop recurrence and where that recurrence may occur, for patients as a whole, and then even personalize that based upon specific patient attributes. For instance, if someone has metastatic melanoma with a BRAF mutation, has three brain metastases, and is 52 years old, how likely is it that that patient will have a new brain metastasis at 3, 6, 9, and 12 months? And where might that metastasis form, if it's going to form? Can we use artificial intelligence and radiomics, based upon the thousands of imaging studies that we've acquired, to better study and define this? I think we can, and our colleagues in Hamburg are helping us try to do that.
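
As a toy illustration of the kind of predictive modeling described here, the sketch below trains a classifier on tabular clinical attributes to estimate the chance of a new brain metastasis within 12 months. It uses hypothetical column names, omits the radiomic image features the Hamburg group would add, and is not their actual pipeline.

```python
# A toy sketch of outcome prediction from registry-style tabular features.
# Column names are hypothetical; radiomic image features are intentionally
# omitted and would be concatenated to X in a fuller pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("registry_export.csv")  # hypothetical de-identified export

features = ["age", "number_of_mets", "braf_mutation", "systemic_disease_controlled"]
X = pd.get_dummies(df[features])          # encode categorical/boolean attributes
y = df["new_met_within_12_months"]        # hypothetical binary outcome label

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```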

So, what Brainlab has done is to create a sandbox, if you will, that allows the Hamburg team to look at the de-identified data and to build these predictive and analytical models that will allow us to use deep learning and radiomic tools to better extract the scientific information that's part of the radiosurgical registry and to move the field forward in a way that randomized controlled trials would not be as well equipped to do. So, I'd just like to conclude by saying that I think the NeuroPoint Alliance registry has already demonstrated that it can capture patient characteristics, processes, and outcomes that are reflective of contemporary randomized controlled trials and other types of retrospective studies.

With the power of the registry data, we'll be able to shape and improve the quality of outcomes for patients who are getting radiosurgery; it will allow closing of the gap in competencies and more rapid practice improvement for clinicians, myself included, and it will provide high-quality research data that can be extracted for standard analytical purposes as well as for radiomics and artificial intelligence-based analysis of the radiosurgical data. And we can all help to accrue patients in a way that is more amenable to typical clinical practice and certainly more timely and less expensive than a randomized controlled trial alone would be. So with that, I'd just like to conclude by thanking the audience for listening to my talk; I hope that you've come to understand the value of radiosurgery registry-based studies, and maybe you can get involved in the radiosurgical registry as a participating site in the years to come. Thank you.

Bogdan: Great, and thank you very much, Dr. Sheehan, for your informative presentation. Brainlab has been privileged to work with NeuroPoint Alliance over the last five years, and we see tremendous value not only in sponsoring such initiatives but also in working with leading institutions in the United States toward furthering the field of stereotactic radiosurgery. Before we go to questions, I would like to briefly introduce you to our Quentry registry infrastructure. Originally, we developed this product for NeuroPoint Alliance, but now that the technology has been predominantly migrated to the cloud, Brainlab will offer the solution, starting this year, to all our customers interested in either creating their own repositories or joining a greater international registry.

The Brainlab registry solution is, first of all, an efficient cloud-based platform to upload, enrich, store, and analyze clinical parameters, imaging data, and treatment-related information. The Quentry registry product has been developed to be technology agnostic and to support any PACS storage, treatment device, and treatment planning system. Submitting data into the registry takes just a few minutes, and the workflow has been designed to align with the steps that you perform in the clinical routine so that enrolling patients comes naturally. The time efficiency in data collection comes from heavy automation in parameter extraction, thanks to algorithms that Brainlab has migrated into the cloud that segment normal anatomy, fuse data, detect tumor locations, extract dosimetric parameters, and perform many other tasks. To give you a glimpse into the product, I would like to introduce you to Joel Fuchs to show you the workflow of entering a patient into the registry and visualizing some treatment statistics. Joel?

Joel: Hello, I'm Joel Fuchs, regional marketing manager at Brainlab. Today I'm going to take you through the process of entering a new patient and a multiple brain mets SRS case in Quentry. We'll then follow up on our patient using SmartBrush and Quentry to document patient outcomes, including tumor response. We'll start by exporting the patient's approved SRS treatment plan from our planning system. The plan, along with images and objects, is selected and then exported to Quentry. Next, we'll log into Quentry, where we'll find that the treatment plan we've exported appears in Quentry's tasks list as a new event. The tasks list is organized by patient and displays all the forms which need to be completed for the selected patient.

To enroll our patient, we'll fill out the baseline and disease forms and then complete forms related to the patient's SRS treatment. The baseline form captures a few key demographics, comorbidities, functional status, patient-reported quality of life, and other related information. The disease classification form captures disease properties and diagnosis-related details. SRS treatment delivery and post-treatment information is entered in the treatment details form. The lesion definition form is used to identify target volumes and to document fractionation, prescription dose, expansion, and target location.

Quentry may pre-fill values based on automatic extraction from the uploaded treatment plan. It will also automatically suggest appropriate anatomical location labels for targets in the plan. Once our tasks have been completed, they can be submitted to the registry and cleared off of the tasks list. Our patient is now fully enrolled in the registry, and the patient's data will begin to appear in the Quentry analytics. We'll begin entering the follow-up for our patient by identifying the current volume of prior or new tumors using the patient's follow-up scan and Elements SmartBrush. Using the brush tool, we'll outline the tumor on two planes. This will create a tumor object with volumetric properties which will be added to the registry automatically. Outline additional tumors as needed, and then export to Quentry.

In Quentry, we now see a new follow-up event task for our patient. To complete this task, we'll need to fill out the follow-up details and lesion definition forms. "Follow-Up Details" documents changes to the patient's condition, functional status, and quality of life. Neurological exam observations are recorded along with toxicities, medications, new treatments, and next follow-up scheduling. The lesion definition form is used in follow-up to enter some details about the tumor objects we created using SmartBrush and to answer a few related questions. Quentry will automatically detect the location of each tumor object and suggest the correct anatomical labeling. Now that the follow-up details and lesion definition forms have been completed for this follow-up event, we can submit the event for inclusion in the registry. Once we confirm submission, the follow-up event is cleared off of the tasks list and the event's data will appear in Quentry analytics.

Quentry analytics provides insights into your patient population with a selection of customizable graphs which can be pinned to your dashboard. Data filters and layers can be added before pinning your graph, providing quick access to key information. Your data can also be exported on demand for offline analysis. When accessing your registry patients in Quentry, you can update forms, manage and organize accrued data, and view images and structures. With a single click, you can also view a snapshot of your patient's registry-based outcomes using the patient dashboard. The dashboard displays a timeline of reported events for the selected patient and graphs of key outcome measures. Thank you for taking time today to learn about registries and Quentry.
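
Since the demo mentions exporting data on demand for offline analysis, here is a small sketch of the kind of per-patient look one could take at an exported follow-up table, plotting a tumor volume trajectory over time. The file name, patient ID, and column names are hypothetical placeholders.

```python
# A small offline-analysis sketch on a hypothetical follow-up export: plot one
# patient's tumor volume across follow-up events, similar in spirit to the
# per-patient dashboard described above.
import pandas as pd
import matplotlib.pyplot as plt

events = pd.read_csv("quentry_followup_export.csv")  # hypothetical export
patient = events[events["patient_id"] == "REG-000123"].sort_values("months_since_srs")

plt.plot(patient["months_since_srs"], patient["tumor_volume_cc"], marker="o")
plt.xlabel("Months since SRS")
plt.ylabel("Tumor volume (cc)")
plt.title("Tumor volume response over follow-up")
plt.show()
```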

Bogdan: All right, thank you very much, Joel, and if I can bring Dr. Sheehan back online, perhaps we can answer some questions. All right, Dr. Sheehan, I'll start with a question that we typically get a lot from our users, how can others join the NeuroPoint Alliance registry? And since we have an international audience here, do you have any plans, does AANS have any plans to take this initiative to an international audience?

Dr. Sheehan: Certainly. Well, there is the ability to expand the registry within the U.S. through the NeuroPoint Alliance, and that effort is run out of an office in Chicago that's associated with the AANS. There's a website through which you can contact the NeuroPoint Alliance and begin initial conversations about how to become a member site. Obviously, I think you can also work with your local Brainlab representative to reach out and make connections to NeuroPoint Alliance. In terms of expansion internationally, I think that the natural partner to do that would be the Novalis Circle group, which could help the NeuroPoint Alliance to expand this abroad, and the existing framework, of course, is already there.

It's cloud-based, it's software that can easily be scaled up, if you will, to other sites because the infrastructure that I would use here in Charlottesville, Virginia would be the same as colleagues would use in Chicago or in Tokyo. And I do think that the uploading of data to a secure site with a common set of data elements is really the way that we can all share our clinical data in a way that would allow for more meaningful research. So, I'm very much a proponent of expanding sites; of course, there has to be that commitment to providing the data and to making sure that the data that's uploaded is of high quality.

Bogdan: Another question related to data quality and data monitoring: how do you QA the entered data, and how do you audit compliance with the data dictionary?

Dr. Sheehan: Well, I mean, first, one of the ways that we audit data is range-checking when the data is entered. So, for instance, if I were to enter a prescription dose of 100 gray, that would be out of the realm of normal practice and would not really be accepted, and you'd be queried as to whether or not that was the appropriate data entry. Another way that we've gotten around human error is through the scrubbing of a lot of the data elements out of the dose planning data that is obtained at the time of the radiosurgical treatment, so that we don't need to enter a lot of the fields manually.

But, of course, there are going to be fields that need to be entered manually; those fields can be checked at the local site as well as audited, not all but some, through our NeuroPoint Alliance team. If you find an error, you can certainly go back to correct that error, and there are going to be instances where, inevitably, errors are made in manual data entry. But I do think that there are a lot of safeguards in place, including the range checking, the semi-automated data that's uploaded on the basis of imaging, which is, you know, vastly superior in many ways to human data entry for those fields, and then the auditing of a portion of the data, done randomly and sporadically by the NeuroPoint Alliance team.
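
As a minimal sketch of the entry-time range checking Dr. Sheehan describes, the snippet below flags out-of-range values, such as a 100 Gy prescription dose, and asks the user to confirm. The field names and limits are illustrative, not the registry's actual validation rules.

```python
# Entry-time range checking: out-of-range values are flagged and queried
# rather than silently accepted. Limits below are illustrative only.
RANGE_CHECKS = {
    "prescription_dose_gy": (1.0, 30.0),       # 100 Gy would be flagged, as in the example
    "karnofsky_performance_status": (0, 100),
    "age_years": (18, 110),
}

def validate_entry(field: str, value: float) -> list[str]:
    """Return queries for the data-entry user; an empty list means the value passes."""
    lo, hi = RANGE_CHECKS[field]
    if not (lo <= value <= hi):
        return [f"{field}={value} is outside the expected range {lo}-{hi}; please confirm."]
    return []

print(validate_entry("prescription_dose_gy", 100))  # -> query raised
print(validate_entry("prescription_dose_gy", 18))   # -> []
```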

Bogdan: I have a somewhat related question here. Suppose a user identifies an error or a mistake or anything incorrect about a patient: can you edit patient information once it's been uploaded into the NPA registry, and how can someone modify a given field?

Dr. Sheehan: So, yes, yes, you can go back and you can edit data. The data fields are uploaded periodically from your site to the cloud, so it would not be an instant fix, but it would be updated at the next uploading of data to the cloud-based registry server. But yes, you can go in and make adjustments or corrections to your data. And certainly, I've done that and we've done that, and we expect that it will be a part of what any site would need to do if you're entering hundreds of patients as we've done already; with each patient, of course, it's not just the data at the time of treatment but the data at subsequent follow-up points. So, in essence, there are opportunities to make mistakes and, obviously, opportunities to catch them, some that we'll catch and some that you'll catch at your own site.

Bogdan: A question related to patient reconciliation: what happens if one patient is treated, let's say, at two different institutions and submitted to the registry, will the registry combine the two? And I guess linked to this, you can extrapolate this question: what happens if a patient with the same name from a different institution is added into the registry?

Dr. Sheehan: So, interestingly, the latter is an easier question to answer, because if a patient has the same name at my institution and at another institution, let's say John Smith, and John Smith is treated at Institution A and John Smith is treated at Institution B, those John Smiths are given unique identifier numbers that are not associated with that individual name when they're uploaded to the cloud, so they have a unique patient ID that's part of the registry that is not based upon their name. Now, you can back out that information from the unique identifier and link it to that individual patient at your site, but a John Smith at Institution A and a John Smith at Institution B will have two unique identifier numbers. Now, if the same patient were treated at two different institutions, I think that could represent a little bit of an overlap in the way that that patient's data is obtained and uploaded to the cloud; that would be a little bit unusual but not impossible to envision, and I do think it would be a bit more challenging to accommodate.
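
A minimal sketch of the de-identification scheme described: the registry stores only a random identifier, and the mapping back to the local patient stays at the treating site. The details are illustrative and not the NeuroPoint Alliance implementation.

```python
# Site-scoped pseudonymous registry IDs: the registry never sees the name or
# medical record number, and two sites treating "John Smith" produce two
# unrelated identifiers. Illustrative only.
import uuid

site_key_map = {}  # kept locally at the treating site, never uploaded

def registry_id_for(local_mrn: str, site: str) -> str:
    """Assign (or reuse) a random registry ID for a local patient."""
    key = (site, local_mrn)
    if key not in site_key_map:
        site_key_map[key] = f"{site}-{uuid.uuid4().hex[:12]}"
    return site_key_map[key]

# The same name at two sites yields two unrelated registry IDs:
print(registry_id_for("12345", "SITE-A"))  # e.g. SITE-A-9f2c...
print(registry_id_for("67890", "SITE-B"))  # e.g. SITE-B-03ab...
```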

Bogdan: A question pertaining to outcomes. You've reviewed, of course, survival in your talk, what about other metrics such as complications? What else is being tracked in the registry?

Dr. Sheehan: Yeah, so I'm pleased to say that we've had two publications so far: one has been on the creation of the radiosurgical platform, and another has been on quality of life, which was published about a year ago in the "Journal of Neurosurgery." And about a week ago, we were notified that a third publication had been accepted on the trajectory of quality of life, not just quality of life at last follow-up, as was the case with the initial paper, but changes in quality of life as a function of time for brain metastases patients. As you can envision, some patients may have a decrease in quality of life in the initial phases of cancer treatment, as they're getting radiosurgery, immunotherapy, and chemotherapies; their quality of life may decline a little bit but then bounce back up within a matter of months.

So, we looked at that, and that is also going to be coming out in the "Journal of Neurosurgery"; I would hope it will be published within the next couple of months. On the broader question, we've been working with a big data group out of another institution, the Mayo Clinic, and the Mayo Clinic big data center has been working with us to look at a host of outcomes, including adverse events, for patients with brain metastases. As you can envision, the brain metastases cohort is, in many regards, more mature because the number of patients accumulated in the registry so far makes it the biggest single indication.

Also, the ability to discern meaningful outcomes is much easier in the short term in patients with brain metastases compared to, for instance, more benign histologies such as acoustic neuroma or pituitary adenoma. So, with the help of the big data science team at Mayo Clinic, we have about half a dozen papers in varying stages of maturity that are focusing on radiologic outcomes, clinical outcomes, quality of life, and adverse events. And I think you'll see those submitted within the next six months and, I hope, published in peer-reviewed journals within the next year.

Bogdan: I have a question pertaining to clinical documentation. Obviously, we collect fields that historically may not exist or be collected in an EMR system, so is it possible to create documentation based on registry-collected data, or do you need to duplicate this information back into the EHR? So, maybe you'll start, and then I'll talk about some future technological additions from our side.

Dr. Sheehan: Well, it's a good question. It is possible; for instance, we use EPIC, and it is possible to develop smart phrases and smart text that could then be extracted by a clinical research coordinator or a research fellow, which would make it easier for those individuals to more quickly, carefully, and accurately transfer that data into the radiosurgical registry. But at present, the radiosurgical registry software is not directly talking to my electronic medical record, although that is, I think, very feasible and likely something that could be done in the years to come. You may have additional insights from the Brainlab perspective.

Bogdan: Right. So, from the Brainlab side, obviously, we are very committed to continuing to develop the registry platform. Toward the end of last year and the beginning of this year, we were fortunate enough to also acquire VisionTree, Martin Pellinat's company out of San Diego, and we're working right now on integrating that particular tool, which collects patient-reported outcomes, to augment what we provide today with the Quentry registry. As part of that integration, we would also have an automatic connection into the EHR via HL7 FHIR communication, both to extract demographic data from the EHR and to repopulate specific fields that we may be collecting via the registry infrastructure back into the EHR, which would help with automating the write-back of data to the EHR.
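
For context on what an HL7 FHIR demographics pull looks like in practice, here is a generic sketch against a placeholder FHIR server. This is not Brainlab's integration code; the endpoint URL and patient ID are hypothetical.

```python
# Generic FHIR REST read of a Patient resource to pull demographics. The
# server URL and patient ID are placeholders; authentication is omitted.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # placeholder FHIR server

resp = requests.get(f"{FHIR_BASE}/Patient/12345",
                    headers={"Accept": "application/fhir+json"})
resp.raise_for_status()
patient = resp.json()

demographics = {
    "gender": patient.get("gender"),
    "birth_date": patient.get("birthDate"),
}
print(demographics)
```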

It's possible to do it today via PDS, but via HL7 FHIR, we'll actually have the discrete fields available in the EHR. One last question and then I'll take...there's one technical question, I'll answer that. You've obviously covered today the review of brain mets patients; what do you think is the next intracranial target or tumor where meaningful data for potential research exists in the registry?

Dr. Sheehan: That's an excellent question. So, we're collecting a number of different indications, as I showed you on one of the slides, including benign indications and malignant tumors. I think that glioblastoma would be one natural indication, and I also think that, as the number of patients with a reasonable amount of follow-up grows, we'll see some benign skull base tumors such as meningiomas and acoustic neuromas with preliminary to intermediate follow-up that will be, I think, valuable to present. And one thing that I think is critical is that we're really prospectively collecting quality of life in these patients in a real-world scenario, not something that is more artificial and constrained like a randomized controlled trial.

Now, don't get me wrong, we participate in a number of randomized controlled trials; I've been involved in some previously and am involved in some now, so I value them highly. But I also know that those patients have dedicated clinical research coordinators who are calling them and keeping them on track, and that's not necessarily what a real-world radiosurgical patient has. I wish that we did have that for every patient, but we don't; still, I do think that there'll be meaningful data. And then I think the next paper will really be on the use of artificial intelligence and radiomics. As I touched upon, with one of our collaborative groups in Germany, the uploading of information suddenly gives them the ability to validate some of the algorithms that they've already published in the "American Journal of Radiology."

They can then not only refine those algorithms for predicting new brain metastases, and for predicting who might experience complications and when complications such as radionecrosis or adverse radiosurgical effects arise, but also help us to become very refined in our ability to model a patient's likely outcome. So, for instance, what we've seen with other NeuroPoint Alliance registries is that suddenly you can compare outcomes on the basis of age, underlying disease state, and other comorbidities.

So, for instance, not all spine patients are created equal: if you're doing a one-level fusion in a 55-year-old patient who runs marathons and has no appreciable comorbidities other than maybe hypertension, versus someone who is 65, has some obesity and hypertension, and has a more sedentary lifestyle, what's the likely return to work and what's the likely quality of life for that patient at 3, 6, and 12 months after that spine procedure? We'll be able to answer that with sophisticated modeling using the registry data from the thousands of patients we've already accrued and the many more patients that we're in the process of accruing.

Bogdan: The last question was more related to support for different treatment planning applications and different versions, so maybe I'll answer that. Technically, as long as you have the ability to export your dose plan via DICOM RT, the registry infrastructure should be able to support it. We have validated this with multiple applications, and again, from Brainlab's perspective, we're creating this platform to be technology agnostic so we can support any device out there and be able to collect more data.
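
To illustrate the kind of parameters a registry can pull from a standard DICOM RT export, independent of the planning system, here is a minimal pydicom sketch. The file names are placeholders, and real plans vary in which optional attributes they populate.

```python
# Read a few registry-relevant parameters from DICOM RT Plan and RT Structure
# Set exports. File names are placeholders; optional attributes are guarded.
import pydicom

plan = pydicom.dcmread("RTPLAN.dcm")
struct = pydicom.dcmread("RTSTRUCT.dcm")

fractions = plan.FractionGroupSequence[0].NumberOfFractionsPlanned
for ref in getattr(plan, "DoseReferenceSequence", []):
    # Target Prescription Dose is optional, so guard the access.
    dose = getattr(ref, "TargetPrescriptionDose", None)
    label = getattr(ref, "DoseReferenceDescription", "?")
    print(f"Dose reference: {label} -> {dose} Gy")

roi_names = [roi.ROIName for roi in struct.StructureSetROISequence]
print(f"{fractions} fraction(s); structures: {roi_names}")
```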

Dr. Sheehan: Yeah, I think that's critical to say too: this was not a device-specific or unit-specific registry, it is a radiosurgery device-agnostic registry, and the ability to upload that information should be there for virtually all users.

Bogdan: Great. Well, Dr. Sheehan, again, thank you very much for your lecture today, thank you to all who joined for your participation, and have a great day.

Dr. Sheehan: Thank you.