Technological innovation in the healthcare industry has moved at a breakneck pace for decades. Now artificial intelligence, and computer vision in particular, is poised to deliver increasingly powerful benefits to patients and providers. In this webinar you will learn about a wide range of solutions, including cell identification, procedure tracking, gesture analysis, safety, and security.
With the participation of Michael Liou, Vice President of Strategy & Growth at Chooch AI; Matthew Bloom, MD, MSEE, Cedars-Sinai; Brad Genereaux, Medical Imaging Alliance Manager at NVIDIA; and Mark Wolff, PhD, Chief Health and Life Science Analytics Strategist at SAS.
Learn more about computer vision for healthcare AI
Here’s the transcript.
Tiffany Templin:
Yes, today we have Matthew Bloom. He is an Associate Professor of Surgery and a General Surgeon and Critical Care Intensivist at Cedars-Sinai. We also have joining us Mark Wolff, PhD, the Chief Health and Life Science Analytics Strategist at SAS. We have Brad Genereaux, the Medical Imaging Alliance Manager at NVIDIA. And Michael Liou, the VP of Strategy and Growth at Chooch AI.
To start, Michael and Brad will dive into the benefits of computer vision and other AI technologies that are in the marketplace today, for both patients and providers. Then we’ll go into the advances of AI in healthcare we’re expecting to see over the next five years. And finally, we’ll end with a Q&A. If you have questions during the conversation, please type them in the chat below, and we’ll go ahead and answer those at the end. To start, I’d like to ask Michael to talk about AI and computer vision in healthcare, Michael.
Michael Liou:
Great. Thank you, Tiffany, and thanks everyone for joining. My name is Michael Liou, VP of Strategy and Growth at Chooch AI, and let me just walk through a couple of slides to set the stage. Chooch AI is a computer vision company. We're based here in Silicon Valley, and we've developed a platform designed to identify both objects and actions. In the healthcare context, that means we can detect scalpels, forceps, and wheelchairs, and we can also detect patient behavior, such as coughing, falling down, or trying to get up.
Michael Liou:
We are a horizontal and very versatile platform. We are able to ingest not only visible-spectrum video, such as what we see every single day, but also other formats such as X-rays, CT scans, and MRIs. We've developed a unique approach to computer vision: the Chooch AI platform integrates the dozens of steps involved in three key disciplines in order to deliver visual AI computing on the edge. Those three disciplines are data set generation, model training, and edge device management.
Michael Liou:
If you think about the first two, data set generation and model training, these are very tedious and time-consuming tasks. When you combine these two processes, the development and delivery of a computer vision model to the edge tends to take anywhere from nine to 12 months; it's fairly lengthy. Chooch AI has created a no-code platform and a set of internal tools for rapid prototyping and AI model training, and we've delivered models in as little as two weeks.
Michael Liou:
Now the third element, edge computing, is quite critical, especially within healthcare, where privacy, HIPAA compliance, and data protection must be observed. Here, we have orchestrated the use of GPUs, cameras, containerized models, multi-stream video, and the actual inferences, all on edge devices. We are currently running on NVIDIA GPUs both in the cloud and on the edge, and we are members of the NVIDIA Inception, Clara Guardian, and Metropolis programs.
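To make the edge pattern described here more concrete, below is a minimal, hypothetical Python sketch of the general idea: several video streams are decoded and run through a model on the device itself, and only small inference events, never the raw video, leave the edge. The stream IDs and `DummyDetector` are illustrative placeholders, not Chooch's actual API.

```python
import json
import time

class DummyDetector:
    """Stand-in for a containerized vision model running on a local GPU."""
    def detect(self, frame):
        # A real model would return labeled boxes (scalpel, forceps, PPE, ...).
        return [{"label": "ppe_mask", "confidence": 0.97}]

def read_frame(stream_id):
    """Stand-in for decoding one frame from a camera feed (e.g. RTSP)."""
    return object()

def main():
    streams = ["or-camera-1", "or-camera-2"]    # hypothetical camera IDs
    model = DummyDetector()
    for _ in range(3):                          # bounded loop for the sketch
        for s in streams:
            frame = read_frame(s)
            events = model.detect(frame)        # inference stays on the device
            # Only small JSON events cross the network; raw video never
            # leaves the edge, which is the privacy/HIPAA point above.
            print(json.dumps({"stream": s, "events": events}))
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```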
Michael Liou:
I want to touch on some of the broad applications of computer vision within healthcare where Chooch AI has delivered solutions. They range from radiology, which is what you typically think of when you think about AI in healthcare, to patient behavior monitoring (is the patient coughing, lying down, sitting up?), fall detection in hospitals or nursing homes, PPE detection, and even behaviors such as hand washing before a procedure. But I want to discuss a little bit of what we've actually done with a major Fortune 500 medical device company.
Michael Liou:
We've actually delivered AI to what we call the smart operating room. Here we're helping relieve the tedium that nurses and doctors face every day while operating. We've done everything from the more mundane, such as counting how many times the doors open and close and when the room is cleaned, up to more sophisticated actions such as tracking scalpels, forceps, sponges, and gauze entering and exiting a body cavity, determining when the patient drape is up or down, and when anesthesia starts and stops. As you can see, the applications are myriad within just this particular vertical. But let me stop there and hand it back to Tiffany. Thank you.
Tiffany:
Thanks, Michael. NVIDIA is known for its hardware and software powering AI. Given your position at NVIDIA, what are you seeing, Brad?
Brad Genereaux:
There we go. Sorry about that, I couldn't get unmuted. Absolutely, we are seeing so much across the board in terms of leveraging compute to solve problems that ordinary computers cannot tackle. Whether we're in the healthcare space or in other domains like gaming, robotics, self-driving cars, or smart cities, NVIDIA is powering the entire stack from the bottom to the top.
Brad Genereaux:
It starts right at the hardware layer, the GPUs that do the heavy processing, and goes all the way up to building systems with our DGX and Jetson lines and computing at the edge. We also create the platforms that all of our applications run on top of, whether in the cloud: things like MLOps and remote management, end-to-end security, and the networking and data processing that bring data to and from the different systems. At the top are application frameworks that are already pre-accelerated, so developers don't have to spend time working at that lower level. They just build the innovations that are driving what we're seeing today.
Brad Genereaux:
What this looks like as a full stack: at the bottom we've got our hardware and the servers the hardware plugs into. As I said, we've got our acceleration libraries and our developer toolkits, and these work across the board. A lot of our intelligence, our research, and our engineering effort goes into building tools that work across all domains, taking the best and brightest of what we're learning from self-driving cars, robotics, and smart cities, and bringing that up to the verticals. The area I spend most of my time in is healthcare.
Brad Genereaux:
And in healthcare, we have an application framework called Clara. Clara has tooling that allows us to accelerate workloads in genomics, in natural language processing and understanding, and in medical imaging. That means taking the imaging and deriving insights from it, visualizing it, using cinematic rendering to get views of organs and diseases in real time for our clinicians, and powering the instruments.
Brad Genereaux:
So this is being able to take images coming from CT units and MRI units and processing them before they even get to the eyes of our clinicians. It's building smart hospitals using the healthcare Internet of Things: computer vision to monitor patients, temperature screening, and monitoring what's happening in our surgical theatres. And it goes all the way up to drug discovery, using these tools to find new compounds and develop new treatments for our patients, and everything that they deserve.
Brad Genereaux:
From accelerating solutions, we're there every step of the way to help accelerate breakthroughs in healthcare and life science research. We're powering next-generation medical devices; you'd be hard pressed to find a CT or MRI that does not have a GPU inside it. And we're building solutions that go from the edge at the bedside, up to the data center, the nerve center of our hospitals, and up to the cloud, with containerized applications, truly write once, run anywhere, to power and accelerate our developers. Back to you.
Tiffany:
Thanks, Brad. Matthew, I’d like to turn to you now and understand the current environment of AI in healthcare and how it’s being perceived by the providers.
Matthew Bloom:
Well, I think in general, providers always like new toys to play with. But I think it's important to ask: what tools can make my life easier, and what tools can let me do things for patients that I couldn't do last year? There's a variety of use cases. When you talk to clinicians, clinicians are practicing in the operating room, but they're also practicing in the clinic and in remote clinics, and they're seeing patients through telemedicine, the patient at home while the doctor is in the office. So there are dozens of specific use cases, and nearly all of them could be improved by technology.
Tiffany:
Mark, could you jump in here and help us understand why we need AI in healthcare? Mark, I can’t hear you. Mark? I’m having audio issues.
Mark Wolff:
Yes, of course, I'd be happy to. Digitization and digitalization are becoming very commonplace in healthcare, accelerated in a sense by the COVID pandemic, under this concept of digital health, which is now a formal concept with regulators globally. The idea is that we are now generating incredible volumes of high-dimensionality, high-frequency data that we were not generating before, and now we have to deal with those data. Even more importantly, we can now extract features and information from those data that we previously could not, to measure things we previously could not measure, particularly around imaging.
Mark Wolff:
As you saw a moment ago, AI and ML will become critical in order to manage the limited availability of humans to interact with those data. They will automate analysis, automate data management in a sense, and drive decision making to the point of the sensor. All of that will facilitate a level of intelligence, automation, and efficiency in clinical workflows where appropriate, as we saw in the earlier example.
Mark Wolff:
So to me, there's a bit of a paradox in that AI has become very important now, yet it's been around since the '50s, right? IoT in medicine has also become very important. The funny thing is, they need each other: IoT is going to generate the data that AI is hungry for, and AI will be critical to manage the data that IoT is producing. So it's a wonderful confluence of events and technologies.
Tiffany:
Mark, I heard you talk a lot about sensors in there. Michael or Brad, do you have anything you want to add to that?
Michael Liou:
Yeah, let me jump in. A number of times, you may have seen cell counting. This is a relatively new breakthrough that [inaudible 00:12:23] over the last couple of months or so. When you think about typically how we [inaudible 00:12:31]. What we're doing here is using pixel-level segmentation to identify individual cells on these very, very large slides. To give a little background on the status quo: typically, in brightfield microscopy, when, say, an immunologist looks at cells, they put them on a slide and throw a micro grid on there.
Michael Liou:
And a scientist with two PhDs and an MD sits there, actually clicking and counting the number of cells through statistical sampling, and getting 80% accuracy. That's the gold standard. What the Chooch AI team has developed is the ability to tile that very large image into segments and count all the cells with 98%-plus accuracy in milliseconds.
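As an aside, the tile-and-count idea can be illustrated with a toy Python example. This sketch uses a simple brightness threshold in place of Chooch's learned pixel-level segmentation, and the synthetic "slide" stands in for real microscopy data; a production system would also have to handle cells that straddle tile borders.

```python
import numpy as np
from scipy import ndimage

def count_cells(slide, tile=1024, threshold=0.5):
    """Tile a large grayscale slide and count bright connected regions."""
    total = 0
    h, w = slide.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = slide[y:y + tile, x:x + tile]
            mask = patch > threshold      # stand-in for a segmentation model
            _, n = ndimage.label(mask)    # connected components ~ cells
            total += n                    # cells crossing tile edges may double count
    return total

# Synthetic slide: sparse bright blobs on a dark background.
rng = np.random.default_rng(0)
spots = (rng.random((4096, 4096)) > 0.9995).astype(float)
slide = ndimage.gaussian_filter(spots, sigma=3) * 50
print(count_cells(slide), "cells counted")
```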
Michael Liou:
So think about all the places where scientists around the world are counting cells at different stages of drug discovery and oncology, in the areas of histology and immunology, right? We're talking about hundreds of millions of hours saved every year, thereby accelerating drug development and accelerating the delivery of medicines and effective treatments to people. This is something we just rolled out at Chooch AI, and we're very, very excited to talk to a number of folks about it.
Brad Genereaux:
Also, from my perspective, that is really mind blowing, because when I think about, for example, whole-slide imaging in pathology and the size of those images: in order to process an image like that, and even to move it around from the microscope to a centralized location, you have to break the laws of physics to get it to someone else at the same level of fidelity. Being able to pre-process this information means we're not moving hundreds of gigabytes, but just the insights and what's valuable for diagnostic purposes.
Brad Genereaux:
I mean, it's mind blowing. It's a game changer. Coming from a world of trying to describe all of the circles we'd see in an annotated pathology image, all the way through to just having the numbers that come out of it, the things we're deriving from it, it's incredible. It absolutely is. And this is, like I said, getting around how we break the laws of physics to move this content around. So super exciting.
Tiffany:
So as you talk about medical imaging, and Michael, I heard you start to talk about a couple of use cases. Matthew and Mark, can you add an example that we haven't touched on yet?
Matthew Bloom:
[inaudible 00:15:14]. Certainly. There have been image-based tools like cell counting, and of course video-based tools. In imaging, there have been some tremendous projects, and their successes and failures have taught us a lot about the strengths and weaknesses of AI. Take Google's early effort with retinopathy, which was a fantastic effort. But that example illustrates the importance of clean data and data that is representative of patients.
Matthew Bloom:
In their case, they had a system that worked routinely in the sterile environment of the lab. But for processing, their images had to be uploaded to the cloud, and when they took it on a roadshow to Thailand and tested the principle, they had problems with bandwidth. The whole purpose of the technology was to give expert-level care in faraway regions, and those faraway regions had infrastructure problems. That was one early, early example.
Matthew Bloom:
So there are the image-based areas: retinopathy, dermatology, and pathology, which is the slide counting. Then we have the video-based arenas of surgery, of course, and robotics. And in between are the usual static images in radiology, which open up tremendous possibilities in image analysis and patient diagnosis.
Matthew Bloom:
And because of this huge wealth of medical digital data now, growing exponentially year after year, we have tremendous warehouses of data to look at. The difficulty, and a problem I spend a lot of time thinking about, is how to use that data appropriately. Sometimes a one-size solution doesn't fit all, because the patients themselves are so unique.
Mark Wolff:
To follow up on that if I could: some of the work that I've been doing relates to this notion of digital twins, not digital twins from the industrial perspective, but from the human perspective, looking at individual physiological subsystems and modeling those systems such that they can be used as a comparator for an individual. This is not just in the sense of improving diagnosis and potentially how to provide a therapy, but also as a baseline for an individual, addressing this issue of personalized, precision medicine.
Mark Wolff:
What we've been doing is using video imaging of balance in motion, with a simple accelerometer and gyroscope placed on the back of the neck or on the body, and literally building a digital twin using physics and motion as a means of understanding the musculoskeletal system. These applications can be embedded at the edge. An individual using their phone or their webcam could stand in front of it, go through a simple set of motions, and be assessed for, let's say, how they're progressing after a total knee replacement or a hip replacement, looking at balance and tremors.
Mark Wolff:
So the idea is to bring together video imaging and physical sensors, gyroscopes and accelerometers, and continuously assess those streaming data in real time, against both standard models and anomaly detection of how you're deviating from the standard model, which then makes it possible to compare people against each other. And finally, to deploy that as a remote patient monitoring scheme, which is now very interesting and reimbursable, of course.
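A toy sketch of the baseline-and-anomaly idea outlined here: score each new sample of a patient's motion signal against that patient's own stored baseline and flag large deviations. The signal, the synthetic tremor, and the three-sigma rule below are purely illustrative, not a clinical method.

```python
import numpy as np

def anomaly_scores(stream, baseline_mean, baseline_std):
    """Z-score each new sample against the patient's personal baseline."""
    return np.abs(stream - baseline_mean) / baseline_std

# Baseline: sway amplitude recorded during an earlier healthy assessment.
baseline = np.random.default_rng(1).normal(1.0, 0.1, 500)
mu, sigma = baseline.mean(), baseline.std()

# New session: the same motion, but with a growing tremor mixed in.
t = np.linspace(0, 10, 500)
session = np.random.default_rng(2).normal(1.0, 0.1, 500) + 0.05 * t * np.sin(20 * t)

flagged = anomaly_scores(session, mu, sigma) > 3.0   # simple 3-sigma alert rule
print(f"{flagged.sum()} of {flagged.size} samples deviate from the baseline")
```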
Mark Wolff:
So it brings assessment out to remote or home locations on the edge, deals with the data problem Brad described earlier in terms of the cloud, and in essence addresses issues of precision, personalization, value, and cost, all because you can deploy an algorithm with a camera and a sensor, which almost all of us carry, and then use that intelligence to help clinicians better decide how you're progressing or where to go next.
Mark Wolff:
These are not autonomous in the sense of being diagnostic in and of themselves. Rather, they're supporting technologies that provide efficiency: an individual can look at many more patients and get alerted as necessary, to focus on the patient that needs help. So again, we're talking about workflows, about efficiency, and about how analytics can drive that in terms of remote patient monitoring and sensor-based analysis of individuals.
Matthew Bloom:
The outpatient setting is probably the most important place where healthcare can be impacted, because we want to keep people at home and out of the hospital as much as possible, whether through wellness or early detection of disease. One benefit of the horrible COVID experience we had: until now, telemedicine was never really primetime. It was practiced, but it really hadn't reached acceptance.
Matthew Bloom:
In the past year and a half, telemedicine for clinic visits has become the norm, except for those few cases where a physical examination is essential. It feels like an abomination to say a physical examination is not the most important thing to do for a patient, because that's how medicine was properly practiced for the past 100 years.
Matthew Bloom:
But with new sensors at home and with a video stream like the one we're on now, everything is digitized. So the question becomes: how much additional information can we get from this video stream we're looking at right now? Whether it's simple stuff like pulse rate, or general appearance and perfusion. There's a bunch of technologies that have been investigated that haven't really hit primetime yet, except in [inaudible 00:21:19], but I think in the next five to 10 years, there's going to be an explosion of commercially available and actually used technologies that impact healthcare.
Mark Wolff:
Absolutely, Matthew. This is the idea of feature extraction from a digitized stream of data, where you're measuring things you could not have measured before, whether it's imaging or even voice and sound. Can you put your microphone to your belly and assess your Crohn's today? Something of that sort. So there's tremendous opportunity. But one of the challenges, as Brad alluded to, is how much data there is, how much dimensionality and frequency those data have, and where best to process those data for decision making.
Michael Liou:
Yeah, from a lot of what I see-
Brad Genereaux:
Yeah, if I could add.
Michael Liou:
Okay, go ahead Brad.
Brad Genereaux:
Yeah, what I was going to say: from what I'm seeing, looking at the different systems we need to create AI, to build AI, to support it, to store the data, the insights, the annotations. That's what's happening right now, and that's what's super exciting: we're seeing a lot of commercialization. It's not enough to say, "Hey, I have a model that I can feed an image at a DOS prompt, and then I see some kind of line that says 0.97," or what have you.
Brad Genereaux:
We're actually seeing, how do we all participate in the creation of AI? How do we all participate in supplying our knowledge, looking at content, taking audio feeds and helping the machines understand how we classify things? How do we create systems that make this really easy? We're seeing this today, integrated into our PACS applications for medical imaging and into our EMR applications.
Brad Genereaux:
We're also seeing the ability to do this annotation on the fly, while we have some spare minutes of downtime, to provide our insights. So it's super exciting and super important, and all of this work helps us in the end, whether we're clinicians, IT analysts supporting these systems, or the patients and the patients' families.
Michael Liou:
I was going to add to Mark and Matthew's comments regarding telemedicine imaging. As we all know, we started to see much stronger adoption last year during the height of the pandemic. But now we're talking about access and reach, right? We're also talking about taking diagnostics in a home environment, which is more reflective of your natural state than going to a doctor's office and being somewhat nervous, and we're talking about time efficiency as well.
Michael Liou:
So as a doctor or nurse is speaking to you, they're taking your vitals, and that can in turn be fed back into the hospital setting. There is technology that exists today that can take your heart rate, your blood pressure, your temperature, and your HRV using a video camera, by measuring imperceptible changes in the skin surface.
Michael Liou:
But now imagine applying that back in a hospital setting, where you're monitoring the vitals of maybe 20 or 50 people within a hospital ward, noninvasively, without having to put a blood pressure cuff on each arm or use a stethoscope or a thermometer, and being able to do this in real time and detect changes that might allow a doctor or nurse to intervene before something actually happens.
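The camera-based vitals Michael mentions are commonly built on remote photoplethysmography (rPPG): a faint periodic color change in skin pixels is recovered with a frequency analysis. Here is a much-simplified sketch using synthetic data in place of real video frames; real systems need face tracking, illumination correction, and far more careful signal processing.

```python
import numpy as np

FPS = 30
SECONDS = 20
t = np.arange(FPS * SECONDS) / FPS

# Mean green-channel intensity of a skin patch per frame: a faint 1.2 Hz
# (72 bpm) pulse buried in sensor noise, standing in for real video.
rng = np.random.default_rng(0)
signal = 0.005 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.01, t.size)

# Fourier analysis restricted to a plausible heart-rate band (0.7-4 Hz).
freqs = np.fft.rfftfreq(signal.size, d=1 / FPS)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs > 0.7) & (freqs < 4.0)
bpm = 60 * freqs[band][np.argmax(power[band])]
print(f"Estimated heart rate: {bpm:.0f} bpm")
```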
Brad Genereaux:
I sometimes refer to that as contactless vitals.
Matthew Bloom:
There's a wonderful role for it. In any large hospital, there's a busy ICU and then there's a step-down unit, for sick patients who don't quite meet the ICU level of care or floor care. These patients are hooked up to a variety of monitors that typically make alarms go off; I'm telling you, alarms are going off all the time. For the nurses at the bedside and the doctors, there's real alarm fatigue from false alarms, as well as a role for increased interpretation of these signals, actually coming up with a diagnosis.
Matthew Bloom:
An early adoption, of course, was the big panels of EKG waveforms that you might have seen on a big wall in TV shows, where a nurse or a doctor can look at an entire floor's worth of patients being monitored on telemetry for warning signs. If one is particularly bad, it flashes a little to get your attention. But the opportunity is to put some intelligence into the interpretation of these signals, and to detect clinical states before they become critical.
Matthew Bloom:
The early signs of a patient whose breathing is failing, the early signs of someone who is getting septic, who is getting infected: using AI techniques on the raw digital signal is a tremendous opportunity. And it's only going to be more prevalent in the years to come.
Mark Wolff:
Matthew, you mentioned alert fatigue. I've actually worked on projects where there's a fear that if we bring in more data and more information, we will simply amplify the alerts and the alert fatigue associated with them. And I argue the opposite: more information will actually give us more data to increase the confidence of an alert, therefore lowering the number of alerts, and the ones that do trigger will be ones that people pay attention to and act on.
Mark Wolff:
And that alert is not simply a beep; to your point, it is informed. It gives you some context for how best to respond, and it can even triage who gets the alert; not everybody needs to get it at the same time. So those new data, the higher-volume, higher-frequency data, drive higher confidence in alerting, and therefore higher confidence in decision making around that particular patient. It may seem paradoxical, but I very much believe that that information will actually reduce the problem of alert fatigue.
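Mark's claim, that fusing more signals yields fewer but higher-confidence alerts, can be demonstrated with a small simulation. The event rates and sensor accuracies below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # monitoring intervals
deteriorating = rng.random(n) < 0.01        # 1% true deterioration events

def sensor(p_true, p_false):
    """Noisy binary sensor: fires on real events with p_true, else p_false."""
    return np.where(deteriorating, rng.random(n) < p_true, rng.random(n) < p_false)

hr, rr, spo2 = sensor(0.9, 0.05), sensor(0.85, 0.05), sensor(0.8, 0.05)

single = hr                                  # alert on one sensor alone
fused = hr & rr & spo2                       # require agreement of all three

for name, alarm in [("single sensor", single), ("fused sensors", fused)]:
    precision = (alarm & deteriorating).sum() / max(alarm.sum(), 1)
    print(f"{name}: {alarm.sum()} alerts, precision {precision:.2f}")
```

Running this, the fused rule fires roughly a tenth as many alerts, and nearly all of them are real, which is exactly the lower-volume, higher-confidence behavior described above.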
Matthew Bloom:
Look at what science fiction shows us: the ICU in deep space, where you get banged up, you go to sickbay, and a robotic surgeon operates on you with a laser, or your whole body gets scanned with a handheld device. Those portraits of medicine are fantastic. As a patient, I would submit myself to that examination, because they portray a system that works with 99.9% reliability. The holographic doctor in one of the Star Trek series is witty, but you know his medical diagnoses are correct.
Matthew Bloom:
There's a perception issue, though. A Harvard Business Review paper came out, I think earlier this year or last year, on patients' perception of AI. In general, it found a distrust of AI, because patients felt it was cold and uncaring. In fact, the authors found that patients wanted a human in the loop, certainly for treatment decisions. So while we're basing our talk today on clinician uses, we can't leave the patients out of the equation, or their perceptions and their experience using AI.
Mark Wolff:
Of course, the concept today is what's called ethical AI: to what extent is AI capable of addressing those fears and concerns? A quick metaphor that I often use: most commercial airliners, if not all, can fly largely autonomously. In fact, the pilots flying them are inefficient; they waste gas, they introduce human error, and so on. Yet we still have two highly trained, exceptionally experienced individuals sitting in that highly automated vehicle. Now imagine this as a metaphor for healthcare. Why are those humans there, when they may be introducing inefficiency?
Mark Wolff:
Well, in a sense, they are there as quality control managers over the autonomous systems flying that airplane. The idea is to let the AI drive and take away a lot of labor and distraction from the humans, allowing the humans to focus on what is most important and possibly anomalous, to the point where the machines, in their logic and rigor, may not be able to deal with it.
Mark Wolff:
Medicine certainly has that individual relationship with the patient, and possibly involves influences and comorbidities that may not be obvious to the machine. So I fully believe in a future where machines will be highly autonomous and do a lot, but they will essentially be enabling humans to focus on the more human part of medicine, frankly.
Michael Liou:
100%, Mark. Think about the workflow of a radiologist. Let's say this person trained at UCLA and Stanford, has 20 years of experience, and goes through 200 studies a day, right? Maybe 190 of them are no-brainers, a clear A, B, or C, apart from the triage pile that comes in and has to be addressed. So what does that do to the workflow?
Michael Liou:
Well, she can use her expertise to focus her trained eye on the edge cases, where the AI is not quite smart enough yet, and render a judgment or decision much more efficiently. And therefore a lot more gets done over the course of a day or a week.
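The triage split Michael describes maps naturally onto a confidence threshold: studies the model scores as clear-cut go to the routine queue, while low-confidence edge cases go to the specialist first. The scores and the 0.95 threshold below are made up to mirror his 190-of-200 example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake per-study model confidence: ~190 clear-cut reads plus ~10 edge cases.
scores = np.concatenate([rng.uniform(0.95, 1.0, 190),
                         rng.uniform(0.40, 0.95, 10)])

routine = scores >= 0.95          # "no-brainers": read in normal order
edge_cases = ~routine             # uncertain studies: top of the worklist

print(f"{routine.sum()} routine studies, {edge_cases.sum()} sent to the expert first")
```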
Mark Wolff:
And the AI learns from that. Effectively, we reach a sort of asymptotic efficiency of the machine, where what falls outside that curve is what humans should be paying attention to. I think there's a future of happy coexistence; Matthew, as you said, in science fiction the machines didn't take over.
Matthew Bloom:
One of the other great use cases in radiology is helping draw the radiologist's attention to a disease state. What I mean is: at 2:00 in the morning, an X-ray is done. Who's reading it at 2:00 in the morning? Every once in a while, an X-ray at 2:00 in the morning has an important finding that the clinician wants to know about. But even in the biggest healthcare systems, at 2:00 in the morning the staff is not at full strength, number-wise, and everyone is being pulled in multiple directions.
Matthew Bloom:
A radiographic study with an important finding can certainly sit unread by the radiologist, but it can also sit unread by the clinician who ordered it, because although they ordered it, they don't know exactly when it was done. The reality is, you put an order in and you hope it's going to be done in the next half hour, but systems are complicated, and things that should be stat sometimes take several hours. If it's very important, you stand there until it's done, in effect, or make multiple calls.
Matthew Bloom:
But if you had a low suspicion that there was going to be a problem, just one of the many things you were taking care of at 2:00 in the morning, you might not come across that X-ray for a couple of hours. An AI system that can pre-screen these studies for the radiologist and flag something unusual, saying, "It's 7:00 in the morning now, time to read your 100 studies from yesterday; look at this one first," or better yet, at 2:01 in the morning, "Someone should take a look at this one, make sure it's not that," is an important use case for radiology that AI is perfect for.
Brad Genereaux:
Yeah, looking at how we can apply AI in the hospital, it's so hard to just say, "I want AI," because that could mean 10,000 different things. It could be helping us triage, helping us diagnose, identifying organs or pacemakers, identifying things in the images; is the tube placement in the correct spot?
Brad Genereaux:
There's so much out there, and as we continue to see the development of these AI applications, a great example I always reference is a website called gamuts.org, which is put on by the RSNA. It itemizes all the findings we could find in radiology images, both findings and descriptors like spiculated, solid, or semi-solid. There are about 12,000 of them.
Brad Genereaux:
So looking at how we can take AI to address even 5% of those is an important step forward. As we roll forward, building up the enterprise systems to support AI across the board is going to become so important. And just as self-driving cars don't yet have level five full automation, we're at the stage of figuring out how to help the clinicians. It's not clinician versus machine; it's clinician with machine versus clinician without. I think that rings very true in so many of these AI use cases and applications where we could actually take advantage of these things.
Tiffany:
Brad, that was really helpful. I don't have any stake in the game here, so I look at this with the patient mindset, and that's something, Matthew, that you brought up: what is the patient mindset, and how can we help? For this group here on the panel today, what do you see happening in the next five years with adoption? Are patients open to using AI with what you're using today in the clinic? Is that something where you're seeing adoption, or acceptance I guess, from your patients?
Matthew Bloom:
On the one hand, I wonder if patients really know how much is going on behind the scenes, whether their cell smear was counted by hand or counted by a camera. Medicine is so specialized now that I'm not even sure what goes on outside of my specialty region, [inaudible 00:35:40] within any hospital, any clinical laboratory, any clinical department, any clinical practice.
Matthew Bloom:
So on the question of whether patients will accept it, I think it has been touched on here: it needs to be done with a human touch. Patients need to be brought along the path under the guidance of physicians they trust, whose opinion they trust, and I think that's the key to adoption. It will be about their user experience, and at this time that will still largely be a human augmented with AI.
Tiffany:
I know bedside manner is important to me as a patient, so I'd love to hear that that's the direction you all are taking this. It feels like maybe you're opening us up for another panel that includes some patient opinions.
Mark Wolff:
I'll make a quick comment on that. There was a study done a few years ago out of Europe, where oncology treatment outcomes were measured with humans alone making decisions versus humans supported by algorithms, in terms of optimizing dose and drug combinations, minimizing side effects, and so forth. The conclusion of that study was that the humans with the algorithms outperformed the humans alone. And the research group made a final comment, speaking to patients, adoption, and more data improving confidence in decision making.
Mark Wolff:
The conclusion of these researchers was that if algorithms in conjunction with humans are outperforming human decision making, quote, "It is unethical to make oncology treatment decisions in the absence of algorithmic support," unquote. Unethical: not inefficient, not a good decision or a bad decision, but unethical, in that it goes against the primary ethos of treating patients the best that you can treat them.
Mark Wolff:
So it's inevitable. But again, Tiffany, I'll go back to your point. In the end, the machines will free up humans to spend more time with humans and work on that aspect, as we've discussed. And I think new forms of data, extracting information we couldn't measure before, particularly through imaging, will be transformative in driving the confidence, the decision making, and the acceptance, to where patients will say, "Well, yes, give me the best possible course of treatment," and that will inevitably involve math.
Tiffany:
That’s great to hear. I love it. Michael, do you have anything you wanted to add to that?
Michael Liou:
Yeah, maybe going back to a theme that we discussed earlier about telemedicine: I see the advances really accelerating in this space. Mark brings up a good point. In addition to the visual, there are also audio components, technologies that can make determinations from sound. And once you have that, by the way, all that data can be baselined, right? So it's much easier to have an annual or semi-annual checkup and develop a baseline using data that's actually being stored.
Michael Liou:
It could be your voice, a wet cough versus a dry cough; it could be your posture; it could be your heart rate, blood pressure, HRV, or temperature. We all have scales, of course, but the more you adopt AI on the front end, the more time can be spent diagnosing, analyzing, and, back to the human element, interacting with the patient. You're essentially creating a virtual diagnostic nurse in some ways, and I think this is coming pretty fast. We've already seen some good examples of it.
Tiffany:
Michael, to add to that, I've got a question here I'd like to get answered. We're at 40 minutes, just to give you all an idea, and we have a few questions. One is in regard to aggressive diseases like cancers. What is AI doing from the screening perspective to help improve patients' survival rates, and perhaps enable some of the early intervention you've all been talking about? Does anyone have something they'd like to add to that?
Mark Wolff:
I can make a brief comment; I don't want to hijack the conversation. I was involved in a project with the Cancer Center at the University of Amsterdam, one of the largest in Europe. What we were trying to do was link together the various data entities within the cancer center: the imaging done on tumors; the proteomics, looking at the tumor's protein expression and surface proteins; genetics and genomics; and pharmacology and toxicology.
Mark Wolff:
How do you bring all that massive amount of information together and then focus in on the individual to see what is the best path forward? Now again, I'm not a physician, but in oncology you're balancing many different factors: dosing, side effects, quality of life, length of life, survivability, and so on. So it's profoundly complex.
Mark Wolff:
And going back to the point those researchers made earlier, that complexity necessitates, I think, an analysis that goes beyond a team of humans sitting at a table. You then take that best possible understanding across those domains and interact with the patient, bringing them into their care path and saying, "Here's what we have. This is as good as the information gets. What do you think?"
Mark Wolff:
So my view is this: bring highly diverse data together to support decision making, let the machines do the hard work of filtering, analyzing, and clustering, present the human with options, and then go to the patient with that. It's already happening, but it can go to the next level with more data.
Brad Genereaux:
I'd also add a real-life example. It's not just the hard work, it's also the tedious work. Look at lung cancers in CT imaging, for example, and at nodules. One of the measurements we use is called tumor doubling time. We look at a nodule, do a volume calculation, compare it to another study from six months or a year ago, and calculate the doubling time.
Brad Genereaux:
Something benign, that's not cancerous, will stay relatively the same and will have a tumor doubling time of something like 10,000 days, some very large number. But something aggressive that's rapidly growing will show a very small tumor doubling time. Now think about a radiologist who's looking at 100 CT studies a day and is presented with a CT lung study with seven nodules; some of them are benign, and maybe one or two of them are cancerous.
Brad Genereaux:
The amount of time they're going to spend manually going through each of those, calculating the volumes, looking at the volumes from a previous report, and then making a call, is a lot of work. And they have to do that for every single study that comes across their desk.
Brad Genereaux:
If we can use AI to calculate the volumes, compare them against the volumes from the previous study, register those against the nodules we're seeing today, and say, "Here's a list of the seven nodules; five of them are benign and not growing, but two are particularly concerning," even ones that are smaller and might not have been flagged as a particular concern, then clinicians can make improved decisions, or at least have the information on hand to take whatever next steps are necessary.
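For readers who want the arithmetic, tumor doubling time is conventionally computed with the Schwartz formula, TDT = Δt · ln 2 / ln(V2/V1), from two volume measurements taken Δt apart. The nodule volumes and interval below are invented to mirror Brad's example.

```python
import math

def doubling_time_days(v1_mm3, v2_mm3, interval_days):
    """Days for a nodule to double in volume, from two measurements."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Nodule A barely changed over 180 days; nodule B grew 60% in the same span.
for name, v1, v2 in [("A", 500.0, 510.0), ("B", 500.0, 800.0)]:
    print(f"nodule {name}: doubling time {doubling_time_days(v1, v2, 180):.0f} days")
```

Nodule A comes out around 6,300 days (the slow, benign-looking pattern Brad describes), while nodule B comes out around 265 days, the kind of rapid growth that warrants attention.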
Matthew Bloom:
Brad, researchers are taking that a step further right now. They're focused on CTs that were read as normal in patients who later went on to develop cancer, going back to those normal CTs and applying as much digital data as possible, whether laboratory data or data extracted from the medical record with natural language processing, to try to see which patients went on to develop tumors. So now we're picking up disease processes before they become what's clinically recognized as a disease. In this case, we're advancing cancer therapy very far.
Brad Genereaux:
Now, that’s an exciting time. Absolutely. I couldn’t agree more.
Michael Liou:
Yeah, Brad, I was talking to a pathology group and did a video about this exact thing: the different stages and types of cells as they morph. Even before something is clinically identifiable as a certain type of cancer, what are the pre-stages that you could actually count? From what I understand, this is a very, very subjective field, and being able to use AI to pinpoint exactly what stage cells are at, and then aggregate that data, could be a lot more preventive than the current state of technology we have right now.
Mark Wolff:
And to that end, there's the example where two individuals can look at a patient and make two different decisions on whether to proceed first with surgery or chemo, a decision that has to be made all the time. How do you break that tie, if you will? There are now imaging techniques that can actually help break that tie in a rational, data-driven way, looking at the historical trend and then informing both clinicians: you say surgery, you say chemo; the computer says chemo to begin with, for the best possible outcome given these expectations.
Tiffany:
Mark, I think that's a great closing thought from my perspective. I really appreciate all of you joining us today for this panel. This has been a great discussion, and I look forward to continuing it. We do have a few questions that didn't get answered, and we have your contact information; we'll send out our notes with this recording and the answers after this call. Thanks again, everyone, for attending. Have a great day.
Brad Genereaux:
Thank you.