The AI Security Blind Spot That Healthcare Can’t Afford to Ignore [PODCAST]
In this episode, Vrajesh Bhavsar, CEO & Co-founder at Operant AI, discusses the AI security blind spot that healthcare can’t afford to ignore.
Highlights of this episode include:
- What’s the AI risk that most hospital leaders still don’t fully appreciate
- Zero-click vulnerability
- How autonomous AI fundamentally can challenge the compliance model healthcare
- Why traditional security tools struggle to keep pace with the way AI actually moves data inside a health system
- What the real financial and reputational costs are when a healthcare AI deployment goes wrong
- What the first steps are to take to understand their actual exposure
Kelly Wisness: Hi, this is Kelly Wisness. Welcome back to the award-winning Hospital Finance Podcast. We’re pleased to welcome Vrajesh Bhavsar. VJ is an engineer with a Master’s in Computer Science from USC and over 20 years of experience building hardware and software products. VJ built core technologies for iOS and Mac OS, including dynamic tracing, data protection, and secure enclave at Apple. He holds eight patents in distributed systems, data, and security. He is passionate about building technology-first businesses that drive positive human impact at scale.
In this episode, we’re discussing the AI security blind spot that healthcare can’t afford to ignore. Welcome, and thank you for joining us, VJ.
Vrajesh Bhavsar: Hey, thank you for having me.
Kelly: Well, let’s go ahead and jump in. So, AI is being deployed across healthcare at a remarkable pace. From a cybersecurity standpoint, what’s the risk that most hospital leaders still don’t fully appreciate?
VJ: That’s a great question. And it’s such an exciting time that we are living in. There are so many new innovations coming to the entire space. And the impact of AI in so many different areas gets really exciting for a lot of industries where this kind of innovation is needed. And, of course, healthcare has so many different areas where AI can be applied, but also there are a lot of risks that come in when you are exposing this kind of critical area of safety and care to this kind of new innovation. And so the big risks that we see in a lot of interactions we are having is how when you have a lot of kind of new innovation getting sprinkled across use cases and areas where you didn’t really understand the full scope and things are operating without a lot of visibility, especially in the deep areas where sensitive data is in question and you have patient information as well as ways that a lot of the third party systems are going to interface with these things. That’s where there are so many risks that it’s not fully understood and appreciated.
And the thing that really gets people is that we are used to kind of operating with these innovative systems in kind of traditional systematic ways, that A plus B results in something. But in the world of non-determinism, where there are a lot of new attacks coming in, the level of risk really, really goes to the roof. And the kind of attacks that have come through in terms of prompt injection or zero-click, and a lot of things that have been reported across the industry, and we have done some of the work ourselves. It really throws people back into like, “Oh, wow, I didn’t realize that this can really exfiltrate the data at such scale and such speed.” And the level of protections and defenses that people had through traditional tools are now out of question.
Kelly: Yeah, it’s definitely an interesting time in healthcare and AI, and there’s a lot to consider there. You recently discovered a zero-click vulnerability that can silently extract complete patient records without leaving a trace. What does that mean in plain terms, and why is it a signal of a much larger industry problem?
VJ: That’s a very interesting question. And I think as an industry, we have been trying to get everyone to kind of understand that, “Hey, don’t respond to random emails, don’t share credentials, don’t go chase random links and all that, right? But what’s happening in the world of AI is that without users taking any of such risky actions, now you can have a massive exposure and that’s what zero click refers to. And what we discovered is that a lot of these AI systems as they are interfacing with so many different data sources and all the records and all that, they can actually go take the credentials and access that you have given them and try to be helpful in ways that can actually result in data exfiltration and leakage at a massive scale. And so, what we are finding is there are the kind of attacks that come through in AI systems that are prompt injection or jailbreak attempts. And those things are getting embedded in documents, in ways that are invisible to the human eye, but those instructions mean a lot to what an AI system or an agent bot is going to do.
And that’s where, now, you are bringing– you have so many, so much intelligence baked into these AI stacks that they are trying to be super helpful and trying to kind of take all these instructions that are embedded and the users didn’t do anything wrong, but this is where some of the attacks that are coming through. Some of the ones that we have discovered and the industry has discovered, even Anthropic reported several different types of attacks. And there is a lot of education needed in the industry to really kind of understand the scale and scope of what these intelligent, non-deterministic systems bring in these critical environments.
Kelly: Completely agree. There’s definitely a lot of education required for us. VJ, HIPAA was built for predictable human-reviewed workflows. How does autonomous AI fundamentally challenge the compliance model healthcare has spent decades building?
VJ: I know. This is where we are really passionate about like there is so much to be done, and I know HIPAA is trying to catch up on a lot of the new innovation. But at the end of the day, there is kind of like an inert way in which HIPAA assumes there are human accountability layers behind all the different decisions that are getting made. And I think that’s the thing that gets thrown out the window when you bring in agentic AI. And in these environments where you are passing responsibility, you’re passing autonomy, you’re passing decision-making capabilities to agents and at a speed of machine speed at which you can access so many different systems all at once and try to be helpful. That’s where there is no mechanism in place to even understand what these systems are trying to do. And beyond understanding, you need to actually govern and bring controls into these environments, right? And I think that’s kind of the core to a lot of the challenges and what we refer to it as runtime visibility and runtime controls.
And when these agents are getting born and they are trying to figure out, like, “Okay, what are the instructions given to me?” And I’m going to try to make sense of that. I’m trying to access the systems that are available to me, and sometimes they overreach. And that’s when these breaches happen. That’s when, kind of, unexpected consequences happen. That’s when you end up with a non-compliant system. So, I think there is a lot to be done. I think the industry was still just catching up on what was happening in the world of microservices and all the API ecosystem. And now we have leaped directly into agentic environments. And I think that requires a full depth understanding of what all things are happening to stay compliant.
Kelly: Yeah, there’s definitely a lot of things happening right now, and I know HIPAA complicates things as well. So why do traditional security tools struggle to keep pace with the way AI actually moves data inside a health system?
VJ: Yeah, this is where we have gone through such massive waves in the last 30, 40, 50 years, right? And AI agents, and that’s a big, big one, that is going to completely change how security tooling and security requirements would work. But as you think about when cloud came about, there was a very bare bones kind of understanding of, okay, how am I protecting different network systems and databases? And this is very, very early on when you had your data centers, and firewalls came about to actually stop access to different parts of the data system, where different parts of the data center where you might have databases or critical data that you want to protect from different attacks. And over the years, we had usage of mobile and now usage of APIs, and there are so many different technologies have come into play, and you need a different approach for all these different technology adoptions that are going on.
And so, as you think about what is happening in the layers of APIs, in the layers of AI, in the layers of agent, you kind of need a very different AI layer firewall, right? The traditional things that used to be at the network layer, just trying to make sure that computer A doesn’t talk to computer B, it now needs to translate in the way that, hey, agent A cannot talk to agent B or agent A cannot talk to the patient record systems, or it needs to get permission from a human before it does that. And all those things are happening at such scale. We see so many stats about thousands of agents running and doing all these things every day in every enterprise. And so, when such speed and scale is at play, you’re going to need a different tool, a different system to tackle these systems, and it cannot be just a manual process. That’s kind of where a lot of the traditional tools fall apart because they relied on kind of checking the external boundaries, but they don’t know what is going on inside these environments. The tools used to be, oh, I’m going to scan code and try to make sure there is no threat lurking inside.
But when the code is being generated in real time by these agents, they’re coming up with new API endpoints on their own. They’re coming up with MCP servers on their own. And so, what do you do when this new code is getting generated on the fly? What do you do when intent and instructions can drift and can change over time? And that’s where you need something that is understanding what kind of actions are going on and make sure that you’re going to stay compliant as well as not bring more threats and risks into your system.
Kelly: Makes a lot of sense. Thanks for explaining all that for us. Beyond regulatory exposure, what’s the real financial and reputational cost when a healthcare AI deployment goes wrong? And is the industry pricing that risk correctly?
VJ: Look, there are a lot of people trying to understand a lot around pricing the risk and what to do in care of the benefits and kind of threats that come through. And I feel like we’ll learn a lot over the coming couple of years that how this translates in practical life, but definitely there’s a big shift needed, right? With the scale at which these agents can access the number of records, that’s really scary to be honest. There are different types of compliance rules and regulations around like, “Hey, XYZ number of records breached results in XYZ kind of fine.” Well, but also when you have the loss of trust, let’s say patients were trusting some EHR system or some hospitals and other systems with their private information that now suddenly when you have a massive scale breach, that trust is lost, there will be a lot of questions around like, well, do I want to be passing on all my data to the system that has a lot of security risks. But beyond that, there were a year or two ago, a couple of years ago now, where there was a massive outage at airports and airlines where some cybersecurity vendor was not able to deploy things properly.
And those type of operational risks that come in that can really bring down some of these systems and healthcare being such a critical infrastructure requires a level of kind of risk analysis before you put in AI into these environments where if something goes wrong, it can terribly, terribly bring down the entire infrastructure. And I mean, I think those things are obviously questions that a lot of the leaders are thinking through. A lot of the security teams that we talk to, the legal, financial teams that we interface with. And so those are things that are still kind of in flux and hopefully we’ll find ways to keep bringing innovation while also kind of bringing in the right guardrails and safety measures along the way.
Kelly: Yeah, no, definitely agree with all that. And what you were talking about, the lost trust really resonated with me because it seems like once that trust is lost, it’s hard to regain. VJ, for a CCO or Chief Compliance Officer who has already deployed AI across their organization, what are the first steps they should take to understand their actual exposure?
VJ: There’s a classic saying: You can’t secure what you can’t see, right? And I think in the world of AI and agents and APIs, I think a lot of leaders are realizing that discovery means a completely different thing at this point. A lot of teams have had engineering observability tools or some form of access visibility and all that. But I think what we are seeing is that as teams try to understand like, “What is going on? Okay. My teams have already deployed all this AI. They are using all these AI tools, or they have deployed agents in certain ways. Just getting visibility into what’s going on. That’s where everyone has to start.”
And being able to do that in all these different use cases and areas, so whether it’s employees using AI clients and talking to different AI systems and trying to, whether it’s private or public kind of exposure, there is a lot to be done in just understanding that exposure. When you have your EHR information or EHR systems as well as your other cloud environments, whether they are running on hybrid, private, there are a lot of different systems. People have to start with what are the AI models running? What are the agents running? And we’ve come across teams that feel like, “Oh, yeah, I have XYZ tool that gives me AI, SPM, and I know the five models that I’m running.” And when we actually go show them like, well, actually it’s not five. It’s like 97.
That really shocks people. And that type of sprawl has happened so fast in the last year or two that getting access to the telemetry that gives you that type of visibility, it’s something that is actually available, right? It’s such a daunting task to know what is going to come through. With kind of full visibility, we call it discovery. We talk about getting the right telemetry from the layers at which AI operates, layers at which agents talk to different agents and systems over APIs and MCPs. And if the leaders are uncomfortable kind of knowing like, “Do I really know how many things are running?” and that’s kind of the big gap that you have to start closing, and from there, you can set up the right processes around detecting the risks and then defending and controlling and governing all these different systems that are in your purview.
Kelly: Well, thank you so much for providing all that great information and for sharing your insights with us, VJ, on the AI security blind spot that healthcare can’t afford to ignore. If a listener wants to learn more, contact you to discuss this topic further, how best can they do that?
VJ: You can reach us on our website, operant.ai, and I’m also reachable on email directly, vrajesh@operant.ai. Thank you so much.
Kelly: Awesome. Thank you for providing that. And thank you all for joining us for this episode of The Hospital Finance Podcast. Until next time…
[music] This concludes today’s episode of The Hospital Finance Podcast. For show notes and additional resources to help you protect and enhance revenue at your hospital, visit besler.holdings/podcasts. The Hospital Finance Podcast is a production of Besler Holdings.
If you have a topic that you’d like us to discuss on The Hospital Finance Podcast or if you’d like to be a guest, drop us a line at contact@besler.holdings.
