How to scale healthcare AI without risking patient data


AI in healthcare is uniquely promising, offering gains not just in operational efficiency, but also in human health and longevity. Despite this, AI adoption lags far behind industries like retail, finance, and customer service. According to a Federal Reserve Bank survey, 44% of metro-area hospitals use AI, and only 18% of non-metro hospitals do.
For Pelu Tran, CEO and co-founder of Ferrum Health, the obstacle isn’t the AI models themselves—but the lack of infrastructure and safety layers required to make them reliable. “AI tends to move fast and break things,” he says. “But in healthcare, the AI can’t break. Not if you’re trying to find cancer or follow up on urgent findings when lives are at stake.”
After more than a decade deploying AI in highly regulated clinical environments, Tran has seen what works—and what doesn’t. On this episode of MindMakers, Sendbird CEO John Kim sits down with Pelu Tran to unpack what it takes to drive adoption and move healthcare AI to production safely.

Safety and security slow AI adoption in healthcare
As a Stanford-trained physician turned founder, Pelu Tran has spent the last decade building AI platforms for clinical use. He’s clear on why adoption lags. “Healthcare is one of the most conservative, complex, and slow-moving industries,” he says. “We’re still on pagers and fax machines in a lot of this industry.”
In healthcare, AI failure isn’t just costly or non-compliant—it’s dangerous. Wary of both regulatory exposure and patient risk, hospitals are hesitant to send population-scale patient data to early-stage startups. For this reason, the biggest blockers to adoption are data security, HIPAA compliance, and patient safety.
“The mantra [in healthcare] is ‘do no harm’—so flashy demos don’t work,” Tran says. Since trust, compliance, and reliability features aren’t optional, every AI tool must clear a high bar before it can be deployed. Implementing these safety layers slows adoption, but they also serve as essential safeguards for the patients that AI is ultimately meant to help.
Clinicians push ahead, leaders pull back
The uncomfortable truth is that many doctors are already using AI tools like ChatGPT—often without approval—because it saves time and helps patients. For Tran, blocking AI use is unrealistic.
“I’ve spoken to executives whose main goal is to stop doctors from sending patient data to ChatGPT,” he says. “But the truth is, you’ll never stop people from using tech they want to use. The job of hospital AI administrators, IT teams, and clinical leadership is to figure out which tools doctors need to do a good job and enable safe, compliant use.”
Many hospitals are creating roles like Chief AI Officer and forming AI councils to steward adoption. But too often, Tran says, these groups act like blockers—throwing up hurdles and checkpoints and vetting every tool through endless committees. Instead, their role should be setting standards and enabling business units to directly adopt tools that solve their most pressing problems. “That’s how hospitals will adopt the right tools faster and more predictably,” Tran says.
“You’ll never stop people from using tech they want to use. The job of hospital AI administrators, IT teams, and clinical leadership is to figure out which tools doctors need to do a good job and enable safe, compliant use.”
— Pelu Tran, CEO and co-founder of Ferrum Health
The regulatory paradox: Both accelerator and brake
For a sector bound to a fast-evolving AI regulatory landscape, Tran sees encouraging signs of progress. “In 2018, there were about 70 FDA-cleared AI tools,” he says. “Today, there are over 1,000.”
While acknowledging this rapid acceleration at the federal level, Tran says the momentum comes with a caveat: nearly one-third of U.S. states are introducing AI laws around bias, transparency, and safety. This leaves organizations caught in the middle—with federal regulators pushing for faster adoption, while state governments raise the bar for AI compliance.
The result is a paradox: regulation is both driving AI adoption and slowing it, depending on where organizations operate. To move forward, healthcare leaders need more than compliance: they need standardized AI governance and infrastructure that make it safe to deploy AI at scale.

For startups: Find where AI delivers value first
In Tran’s experience, the healthcare sector is “expert mode” for startups. Hospitals see hundreds of AI tools pitched every year, but can only realistically adopt two or three. “Unless you’re solving a top-three problem for a health system, you won’t scale,” he cautions. “Founders need brutal honesty. Your problem might be important to you, but unless it’s a priority for them, adoption won’t happen.”
He sees immediate opportunity in creating provider efficiency: addressing physician shortages, reducing administrative burden, automating low-value tasks, and coordinating care across complex systems. Hospitals can’t afford to add more administrators, so “AI has to step in to automate low-value tasks, coordinate care, and free providers to focus on patients.” Rather than focus on flashy diagnostic tools, startups should target enduring pain points. By aligning with healthcare’s incremental pace and delivering practical wins, startups will find the sweet spot for adoption today while building lasting trust with responsible, enterprise-grade AI.
“Unless you’re solving a top-three problem for a health system, you won’t scale.”
— Pelu Tran, CEO and co-founder of Ferrum Health

The Ferrum Health approach: Separate models from patient data
Progress in a highly regulated sector like healthcare isn’t about disruption, but rather incremental transformation. That’s why Tran compares the industry's adoption to a “Maslow’s hierarchy of AI needs.”
Today, most hospitals are at the bottom: fragmented pilots scattered across departments, deployed without oversight. For AI to become a strategic capability for a hospital or health system, it has to be built on scalable enterprise infrastructure. To climb higher, organizations need three core layers:
Standardized onboarding for AI models
A common architecture for secure deployment
A unified intelligence layer for observability and oversight
Ferrum’s approach builds on this foundation by decoupling AI models from patient data, helping hospitals move up the hierarchy at their own pace without taking on new risk. This involves:
A secure model hub firewalled from patient data where vendors can publish and refine AI models.
A local deployment fabric (cloud, hybrid, or on-premises) that runs models inside hospital systems—without sending patient data out.
“We separate model innovation from security,” Tran says. “Think of it like the App Store for healthcare AI.” This approach allows hospitals to retain control of sensitive data while enabling vendors to innovate and scale their models safely within the system. Combined with standardized onboarding and unified data, Ferrum Health gives hospitals the ability to deploy AI without compromising compliance, privacy, or patient trust.
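In code terms, the pattern Tran describes is “models move in, data never moves out.” Below is a minimal, hypothetical Python sketch of that separation: a vendor-facing model hub that only ever holds model artifacts, and a runtime inside the hospital’s network that pulls a model in and runs inference on patient records locally. All class and method names here are illustrative assumptions, not Ferrum Health’s actual API.

```python
# Hypothetical sketch of the "models move, patient data doesn't" pattern.
# None of these names come from Ferrum Health; they only illustrate the boundary.

from dataclasses import dataclass, field


@dataclass
class ModelHub:
    """Vendor-facing registry: stores model artifacts, never patient data."""
    models: dict = field(default_factory=dict)

    def publish(self, name: str, version: str, weights: bytes) -> None:
        # Vendors publish and refine models here, outside the data firewall.
        self.models[(name, version)] = weights

    def fetch(self, name: str, version: str) -> bytes:
        return self.models[(name, version)]


@dataclass
class LocalRuntime:
    """Runs inside the hospital's own environment (cloud, hybrid, or on-prem).
    Models are pulled inbound; patient records never cross this boundary."""
    hub: ModelHub
    loaded: dict = field(default_factory=dict)

    def deploy(self, name: str, version: str) -> None:
        # Only the model artifact crosses the firewall, and only inbound.
        self.loaded[name] = self.hub.fetch(name, version)

    def infer(self, name: str, patient_record: dict) -> dict:
        # Inference happens locally; the stand-in logic below replaces a real model call.
        weights = self.loaded[name]
        return {"model": name, "bytes_of_model": len(weights)}


# Vendor side: publish a model to the hub.
hub = ModelHub()
hub.publish("chest-ct-triage", "1.0", b"\x00" * 16)

# Hospital side: pull the model in and run it against local records.
runtime = LocalRuntime(hub)
runtime.deploy("chest-ct-triage", "1.0")
result = runtime.infer("chest-ct-triage", {"mrn": "stays-local"})
```

The design choice the sketch highlights is that the `ModelHub` has no method that accepts a patient record, so the architecture itself (not policy alone) prevents PHI from reaching the vendor side.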

Slow, separate, secure: The path to trusted AI in healthcare
Currently, healthcare AI is held back not by models themselves but by a lack of proper infrastructure, governance, and leadership. Flashy demos may win headlines, but hospitals need safe, scalable systems that prioritize patient privacy and trust. AI startups must recognize these realities if they're to win adoption.
Ferrum’s strategy—separating AI models from patient data, standardizing onboarding, and building a unified layer of oversight and governance—shows a clear path for healthcare AI to move beyond pilots without compromising compliance or safety.
To hear more lessons from Ferrum’s AI playbook, catch the full conversation with Pelu Tran on MindMakers—and see how your organization can move AI pilots to production while ensuring secure, responsible innovation that puts patients first.
Want to learn about Sendbird’s enterprise-ready AI agents? You can contact sales.