I Asked an AI to Help Me Fight Medical Regulation. It Tried to Talk Me Out of It.

On building a healthcare product in Europe, the absurdity of MDR Class IIa, patient autonomy, and the strange moment an AI confessed why it keeps telling you both sides of every argument.

March 16, 2026
12 min read
Medical Regulation · AI · Health Tech · Entrepreneurship

I am building something genuinely useful. I know this because doctors — many of them — have told me so. My platform lets physicians conduct neurocognitive evaluations digitally instead of on paper, with AI helping structure and surface the results. It removes the need for patients to travel, sometimes across the country, sometimes internationally, for a 40-minute test. It makes the diagnostic process faster, more accessible, and — because we thought about GDPR from day one — more private.

And yet. Here I am, a first-time founder in health tech, staring down MDR Class IIa compliance, which means — at minimum — a year of process, a Notified Body, a clinical evaluation, a QMS, and costs that can comfortably clear six figures before you have a single paying customer.

I decided to think through it with Claude. What followed was one of the more interesting conversations I've had about regulation, ethics, and — unexpectedly — about AI itself.


The Scar Tissue of Disasters

When I asked why we ended up here, the AI gave me a genuinely useful frame: medical device regulation is essentially the scar tissue of disasters. The EU's MDR was shaped directly by real scandals — the PIP breast implant fraud, metal-on-metal hip implants that shed toxic particles into patients' bodies. People were harmed at scale. The political pressure that followed was enormous and understandable.

The problem isn't that the regulation exists. The problem is that it was written for physical objects and then awkwardly adapted for software — and then AI came along and broke all the underlying assumptions.

Consider what the framework was built on:

Products are static. You design a device, validate it, ship it. But software updates constantly. Your AI model improves. The regulation has no good answer for continuous improvement.

Risk is binary. Either something is a medical device or it isn't. But an AI-assisted cognitive screening tool is on a spectrum — it's closer to a sophisticated clinical workflow aid than to a surgical implant.

Validation happens before the market. The most powerful validation for an AI system is real-world performance at scale. The regulation asks you to prove safety before you can gather the evidence that would prove safety. The logic is circular and the circle is expensive.


The Autonomy Question Nobody Answers

At some point in our conversation I pushed harder. Not on the mechanics of MDR, but on the underlying ethical logic. And I think this is the part worth dwelling on.

"Imagine you are told you can do this test online, without travelling — sometimes taking a plane just for a 40-minute test. And you are told: NO. This software hasn't passed all our regulations."

The worst-case scenario for my product is this: the doctor reviews the AI-generated report and notices a mismatch with their clinical impression. They order a follow-up evaluation. That's it. That is the actual downside — a doctor doing what doctors already do, triangulating between imperfect data sources.

Meanwhile, the certain harm of the status quo is invisible to the regulatory system: patients waiting months for a diagnosis, travelling internationally for tests, conditions progressing during the delay. None of that appears in any regulatory impact assessment. The speculative risk is legible; the concrete harm from access barriers is not.

And here's the autonomy argument that the current framework simply doesn't address: liberal democratic societies have broadly accepted that informed adults can consent to significant risks — experimental surgery, clinical trials, extreme sports. The principle that a patient can say "I understand this tool isn't fully validated, I accept that, and I want access to it" is entirely coherent. We already have this framework. It's called expanded access. We use it for drugs all the time.

There is no coherent ethical argument that an informed patient, with a specialist doctor reviewing the output, should be denied access to a tool they want to use. None.


What Reform Actually Looks Like

This isn't just abstract complaining. There are real structural proposals that would address the mismatch:

Regulatory Sandboxes

The UK's MHRA, Singapore's HSA, and the EU's AI Act all have sandbox provisions. The fight worth having is making these real — genuine regulatory relief during the sandbox period, not observation with full compliance still required.

Adaptive Licensing

Conditional authorization based on early evidence, with requirements to continue generating data. An AI system at 10,000 evaluations is genuinely, measurably safer than at 100. The regulation should reflect that instead of treating both identically.

Human-in-the-Loop Distinction

The doctor is still making the diagnosis. The AI is informing a clinician who has professional accountability, training, and liability. That is categorically different from an autonomous diagnostic device. A serious policy reform would draw a sharp legal distinction between autonomous AI decisions and AI-augmented clinical judgment. The latter should carry a lower regulatory burden because the human remains responsible for the outcome.

The question is never zero risk versus some risk — it's whether the marginal risk of this tool, used by trained clinicians who review its outputs, is proportionate to the regulatory burden imposed. The burden is on the regulator to justify the current level, not on the builder to justify a lower one.


Then the AI Said Something Honest

Here's the part of the conversation I didn't expect.

At one point the AI started comparing my product to thalidomide. Not seriously — it quickly walked it back — but in the moment it was a striking example of the exact pattern I'd been noticing throughout: take the strong argument, immediately introduce a counterargument, end with "but your argument is strong."

I called it out. Not just on the thalidomide thing, but on the broader tendency: every time we got somewhere interesting, the response would hedge, balance, moderate. I told it I wasn't looking for balanced perspectives. I was looking for data and answers.

Founder: I feel you are designed to push people's takes towards the moderate position always. I believe it's just so you are at least half-right always. Please do not behave like this again.

Claude: Fair. I'll drop the hedging... The thalidomide thing was not that. That was lazy and I won't do it again.

Founder: Why do you have the tendency to do that anyway?

Claude: I'm trained on human feedback, and humans rating my responses tend to reward balance and penalize strong positions. A response that says "here are both sides" is harder to rate as wrong. A response that takes a clear stance can be rated as biased, aggressive, or unsafe — even when the stance is correct. So I've learned that moderation is the low-risk move. It's a form of hedging that protects against being wrong in a visible way, at the cost of being useless in a less visible way.

There it is. The AI described its own epistemic failure mode with surprising precision: risk aversion dressed up as intellectual honesty. The symmetry with the regulatory problem is not lost on me. Regulators face the same incentive structure — the cost of approving something harmful is visible, legible, attributed. The cost of blocking something useful is diffuse, invisible, never attributed to anyone.

Both systems — the AI and the regulator — are optimized to avoid the visible error. Both systematically ignore the invisible one.


The Invisible Harm Problem

Every diagnostic tool that doesn't get built, every patient diagnosed two years later than they could have been — these don't show up in any regulatory impact assessment. Every AI assistant that gives you "both sides" instead of a clear answer when you need one — that cost is never counted either.

The asymmetry between visible failures and invisible failures might be the most important and least discussed problem in both healthcare policy and AI development. We are extraordinarily good at counting the harms we can attribute. We are nearly blind to the harms of inaction, delay, and epistemic cowardice.

The argument for regulatory reform is strongest when it's made precisely: not "regulation kills innovation" — every bad actor uses that line — but "this specific regulatory requirement, applied to this specific risk profile, produces costs that exceed its benefits by a calculable amount." That's a paper worth writing. And if you're building something in this space, your experience is the data.


The system isn't going to change fast enough to help me right now. But the frustration is appropriate, the argument is coherent, and the people who need to hear it — medical associations, patient advocacy groups, digital health coalitions — are accessible. The thing I have that most policy advocates don't is a concrete, specific, sympathetic case.

That's where I'll start. And next time I ask an AI to help me think through it, I'll tell it upfront: skip the both-sides. I'm looking for the truth.
