OpenAI and Anthropic are showing that their rivalry is alive and well, now in perhaps the most sensitive domain imaginable: our health. On Jan. 7, OpenAI launched ChatGPT Health, a separate environment within ChatGPT for health and wellness questions that links to health data such as Apple Health and medical records. Less than a week later, Anthropic announced Claude for Healthcare, with similar links and a clear focus on both consumers and healthcare organizations. I see in this an attempt by both companies to become the interface between citizens, healthcare professionals and the data that flows between them.
Legally, this is immediately a minefield. These tools connect to health data: special category personal data subject to strict requirements for a legal basis, purpose limitation, data minimization, transparency, retention periods, security and (international) sharing. Although both providers emphasize that health chats and linked health data are not used to train their foundation models, they do use them for product improvement. The key question then is: what gets logged, what falls under product improvement, how testable is that promise and who oversees it?
When nuance disappears in the answer
Large Language Models (LLMs) are perfectly capable of translating existing, vetted information into understandable language. It gets trickier as soon as the model adds knowledge of its own, introduces assumptions or draws conclusions that cannot be traced back to sources. The quality of the "knowledge" being used is also open to doubt. Anthropic, for example, states that its PubMed connector provides access to more than 35 million "pieces of biomedical literature," but full-text articles are often missing. That means you miss methodological details and nuance, exactly what is needed to weigh medical evidence carefully. Clinical guidelines are slower to update, but contain precisely that scientific interpretation. And even then, a familiar risk remains: models can sound convincing while being subtly wrong.
Much influence, little scrutiny
For the layperson, this is particularly tricky. Citizens gain access to a chatbot they can ask probing questions, which outlines connections and offers concrete courses of action, yet they can rarely judge the quality of the substantiation for themselves. That creates a risky asymmetry: a lot of influence on behavior, little ability to review. Not coincidentally, international healthcare bodies are sounding the alarm about the lack of regulation, transparency and performance evaluation of these kinds of tools.
For healthcare professionals, the solution seems simple: they have the expertise, so they can check the output. Theoretically correct, but in practice, human-in-the-loop depends on time, mental headroom and workflow design. On top of that, automation bias sets in: when the tool is usually right, the critical attitude wanes. That's human. So the safeguard lies not in "there's a doctor in between," but in process agreements. When is source verification mandatory, how do you make uncertainty visible, how do you log decisions, and how do you monitor performance and drift?
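To make such process agreements a little less abstract, here is a minimal sketch in Python of what "logging decisions" and "monitoring drift" could look like in practice. Everything in it is hypothetical: the field names and the `override_rate` check are illustrations of the idea, not features of ChatGPT Health, Claude for Healthcare or any existing hospital system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record: illustrates what a decision log could capture,
# not a real schema from any vendor or hospital.
@dataclass
class AIAssistedDecision:
    patient_case_id: str      # pseudonymized case reference
    model_version: str        # which model/prompt version answered
    model_answer: str         # the suggestion shown to the clinician
    model_uncertainty: float  # surfaced uncertainty, 0.0 to 1.0
    sources_checked: bool     # was source verification performed?
    clinician_decision: str   # what the professional actually decided
    overridden: bool          # did the clinician deviate from the model?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def override_rate(decisions: list[AIAssistedDecision]) -> float:
    """Share of cases where the clinician deviated from the model.

    A steadily falling override rate can signal automation bias or
    drift and is a cue to re-audit, not proof the model got better.
    """
    if not decisions:
        return 0.0
    return sum(d.overridden for d in decisions) / len(decisions)
```

The point is not the code itself, but that deviation and agreement become measurable: a waning critical attitude then shows up in the numbers instead of staying invisible.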
The human as a safety net
What can these tools deliver? Potentially a lot. They can strengthen health literacy, better prepare consultations, make results and letters understandable, and help citizens navigate care pathways and administrative steps. For healthcare organizations, they can ease administration and speed communication. In a system with staff shortages and growing demand for care, they can improve accessibility.
But the risks are hefty. In the short term: misinformation at scale, delayed or unnecessary care, and vulnerable groups blindly following the digital counselor. In the longer term: normalization of uploading medical data to commercial platforms, new dependencies on big tech as a "healthcare interface" and a shift from public standards to private ones.
From tool to dependency
So I remain torn. In the Netherlands these tools are not yet available, but in stripped-down form people already use the regular chat models daily for health questions, and sharing medical information in them is undoubtedly already happening at scale. The difference with health modules like ChatGPT Health and Claude for Healthcare is that they add data linkages, masking and explicit claims. This can reduce health disparities, provided we organize development transparently and enforce public values: traceability, guideline-first, auditability, clear claims and testable privacy pledges.
Instead of making ourselves dependent on American health platforms, I see opportunities in the European regulatory context: sovereign infrastructure such as the new Dutch AI factory, and European AI players such as Mistral. Because if AI reads your medical record, it is ultimately not the technology that determines the truth in the consulting room, but the frameworks we build around it.
This text was created in part with support from ChatUMCG, the internal and independent AI chat of the UMC Groningen.
About Bart Scheerder
Bart Scheerder is co-founder of the Applied AI Acceleration Lab (A3 Lab) at the UMC Groningen. In the A3 Lab, researchers, engineers and medical specialists work together on AI applications that make the work of healthcare professionals lighter and more enjoyable. On Wednesday, April 15, Scheerder will deliver a keynote on the main stage of Zorg & ict 2026, on the day devoted entirely to AI. Zorg & ict takes place April 14, 15 and 16. Keep an eye on the Zorg & ict website for the full program.
