This article is part of Poland Unpacked, weekly intelligence for decision-makers.
Katarzyna Mokrzycka, XYZ: OpenAI has announced the launch of ChatGPT Health. Formally, the company describes it as a “dedicated solution that safely combines your health information with ChatGPT’s intelligence to provide better assistance when seeking answers to health-related questions.” OpenAI acknowledges that health-related queries are among the most frequently asked of ChatGPT. In your view, will this be a breakthrough? Could this solution actually change how people are treated worldwide?
Ligia Kornowska, Data Lake: I hope not, because ChatGPT Health is an untested product. That is, it is not a certified medical device. At present, we do not know to what extent ChatGPT Health will tell us the truth - or the opposite - when analyzing our test results. If we cannot verify an algorithm’s effectiveness - which also means we cannot quantify its shortcomings - there is absolutely no basis to make diagnostic or treatment decisions based on its output. I hope it will not be used for rigorous diagnostic or therapeutic analysis. In Europe, that would be illegal.
A chat won’t replace a doctor. A chat can provide soft guidance on wellness, exercise, or nutrition. It can also help by reminding users about preventive screenings, suggesting better sleep habits, or drawing attention to dietary choices. That is exactly how OpenAI currently markets the tool. ChatGPT Health, however, should not have a decisive impact on diagnostic pathways, and certainly not on patient treatment.
Who's who
Ligia Kornowska
Physician and leader of the AI and Health Innovation Coalition. Activist in medical innovation, focused on digital transformation in healthcare, particularly AI and medical data. Co-founder of Data Lake, a company leveraging medical data and blockchain technology for research purposes. Former vice president of IFMSA-Poland (International Federation of Medical Students’ Associations). Co-created and led an association of young medical managers at the Polish Society of Health Management.
For now, the solution is available only in the U.S., not in Europe - yet it is likely only a matter of time before it reaches other markets. We both know that, with limited access to specialist doctors, people will inevitably ask a chat not about improving their sleep but about what ails them and how to treat it. OpenAI is the first to launch such a solution, but certainly not the last. You cannot forbid people from asking AI health questions. You can prohibit providers from giving answers, as has been done in China. Does that make sense?
Prohibiting it would be a nuclear option. This development cannot be stopped. It would be like forbidding patients from using a search engine to check symptoms. But anyone using an uncertified product like a chat must remember that its answers are not medical facts - they may be false and cannot serve as a basis for treatment. A chat must state this explicitly: it is not a medical device. No one can guarantee its answers, and its recommendations or analyses must always be checked with a professional. Anyone who has tried AI for questions in other domains has probably encountered misinformation at least once, right?
Discussions are already underway about whether a chat like GPT could replace medical staff. This needs repeating endlessly: absolutely not.
This is only the first phase of healthcare chats - a transitional stage, most likely. OpenAI says it collaborated for two years with 260 doctors across 60 specialties from 60 countries. Could ChatGPT become useful for system-level healthcare solutions? Could such a chat support formal medical consultations?
I envision a model where I ask the chat for advice at home, it flags a potential issue, and my family doctor sees it in the system and orders tests. Then both the AI and the doctor review the results. This way, the doctor gets a “second opinion” while retaining control over the diagnosis and the chat’s recommendations. Is that possible?
Absolutely. I can imagine scenarios where large language models support treatment processes in certain areas. Some may not even require medical device certification - but they must communicate with patients appropriately. For instance, an algorithm could inform a patient about free preventive programs in their region or city that they could or should use, and then point them to the next steps. Engaging people in preventive care is an excellent way to support them.
Organizationally, it may be difficult to implement a system where family doctors can monitor every question their patients ask the chat - the volume would be overwhelming.
AI deployment must accelerate
OpenAI aims to create a personal assistant for everyone, so the company is keen on building a personal relationship between the chat and the user. What can be done systemically to increase the safety of such services, or to build a controllable alternative to private Big Tech?
It is worth promoting the development and deployment of certified AI medical devices with proven effectiveness - where scientific evidence shows that the AI works at least as well as a physician. Deployment should also involve doctors, both general practitioners and specialists. A physician overseeing their own patients could allow them to use a certified AI medical device that provides additional information when the doctor is unavailable. Instead of turning to an internet chat, patients could rely on validated AI algorithms under their doctor's supervision. It would not be as instant or as universally accessible as today's large language models, but adapting the system to a changing world seems inevitable.
We must accelerate the introduction of AI-based solutions into the healthcare system. What obstacles remain?
Mainly, it is a matter of defining funding pathways for certified AI medical devices in Poland. Medical staff also need education on why using such tools is beneficial. Modern technology could help clinicians care for more patients.
AI in healthcare we don’t see
Is AI in healthcare hype or reality? There is a lot of talk about AI being used somewhere, but in practice, AI in healthcare is more often experienced through AI chats than through true medical solutions.
Definitely reality.
What can AI currently do in healthcare services and medical sciences? Does it play a bigger role in diagnostics or in discovering new drugs and therapies?
AI already has spectacular successes both abroad - in the U.S. and across Europe - and in Poland. This is especially visible in radiology, where AI algorithms help diagnose conditions from medical images, including CT, X-ray, and MRI scans. AI is already used in Polish hospitals: the 2024 report on the state of digitalization in Polish hospitals, published by the e-Health Center, notes that over 13% of hospitals now use AI, and more than half of these applications support image diagnostics. Cardiology follows radiology, and even psychiatry now sees AI aiding diagnostics.
Pharmaceutical companies also use AI successfully to discover new therapies. For instance, AI can narrow a pool of tens of thousands of candidate compounds to the few thousand with the highest potential for drug development, significantly reducing development time. In the U.S., AI is also being used to create “digital twins” of patients in clinical trials: the twin simulates how a patient’s health would progress without the drug, while the actual participant receives it. This allows trials to run with smaller control groups.
Polish implementation pathways
Is there space for personalized medicine? How soon will we reach a point where anyone can conduct such an in-depth analysis on themselves - in five, ten, or twenty years?
It is already happening. A Polish startup offers an AI-supported solution for basic triage. When a patient feels unwell, they can quickly find out whether to go to the emergency room, schedule a doctor’s appointment, or stay home if there is no real cause for concern.
In 2018, the U.S. approved the first AI-based algorithm that could independently issue a diagnostic result from an eye exam - critical for diabetic patients at risk of vision loss - without a doctor’s intervention. Previously, patients’ images had to be evaluated by a physician. Today, algorithms can analyze images, detect complications, and tell patients whether urgent consultation is needed or if a later appointment suffices.
Poland’s Ministry of Digitalization is also considering deploying these modern tools. The country’s digitalization policy includes a dedicated chapter on healthcare, which mentions implementing predictive analytics tools for the population. It will likely not be a single tool assessing every health condition of every Pole; instead, selected AI tools may target specific tasks, such as predicting the risk of hypertension, lung disease, or diabetes - conditions with the greatest treatment challenges or mortality rates.
Sensitive data is essential
This cannot happen “on good faith” alone. AI needs access to our data to make predictions.
Absolutely. If we want personalized solutions, we must collect our own data to feed AI - ranging from smartwatch heart rate and pulse readings to lab results and genetic information.
Developing new solutions requires millions of records - a very sensitive issue.
We face a tension: on one hand, access to data is crucial for advanced, effective healthcare; on the other, patients’ privacy must be respected, and people may refuse to allow their data to be used. Anonymizing medical data is also challenging - in an ultra-rare disease, removing the name, age, and city may not be enough to keep a patient unidentifiable.
In Poland, data access remains difficult. There are multiple medical registries of varying quality: NFZ (the National Health Fund) and the e-Health Center maintain their own databases, and the Ministry of Health commissions additional registries. Yet companies still struggle to access even anonymized datasets, and guidelines for transparent, democratic access are lacking.
Data block = Startup block
Does this hinder the development of medical startups in Poland? If data access were regulated, would more medical startups emerge?
Maybe not more startups, but existing ones could conduct research on larger datasets and improve their AI algorithms. Fortunately, AI does not always require Polish data; European datasets can also be useful, and Polish startups often use external data, for example from the UK. This is better than using data from Asia or the U.S., because those populations differ from European ones, so an algorithm trained on them may perform differently on European patients.
This often drives startups to target the U.S. or other large markets rather than Poland alone.
If a startup wants to operate in Poland, it usually requests patient consent for data use. Some companies already follow this path, using patient-provided data for research or medical experiments.
Is this automated, or does the startup request consent each time?
Consent must be obtained each time and be detailed. This limits studies to small groups - hundreds of patients - while millions of records are needed to train highly effective algorithms.
Europe lacks a certified AI list
How many certified AI-based medical tools are in use?
In Europe and the U.S., several thousand. This market is growing rapidly - three years ago, there were maybe a thousand in each region.
And in Poland?
If an algorithm is certified as a medical device in Europe, it is valid in Poland too - for instance, radiology assessment tools require certification; otherwise, using them in diagnostics would be illegal. Unfortunately, Europe lacks up-to-date statistics on which algorithms are certified. In the U.S., such a list is maintained and updated regularly; in Europe, nothing comparable exists.
