That is the conclusion of a report on AI adoption and regulation in healthcare systems in Europe, based on responses from 50 of the 53 member states in the WHO’s European region, which includes Central Asia.
Only four countries, or 8%, have adopted a dedicated national AI health strategy, and seven others are in the process of doing so, the report said.
“We stand at a fork in the road,” Natasha Azzopardi-Muscat, the WHO Europe’s director of health systems, said in a statement.
“Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care,” she said.
Almost two-thirds of countries in Europe are already using AI-assisted diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.
The Mater Hospital in Dublin recently began using artificial intelligence across its radiology department.
It’s used to analyse all head scans for bleeds, all chest scans for blood clots, and all bone x-rays for fractures, to make sure patients with the most urgent needs are seen first.
In September, the Royal College of Surgeons in Ireland (RCSI) began offering an AI in Healthcare course. Trinity College Dublin launched a similar course earlier this year.
The WHO urged its member states to address “potential risks” associated with AI, including “biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician–patient interaction and inequitable outcomes for marginalised populations”.
Regulation is struggling to keep pace with technology, the WHO Europe said, noting that 86% of member states cited legal uncertainty as the primary barrier to AI adoption.
“Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong,” said David Novillo Ortiz, the WHO’s regional advisor on data, artificial intelligence and digital health.
The WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems “are tested for safety, fairness and real-world effectiveness before they reach patients”.