As AI technology rapidly evolves, Scotland’s health care system finds itself at a critical juncture, one that could redefine how care is delivered for generations. A new report from the Glasgow Centre for Population Health (GCPH), a partnership involving the University of Glasgow, signals that while the potential is vast, the path forward must be paved with transparency, accountability, and, above all, public trust.
Published this week, the report, The Potential of Artificial Intelligence (AI) within Public Health and Health Care Systems in Scotland, offers a clear-eyed assessment of AI’s transformative promise in NHS Scotland, along with the governance challenges that could determine its success or failure.
Efficiency Meets Ethics: A New Era for Public Health?
At the heart of the report is a compelling vision: AI as a powerful tool to support clinical decision-making, reduce strain on overburdened health care systems, and unlock insights from vast datasets that human analysts alone cannot process.
From AI-driven diagnostics to predictive modelling for public health crises, the possibilities are wide-ranging:
- Accelerated data analysis: AI can process large-scale health datasets in seconds, identifying patterns that help predict outbreaks or prioritize interventions.
- Improved service delivery: Automation could reduce administrative burdens on NHS staff, freeing up time for direct patient care.
- Personalized medicine: AI has the potential to tailor treatment plans to individual genetic, lifestyle, and social factors.
But there’s a catch — several, in fact.
A Question of Trust and Transparency
“The expanding use of AI will potentially be one of the most strategic advancements facing the NHS over the next 50 years,” said Chris Harkins, the report’s lead author and Public Health Programme Manager at GCPH.
Yet, as the report makes clear, technological capability is only half the story. For AI to deliver real benefit in Scotland’s public health and care systems, public confidence is essential.
And right now, that confidence is fragile.
A central finding of the report is the lack of transparency around current AI trials and pilots in Scotland. Without a centralized, public-facing repository of what’s being tested — and where — citizens remain in the dark. This opacity risks eroding trust and, ultimately, undermining uptake.
AI Is Not the Next Industrial Revolution — It’s Faster
AI is often likened to the Industrial Revolution, but the comparison may understate the urgency.
Unlike steam power or railways, AI evolves at lightning speed — and across borders. Innovations developed in one country can be deployed globally within days. This pace presents enormous challenges for regulation, oversight, and ethical safeguards.
Scotland’s health leaders know they must keep up. But doing so while ensuring equity, patient safety, and democratic accountability will require an approach more deliberate than disruptive.
AI Governance: The Missing Framework
One of the report’s most significant warnings is the absence of a cohesive governance and evaluation framework. There are concerns about algorithmic bias, since systems trained on unrepresentative datasets may reinforce existing health inequalities, and about the lack of ethical guidance for AI developers and deployers.
To address these gaps, the report calls for:
- A central repository of AI projects and evaluations in NHS Scotland
- Stronger research capacity to understand AI’s long-term effects
- Independent evaluation frameworks to monitor outcomes and guard against unintended harm
- Public engagement strategies to foster inclusion and co-design
The idea is clear: AI in health care must be done with people, not to them.
Lessons from IT Failures of the Past
Scotland’s public sector doesn’t have the cleanest record when it comes to digital innovation.
High-profile failures — from delayed hospital IT systems to malfunctioning patient data platforms — have made many wary of grand promises. The University of Glasgow’s report doesn’t shy away from this. Instead, it uses these failures to advocate for realistic planning and risk management in AI deployment.
“If we get this wrong,” said one health tech consultant familiar with the findings, “it could set public trust in digital health care back a decade.”
Inequality, Access, and the Human Touch
Another major theme in the report is health inequality — and how AI, if not handled responsibly, could worsen it.
Communities already underserved by NHS services are at risk of being further marginalized if algorithms are built using biased data. For example, AI trained on predominantly urban populations may misdiagnose or deprioritize rural patients. The report urges developers to build inclusivity into every stage of AI design — from data collection to deployment.
Human oversight is also key. The aim, the report insists, is not to replace clinicians but to augment human judgment with computational insight. That means frontline NHS staff must be part of the conversation — and supported with the training to use these tools confidently.
Looking to 2026: A Strategy in Need of Renewal
Scotland’s national AI strategy is due for review in 2026. The GCPH report lands at a crucial moment, offering practical recommendations and a much-needed ethical compass.
The Scottish Government, NHS boards, and tech providers are now under pressure to respond — not just with pilot projects and press releases, but with a concrete roadmap that puts patients, professionals, and public accountability at its core.
“We need to balance innovation with inclusion,” Harkins said. “The goal is not just a smarter health system, but a fairer one.”