For her dissertation, Linxuan Niu examined the role of AI in medical care using an online survey experiment. Drawing on responses from 2,256 UK adults, she argues that there is a complex relationship between the public’s concern about technical effectiveness and deeper-seated political and gender-based views. The findings carry important implications for healthcare policymakers.
By Linxuan Niu (MSc Public Policy)
Artificial intelligence (AI) is ushering in a new era in medicine. From algorithms that can spot cancers invisible to the human eye to systems that design personalised treatments, its potential to improve and save lives is immense. Yet, for many of us, this exciting future is tempered by a deep sense of unease, rooted in concerns about our sensitive health data, the risk of algorithmic bias, and the fundamental reliability of these new technologies.
This presents a profound dilemma for policymakers. When faced with difficult trade-offs in governing the use of AI in healthcare, where should their priorities lie? And how do our own concerns shape the choices we want them to make? The answers, it turns out, are far more complex and surprising than we might imagine.
‘Does It Actually Work?’: The Public’s Unwavering Priority
My research showed that, within healthcare, ensuring the technical effectiveness and safety of AI systems is the public’s overwhelming priority. When presented with concrete policy options, respondents consistently favoured those guaranteeing that an AI system is reliable, for instance mandating that any final diagnosis made by an AI be reviewed by a human doctor, over options aimed at protecting privacy or promoting fairness.
This suggests that the majority of the public believes that, before debating the finer details of data ethics or social justice, there must first be absolute confidence in the effectiveness of AI healthcare technology. This reinforces the idea that for National Health Services and regulators, public trust cannot be earned merely by promises of secondary benefits. It must be built on a foundation of rigorous, transparent, and verifiable reliability.
The Worry Paradox: When Too Much Concern Backfires
But here the story takes an interesting turn. One might think that the more we worry about AI failures in healthcare, the more we would demand strict regulation. My research revealed that this is only partially true.
My research found that the relationship between concern about effectiveness and support for related policies follows an inverted U-shaped curve. As concern increases from low to moderate levels, demand for safeguards rises in tandem, a perfectly rational response. However, once concern surpasses a threshold and becomes extreme, the trend sharply reverses.

Figure 1: Relationship between effectiveness concern and prioritising effectiveness-protection policies
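For readers curious how such a curvilinear pattern is usually tested, the sketch below is a minimal illustration rather than the dissertation’s actual code: all variable names, coefficients, and the simulated data are hypothetical. It fits a logistic regression with both a linear and a squared concern term; a negative, significant coefficient on the squared term is what an inverted-U relationship looks like statistically.

```python
# Minimal, illustrative sketch (not the study's code or data):
# testing an inverted-U relationship with a quadratic concern term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2256                                   # matches the survey sample size
concern = rng.uniform(0, 10, n)            # hypothetical 0-10 concern scale

# Simulated inverted-U: support peaks at moderate concern, falls at extremes.
logit_p = -3 + 1.2 * concern - 0.12 * concern**2
support = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"concern": concern, "support": support})

# Logistic regression with linear and squared terms.
model = smf.logit("support ~ concern + I(concern**2)", data=df).fit()
print(model.summary())
# A significantly negative coefficient on I(concern**2) is consistent
# with the inverted-U pattern shown in Figure 1.
```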
Why, then, are those most concerned less likely to support policies designed to address the issue? The answer seems to lie in a fundamental shift in belief: from thinking the problem is “manageable” to thinking it is inherently “insurmountable”. Moderate concern rests on the assumption that AI’s flaws are like programming errors that can be fixed with better regulation. Extreme concern, however, appears to trigger a deeper scepticism, a crisis of trust in the solution itself.
This fear shifts from “this AI might make a mistake” to “this AI can never truly understand my unique health needs”. When the public begins to view the technology as inherently flawed at a conceptual level, they lose trust not just in AI itself, but in the idea that any top-down policy can be an effective safety net. This represents a form of “algorithmic solution aversion”, where a deep-seated scepticism regarding the feasibility of a fix leads people to pragmatically withdraw their support for it.
Values over Concerns: A Different Logic for Social Risks
If the public’s response to AI healthcare effectiveness is a complex calculation of belief and feasibility, their reaction to social risks such as privacy and fairness follows an entirely different logic. Here, my research found that an individual’s expressed level of concern did not predict their policy preferences at all. Instead, their choices were driven by deeper social identities and values.
When it came to protecting personal data, an individual’s gender emerged as a statistically significant predictor (p < 0.001), while their stated privacy concerns, by contrast, did not. In other words, a woman’s long-term social experiences may shape her baseline trust in data-handling institutions more profoundly than any news story about data breaches.
And when the conversation shifts to algorithmic fairness, an issue deeply entwined with social justice debates, decisions become a form of political expression. Indeed, the statistical model showed that political orientation was the most significant predictor (p < 0.001): individuals identifying as more left-wing are significantly more likely to support fairness-oriented policies, such as mandatory bias audits and ethical appeal channels.
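To illustrate the kind of comparison behind these claims, the sketch below again uses simulated data and hypothetical variable names, not the study’s model: stated concern, gender, and left-right self-placement enter the same regression, and in this set-up the identity and values terms carry the signal while concern does not.

```python
# Illustrative sketch (simulated data, hypothetical variables): do identity
# and values, rather than stated concern, predict fairness-policy support?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2256
df = pd.DataFrame({
    "fairness_concern": rng.uniform(0, 10, n),   # stated concern (no effect by construction)
    "gender": rng.choice(["man", "woman"], n),
    "left_right": rng.uniform(0, 10, n),         # 0 = left, 10 = right
})

# Support driven by gender and ideology, not by stated concern.
logit_p = 1.5 + 0.8 * (df["gender"] == "woman") - 0.4 * df["left_right"]
df["support_fairness"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "support_fairness ~ fairness_concern + C(gender) + left_right",
    data=df,
).fit()
print(model.summary())
# Here the gender and left_right coefficients are significant while the
# fairness_concern coefficient stays near zero, mirroring the pattern
# reported above.
```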
Implications for Policy
These insights reveal that a ‘one-size-fits-all’ approach to AI healthcare governance and public communication is destined to fail. Policymakers need to adopt a dual approach.
For technical risks such as effectiveness, the task is to build competence trust. This requires transparent audits and validation processes that the public can easily understand, backed by robust and verifiable evidence of these systems’ safety. For social risks such as privacy and fairness, however, the goal must be to achieve value resonance. This means moving beyond abstract assurances and engaging directly with the core values and deep beliefs of different social and political groups.
The views expressed in this post are those of the author.