
Palantir in bed with the NHS

Is Palantir using war to train AI systems for the NHS?

Serious questions have been raised about the relationship between the NHS and Palantir, a US data-analytics firm best known for its defence and intelligence work.

The idea that Palantir is “using war to train AI systems for the NHS” comes from a mix of real facts and legitimate ethical concerns.



Here’s a clear breakdown.


1) What Palantir actually does (in both war and healthcare)

Palantir is a US tech company that builds data analysis and AI platforms used in very different contexts:

• Military / war use

Works with the Pentagon and allies on systems like Project Maven


These tools:

• Analyse satellite/drone data

• Help identify targets

• Speed up military decision-making 

Some demos even show AI helping plan attacks or interpret battlefield data.


👉 So yes — Palantir does build AI used in warfare


In the UK, Palantir also provides the software behind NHS England’s Federated Data Platform (FDP)

This system:

• Connects hospital data (waiting lists, beds, staff, etc.)

• Helps with scheduling, logistics, and planning care


👉 FDP is mainly about health system efficiency, rather than clinical AI or weapons
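
To make “connects hospital data” concrete, here is a minimal, purely hypothetical sketch in Python (using pandas). The file names, columns, and thresholds are invented for illustration; this is not Palantir or FDP code, just the kind of operational linkage a planning platform performs.

```python
# Hypothetical illustration only: not Palantir/FDP code.
# Shows the kind of cross-site operational linkage a planning platform performs.
import pandas as pd

# Invented extracts, one row per hospital.
waiting = pd.read_csv("waiting_list.csv")   # columns: hospital, patients_waiting
beds = pd.read_csv("bed_occupancy.csv")     # columns: hospital, beds_total, beds_occupied

# "Connecting hospital data": join the two operational feeds by hospital.
combined = waiting.merge(beds, on="hospital")

# Simple derived planning metrics.
combined["occupancy_rate"] = combined["beds_occupied"] / combined["beds_total"]
free_beds = (combined["beds_total"] - combined["beds_occupied"]).clip(lower=1)
combined["waiting_per_free_bed"] = combined["patients_waiting"] / free_beds

# Flag sites under pressure so planners can reallocate capacity.
pressure = combined[combined["occupancy_rate"] > 0.92]
print(pressure[["hospital", "occupancy_rate", "waiting_per_free_bed"]])
```

Nothing in a sketch like this involves clinical decision-making or model training; it is logistics over operational data, which is what the efficiency framing above refers to.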


2) Where the “training AI with war data” idea comes from

This claim is more about concern than evidence. It usually comes from three points:



A. Same company, different domains

Palantir builds AI for:

• military intelligence

• policing & surveillance

• healthcare


Critics worry:

If the same company builds all these systems, knowledge or techniques could transfer between them.


That’s plausible at a technical level (shared tools, methods), but there’s no concrete evidence that:

• NHS data is used to train military AI

• military data is used to train NHS systems



B. Data-sharing fears (key controversy)

Some groups argue:

• NHS data is extremely valuable (huge population, long history)

• Once data enters complex AI systems, it’s hard to guarantee it isn’t reused


Even though NHS England says Palantir is not allowed to train AI on NHS data, Palantir says it does use client data to train its models.


👉 The concern is also about future misuse or lack of transparency.



C. Ethical objections (big driver of the narrative)

Many NHS staff and groups object because Palantir:

• has worked with the US military

• has worked with US immigration enforcement (ICE)

• develops tools linked to surveillance and warfare


This leads to arguments like:

“Using their software supports a war-linked company”

“Healthcare data shouldn’t be handled by firms tied to military AI”

These concerns are widely reported among NHS staff and unions


3) What is NOT happening (based on evidence)

So far there is no credible evidence that:


❌ NHS patient data is being used to train battlefield AI

❌ War data is directly used to treat NHS patients

❌ The NHS system is a military AI system


4) What is a fair concern vs what isn’t

Legitimate concerns

• Data privacy and control

• Dependence on a private US tech firm

• Ethical issues around Palantir’s other clients

• Transparency of AI systems


Less accurate claims

• “They are training NHS AI using war data” → ❌ no evidence

• “Your medical data is being used to run wars” → ❌ not supported



Core argument: healthcare should not be built on military-industrial infrastructure


Healthcare exists to preserve life impartially. 

Military companies are structurally designed for strategic advantage, intelligence gathering, and conflict outcomes. 

Combining the two creates unavoidable conflicts of interest that risk distorting healthcare priorities, data use, and public trust.


1) Fundamental mission conflict


Healthcare mission:

• Treat patients equally

• Maximise wellbeing

• Minimise harm

• Maintain trust and confidentiality


Military/defence mission:

• Gain strategic advantage

• Identify threats

• Operate in adversarial environments

• Prioritise national security objectives


Argument:

Even if a company works on both, these missions are not fully compatible. Healthcare data systems built by defence contractors risk inheriting a logic of surveillance and control rather than care and confidentiality.


2) Risk of function creep (gradual misuse)

A key concern is function creep: systems built for one purpose slowly expand into others.


Example risks: health data systems later being used for:

• population monitoring

• predictive policing

• security screening

• “Emergency use” justification expanding over time


Argument:

Once infrastructure exists, it is difficult to strictly enforce boundaries between healthcare use and security use, especially when the provider’s core expertise is intelligence and defence.


3) Data trust and patient confidence

Healthcare depends on absolute trust that:

• sensitive data stays within medical contexts

• it is not repurposed for non-medical surveillance


If patients believe their data is connected to military-linked firms:

• they may withhold information

• they may avoid care

• public trust in health institutions erodes


Argument:

Even perceived association with defence intelligence can damage the psychological contract between patients and healthcare systems.


4) Concentration of power and opacity

Military tech companies often:

• operate with high levels of secrecy

• work on classified contracts

• resist full public disclosure


Healthcare systems require:

• transparency

• clinical accountability

• public scrutiny


Argument:

When critical healthcare infrastructure is built by firms used to secrecy, democratic oversight becomes harder, weakening accountability.


5) Normalisation of surveillance infrastructure

Military AI systems are often designed for:

• detection

• tracking

• prediction of “threats”


In healthcare, similar tools can become:

• risk scoring systems

• predictive profiling of patients

• automated prioritisation of care


Argument:

This can shift healthcare from universal care to risk-managed population control, especially if models prioritise efficiency over equity.
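
To illustrate the efficiency-versus-equity point, here is a deliberately simplified, hypothetical prioritisation score in Python. The patients, weights, and formula are invented and bear no relation to any real NHS or Palantir model; the only purpose is to show how the weights chosen for a score quietly encode whose care comes first.

```python
# Hypothetical illustration only: a toy prioritisation score, not a real model.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    clinical_need: float      # 0-1, higher = more urgent
    expected_cost: float      # relative treatment cost
    expected_duration: float  # relative bed-days

def priority(p: Patient, w_need: float, w_efficiency: float) -> float:
    # The "efficiency" term rewards cheap, quick cases.
    efficiency = 1.0 / (p.expected_cost * p.expected_duration)
    return w_need * p.clinical_need + w_efficiency * efficiency

patients = [
    Patient("complex, high-need", clinical_need=0.9, expected_cost=4.0, expected_duration=5.0),
    Patient("routine, low-need", clinical_need=0.3, expected_cost=1.0, expected_duration=1.0),
]

for w_need, w_eff, label in [(1.0, 0.0, "need-only"), (0.5, 1.0, "efficiency-weighted")]:
    ranked = sorted(patients, key=lambda p: priority(p, w_need, w_eff), reverse=True)
    print(f"{label}: {[p.name for p in ranked]}")

# The need-only weighting puts the high-need patient first; the efficiency-weighted
# score puts the routine case first. The model's weights, not clinical judgement,
# decide the ordering.
```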


6) Ethical spillover and reputational legitimacy

Working with healthcare can:

• legitimise military companies in civilian life

• blur public understanding of surveillance technologies

• make defence contractors appear “neutral infrastructure providers”


Argument:

This risks normalising companies whose primary expertise is defence and intelligence into spaces that should remain strictly civilian and humanitarian.


7) Alternatives are possible (so the trade-off is not necessary)

A strong argument must address the claim of inevitability. Healthcare systems can instead use:

• public-sector tech development

• open-source platforms

• civilian-only contractors with strict health governance

• transparent procurement systems


Argument:

Because alternatives exist, reliance on military-linked firms is a choice, not a necessity.


Conclusion

Healthcare depends on trust, transparency, and non-adversarial data use, while military companies are built around secrecy, intelligence, and strategic advantage. 

Combining them risks distorting healthcare priorities, weakening public trust, and enabling long-term expansion of surveillance logic into medical systems.


Bottom line

• Palantir does build AI used in war

• It also builds data systems for the NHS


The systems are purportedly separate in purpose. The controversy is really about:

• trust

• ethics

• data governance


More: How AI is being used to target Palestinians



