A claim has been circulating about the NHS and the defence-linked data firm Palantir: that Palantir is “using war to train AI systems for the NHS”. The idea comes from a mix of real facts and legitimate ethical concerns.
Here’s a clear breakdown.
1) What Palantir actually does (in both war and healthcare)
Palantir is a US tech company that builds data analysis and AI platforms used in very different contexts:
• Military / war use
Palantir works with the Pentagon and allied governments on systems such as Project Maven. These tools:
• Analyse satellite/drone data
• Help identify targets
• Speed up military decision-making
Some demos even show AI helping plan attacks or interpret battlefield data.
👉 So yes — Palantir does build AI used in warfare
In the UK, Palantir also runs NHS England’s Federated Data Platform (FDP).
This system:
• Connects hospital data (waiting lists, beds, staff, etc.)
• Helps with scheduling, logistics, and planning care
👉 FDP is mainly about health system efficiency, rather than clinical AI or weapons
2) Where the “training AI with war data” idea comes from
This claim is more about concern than evidence. It usually comes from three points:
A. Same company, different domains
Palantir builds AI for:
• military intelligence
• policing & surveillance
• healthcare
Critics worry:
If the same company builds all these systems, knowledge or techniques could transfer between them.
That’s plausible at a technical level (shared tools and methods), but there is no concrete evidence that:
• NHS data is used to train military AI
• military data is used to train NHS systems
B. Data-sharing fears (key controversy)
Some groups argue:
• NHS data is extremely valuable (huge population, long history)
• Once data enters complex AI systems, it’s hard to guarantee it isn’t reused
Even though:
• NHS England says Palantir is not permitted to train AI on NHS data
• Palantir says it does not use client data to train its models
👉 The concern is also about future misuse and lack of transparency.
C. Ethical objections (big driver of the narrative)
Many NHS staff and groups object because Palantir has worked with:
• US military
• immigration enforcement (ICE)
• tools linked to surveillance and warfare
This leads to arguments like:
“Using their software supports a war-linked company”
“Healthcare data shouldn’t be handled by firms tied to military AI”
These concerns have been widely reported among NHS staff and unions.
3) What is NOT happening (based on evidence)
So far there is no credible evidence that:
❌ NHS patient data is being used to train battlefield AI
❌ War data is directly used to treat NHS patients
❌ The NHS system is a military AI system
4) What is a fair concern vs what isn’t
Legitimate concerns
Data privacy and control
Dependence on a private US tech firm
Ethical issues around Palantir’s other clients
Transparency of AI systems
Less accurate claims
“They are training NHS AI using war data” → ❌ no evidence
“Your medical data is being used to run wars” → ❌ not supported
Core argument: healthcare should not be built on military-industrial infrastructure
Healthcare exists to preserve life impartially.
Military companies are structurally designed for strategic advantage, intelligence gathering, and conflict outcomes.
Combining the two creates unavoidable conflicts of interest that risk distorting healthcare priorities, data use, and public trust.
1) Fundamental mission conflict
Healthcare mission:
• Treat patients equally
• Maximise wellbeing
• Minimise harm
• Maintain trust and confidentiality
Military/defence mission:
• Gain strategic advantage
• Identify threats
• Operate in adversarial environments
• Prioritise national security objectives
Argument:
Even if a company works on both, these missions are not fully compatible. Healthcare data systems built by defence contractors risk inheriting a logic of surveillance and control rather than care and confidentiality.
2) Risk of function creep (gradual misuse)
A key concern is function creep: systems built for one purpose slowly expand into others.
Example risks include health data systems later being used for:
• population monitoring
• predictive policing
• security screening
• “Emergency use” justification expanding over time
Argument:
Once infrastructure exists, it is difficult to strictly enforce boundaries between healthcare use and security use, especially when the provider’s core expertise is intelligence and defence.
3) Data trust and patient confidence
Healthcare depends on absolute trust that:
• sensitive data stays within medical contexts
• it is not repurposed for non-medical surveillance
If patients believe their data is connected to military-linked firms:
• they may withhold information
• they may avoid care
• public trust in health institutions erodes
Argument:
Even perceived association with defence intelligence can damage the psychological contract between patients and healthcare systems.
4) Concentration of power and opacity
Military tech companies often:
• operate with high levels of secrecy
• work on classified contracts
• resist full public disclosure
Healthcare systems require:
• transparency
• clinical accountability
• public scrutiny
Argument:
When critical healthcare infrastructure is built by firms accustomed to secrecy, democratic oversight becomes harder and accountability is weakened.
5) Normalisation of surveillance infrastructure
Military AI systems are often designed for:
• detection
• tracking
• prediction of “threats”
In healthcare, similar tools can become:
• risk scoring systems
• predictive profiling of patients
• automated prioritisation of care
Argument:
This can shift healthcare from universal care to risk-managed population control, especially if models prioritise efficiency over equity.
6) Ethical spillover and reputational legitimacy
Working with healthcare can:
• legitimise military companies in civilian life
• blur public understanding of surveillance technologies
• make defence contractors appear “neutral infrastructure providers”
Argument:
This risks normalising companies whose primary expertise is defence and intelligence into spaces that should remain strictly civilian and humanitarian.
7) Alternatives are possible (so the trade-off is not necessary)
A strong argument must address claims of inevitability. Healthcare systems can instead use:
• public-sector tech development
• open-source platforms
• civilian-only contractors with strict health governance
• transparent procurement systems
Argument:
Because alternatives exist, reliance on military-linked firms is a choice, not a necessity.
Conclusion
Healthcare depends on trust, transparency, and non-adversarial data use, while military companies are built around secrecy, intelligence, and strategic advantage.
Combining them risks distorting healthcare priorities, weakening public trust, and enabling long-term expansion of surveillance logic into medical systems.
Bottom line
• Palantir does build AI used in war
• It also builds data systems for the NHS
The systems are purportedly separate in purpose. The controversy is really about:
• trust
• ethics
• data governance