
A Stark Warning on Digital ID, CBDCs, and the Tokenisation of Society



Prof. Dr. Richard Werner Warns European Parliament of “Unprecedented Centralisation” in Digital Age

At a recent event inside the European Parliament in Brussels, economist Prof. Dr. Richard Werner delivered a stark critique of emerging digital financial and identity systems, cautioning policymakers against what he described as a rapid and potentially irreversible shift toward centralised control.

The event, titled “Digital ID Exposed,” was hosted by Christine Anderson, a Member of the European Parliament, and brought together a panel of speakers to discuss the societal, economic, and legal implications of digital identity frameworks and related technologies. 


A Warning on Digital ID Systems

Werner’s address focused on the growing push for digital identity systems across Europe. 

While proponents argue that such systems streamline public services and enhance security, Werner urged caution, framing Digital ID as a foundational infrastructure that could enable far-reaching surveillance and control mechanisms.

He argued that once universally adopted, Digital ID systems could become a prerequisite for participation in everyday life—from banking to travel—raising concerns about exclusion, privacy, and individual autonomy.

The event itself highlighted how Digital ID is expected to “impact our lives” across multiple domains, reinforcing the scale of the transformation under discussion. 


Central Bank Digital Currency: Efficiency or Control?

A central pillar of Werner’s critique was the development of central bank digital currencies (CBDCs). 

As an expert in banking and monetary systems, he warned that CBDCs differ fundamentally from traditional cash or even commercial bank deposits.

According to Werner, CBDCs could allow central authorities unprecedented oversight of financial transactions. 

Unlike cash, which offers anonymity, or decentralised cryptocurrencies, which operate outside direct state control, CBDCs could enable programmable money, in which spending conditions are imposed or transactions restricted.
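The "programmable money" idea above can be sketched minimally in code. This is a purely hypothetical illustration: the rule names, categories, and limits are invented for the example, and no real CBDC design is implied.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    payer: str
    merchant_category: str  # e.g. "groceries", "fuel", "travel"
    amount: float

def approve(tx: Transaction, rules: dict) -> bool:
    """Apply issuer-defined conditions before a payment clears."""
    if tx.merchant_category in rules.get("blocked_categories", set()):
        return False  # spending in this category is disallowed
    if tx.amount > rules.get("per_transaction_limit", float("inf")):
        return False  # payment exceeds the programmed limit
    return True

# Issuer-side rules attached to the currency itself, not to the payer's bank
rules = {"blocked_categories": {"fuel"}, "per_transaction_limit": 500.0}
print(approve(Transaction("alice", "groceries", 120.0), rules))  # True
print(approve(Transaction("alice", "fuel", 40.0), rules))        # False
```

The point of the sketch is that the conditions live in the payment rail itself, so they can be changed centrally without the payer's consent, which is the capability Werner's critique turns on.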

He cautioned that such capabilities, while often framed as tools for efficiency or crime prevention, could fundamentally alter the relationship between citizens and the state.


Tokenisation of Assets: Extending the Digital Net

Werner also addressed the broader trend toward the tokenisation of assets—the process of converting rights to physical or intangible assets into digital tokens on a ledger.
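The definition above can be illustrated with a minimal sketch. The class and ledger structure here are hypothetical simplifications: real tokenisation platforms use distributed ledgers and legal wrappers, whereas this uses a plain dictionary to show the core idea of divisible, transferable rights.

```python
class TokenisedAsset:
    """Rights to one asset, divided into fungible units on a simple ledger."""

    def __init__(self, asset_id: str, total_units: int, issuer: str):
        self.asset_id = asset_id
        self.holdings = {issuer: total_units}  # ledger: holder -> units held

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        if self.holdings.get(sender, 0) < units:
            raise ValueError("insufficient token balance")
        self.holdings[sender] -= units
        self.holdings[receiver] = self.holdings.get(receiver, 0) + units

# A parcel of land divided into 1,000 tradable units
land = TokenisedAsset("parcel-42", total_units=1000, issuer="registry")
land.transfer("registry", "fund_a", 250)
print(land.holdings)  # {'registry': 750, 'fund_a': 250}
```

Once an asset is represented this way, ownership becomes a ledger entry that can be traded, fractionalised, or restricted like any other token, which is what makes the trend applicable to real estate and natural resources as well as securities.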

While tokenisation is often promoted as a way to increase liquidity and accessibility in markets, Werner warned that its scope is expanding beyond financial instruments to include real estate, natural resources, and even environmental assets.

In his view, the idea of tokenising “everything”—including elements of nature such as land, ecosystems, or even air quality—raises profound ethical and philosophical questions. 

He suggested that such developments could lead to the commodification of essential aspects of life, potentially concentrating ownership and control in the hands of a limited number of institutions.


A Broader Debate on Freedom and Governance

The event hosted by Anderson brought together multiple speakers addressing different facets of digital transformation, including data privacy, health records, and social systems.

Werner’s contribution stood out for its macroeconomic perspective, linking digital identity, monetary policy, and asset ownership into a single narrative about centralisation. 

He argued that these systems, when combined, could form an integrated framework with the capacity to monitor, influence, and potentially restrict human behaviour at scale.


Conclusion

The discussion at the European Parliament reflects a growing divide in how digital transformation is perceived. While institutions often emphasise innovation, efficiency, and integration, critics like Richard Werner highlight risks related to privacy, autonomy, and economic freedom.

As the European Union continues to advance digital initiatives, debates such as this underscore the importance of balancing technological progress with fundamental rights—ensuring that new systems serve citizens without compromising their independence.

