Why Data Protection Assessments Are Crucial for Digital ID

On 26 September 2025, the government published a Digital ID scheme explainer and news release, stating a Digital ID will be rolled out and that it will be mandatory for Right to Work checks within this Parliament.

This is an interesting one. I hadn't been paying attention to the Digital ID debate until I was asked for my opinion on it. I didn't have one, so I set about reading to inform myself in the hope that it might produce one.

Coincidentally, I was also writing another article about data privacy at the time, which was convenient as it gave me a lens through which to start investigating.

So I looked, but it turns out there is currently no published information about what data will be stored or processed, who it will be shared with and how, what the retention periods are, or what alternatives will be available for those without access to smartphones. We have no detail on which to base any kind of judgement.

Given the lack of detail, I'm currently neither for nor against a proposed Digital ID until I can see much more. But if we are doing this (and from what I have read, it seems we are), it makes sense to build trust through transparency at the earliest opportunity. Prevent speculation by removing the mystery.

That starts with publishing impact assessments and inviting meaningful scrutiny.

The government has stated there will be a public consultation but has given no clues as to the schedule. This lack of detail has unfortunately led to a lot of speculation and disinformation across social media, and an uneasy feeling for some people.

Given the potential impact of this proposal, public trust and confidence will depend on timely transparency: a Data Protection Impact Assessment (DPIA) completed before implementation begins, an independent ethics assessment aligned to the UK Data Ethics Framework, and a transparent consultation that invites scrutiny of the risks and mitigations.

Data Protection Impact Assessment

Under UK GDPR Article 35, controllers must conduct a Data Protection Impact Assessment where processing is likely to result in a high risk to individuals’ rights and freedoms. The ICO advises that the DPIA should start early and be completed prior to processing; “innovative technology,” large-scale processing, and special category data are indicators that a DPIA is needed.

After mitigation, if residual high risk remains, Article 36 requires prior consultation with the ICO.

A mandatory Digital ID for the entire population meets multiple high-risk criteria (scale, sensitivity, novelty). While people will argue over scope and design choices and how these affect the risk, the cautious approach is to assume high risk and detail mitigations early.

A credible DPIA could look like:

  • Description of data flows and actors (data items; sources; storage; transfers; processors and sub-processors).
  • Necessity and proportionality analysis with alternatives considered (other designs and analogue alternatives).
  • Risk analysis: identity theft, function creep, discrimination, exclusion, security failures, secondary use.
  • Mitigations: data minimisation; access controls; encryption; audit logging; retention and deletion policies.
  • Consultation: internal, external experts, and the public where feasible.
  • Governance: updates, accountability, and assurance.

Publishing DPIAs is not legally mandated in every case, but for a population-scale scheme, transparency is part of the control environment. Redact sensitive information and publish as much as possible.


Ethics assessments

DPIAs focus on data protection law. Ethics assessments focus on power imbalances, social impact, long-term effects, and edge-cases.

The UK's Data Ethics Framework provides principles of transparency, accountability, and fairness.


Public consultation

What we might expect from the public consultation:

Before consultation

  1. DPIA published responsibly, with necessary redactions: data flow diagrams, lists of processors/sub-processors, and retention schedules.
  2. Ethics assessment aligned to the Data Ethics Framework (who benefits, who does not, and controls for function creep and other risks).
  3. Human rights and equality impact analyses.
  4. Alternatives analysis: why other options were discounted and what benefits the proposed system would have.
  5. Independent review, with questions on the DPIA and ethics assessment.
  6. Deliberative engagement with communities that will not be able to access a Digital ID for whatever reason.

During consultation

After consultation

  • Government response detailing how feedback has been assessed and what impact this had on the proposed design.
  • Updated DPIA and ethics assessment.
  • Technical architecture published responsibly, removing only sensitive data.

Cool – so if this is the first step, what is our track record for this kind of thing in recent UK projects?

NHS Test & Trace (2020) – DPIA done late

Public trust was impacted but it could be argued this was unavoidable given the circumstances.

The government admitted the DPIA was late following a legal challenge from the Open Rights Group. The outcome: urgency did not remove the legal duty to complete a DPIA.

Live facial recognition by South Wales Police (2017–2020) – DPIA existed but was deemed inadequate
In 2020 the Court of Appeal ruled the deployment unlawful, finding failings including non-compliance with data protection impact assessment duties and the Public Sector Equality Duty.

This demonstrates that the quality of a DPIA matters, which is reassuring.

NHS Federated Data Platform (2023–24) – DPIA published with engagement mechanisms in place
NHS England published a DPIA with acknowledgements of limits and commitments to ongoing updates and public panels.

This is promising. A step towards proactive transparency.

Digital immigration status: “View and Prove” (2020–25) – DPIA published; clearer description of flows and governance

The programme published a DPIA for View and Prove with data-flow detail, mitigations, and governance.

Stronger on publication timing than many programmes.

National ANPR Service (2013–2025) – DPIA published with regular updates

Demonstrates iterative review and publication.

Pensions Dashboards Programme (2023–25) – Explanatory DPIA and transparency blog
The programme published a DPIA and a public explainer on why it was published, explicitly linking transparency to trust.

This trend demonstrates progress towards the type of transparency that I believe the public needs before we can assess the impact on the population's data privacy.


As of 11 October 2025, I have not seen a published DPIA or ethics assessment, nor any promise of one.

There is an official explainer and news release on GOV.UK, and plenty of speculation in the press, but as far as I can see, no schedule for the public consultation.

That means the first public DPIA and ethics documents are still to come; if they do not appear, the consultation is the place to insist on them.
