MPs’ Concerns: ‘Predictive Policing’ in the UK

A cross-party group of MPs is calling for the UK government to outlaw predictive policing technologies through an amendment to the forthcoming Crime and Policing Bill, citing concerns over racial profiling, surveillance, and algorithmic bias.
Proposed Law Aims to Outlaw Future Crime Predictions
At the centre of the debate is New Clause 30 (NC30), an amendment tabled by Green MP Siân Berry and backed by at least eight others, including Labour’s Clive Lewis and Zarah Sultana. If passed, the clause would explicitly prohibit UK police from using artificial intelligence (AI), automated decision-making (ADM), or profiling techniques to predict whether an individual or group is likely to commit a future offence.
Berry told the House of Commons that such systems are “inherently flawed” and represent “a fundamental threat to basic rights,” including the presumption of innocence. “Predictive policing, however cleverly sold, always relies on historic police and public data that is itself biased,” she argued. “It reinforces patterns of over-policing and turns communities into suspects, not citizens.”
What Is Predictive Policing?
Predictive policing refers to the use of data analytics, AI and algorithms to identify patterns that suggest where crimes are likely to occur or which individuals may be at greater risk of offending. It takes two broad forms: place-based systems that forecast crime in particular geographic locations, and person-based systems that claim to assess the risk posed by individuals.
Already Piloted or Deployed
These systems have already been piloted or deployed by over 30 UK police forces. According to a 2025 Amnesty International report, 32 forces were using location-focused tools, while 11 had tested or deployed systems to forecast individual behaviour. The aim, according to police, is to deploy resources more efficiently and prevent crime before it happens.
However, critics argue that the data used to train these systems, such as arrest records, stop-and-search data, and local crime statistics, is historically biased. This, they say, leads to feedback loops where marginalised and heavily policed communities are disproportionately targeted by future interventions.
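To make the feedback-loop concern concrete, the short Python sketch below is a deliberately simplified toy simulation, an illustration only and not a model of any force's actual system. It assumes two areas with identical underlying offending rates, one of which starts with a more heavily policed arrest record; because the notional predictor sends extra patrols wherever past records are highest, the over-recorded area keeps generating disproportionately more records.

```python
# Toy illustration of a predictive-policing feedback loop (hypothetical,
# not any real system): both areas have the same underlying offending rate,
# but Area A starts with a more heavily policed historical record.
import random

random.seed(0)

true_crime_rate = {"Area A": 0.5, "Area B": 0.5}   # identical underlying risk
recorded_arrests = {"Area A": 60, "Area B": 40}     # biased historical record

for week in range(10):
    # "Predict" the hotspot from recorded data and send extra patrols there
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {area: (3 if area == hotspot else 1) for area in true_crime_rate}

    # More patrols detect more offences, so the record skews further
    for area, n_patrols in patrols.items():
        detected = sum(random.random() < true_crime_rate[area]
                       for _ in range(n_patrols))
        recorded_arrests[area] += detected

    print(week, hotspot, recorded_arrests)
```

Running the sketch, the recorded gap between the two areas tends to widen week by week even though the underlying rates never differ, which is the pattern critics describe.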
Why MPs Are Taking a Stand Now
The renewed push for a legislative ban follows a string of revelations over the past 18 months about the growing use of algorithmic policing in the UK, often without public consultation or oversight. One of the most contentious examples, uncovered by Statewatch in 2025, was the Ministry of Justice’s so-called “Homicide Prediction Project”, a system under development to identify individuals at risk of committing murder using sensitive data, including health and domestic abuse records, even in cases where no criminal conviction exists.
Statewatch researcher Sofia Lyall called the initiative “chilling and dystopian,” warning that “using predictive tools built on data about addiction, mental health and disability amounts to highly intrusive profiling” and risks “coding bias directly into policing practice.”
The amendment to the Crime and Policing Bill comes as the government continues to expand data-driven law enforcement under new legislation. The Data Use and Access Act (passed earlier this year) permits certain forms of automated decision-making that were previously restricted under the Data Protection Act 2018. More than 30 civil society groups, including Big Brother Watch, Open Rights Group, Inquest and Amnesty, have signed a joint letter condemning the changes and calling for a ban on predictive policing to be included in the new bill.
Bias, Surveillance and Lack of Transparency
At the heart of the pushback is the view that predictive systems do not eliminate human bias, but instead replicate and scale it. As Open Rights Group’s Sara Chitseko explained in a May blog post, “historical crime data reflects decades of discriminatory policing, particularly targeting poor neighbourhoods and racialised communities.”
The concern is not just over potential inaccuracies, but the broader impact on civil liberties. Campaigners warn that predictive tools undermine the right to privacy and fuel what they call a “pre-crime surveillance state,” in which individuals can be subjected to policing actions without having committed any crime.
This can include being flagged for increased surveillance, added to risk registers, or subjected to stop-and-search, all based on algorithmic assessments that may be impossible to scrutinise. Data from these tools is often shared across public bodies, meaning individuals can be affected in housing, education, or welfare decisions as a result of hidden profiling.
55 Automated Tools Identified
Researchers at the Public Law Project, which runs the Tracking Automated Government (TAG) register, have documented over 55 automated decision-making tools used across UK government departments, including policing. Many operate without publicly available data protection or equality assessments. Legal Director Ariane Adam said, “People deserve to know if a decision about their lives is being made by an opaque algorithm—and have a way to challenge it if it’s wrong.”
How the Crime and Policing Bill Fits In
The Crime and Policing Bill is part of a broader effort by the UK government to modernise policing powers and criminal justice processes. While not specifically focused on predictive technologies, the bill’s scope includes provisions for police data access, surveillance capabilities and crime prevention strategies.
Critics argue that without clear prohibitions, the bill risks giving predictive systems greater legitimacy. “Predictive policing isn’t just a technical tool—it’s a fundamental shift in the presumption of innocence,” said Berry. “We need the law to say clearly: you cannot be punished for something you haven’t done, just because a computer says you might.”
A second proposed amendment from Berry seeks to provide safeguards where automated decisions are used in policing. This would include a legal right to request human review, improved transparency over the use of algorithms, and meaningful routes for redress.
What the Police and Government Are Saying
Police forces and government departments have largely defended their use of predictive technologies, arguing that they allow for more proactive policing. For example, the Home Office has supported initiatives such as GRIP, a place-based prediction system used by 20 forces since 2021 to identify high-crime areas.
Proponents claim these tools help reduce violence and make best use of limited resources. However, recent assessments suggest the benefits may be overstated. Amnesty found “no conclusive evidence” that GRIP had reduced crime, while also warning it had “reinforced racial profiling” in the communities it targeted.
The government has not yet formally responded to the proposed amendments. However, officials have previously argued that AI and ADM can be used responsibly with the right oversight. The Department for Science, Innovation and Technology’s 2023 White Paper on AI governance promoted voluntary transparency standards but fell short of recommending statutory controls.
Businesses and Civil Society
If the amendment banning predictive policing passes, it could reshape how AI and automation are used across public services, not just policing. For civil society and legal groups, it would mark a significant win for rights-based governance of AI.
For businesses working in AI, data analytics and security tech, the implications are mixed. Suppliers of predictive systems to police forces may lose a key customer base, while developers of ethical or human-in-the-loop systems could find new demand for tools that meet stricter legal standards.
More broadly, companies operating in sectors such as insurance, HR tech, or public procurement may face growing scrutiny over how their algorithms are used to assess individuals, particularly if they supply services to the government. A legislative ban on predictive policing could signal the start of tighter controls on high-risk ADM across all sectors.
Cybersecurity professionals and data governance officers may also need to reassess compliance strategies, especially where their systems intersect with law enforcement or public sector clients.
The challenge, according to legal analysts, is ensuring any ban does not create ambiguity. “There’s a fine line between banning profiling-based prediction and stifling responsible innovation,” said one lawyer familiar with the TAG project. “Clear definitions and thresholds will be vital.”
Key Obstacles to Progress
Even with growing public and parliamentary concern, the road to banning predictive policing is unlikely to be smooth. One challenge is technical: there is no consensus on what exactly counts as “predictive policing”, given the variety of tools and methods involved.
There’s also the legal complexity of drawing lines between fully automated systems and those that merely assist human decision-making. As with facial recognition and biometric surveillance, courts and regulators have struggled to keep pace with the technology.
Policymakers face a political challenge too: calls for stronger law and order measures remain popular with some voters, and banning high-tech crime-fighting tools may be portrayed as soft on crime. Opponents of the amendment are likely to argue that police need every available advantage to tackle modern threats, including gang violence, knife crime and terrorism.
However, it seems that the tide may be turning. As Berry put it in her Commons speech: “This is a moment to decide what kind of society we want to be—one where we protect rights and freedoms, or one where we criminalise people before they’ve done anything wrong.”
What Does This Mean For Your Business?
Whether or not the amendment to ban predictive policing is adopted, the pressure now facing the UK government reflects a growing public and parliamentary appetite for more robust oversight of AI and algorithmic decision-making. The evidence presented by civil rights groups, legal experts and academics points to a consistent pattern: where predictive systems are deployed without transparency or accountability, the result is often discrimination and deep mistrust in public institutions.
For police forces, this moment raises urgent questions about how data is collected, analysed and applied. Even if predictive systems are well-intentioned, their reliance on flawed historical datasets and opaque algorithms makes it difficult to separate operational efficiency from systemic bias. Without clear legal limits, the use of such technologies could further entrench inequalities and reduce trust in frontline policing.
The implications extend far beyond law enforcement. For example, businesses involved in AI development, analytics or public sector contracting will need to stay alert to changing expectations around transparency, fairness and accountability. A legal ban on predictive policing could signal broader regulatory moves against high-risk algorithmic profiling, especially where sensitive or personal data is involved. Companies that rely on such tools in recruitment, risk scoring or fraud detection may need to rethink how their systems operate and how they explain them to clients and users.
For civil society and campaigners, the bill presents a rare chance to press for hard legal safeguards rather than soft ethical guidelines. The current momentum suggests that arguments grounded in lived experience, statistical evidence and human rights law are starting to gain traction in parliamentary debates.
What happens next will shape the relationship between data, power and the public for years to come. Whether through this bill or a future AI-specific law, the UK faces a clear choice: allow automated prediction to quietly redefine policing, or legislate to ensure that new technologies serve justice without undermining it.