AI, Data Protection & The ICO


The Information Commissioner’s Office (ICO) has published guidelines to help clarify how data protection principles apply to AI projects.

The Document

The guidance document (available as a PDF on the ICO website) was produced by an associate professor in the Department of Computer Science at the University of Oxford. It is aimed at those with a compliance focus, e.g. data protection officers (DPOs), risk managers and ICO auditors, as well as at the many different technology specialists involved in AI. The guidance is designed to act as a framework for auditing AI, focusing on best practices for data protection compliance, and as “an aide-memoire to those running AI projects”. The ICO guidance document can be found here: https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection-0-0.pdf

Why?

The ICO document notes that there is a range of risks involved in using technologies that shift the processing of personal data to complex computer systems, often with opaque approaches and algorithms. These risks include the loss or misuse of the large quantities of personal data required to train AI systems, and software vulnerabilities introduced by adding AI-related code and infrastructure.

With this in mind, the ICO has produced a set of guidelines that could help organisations involved in AI projects to mitigate those risks, by showing how data protection principles apply to their AI project without detracting from the benefits the project could deliver.

What?

The guidance document clarifies the distinction between a “controller” and a “processor” in an AI project and covers the kind of bias in data sets that leads to AI systems making biased decisions. It also provides guidance on the general legal principle of accountability (for data), along with support and methodologies on how best to approach AI work, and addresses aspects of the law that require greater thought, such as data minimisation, transparency of processing and ensuring individual rights around automated decision-making.

Existing Guidance

The ICO points out that some aspects of this new guidance are complemented by an existing ICO guidance document, ‘Explaining decisions made with AI’, published with the Alan Turing Institute in May 2020.

What Does This Mean For Your Business?

With more businesses now getting involved in AI projects, with AI requiring large amounts of personal data to ‘train’ systems, and with the algorithms involved being so complicated, expert guidance on how to mitigate the data protection risks will, no doubt, be welcomed. Having an AI auditing framework to hand could help businesses to avoid potentially costly data protection law breaches and could help them to approach and manage AI projects in a way that promotes best practice.



Mike Knight