New ICO Guidelines for AI and PII Data Privacy Compliance

    As AI becomes more prevalent in our daily lives, businesses around the globe must navigate an evolving landscape of legal and regulatory duties related to the use of AI systems.

    In November 2022, the ICO announced a set of guidelines on how organizations can utilize AI and personal data both ethically and legally, in compliance with the UK's data protection framework.

    The guidance is complemented by a number of commonly asked questions about the use of AI and personal data, such as whether impact assessments should be carried out, whether outputs must adhere to the accuracy principle, and whether organizations need a lawful basis to process personal data.

    This guidance explains how data protection rules apply to AI initiatives while keeping an eye on the many advantages that such projects might provide, helping organizations reduce the risks that arise specifically from a data protection standpoint.

    Read more about this guidance on the ICO website.

    How to Legally Collect AI Data

    The ICO's guidance acknowledges that while utilizing AI has undeniable advantages, it can also endanger people's freedoms and rights when data protection is not taken seriously. To this end, their guidance provides a useful framework for how enterprises should evaluate and mitigate these risks.

    The guide covers eight strategic elements that businesses can adopt to improve how they handle AI and personal data:


    1. Use a Risk-Based Procedure When Creating and Implementing AI

    When using AI, you should determine whether it is necessary for the situation. AI is typically considered a high-risk technology when it interacts with personal information. AI systems generally require large amounts of data to work properly, and that data may be shared or resold, leaving individuals unaware of who is receiving it or how it is being used.

    Thus, there may be a more efficient and privacy-preserving substitute.

    As the ICO states, you must evaluate the risks and put in place the necessary organizational and technical safeguards to reduce them. Realistically, it is impossible to completely eliminate all risks, and data protection laws do not mandate that you do so, but make sure you:

    • Employ a data protection impact assessment (DPIA) to identify and reduce the risk that your use of AI fails to adhere to data protection laws, as well as to prevent harm to individuals
    • Seek input from the groups that your use of AI potentially impacts in order to better understand the risks

    When a DPIA is legally required, you must conduct one before deploying an AI system, and introduce proper organizational and technical safeguards that will help reduce or manage the risks you find. Before any processing happens, you are legally required to speak with the ICO if you identify a risk that you are unable to adequately mitigate.


    2. Consider How You Will Explain Your AI System's Decisions to Those Who Will Be Affected

    According to the ICO, it can be challenging to explain how AI generates certain decisions and results, especially when it comes to machine learning and complicated algorithms - but that doesn’t mean you shouldn’t provide explanations to people.

    Here’s what the ICO recommends:

    • Be clear, open, and transparent with people about how and why you collect and use their personal data
    • Think about what justification is required in the environment where your AI system will be used
    • Consider how people are likely to perceive your explanation
    • Analyze the likely effects of your AI system's choices to determine how thorough your justification should be
    • Consider how individual rights requests will be managed
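The points above can be made concrete with a small sketch. The field names, contribution values, and the helper function below are purely illustrative assumptions, not anything the ICO prescribes; the idea is simply that an interpretable model's per-feature contributions can be turned into a plain-language explanation scaled to the decision's impact:

```python
# Hypothetical feature contributions for one decision from an
# interpretable model (e.g. coefficient x feature value).
# All names and numbers here are invented for illustration.
contributions = {
    "income_to_debt_ratio": -0.8,
    "years_at_address": 0.3,
    "missed_payments_last_year": -1.2,
}

def plain_language_explanation(outcome, contributions, top_n=2):
    """Summarise the factors that most influenced a decision,
    ordered by absolute contribution, in non-technical wording."""
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    factors = ", ".join(name.replace("_", " ") for name, _ in ranked[:top_n])
    return f"The decision was '{outcome}'. The main factors were: {factors}."

print(plain_language_explanation("declined", contributions))
```

How many factors to surface (`top_n`) is the kind of judgment call the guidance points at: the more serious the likely effects of the decision, the more thorough the explanation should be.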


    3. Only Gather the Information Necessary to Create Your AI System

    The ICO advises limiting data collection whenever possible. This is not to say that data cannot be collected - it only means that data must be managed in a way that meets GDPR standards.

    You should:

    • Make sure that the personal information you use is accurate, adequate, relevant, and limited - this will differ depending on the context in which you use AI
    • Think about which privacy-preserving methods are suitable for the situation where you're employing AI to process personal data

    The accuracy principle for data protection does not require an AI system to be 100% correct. Instead, organizations should ensure that procedures are in place to guarantee fairness and overall accuracy of results.
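As a minimal sketch of the two bullets above, the snippet below drops fields that are not needed for the stated purpose and replaces a direct identifier with a salted hash. The record, field names, and salt are invented for illustration; note that pseudonymised data is still personal data under the GDPR, because re-identification remains possible for whoever holds the salt or key:

```python
import hashlib

# Hypothetical training record; field names are illustrative only.
record = {
    "user_id": "alice@example.com",
    "age": 34,
    "postcode": "SW1A 1AA",
    "shoe_size": 9,            # not needed for this model
    "purchase_total": 120.5,
}

# The fields the model actually needs (data minimisation).
REQUIRED_FIELDS = {"user_id", "age", "purchase_total"}

def minimise(rec):
    """Keep only the fields required for the stated purpose."""
    return {k: v for k, v in rec.items() if k in REQUIRED_FIELDS}

def pseudonymise(rec, salt="replace-with-a-secret-salt"):
    """Replace the direct identifier with a salted hash.
    This reduces, but does not remove, re-identification risk."""
    out = dict(rec)
    out["user_id"] = hashlib.sha256(
        (salt + rec["user_id"]).encode()).hexdigest()
    return out

prepared = pseudonymise(minimise(record))
```

Which technique is "suitable for the situation" remains a contextual call - hashing is only one of several privacy-preserving methods the guidance has in mind.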


    4. Label Risks of Bias and Discrimination at an Early Stage

    An AI system can produce biased or discriminatory outcomes, often as a result of inaccurate or imbalanced datasets. Addressing this issue early is an important aspect of data privacy compliance.

    The ICO recommends that you:

    • Determine whether the data you're collecting is accurate, representative, reliable, relevant, and up to date for the community or different groups of people affected by the AI system
    • Determine whether the judgments made by the AI system are acceptable by mapping out the potential implications and outcomes for various groups
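One simple way to act on the first bullet is to compare how groups are represented in the training data against a reference population. The group labels, counts, and reference shares below are made-up assumptions for illustration:

```python
from collections import Counter

# Illustrative only: group labels and reference shares are invented.
training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population_share = {"A": 0.55, "B": 0.30, "C": 0.15}

def representation_gaps(samples, reference, tolerance=0.05):
    """Flag groups whose share of the data deviates from the
    reference population by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

gaps = representation_gaps(training_groups, population_share)
```

Here group A is over-represented and group C under-represented, which would prompt the second bullet: mapping out what those imbalances mean for the decisions the system makes about each group.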


    5. Invest Time and Resources Into Properly Preparing the Data

    As was already mentioned, AI is only as reliable as the data it is trained on. Therefore, organizations need to make sure that enough resources and time are devoted to gathering the necessary data.

    The ICO recommends that:

    • When it comes to the labeling of data involving protected characteristics or special category data, there should be defined criteria and lines of accountability
    • To maintain consistency and help with unusual situations, you should involve multiple human labelers
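A lightweight sketch of the second bullet: with multiple human labellers per item, any item they disagree on can be escalated through a defined adjudication step. The items, labels, and escalation rule below are illustrative assumptions:

```python
# Hypothetical labels from three human labellers for five items.
labels = {
    "item-1": ["spam", "spam", "spam"],
    "item-2": ["spam", "ham", "spam"],
    "item-3": ["ham", "ham", "ham"],
    "item-4": ["ham", "spam", "spam"],
    "item-5": ["spam", "ham", "ham"],
}

def needs_adjudication(item_labels):
    """True when the labellers did not reach unanimous agreement."""
    return len(set(item_labels)) > 1

# Disputed items go to a designated adjudicator, giving the defined
# lines of accountability the ICO recommends for sensitive labels.
disputed = [item for item, ls in labels.items()
            if needs_adjudication(ls)]
```

In practice one might use a formal agreement statistic (such as Cohen's kappa) rather than simple unanimity, especially for protected characteristics or special category data.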


    6. Make Sure Your AI System Is Safe

    AI systems have the potential to increase risks or introduce new security vulnerabilities. 

    When it comes to security measures, there is no one-size-fits-all approach. However, you must abide by the law and put in place the proper organizational and technical safeguards so as to provide a level of security proportional to any risks identified.

    The ICO recommends that businesses:

    • Perform a security risk assessment that includes a current inventory of all AI systems, so that you have an overall picture of where potential incidents might happen
    • Perform model debugging - the process of identifying and addressing flaws in your model - either by an internal or an external security auditor
    • Implement a proactive monitoring system, and look into any irregularities


    7. Human Reviews of AI Decisions Should Be Meaningful

    Depending on the goal of the AI system, it should be determined early on whether its outputs will be used to help a human decision-maker, or whether decisions will be fully automated.

    The ICO emphasizes that data subjects have the right to know whether decisions involving their data have been made solely by automated means or with the help of AI. The guidance also suggests that AI outputs should be meaningfully reviewed when they are used to assist a person.

    To make sure that these reviews are meaningful, the reviewers should be:

    • Skilled enough to evaluate and question AI system results
    • Capable of overturning an automatic judgment
    • Aware of additional factors that weren't reflected in the input data
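The three requirements above can be sketched as a review step in which a named reviewer either confirms the model's output or overturns it, recording their reasoning. The class, field names, and scenario are illustrative assumptions, not an ICO-prescribed structure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str          # e.g. "approve" / "decline"
    confidence: float
    final_output: Optional[str] = None
    reviewed_by: Optional[str] = None
    review_notes: str = ""

def human_review(decision, reviewer, override=None, notes=""):
    """A reviewer either confirms the model output or overrides it.
    Recording who reviewed and why supports a 'meaningful' review -
    a rubber stamp with no power to overturn would not qualify."""
    decision.final_output = override if override is not None else decision.model_output
    decision.reviewed_by = reviewer
    decision.review_notes = notes
    return decision

d = Decision("subject-42", "decline", confidence=0.61)
d = human_review(d, reviewer="analyst-7", override="approve",
                 notes="Applicant provided context not in the input data.")
```

Note the override path: the reviewer can act on "additional factors that weren't reflected in the input data", which is exactly what makes the review meaningful rather than cosmetic.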

    When a decision based solely on automated processing has a legal or similarly significant effect, data subjects have the right under the GDPR not to be subject to it. They also have the right to meaningful information about the reasoning behind the decision.

    As a result, even though it is framed as a recommendation, human review is effectively necessary when AI is making important decisions.


    8. Consult With an External Supplier to Ensure that Your Use of AI Is Appropriate

    Purchasing an AI system from a third party does not absolve you from the responsibility of adhering to data protection legislation. In most cases, you will be the data controller, deciding how to deploy the AI system.

    As a result, you must be able to demonstrate how the AI system adheres to data protection legislation.

    The ICO suggests that businesses:

    • Pick an appropriate supplier by conducting due diligence prior to any procurement
    • Engage with the external provider to conduct an evaluation before deployment (such as a DPIA)
    • Establish roles and duties with the external supplier and record them (e.g., who will respond to requests for individual rights or conduct security checks)
    • Request documentation from the external supplier that demonstrates that they respect “privacy by design”
    • Consider whether any personal data will be transferred internationally - if so, ensure that people's privacy rights are respected


    GDPR and AI

    Although AI has the potential to be a valuable tool, as it develops it also raises data security, privacy, and regulatory concerns.

    It's difficult to avoid bringing up the General Data Protection Regulation (GDPR) while discussing artificial intelligence (AI) rules. Data is the essential component for AI applications, and the GDPR has had the greatest worldwide influence in terms of creating a more regulated data market.

    Check out our GDPR and data privacy hub, which goes in-depth into regulations and compliance.