AI is impacting data governance responsibilities

Data Protection’s AI-shaped elephant in the room

AI offers organisations extraordinary data processing potential, but it also introduces considerable, and largely uncharted, complexity to the compliance picture.

Today, in the second instalment of our AI article series, we’ll be taking a closer look at some of the key areas in which AI is impacting data governance responsibilities, and what checks and balances can be applied to keep your organisation on the right side of risk.

Accountability and governance

One of the more complex challenges of using AI for data processing is ensuring the rights and freedoms of individuals remain protected. The nature of AI makes it dangerously easy to inadvertently introduce both allocative harms* and representational harms** when automating decision-making based on the processing of personal information.

Preventing this generally requires a thorough Data Protection Impact Assessment (DPIA) that includes (at a bare minimum):

  • A full description of the scope and context of the data processing.
  • A full description of the processing activity, including every place in which an AI process or automated decision could affect an individual.
  • A description of any stages at which human interventions/reviews can alter/affect decision-making.
  • An accuracy assessment detailing any variations or margins of error in your systems that could affect their fairness.
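
To make that last point concrete, here’s a minimal sketch of what an accuracy assessment broken down by group might look like in practice. It assumes a simple classifier and a labelled evaluation set; the column names are illustrative only, not a prescribed schema.

```python
# Minimal sketch of an accuracy assessment by group, to surface
# fairness-relevant variations in error rates. Column names
# ("group", "label", "prediction") are illustrative only.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compare per-group error rates against the overall error rate."""
    df = df.assign(error=df["label"] != df["prediction"])
    summary = df.groupby("group")["error"].agg(["mean", "count"])
    summary = summary.rename(columns={"mean": "error_rate", "count": "n"})
    # Groups whose error rate deviates markedly from the overall rate
    # warrant closer review in the DPIA.
    summary["gap_vs_overall"] = summary["error_rate"] - df["error"].mean()
    return summary

if __name__ == "__main__":
    sample = pd.DataFrame({
        "group":      ["A", "A", "B", "B", "B"],
        "label":      [1, 0, 1, 1, 0],
        "prediction": [1, 0, 0, 1, 1],
    })
    print(error_rates_by_group(sample))
```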

Most organisations should already have some form of DPIA in place. Any introduction of AI, however, will necessitate an extensive overhaul, as well as regular reviews moving forward to ensure ongoing (and demonstrable) compliance.

*Decisions that restrict the allocation of goods/opportunities to a specific group, harming those outside that group, e.g. AI-driven recruitment systems that disproportionately favour male candidates.

**Decisions that reinforce the subordination of groups along identity lines, e.g. misrecognition/mislabelling/stereotyping of minority groups.

Transparency

The GDPR (amongst other regulations) requires organisations to explain AI-assisted decision-making to the individuals it affects; failure to do so invites regulatory action. The aim is to ensure individuals understand exactly how their data was used to reach an automated decision; how their rights, freedoms and privacy were protected during the process; and what their options are should they disagree with the outcome.

Needless to say, getting this right requires a comprehensive internal understanding of your AI processes, as well as the ability to extract relevant information to provide the necessary explanations to individuals on request.
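
To illustrate what that might look like in practice, here’s a minimal sketch of a decision record logged at inference time, so an explanation can be assembled later on request. The fields shown are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: log a record of each automated decision so that an
# explanation can be produced on request. Field names are illustrative.
import json
import datetime as dt
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    subject_id: str        # pseudonymised identifier for the individual
    model_version: str     # which model produced the decision
    inputs_used: dict      # the personal data actually fed to the model
    outcome: str           # the automated decision reached
    human_reviewed: bool   # whether a person reviewed/overrode the decision
    timestamp: str = field(
        default_factory=lambda: dt.datetime.now(dt.timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a log that can answer 'how was my data used?'."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    subject_id="subj-123",
    model_version="credit-risk-2.1",
    inputs_used={"income_band": "B", "postcode_area": "BS1"},
    outcome="referred for manual review",
    human_reviewed=False,
))
```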

Fairness, bias and discrimination

Being purely data-driven does not guarantee that AI outputs are always objective. It’s actually disturbingly easy to introduce unintentional bias during the design, training and/or use of AI systems.

As a result, in addition to general anti-discrimination data protection obligations, the law also requires organisations to specifically prove that their AI systems are not unlawfully discriminatory.

That means demonstrating that:

  • personal data is handled in a responsible, reasonable and just manner
  • individuals’ rights and freedoms are actively protected
  • any profiling or automated decision-making includes anti-discrimination measures
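
As a concrete illustration of that third point, here’s a minimal sketch of a routine check comparing favourable-outcome rates across groups. The 0.8 (“four-fifths”) threshold used here is a common screening heuristic, not a legal test, and the group labels are illustrative.

```python
# Minimal sketch of an anti-discrimination check on profiling output:
# compare rates of favourable outcomes across groups and flag large gaps.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, favourable_outcome) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, favoured in decisions:
        totals[group] += 1
        favourable[group] += favoured
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `threshold` x the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best and r / best < threshold]

rates = selection_rates([("men", True), ("men", True), ("men", False),
                         ("women", True), ("women", False), ("women", False)])
print(rates, "flagged:", flag_disparity(rates))
```

A flagged group isn’t proof of unlawful discrimination, but it is the trigger for a closer human review.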

Security

AI introduces a number of technological complexities that many security teams will not be immediately familiar with. Intricate networks of connections, integrations, and third-party code relationships (amongst other things) can make it dramatically harder to identify and manage security risks.

This isn’t helped by the fact that AI data protection and security best practices are still under development. We simply don’t know the full scope of security risks arising from the use of AI for data processing yet.

So, how do you adequately secure your AI system? We’d suggest starting with a thorough security assessment, including internally and externally maintained code and frameworks. Any identified vulnerabilities can then be addressed directly, with the understanding that security is a moving target requiring ongoing vigilance and regular updates.
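
As one small, concrete starting point for that assessment, here’s a minimal sketch that inventories the third-party Python packages an AI stack actually depends on, so each can then be checked against vulnerability advisories (for example with a scanner such as pip-audit). The output format is illustrative only.

```python
# Minimal sketch: inventory installed third-party packages as the first
# step of a dependency security assessment.
from importlib.metadata import distributions

def dependency_inventory() -> list[tuple[str, str]]:
    """Return (name, version) for every installed distribution."""
    return sorted(
        (dist.metadata["Name"], dist.version) for dist in distributions()
    )

for name, version in dependency_inventory():
    print(f"{name}=={version}")
```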

Individual rights

Responding to individual rights requests is the bane of many a compliance officer’s existence. They don’t get any easier with the introduction of AI, either.

Individual rights include:

  • the right to be informed
  • the right of access
  • the right to rectification
  • the right to erasure
  • the right to data portability

In terms of AI systems, these rights apply to:

  • personal data used for training AI models
  • personal data used for predictions/decisions, including any subsequent results
  • personal data contained within the model itself (intentionally or accidentally)

The best way to ensure the ability to respond to individual rights requests is to build the necessary capabilities in during the design and implementation phase of your AI project. Where AI services are outsourced, we highly recommend choosing a supplier that specifically provides for individual rights requests as part of their service.
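
To illustrate, here’s a minimal sketch of a per-subject data map built in at design time, covering the three places listed above. The store names and record references are illustrative assumptions; real systems will differ, but the principle of indexing personal data by subject holds.

```python
# Minimal sketch: index every place an AI system holds a subject's
# personal data, so access and erasure requests can be answered.
class PersonalDataMap:
    STORES = ("training_data", "prediction_logs", "model_artifacts")

    def __init__(self):
        # subject_id -> {store_name: [record references]}
        self._index: dict[str, dict[str, list[str]]] = {}

    def register(self, subject_id: str, store: str, record_ref: str) -> None:
        assert store in self.STORES, f"unknown store: {store}"
        self._index.setdefault(subject_id, {}).setdefault(store, []).append(record_ref)

    def access_request(self, subject_id: str) -> dict[str, list[str]]:
        """Everything held on this subject, for a right-of-access response."""
        return self._index.get(subject_id, {})

    def erasure_request(self, subject_id: str) -> dict[str, list[str]]:
        """Return (and drop) the references downstream jobs must delete."""
        return self._index.pop(subject_id, {})

pdm = PersonalDataMap()
pdm.register("subj-123", "training_data", "train/batch-07/row-9912")
pdm.register("subj-123", "prediction_logs", "decision_log.jsonl:4410")
print(pdm.access_request("subj-123"))
```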

The future

AI is a thrilling expansion to the technology landscape, offering untold potential to organisations willing to brave these relatively uncharted waters.

In terms of data governance and compliance, however, AI still raises more questions than it answers. We’re looking forward to the progression of international standards and best practices as applications and use cases evolve, and to playing our part in helping clients tap into this emerging technology safely, compliantly, and to its fullest potential. Take a look at our compliance services to see how we work in partnership with clients to accelerate their compliance journey.

Subscribe here to follow our article series exploring the warts and wonders of AI in the corporate environment, or watch this space for more.

Read the first in the series here.
