
Is Generative AI learning all your business secrets?

Of all the AI capabilities available, generative AI is arguably making the biggest splash in the modern workplace. Small wonder – having access to AI that can summarise complex documents, write key elements of a proposal or even whip together a presentation can save countless work hours and significantly boost productivity. 

But at what cost?  

In this third instalment of our AI series, we take a look at the way users interact with generative AI, and how – if left unchecked – this can create unseen holes in your corporate data security strategy. 

You are what you eat 

Despite its name, generative AI doesn’t actually generate anything from scratch. Rather, these large language models (LLMs) ingest, transform and then regurgitate information from two key sources: vast bodies of training data, and direct input from users. 

It’s the latter that poses the greatest security risk to businesses. 

According to data security company Cyberhaven, 11% of the data employees paste into ChatGPT is confidential. That’s right: employees are literally feeding their business (and/or customer) secrets into third-party generative AI platforms.

In theory, these platforms do not feed user prompts straight into the live model, so your data cannot immediately be surfaced, intentionally or otherwise, in other users’ queries. They do, however, typically store prompts and interactions in their own data silos for future model training and improvement, opening the door to some other potential vulnerabilities.

Potential vulnerabilities 

The full scope of possible data protection and privacy vulnerabilities introduced by generative AI is still being explored and will inevitably evolve as rapidly as the technology. For now, let’s take a look at three of the best-known security weaknesses and how they may affect your organisation. 

Data retention and storage: User prompts have to be stored, at least temporarily, by AI platforms for processing. In many cases, they are also retained to train and improve the LLM in future. Either way, your data becomes subject to a third party’s storage, security and data protection practices, which may not live up to your organisation’s standards or comply with your regulatory obligations.

Data leaks: There are two main ways sensitive and/or proprietary data can be accidentally leaked via AI platforms. Firstly, by being ingested into the LLM during training (although training data is normally at least nominally sanitised before use). Secondly, by an accidental or malicious breach of the service provider’s security perimeter.

Compliance: Submitting sensitive data and/or personally identifiable information (PII) to a third-party AI provider may well put your organisation in direct contravention of data privacy regulations such as the GDPR or POPIA.
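
To make that last risk concrete, here’s a minimal, illustrative Python sketch of a pre-submission filter that redacts common sensitive patterns before a prompt ever leaves your environment. The patterns and the redact function are hypothetical examples of our own, not part of any DLP product, and a hand-rolled filter like this is no substitute for an enterprise-grade solution.

```python
import re

# Hypothetical, illustrative patterns only. A production deployment would rely
# on a proper classification engine (e.g. a DLP service), not hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # Rough UK phone number shape; deliberately loose for illustration.
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{3,4}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Email jane.doe@example.com re: card 4111 1111 1111 1111."
    safe, found = redact(raw)
    print(safe)   # placeholders replace the email address and card number
    print(found)  # ['email', 'credit_card']
```

In practice, a check like this would sit in a gateway or browser extension between users and the AI service, backed by proper content classification rather than a handful of regular expressions.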

The power of the protection “triumvirate” 

So, how do you benefit from generative AI’s vast potential without exposing your organisation to unacceptable risk? We’ll be the first to admit that the answer is far from simple (particularly since the field is evolving at such an unprecedented rate). 

The first piece of the puzzle is undoubtedly data loss prevention (DLP). Microsoft’s DLP technology does a pretty comprehensive job of protecting known sensitive information, applying encryption, access controls and restrictions on what can be shared, and how.

On its own, however, DLP can only go so far. Users also need to be educated on appropriate behaviour to ensure sensitive data – so diligently labelled and protected with Microsoft’s tooling – isn’t inadvertently shared outside the organisation via unprotected channels.

This is a great example of the importance of our favourite triumvirate: people, process and technology.  

The reality is that technology alone cannot adequately protect data in the modern workplace. It needs to be supported by regularly reviewed, fit-for-purpose policies and by a comprehensively trained workforce that understands those policies, the controls that enforce them and the reasons they exist.

Where to from here? 

Love it or hate it, AI is here to stay. You can’t put the genie back in the bottle.  

As such, compliance managers and IT teams have two choices: lock down generative AI access and sacrifice its productivity benefits, or find a way to harness that potential without compromising data security.

As a designated Microsoft Solutions Partner for Security and Modern Work, we’re unsurprisingly firmly in favour of the latter. Join us as we continue to explore the opportunities and challenges AI presents in our AI article series. Next up, we’ll unpack the hows and whys of bringing AI in-house, followed by our best-practice guide.

Sign up to our newsletter to get upcoming articles delivered straight to your inbox or watch this space for more. 

The only way to really know if we’re a good fit is to get in touch, so let’s have a chat! One of our friendly experts will get straight back to you. You never know, this could be the beginning of a great partnership.