
Cyber security

Unveiling AI dynamics in 2024

16 February 2024

AI, or Artificial Intelligence, has been a buzzword since late 2022, when ChatGPT was first unveiled to the public, and the buzz only got louder through 2023. 2024, it seems, will be another “year of AI”; we have already seen the general-availability release of Microsoft’s Copilot, powered by the latest version of the GPT Large Language Model, along with its many spin-offs. For anyone with access to DALL-E, the image-generation engine accessible via ChatGPT, the quality of the latest engine and its results is genuinely impressive. The artwork below was produced from a simple one-line prompt:

 “Draw me a picture of an electric shock machine connected to a computer user sitting at their computer answering emails” 
 

An effective anti-phishing training machine? 

 

Why, you might ask? Well, I was wondering how to reduce the incidence of folk clicking phishing links in a way they would remember. And no, this is not a solution we provide! 

The pace of AI advancement seems rapid and unrelenting; debate in cyber security circles continues to swirl around a couple of topical questions:

1. What are the cyber-risks associated with deploying AI into an organisation’s IT system, for example, switching on Copilot in your Office 365 environment? 

and...

2. Who will win? The security AIs, or the bad, evil AIs being cooked up by cyber criminals and hostile threat actors? 

Neither question has a settled answer yet, but the debate around both is maturing. Let’s start with the first issue. 

 

The first thing to understand is a little about how the current crop of AIs, known as Large Language Models (LLMs), work: essentially, they learn by training on very large amounts of data. So how does that work when one is deployed in an organisation's IT systems? It quickly becomes apparent that, for the AI to work effectively and return meaningful results to a user seeking business-relevant information, the LLM needs access to pretty much all the information in the organisation's IT system. 
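To make that concrete, here is a highly simplified sketch (in Python, with invented document names and a toy keyword-overlap score standing in for a real search index) of the kind of retrieval step that sits in front of an assistant. It does not reflect how any particular product is built; the point is simply that whatever the retrieval layer can read, the model can surface in its answers.

```python
# Illustrative sketch only: a toy retrieval step of the kind that feeds an LLM
# assistant. Document names and scoring are hypothetical; the principle is that
# anything the retriever can read may end up in the prompt, and so in the answer.

from collections import Counter

documents = {
    "board/2024-q1-finance-draft.docx": "draft results not yet released to shareholders ...",
    "hr/salary-review-2024.xlsx": "confidential salary bands ...",
    "marketing/campaign-brief.pptx": "spring campaign messaging ...",
}

def score(query: str, text: str) -> int:
    """Crude keyword-overlap score standing in for a semantic search index."""
    q_words = Counter(query.lower().split())
    return sum(q_words[w] for w in text.lower().split() if w in q_words)

def build_prompt(query: str, top_n: int = 2) -> str:
    # Rank every document the retriever can access -- note: no permission check here.
    ranked = sorted(documents, key=lambda d: score(query, documents[d]), reverse=True)
    context = "\n".join(f"[{d}] {documents[d]}" for d in ranked[:top_n])
    return f"Answer using only the context below.\n{context}\n\nQuestion: {query}"

print(build_prompt("what are the latest financial results?"))
# The draft Board figures end up in the prompt, and hence, potentially, in the answer.
```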

So, the first thing to be wary of is just what information resides in that system: sensitive files, personal emails, intellectual property (perhaps cutting-edge research) or financial data that the Board doesn’t want shareholders to see just yet. All of this is potentially available for the AI to learn from and draw upon when producing its output, which suggests that switching on, for example, Copilot for Office 365 might need some prior preparation. 

How confident are you really in the permissions applied to your SharePoint structure? Are you sure that data which should not be there has been removed? And are you confident that user access permissions across the filestore are robust and accurate? If not, Bob from marketing might be able to get query results drawn from HR files. More worryingly, contractors with limited file access might gain access to privileged information through queries to an LLM that can read the contents of everyone’s inbox. 

Any uncertainty about data security is a good indicator that a precautionary approach is advisable. This in turn leads to the idea of performing an AI readiness check, something your MSP or MSSP can do for you. Various solutions are now coming to market, or have evolved from existing products, designed to monitor and audit your “data surface posture”: where your data is, where it flows to and from, and where it may be leaking. 
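As a flavour of what one slice of such a check might automate, the sketch below (Python, calling the Microsoft Graph API) walks the files in a SharePoint document library and flags anything shared org-wide or via an anonymous link. The token, site ID and error handling are placeholders, and the tooling your MSP/MSSP uses will be far more thorough (paging through results, recursing into folders, covering every site); this is an illustration of the idea, not a production audit.

```python
# Minimal sketch of an "AI readiness" style check: flag files in a SharePoint
# document library that carry organisation-wide or anonymous sharing links.
# Token acquisition, site IDs and error handling are placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-token-with-Sites.Read.All>"   # placeholder: acquire via MSAL in practice
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def broadly_shared_items(drive_id: str):
    """Yield (file name, sharing scope) for items with wide sharing links."""
    items = requests.get(f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS).json()
    for item in items.get("value", []):
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions", headers=HEADERS
        ).json()
        for perm in perms.get("value", []):
            scope = perm.get("link", {}).get("scope")
            if scope in ("organization", "anonymous"):
                yield item["name"], scope

# Example: list the document libraries of one site, then audit each one.
site_id = "<your-site-id>"                   # placeholder
drives = requests.get(f"{GRAPH}/sites/{site_id}/drives", headers=HEADERS).json()
for drive in drives.get("value", []):
    for name, scope in broadly_shared_items(drive["id"]):
        print(f"{drive['name']}/{name}: shared with scope '{scope}'")
```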

Even the best SharePoint structure and permissions regime will deteriorate over time as users join and leave, and as departments close or merge. Many organisations will find, if they check, that data sits in odd locations, or that material which should not be widely shared becomes widely accessible once an LLM is given unfettered access. Checking this regularly and managing data surface posture proactively is good information security practice; in the age of AI, it becomes essential. 

 

The second question is more speculative, and the best thing is perhaps to refer the reader to the NCSC’s recent publication, “The near-term impact of AI on the cyber threat”. This balanced report concludes, perhaps unsurprisingly, that AI may well benefit attackers more than defenders. That said, there is also plenty of evidence that cyber-defence can be enhanced by AI; the NCSC notes: 

The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design. 

Whether that’s AI parsing and filtering vast amounts of system information in the search for an indicator of compromise, or shortening the training cycle needed to get cyber-security staff up to speed, there are many, many uses of AI in its current form that help the cyber-defender. We already use AI-assisted tools to monitor and detect threats and attacks to our customers’ systems, and we increasingly exploit the new capabilities to drive out wasted effort, free up human operators and home in on the things we need to do. All of this makes AI an enabler for better and more affordable cyber-security services. AI is also already shortening the time taken to respond to and remediate attacks. So all is far from lost in this newly emerging facet of the cyber-security contest. 
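As a toy illustration of the “parsing and filtering” point, the sketch below scores a handful of made-up sign-in events with a standard anomaly-detection model (scikit-learn’s IsolationForest) so that only the outliers are put in front of an analyst. Real AI-assisted monitoring works over vastly more data and far richer features; the features, data and threshold here are invented purely to show the shape of the approach.

```python
# Illustrative only: a toy anomaly-detection pass over sign-in events, of the
# sort AI-assisted monitoring runs at far greater scale. Features and data are
# hypothetical; the point is that a model can pre-filter events so humans only
# review the outliers.

import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour of day, failed logins in last hour, MB downloaded, distinct IPs]
events = np.array([
    [9,  0,   12, 1],
    [10, 1,    8, 1],
    [11, 0,   20, 1],
    [14, 0,   15, 1],
    [3,  25, 900, 6],   # 3 a.m., many failures, large download, many IPs
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
flags = model.predict(events)            # -1 marks an outlier worth an analyst's time

for row, flag in zip(events, flags):
    if flag == -1:
        print("Review:", row)
```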

As to who will win? A bit like Chinese Premier Zhou Enlai’s famous 1972 quip to Henry Kissinger, when asked whether the French Revolution had been a good thing (‘It’s too soon to tell’), it is too early to call this one for AI and cyber security. 

What is clear is that, like most technological advancements, AI is neither intrinsically good nor evil; it is the purposes humans put it to that make the eventual outcomes positive or negative. For any organisation chasing the positive outcomes of success, growth and efficiency, AI can be a force for good. However, the chosen methods of use, and the due consideration applied before deploying them, will likely determine the outcome more than the AI itself. 

 

 
