
Is AI alone the answer to tackling your cybersecurity threats?

Rob Shapland | Falanx Cyber | Ethical hacker

In a recent poll, 83% of respondents said that without AI their organisations would not be able to respond to cyberattacks.

The same faith in AI can be heard from cybersecurity professionals in any sector. Such sentiment has paved the way for platforms that offer AI-only monitoring, detection and response.

But can we really do away with the experts who run our Security Operations Centres (SOCs)? And is it really such a good idea to rely on AI alone to manage our cybersecurity?

 

Where AI shines

There’s no doubt that AI can be extremely effective in enhancing cybersecurity. In particular, it excels in:

  • Processing enormous volumes of data – at a pace and scale that would be unimaginable for a human mind.
  • Recalibrating and propagating algorithms quickly – working in near real time, AI can enable security teams to stay on top of large and evolving threats.
  • Becoming smarter over time – the more data it collects, the better AI can spot and respond to cyberthreats.

But such abilities don’t mean that we can kiss goodbye to our human cybersecurity experts. It simply means that we now have AI technology that can free them up to focus on other critical and strategic areas.

 

AI’s limitations

There are many critical limitations to AI that should caution against using it alone. These include:

  • AI algorithms are only ever as good as the data they are trained on.
    What’s more, to work at its best AI requires huge volumes of high-quality data collected over time.
  • AI algorithms may generate many false positives.
    This means, for example, that harmless emails or websites may be flagged as dangerous, which can lead to ‘response fatigue’ and genuine threats being ignored (the sketch after this list illustrates the trade-off).
  • AI lacks transparency.
    AI security vendors rarely reveal how their technologies work – instead, you are expected to trust that their algorithms are tailored to your specific needs.
  • AI can be hacked.
    There are many examples of hackers compromising the data companies use to train their AI algorithms or mimicking AI algorithms to find a way through them.
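To make the false-positive point concrete, here is a minimal, hypothetical sketch in Python. It assumes a detection model that assigns each event an anomaly score, and shows how the alert threshold trades missed threats against the volume of false positives analysts must wade through. The events, scores and threshold values are illustrative only and are not taken from any particular vendor's product.

```python
# Minimal illustration of the false-positive trade-off (hypothetical data).
# Each event gets an anomaly score in [0, 1]; anything above the threshold
# raises an alert that a human must triage.

from dataclasses import dataclass


@dataclass
class Event:
    description: str
    anomaly_score: float   # produced by some detection model (assumed)
    is_malicious: bool     # ground truth, known here only for illustration


EVENTS = [
    Event("newsletter email with tracking pixel", 0.72, False),
    Event("login from a new office location", 0.65, False),
    Event("credential-stuffing burst against VPN", 0.91, True),
    Event("routine backup job at 02:00", 0.58, False),
    Event("data upload to an unknown external host", 0.83, True),
]


def triage(events, threshold):
    """Count real threats caught, false positives raised and threats missed."""
    alerts = [e for e in events if e.anomaly_score >= threshold]
    true_alerts = sum(e.is_malicious for e in alerts)
    false_positives = len(alerts) - true_alerts
    missed = sum(e.is_malicious for e in events) - true_alerts
    return true_alerts, false_positives, missed


for threshold in (0.5, 0.7, 0.9):
    hits, noise, missed = triage(EVENTS, threshold)
    print(f"threshold={threshold}: {hits} real threats caught, "
          f"{noise} false positives for analysts, {missed} threats missed")
```

Lower the threshold and fewer threats slip through, but the noise multiplies; raise it and the alert queue quietens while real attacks start to go unnoticed. Tuning that balance is exactly the kind of judgment call that still needs a human.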

 

Best practice cybersecurity AI

Security teams today must protect a vastly expanded attack surface, and AI-based platforms are ideal for handling that scale. But you will still need human experts verifying and reviewing what they find.

Here are four pieces of best practice to consider when deploying AI.

  1. AI should only be implemented as part of a multi-layered defence strategy.
    It should be deployed in situations that play to its strengths, such as tackling general threats at speed and scale, while advanced tools and human intelligence are relied upon to defend against specific, targeted attacks.
  2. Ensure your AI algorithms are fuelled with sufficient, high-quality data.
    To work most effectively, AI solutions require billions of data points to train their algorithms properly.
  3. Rely on AI when speed is critical.
    AI excels at reacting rapidly to detected intrusions in your network, often before a criminal has managed to exfiltrate your data.
  4. Combine AI with human experts.
    While AI algorithms are perfect for taking the pressure off your human security team and making them more productive, nothing can replace the judgment of an experienced SOC analyst when it comes to understanding a sophisticated breach attempt.

AI can’t handle grey areas or ambiguities particularly well. It is in this murky, as yet undefined area that unknown threats often lurk.

This is where your SOC analysts use their years of expertise to investigate, while AI knocks out the more obvious contenders, as sketched below.
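The following hypothetical Python sketch shows one way that division of labour can work: high-confidence detections are contained automatically, obvious noise is dropped, and the ambiguous middle ground is escalated to a SOC analyst. The thresholds, alert IDs and function names are assumptions for illustration, not part of any specific product.

```python
# Hypothetical sketch of AI-plus-human triage: automate the clear-cut cases,
# escalate the grey area where human judgment is needed.

AUTO_CONTAIN = 0.90   # confidence above which automated response is acceptable (assumed)
AUTO_DISMISS = 0.20   # confidence below which the alert is treated as noise (assumed)


def route_alert(alert_id: str, confidence: float) -> str:
    """Decide who (or what) handles an alert based on model confidence."""
    if confidence >= AUTO_CONTAIN:
        return f"{alert_id}: auto-contain (isolate host, block indicator)"
    if confidence <= AUTO_DISMISS:
        return f"{alert_id}: suppress as noise, keep for model retraining"
    # The grey area in between is exactly where the SOC analyst comes in.
    return f"{alert_id}: escalate to SOC analyst for manual investigation"


if __name__ == "__main__":
    for alert_id, confidence in [("ALERT-001", 0.97),
                                 ("ALERT-002", 0.55),
                                 ("ALERT-003", 0.08)]:
        print(route_alert(alert_id, confidence))
```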

AI as a tool

With many businesses facing talent shortages in their security teams – and more complex attacks arriving from all angles – it is, perhaps, understandable that many are coming to see AI as a lifeline that will level the playing field for their SOC.

But despite the claims from security providers, AI is not a panacea for modern cybersecurity challenges.

It simply cannot replace human expertise.

AI is essential to your defence strategy, but it is just that: a tool. And like any tool, someone should be directing where and how it is used and providing support in the areas where it falls short.

This is why, in our MDR service, we ensure that you have a team of highly trained professionals monitoring and responding to threats 24/7.

Your security is far too important to trust to an algorithm alone.

Contact us to find out how our MDR service, combining AI and human intelligence (HI) within our Security Operations Centre, can protect your business.






