Are you jaded by the overuse of the term Artificial Intelligence (AI), with vendors instilling either fear or faith? In the cybersecurity domain, we see CISOs investing in Machine Learning (ML) while remaining justifiably skeptical of AI.
Enterprise security teams still drown in alerts. In a November survey, Enterprise Management Associates (EMA) found that 64% of alerts go uninvestigated and that only 23% of respondents investigate all of their most critical alerts.
Machine learning – a building block for AI – lets augmented analytics help security staff decide what to investigate, detect low-and-slow attacks that defenses have missed, and gain enough time to explore the serious problems. ML can discern indicators of attacks from collections of loosely related data faster and more reliably than an overworked (and often under-experienced) analyst. In security operations, ML helps combat a genuine, compelling, and intractable problem – the shortage of security analysts.
ML models evolve over time based on what they observe and how they are trained. Applied to authoritative data sets, ML helps prioritize the indicators that are materially interesting and automates aspects of investigation that slow and complicate the security operations center (SOC).
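To make the triage idea concrete, here is a deliberately simple sketch of prioritization by rarity: alerts whose feature values are uncommon across the batch float to the top of the analyst's queue. This is a stand-in heuristic, not any vendor's actual model, and the field names (`src`, `event`) are hypothetical.

```python
# Sketch: rank alerts so the most unusual combinations surface first.
# A rarity heuristic stands in for the statistical/ML scoring a real
# product would apply; field names are illustrative only.
from collections import Counter

def rank_by_rarity(alerts, keys=("src", "event")):
    """Score each alert by how rare its feature values are in the batch;
    rarer combinations score higher and are returned first."""
    counts = {k: Counter(a[k] for a in alerts) for k in keys}
    n = len(alerts)

    def score(alert):
        # Each feature contributes (1 - frequency): common values add ~0,
        # one-off values add close to 1.
        return sum(1 - counts[k][alert[k]] / n for k in keys)

    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"src": "10.0.0.5", "event": "login_fail"},
    {"src": "10.0.0.5", "event": "login_fail"},
    {"src": "10.0.0.5", "event": "login_fail"},
    {"src": "203.0.113.9", "event": "admin_created"},  # rare on both fields
]
ranked = rank_by_rarity(alerts)
# The rare admin_created alert is ranked first for investigation.
```

In practice a model would be trained on historical outcomes rather than a single batch, but the effect is the same: the SOC spends its limited analyst hours on the outliers instead of the noise.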
AI builds on this idea by letting the machine suggest or take action based on its models and observations. The challenge is that while this sounds marvelous in theory, it is far more utopian in practice. For years, security teams have avoided even basic automated responses for fear of disrupting the business. The two-person rule, privileged access controls, playbooks, and surprise audits – these practices offset the risk of errors born of haste, ignorance, or poor judgment.
Yet cybersecurity leaders have seen the value of automation in DevOps and other areas and are now embracing it for security operations. The same late-adopter pattern will eventually apply to AI in cybersecurity – just not yet. Right now, AI in security is still mostly artificial and not very intelligent. With that in mind, we will let other markets and operational teams find the bugs and breakdowns before we put our businesses, reputations, and careers at risk. In the meantime, although not all ML delivers equally, the approach offers plenty of scope for positive impact without AI's downsides.