Last week in my weekly SecOps call with clients, I went through all of the critical security alerts your team (in-house, SIEM, or SOC) should be alerting on so that you can detect suspicious activity tied to current indicators of compromise on your network.
It was quite a list: I covered about 30 alerts on the call, because some are similar but carry nuances that matter for understanding what is happening on your network. In my head, and I bet in many of yours, these alerts are important.
But as your team implements and tests these alerts, they may very well end up with (or already have) alert fatigue.
I want to be clear here: alert fatigue is dangerous.
The numbing effect of constantly seeing and responding to security issues leads to alert overload, especially when your team sees a ton of false alarms. If your team is looking at 30 security incidents (even minor ones), that's probably too many for them to treat each one seriously.
If you've dealt with a variety of security vendors, some of whom send out alerts daily or even hourly, your team might already be numbed by alert fatigue, even before you start alerting on critical indicators of compromise within your own network.
In today's world of automated alerting, I've found that our technologies just haven't caught up: they are not smart enough to deliver credible intelligence to our teams, leaving them with a laundry list of false positives speckled with a very few true security concerns.
During the pandemic, this alert overload seems to have gotten worse. With staff and processes changing, and extra work landing on IT departments and management, your team is likely dealing with an unusually high-pressure situation. The people reviewing alerts are probably stretched even thinner now, and more fatigued by every alert that needs review.
How to confront this fatigue?
First and foremost, prioritization. It's far easier to investigate and dig into the events most likely to cause serious problems for your organization or your clients. Prioritizing critical alerts, like the 30 I went through in last week's SecOps call, limits the scope of your monitoring to what matters. I recommend prioritizing alerting around what recent events should actually trigger. Think about what a hacker does when they enter a network: they may lie dormant for quite some time before actually attacking, likely gathering information about your network and behavior. How can you make these nearly invisible events more visible?
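As a rough illustration of this kind of prioritization, here is a minimal sketch that scores each alert by severity and asset criticality, with a boost for rules that tend to catch early reconnaissance. The field names, rule names, and weights are hypothetical; adapt them to whatever your SIEM actually emits.

```python
# Minimal sketch of priority-scoring alerts before they reach an analyst.
# The severity scale, asset list, and weights below are hypothetical.

# How critical each asset is to the business (hypothetical values).
ASSET_CRITICALITY = {
    "domain-controller": 10,
    "vpn-gateway": 8,
    "file-server": 5,
    "workstation": 2,
}

# Rules tied to early attacker behavior (recon, dormancy) get a boost,
# since those nearly invisible events are the ones worth surfacing.
RECON_RULES = {"unusual-ldap-enumeration", "off-hours-admin-login"}

def score(alert: dict) -> int:
    base = alert.get("severity", 1)  # 1 (low) .. 5 (critical)
    asset_weight = ASSET_CRITICALITY.get(alert.get("asset", ""), 1)
    recon_boost = 3 if alert.get("rule") in RECON_RULES else 0
    return base * asset_weight + recon_boost

alerts = [
    {"rule": "failed-login-burst", "asset": "workstation", "severity": 2},
    {"rule": "unusual-ldap-enumeration", "asset": "domain-controller", "severity": 3},
]

# Work the queue highest-score first instead of first-in-first-out.
for alert in sorted(alerts, key=score, reverse=True):
    print(score(alert), alert["rule"])
```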
Understand your context. Many false positives fire simply because your team is working on and supporting your network. A lot of alerts aren't meaningful while you or your team are performing sanctioned work in an area your SOC or SIEM is watching; your team may be fielding events they shouldn't have to triage at all, just because someone is directly working on that system. On the other hand, you might want to look closely at those same alerts if no one should be triggering them while working on the network.
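One way to encode that context is to check each alert against approved work windows from your change-management process. This is a minimal sketch under that assumption; the window list, ticket numbers, and alert fields are hypothetical placeholders.

```python
# Minimal sketch of suppressing alerts that fall inside an approved
# maintenance window. In practice this data would come from your
# change-management system; everything below is a hypothetical example.
from datetime import datetime

# Approved work windows: (asset, start, end, change ticket).
MAINTENANCE_WINDOWS = [
    ("file-server", datetime(2021, 3, 2, 22), datetime(2021, 3, 3, 2), "CHG-1042"),
]

def in_approved_window(alert: dict) -> bool:
    when = alert["timestamp"]
    for asset, start, end, _ticket in MAINTENANCE_WINDOWS:
        if alert["asset"] == asset and start <= when <= end:
            return True
    return False

alert = {"rule": "config-change", "asset": "file-server",
         "timestamp": datetime(2021, 3, 2, 23, 15)}

if in_approved_window(alert):
    print("Suppress: expected admin work (but keep a record for review).")
else:
    print("Escalate: nobody should be touching this system right now.")
```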
Measure how you're alerting. Instead of simply handing alerting off to someone on your team, or to your entire team, get them involved in keeping track of what is working and what isn't. As you go, have your team think about how to improve detection of the alerts that matter and eliminate the rules that are creating too much noise to see anything.
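The simplest version of this is having analysts record a verdict every time they close an alert, then reviewing the tallies by rule. A minimal sketch, with hypothetical rule names and verdict labels:

```python
# Minimal sketch of tracking which alert rules are earning their keep.
# The rule names and outcome labels are hypothetical; the point is that
# every closed alert gets a verdict, and the tallies get reviewed.
from collections import Counter, defaultdict

verdicts = defaultdict(Counter)  # rule -> Counter of verdicts

def close_alert(rule: str, verdict: str) -> None:
    verdicts[rule][verdict] += 1

# Analysts record outcomes as they work the queue.
close_alert("failed-login-burst", "false_positive")
close_alert("failed-login-burst", "false_positive")
close_alert("unusual-ldap-enumeration", "true_positive")

# Weekly review: rules that are nearly all noise are tuning candidates.
for rule, counts in verdicts.items():
    total = sum(counts.values())
    noise = counts["false_positive"] / total
    print(f"{rule}: {total} alerts, {noise:.0%} false positives")
```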
Identify how to communicate alerting. I subscribe to the old saying that what gets measured gets managed. If you aren't keeping track of what is working and what isn't in a measurable way, your alert system is never going to work. Measure your alert response against service levels, identify how effectively each alert helps mitigate incidents, and track where your alerting processes need improvement.
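For example, two numbers worth reporting are how long alerts sit before someone acknowledges them and how long incidents take to resolve. A minimal sketch of computing those against a service-level target; the timestamps and the 30-minute acknowledgement target are hypothetical, so substitute whatever your team has actually committed to.

```python
# Minimal sketch of measuring alert response against a service level.
# All incident data and the 30-minute target below are hypothetical.
from datetime import datetime, timedelta

ACK_TARGET = timedelta(minutes=30)

incidents = [
    {"rule": "unusual-ldap-enumeration",
     "raised": datetime(2021, 3, 1, 9, 0),
     "acknowledged": datetime(2021, 3, 1, 9, 12),
     "resolved": datetime(2021, 3, 1, 11, 30)},
    {"rule": "failed-login-burst",
     "raised": datetime(2021, 3, 1, 14, 0),
     "acknowledged": datetime(2021, 3, 1, 15, 5),
     "resolved": datetime(2021, 3, 1, 15, 40)},
]

ack_times = [i["acknowledged"] - i["raised"] for i in incidents]
fix_times = [i["resolved"] - i["raised"] for i in incidents]
missed = sum(t > ACK_TARGET for t in ack_times)

print("mean time to acknowledge:", sum(ack_times, timedelta()) / len(ack_times))
print("mean time to resolve:   ", sum(fix_times, timedelta()) / len(fix_times))
print(f"acknowledgement target missed: {missed} of {len(incidents)}")
```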
Develop a culture around action. Does your team act on the alerts they get? If not, is it because they have no confidence in the alerts, or because they don't know what to do? As your alerting improves, make sure your team understands how to react when a critical alert comes in. This is certainly important for other areas of your business outside of IT security!
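One concrete way to build that confidence is to pair every critical alert rule with a documented first action, so nobody has to guess when one fires. A minimal sketch; the rule names and runbook URLs are hypothetical placeholders for your own documentation.

```python
# Minimal sketch of mapping critical alert rules to documented responses.
# The rule names and wiki URLs are hypothetical placeholders.
RUNBOOKS = {
    "unusual-ldap-enumeration": "https://wiki.example.com/runbooks/ldap-recon",
    "off-hours-admin-login": "https://wiki.example.com/runbooks/admin-login",
}

def first_action(rule: str) -> str:
    runbook = RUNBOOKS.get(rule)
    if runbook is None:
        # A critical rule with no runbook is itself a finding: fix the gap.
        return f"No runbook for {rule!r}; escalate to the alert owner."
    return f"Follow {runbook} and page the on-call analyst."

print(first_action("unusual-ldap-enumeration"))
print(first_action("new-persistence-mechanism"))
```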
Bottom line: to prioritize your alerting, put the alerts in context. Get your team thinking about what your alerting is actually accomplishing and where the biggest bang for the buck will be, while keeping their eyes peeled for genuinely suspicious alerts.