Security leaders face a perpetual challenge: keeping their threat detection capabilities on par with increasingly sophisticated malware. Unfortunately, traditional threat detection technology built on malware signatures and rules is no longer the most effective way to protect enterprises against modern threats.
While signature-based detection – which scans traffic for unique patterns of code that indicate malware, or for the hash of a known bad file – is useful for catching unsophisticated malware, it cannot catch new or unknown threats for which no signature exists. Attackers can also easily repackage malware so it no longer matches known signatures.
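To make that limitation concrete, here is a minimal, purely illustrative sketch of hash-based signature matching; the "signature database" and payloads are made up, not drawn from any real feed. Because the lookup requires an exact match, changing even a single byte of a repackaged sample produces a different hash and the check misses it:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database: hashes of previously captured malware samples.
original_sample = b"...previously captured malware payload..."
KNOWN_BAD_HASHES = {sha256_hex(original_sample)}

def is_known_malware(file_bytes: bytes) -> bool:
    """Signature check: flags a file only if its hash exactly matches a known sample."""
    return sha256_hex(file_bytes) in KNOWN_BAD_HASHES

# One appended byte: behavior is unchanged, but the hash is completely different.
repacked_sample = original_sample + b"\x00"

print(is_known_malware(original_sample))  # True  - exact match against the signature set
print(is_known_malware(repacked_sample))  # False - trivially repackaged variant slips through
```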
A good example is the CryptoLocker ransomware, first discovered in 2013. Variants like CryptoWall and TorrentLocker build on the same basic approach as CryptoLocker and are still common today. Signature-based threat detection platforms have other limits as well – they are notorious for false positives and for flooding security teams with more alerts than they can investigate.
Traditional threat detection is also unable to identify insider attacks carried out by employees or by an attacker who has obtained legitimate credentials through a phishing attack or data breach.
In response, many organizations are making the shift to behavioral risk analysis, which uses a completely different process, one that requires a great deal of input data to be effective. In this article, I’d like to dive deeper into how behavioral risk analysis helps overcome the challenges associated with traditional threat detection.
Shifting to Behavioral Risk Analysis
Behavioral risk analysis examines network activity for behavior that is both unusual and high-risk. This requires machine learning models that baseline normal network behavior and look for anomalies.
But not all unusual activities are risky. For example, consider a marketing employee accessing marketing materials from a SharePoint drive for the first time in several months. This is unusual compared to that person’s normal behavior, but likely relatively low risk. But that same employee accessing code repositories from an unfamiliar location in the middle of the night when most employees are offline is much riskier and should be flagged.
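As a simplified illustration of the baselining idea, the sketch below scores how unusual a new event is against a user's historical behavior. The event data and the deliberately naive rarity measure are assumptions for demonstration only – production solutions learn baselines with far richer machine learning models and features:

```python
from collections import Counter

# Toy per-user history of activity (assumed event format).
history = [
    {"user": "jdoe", "resource": "sharepoint/marketing", "hour": 10},
    {"user": "jdoe", "resource": "sharepoint/marketing", "hour": 11},
    {"user": "jdoe", "resource": "crm", "hour": 14},
    # ... months of events in practice ...
]

def build_baseline(events):
    """Summarize which resources and hours of day are normal for this user."""
    return {
        "resources": Counter(e["resource"] for e in events),
        "hours": Counter(e["hour"] for e in events),
        "total": len(events),
    }

def anomaly_score(event, baseline):
    """Rarity-based score in [0, 1]: 0 = routine, 1 = never seen before."""
    total = baseline["total"]
    res_freq = baseline["resources"][event["resource"]] / total
    hour_freq = baseline["hours"][event["hour"]] / total
    # The event is anomalous if either the resource or the time of day is rare.
    return 1.0 - min(res_freq, hour_freq)

baseline = build_baseline(history)
late_night_repo_pull = {"user": "jdoe", "resource": "git/product-source", "hour": 3}
print(anomaly_score(late_night_repo_pull, baseline))  # close to 1.0 -> highly unusual
```

Note that this score only says the behavior is unusual; deciding whether it is risky requires the contextual data described next.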
Conducting risk analysis involves determining the risk level of behaviors, which requires gathering a large amount of contextual data (usually into a data lake), calculating a risk score based on that data, looking at the anomaly in light of that risk score, and prioritizing it accordingly.
This helps reduce false positives (behavior that is unusual but low risk often triggers a false positive alert in less sophisticated solutions) and brings the security team's workload down to a more manageable level by helping them prioritize. This contextual information is the key to determining which behaviors are actually risky.
5 Techniques for Behavioral Risk Analysis
Behavioral risk analysis draws on several techniques, including the following (the details vary based on the specific solution in question); a simplified sketch showing how these signals can be combined appears after the list:
- Outlier modeling: Uses machine learning baselines and anomaly detection to identify unusual behavior, such as users accessing the network from unrecognized IP addresses, users downloading large amounts of intellectual property from sensitive document repositories not associated with their role, or server traffic from countries the organization does not do business with.
- Threat modeling: Uses data from threat intelligence feeds and rule/policy violations to look for known malicious behavior. This can screen out unsophisticated malware quickly and easily.
- Access outlier modeling: Determines if a user is accessing something unusual or something they shouldn’t be. This requires pulling in data on user roles, access entitlements, and/or badging.
- Identity risk profile: Determines how risky the users involved in an incident are, based on HR data, watchlists or external risk indicators. For example, employees who were recently passed over for a promotion may be more likely to have a grudge against the company and want to retaliate.
- Data classification: Tags all the relevant data associated with an incident, like the events, network segments, assets or accounts involved, to give context to the security team investigating the alerts.
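As promised above, here is a simplified sketch of how outputs from these techniques might be blended into a single risk score used to prioritize alerts. The field names and weights are purely illustrative assumptions, not any vendor's actual model; real products tune or learn them from the contextual data they collect:

```python
from dataclasses import dataclass

@dataclass
class AlertContext:
    """Hypothetical contextual signals gathered for one anomalous event."""
    anomaly_score: float        # outlier modeling, 0-1
    threat_intel_match: bool    # threat modeling: known-bad IP/hash/domain
    outside_entitlements: bool  # access outlier: resource not tied to the user's role
    identity_risk: float        # identity risk profile from HR/watchlist data, 0-1
    data_sensitivity: float     # data classification of the asset involved, 0-1

def risk_score(ctx: AlertContext) -> float:
    """Illustrative weighted blend of the five signals (weights sum to 1.0)."""
    score = 0.35 * ctx.anomaly_score
    score += 0.20 * (1.0 if ctx.threat_intel_match else 0.0)
    score += 0.15 * (1.0 if ctx.outside_entitlements else 0.0)
    score += 0.15 * ctx.identity_risk
    score += 0.15 * ctx.data_sensitivity
    return score

# Unusual but benign: marketing user revisits a marketing SharePoint folder.
low = AlertContext(0.7, False, False, 0.1, 0.2)
# Unusual and risky: off-hours access to code repos outside the user's role.
high = AlertContext(0.95, False, True, 0.6, 0.9)

print(f"low-risk alert:  {risk_score(low):.2f}")   # stays far down the queue
print(f"high-risk alert: {risk_score(high):.2f}")  # surfaces for investigation first
```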
Complex and Multi-Factored
As these techniques show, estimating risk is complex and requires weighing many different factors. Behavioral risk analysis therefore needs input data from a wide range of sources.
These sources include HR and identity data from Microsoft Active Directory or an IAM solution, logs from security solutions like firewalls, IDS/IPS, SIEM, DLP and endpoint management solutions, and data from the cloud, applications and databases.
Outside data sources such as public employee social media posts (to determine which employees are at a higher risk of being malicious) or threat feeds like VirusTotal are also useful. Due to the sheer amount of contextual data required, successful behavior analytics solutions need many third-party integrations and the ability to accept a wide range of data feeds into a database or data lake. The more data, the better.
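Because those feeds all arrive in different formats, one practical step is normalizing them into a common event schema before they land in the data lake, so events from different tools can be correlated per user and per asset. The sketch below is hypothetical; the raw field names are assumptions, not any specific product's schema:

```python
def normalize_ad_logon(raw: dict) -> dict:
    """Map a (hypothetical) Active Directory logon record to a common event schema."""
    return {
        "timestamp": raw["EventTime"],
        "user": raw["TargetUserName"].lower(),
        "source": "active_directory",
        "action": "logon",
        "attributes": {"ip": raw.get("IpAddress"), "logon_type": raw.get("LogonType")},
    }

def normalize_dlp_alert(raw: dict) -> dict:
    """Map a (hypothetical) DLP alert to the same schema so it can be correlated."""
    return {
        "timestamp": raw["detected_at"],
        "user": raw["actor"].lower(),
        "source": "dlp",
        "action": "file_exfil_attempt",
        "attributes": {"file": raw.get("file_name"), "classification": raw.get("label")},
    }

# Events from different tools end up side by side, keyed by user and time,
# ready for correlation and risk scoring.
events = [
    normalize_ad_logon({"EventTime": "2022-03-01T03:12:09Z", "TargetUserName": "JDOE",
                        "IpAddress": "203.0.113.7", "LogonType": 10}),
    normalize_dlp_alert({"detected_at": "2022-03-01T03:14:41Z", "actor": "jdoe",
                         "file_name": "roadmap.docx", "label": "confidential"}),
]
for e in events:
    print(e["timestamp"], e["user"], e["source"], e["action"])
```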
When done successfully, behavioral risk analysis can improve efficiency, reduce false positives, and detect insider threats and zero-day attacks that other threat detection methods cannot. As a side benefit, the ML analysis involved can also produce valuable data on how systems and devices are used – for example, normal usage patterns for a system or a set of devices can tell the IT team the best time to take them down for updates.
Behavioral risk analysis can also enable automated responses to threats. Modern malware can shut down dozens of systems in seconds. It’s not possible for human operators to respond fast enough to stop this.
Behavioral analytics, if done correctly, can produce alerts that are accurate enough for responses to be automated. The amount of context this approach provides means that automated remediation actions can be extremely targeted, such as removing one user's access to one system. This lowers the chance of accidentally interfering with legitimate business processes and, in turn, may help convince risk-averse CIOs or CISOs that automated responses are feasible.
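As a rough illustration of how narrowly scoped that automation can be, the sketch below triggers a single, targeted action – suspending one user's access to one resource – only when the blended risk score crosses a threshold. The `iam_client` interface, the threshold, and the alert fields are hypothetical stand-ins, not a real product's API:

```python
AUTO_RESPONSE_THRESHOLD = 0.8  # illustrative; in practice tuned per organization

def respond_to_alert(alert: dict, iam_client) -> str:
    """
    Narrow, automated remediation: if the blended risk score is high enough,
    suspend only the implicated user's access to the implicated resource.
    `iam_client` is a hypothetical wrapper around the organization's IAM API.
    """
    if alert["risk_score"] < AUTO_RESPONSE_THRESHOLD:
        return "queued for analyst review"
    iam_client.revoke_access(user=alert["user"], resource=alert["resource"])
    return f"access to {alert['resource']} revoked for {alert['user']}"

class FakeIAMClient:
    """Stand-in for a real IAM integration, for demonstration only."""
    def revoke_access(self, user: str, resource: str) -> None:
        print(f"[IAM] revoking {user} -> {resource}")

alert = {"user": "jdoe", "resource": "git/product-source", "risk_score": 0.87}
print(respond_to_alert(alert, FakeIAMClient()))
```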
Behavioral risk analytics has great potential to make threat detection more efficient and keep organizations safer. Building robust ML analytics drawn from adequate input data will be key to the success of this approach over the next several years as this technology becomes more standard in security platforms (such as next-generation SIEM).
About the Author:
Saryu Nayyar, CEO at Gurucul