Nearly every day another successful breach is reported. In 2016 alone, organizations ranging from major governmental agencies such as the IRS and the Department of Defense to major retailers including Wendy’s have succumbed to attack. These organizations are not alone; every major business and governmental sector has been compromised. Large tech companies such as LinkedIn and Oracle, healthcare providers including Premier Healthcare (as well as numerous hospitals), manufacturers, major educational institutions, and large financial organizations have all fallen victim to internal or external threats.
What do all of these organizations have in common? Quite a lot, actually:
- Dedicated manpower for cyber security defense and investigation.
- Significant information security-related budget, many in the multi-millions.
- Many of the most common defense tools (firewalls, IDS, SIEM, antivirus, etc.).
- Documented process and procedures for investigation and response.
The list could go on from there. If all of these organizations have money, people, technology, and best practices, why are they failing? And even more significantly, how can the rest of us with fewer resources hope to keep the attackers out?
This is where a change in paradigm must come into play. Traditional tools focus on gathering logs and other data about activities that are taking place, or have taken place, within the environment. These data points are filtered, parsed, and interrogated for known bad activities. The artifacts can be indicative of single events or of a chain of correlated events, and can include anything from a failed login, a login on a restricted system, use of a restricted account, a system change, or an odd network packet.
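As a rough illustration of this traditional pipeline (a minimal sketch, not any vendor's actual product; the event fields and signature rules below are invented for the example), "known bad" detection amounts to matching parsed log events against a static signature list:

```python
# Sketch of "known bad" detection: scan parsed log events against a fixed
# list of signatures. Any event the signatures do not describe is ignored,
# which is exactly the weakness discussed in the text.

KNOWN_BAD_SIGNATURES = [
    {"event": "login_failed", "min_count": 5},         # repeated failed logins
    {"event": "login", "account": "root"},             # restricted account use
    {"event": "config_change", "path": "/etc/passwd"}, # sensitive system change
]

def matches(event, sig):
    """An event matches a signature if every non-threshold field agrees."""
    return all(event.get(k) == v for k, v in sig.items() if k != "min_count")

def detect_known_bad(events):
    alerts = []
    for sig in KNOWN_BAD_SIGNATURES:
        hits = [e for e in events if matches(e, sig)]
        if len(hits) >= sig.get("min_count", 1):
            alerts.append({"signature": sig, "hits": len(hits)})
    return alerts

# Hypothetical day of parsed log events: a brute-force burst plus a root login.
events = [{"event": "login_failed", "user": "alice"}] * 6
events.append({"event": "login", "account": "root"})

print(detect_known_bad(events))  # two alerts fire; anything novel stays silent
```

An attacker using a technique absent from `KNOWN_BAD_SIGNATURES` passes through this loop untouched, which is the structural limitation the next paragraphs describe.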
The problem with this approach is that the defense relies on detecting “known bad” items. To be fair, a “known bad” approach will stop nuisance and opportunistic attacks, which probably comprise more than 90% of all attacks, but it can never stop new attacks, because new attacks are by definition previously unknown. At some point, this type of defense is destined to fail. Why? The hacking community adapts and innovates, so the realm of “bad” things is constantly changing, and a “known bad” defense strategy is always behind the curve. Those who hack for a living, whether as part of a criminal business or a nation state, develop new “cutting edge” techniques to get into their targets, supported by an underground ecosystem. They use the same practices commercial entities use to improve, such as quality testing and staged deployment of releases, configurations, and updates. They get paid to succeed at their jobs, just like the rest of us.
Some of the faults of a “known bad” approach are:
- It does not detect insider attacks or an external attacker using a compromised identity until that attacker does something against policy.
- It does not notify when the threat actor moves laterally, performs reconnaissance, or sets up the final attack unless the attacker does something against policy. Once inside the network environment, 99% of threat actors use standard tools and activities to move laterally and to conduct reconnaissance and data gathering, according to the recent Cyber Weapons Report from LightCyber.
- Over 70% of active malware identified in environments was unique to that environment, so deploying malware protection using a “known bad” methodology will result in compromise most of the time.
A “known bad” methodology is reactive and prone to failure at some point in its lifecycle, often sooner rather than later. It cannot succeed until something happens that is already “known bad.” In many cases, once a “known bad” activity takes place, the threat actor is already inside and well on his or her way to success. This is why multiple credible research firms publish attacker dwell times in the five-to-seven-month range.
We must change to a “known good” methodology. Threat actors of any type, whether malware, compromised systems, applications, or users, all behave in predictable and often specifiable ways. When they deviate from those identified patterns, methods, or channels, security has a good indication that the activity should be checked out. By better understanding the “known good” communications, data flows, accesses, and so on within the environment, the “unknown” can be identified and classified more quickly. If a new channel is needed and confirmed through authoritative channels, it can be incorporated into the “known good” model and deprioritized as an event or threat characteristic. When a threat actor lands in a new environment, the reconnaissance and lateral movement activities violate “known good” policies and will therefore be detected early.
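The inversion described above can be sketched in a few lines (a toy model under stated assumptions, not a real product: the flow records, field names, and approval function are all hypothetical). Instead of matching signatures, the model learns the set of observed communication pairs and flags anything outside it, while letting authoritatively approved channels join the baseline:

```python
# Sketch of a "known good" model: learn which (user, destination) pairs are
# normal during a baseline period, then flag any flow outside that set.

def build_baseline(flows):
    """The known-good model: the set of observed (user, destination) pairs."""
    return {(f["user"], f["dest"]) for f in flows}

def flag_unknown(flows, baseline):
    """Anything outside the known-good set surfaces as a high-fidelity alert."""
    return [f for f in flows if (f["user"], f["dest"]) not in baseline]

def approve(baseline, user, dest):
    """A new channel confirmed through authoritative channels is incorporated
    into the known-good model and no longer raises alerts."""
    baseline.add((user, dest))

# Hypothetical baseline period of normal activity.
baseline = build_baseline([
    {"user": "alice", "dest": "fileserver"},
    {"user": "alice", "dest": "mail"},
    {"user": "bob",   "dest": "fileserver"},
])

# A later day: one routine flow and one lateral-movement-style deviation.
today = [
    {"user": "alice", "dest": "mail"},               # known good: ignored
    {"user": "bob",   "dest": "domain-controller"},  # deviation: flagged
]
print(flag_unknown(today, baseline))
```

Note that the attacker's reconnaissance here uses entirely "standard" activity with no malware signature to match, yet it still violates the learned baseline, which is the core claim of the "known good" argument.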
EMA research found that 44% of organizations experience a daily surplus of critical/severe alerts they need to investigate; the figure is over 90% when all alerts are counted. This is a symptom of the “known bad” model. The “known good” model reduces priority alerts to a manageable volume, eliminating alert fatigue and investigation backlogs. Implemented correctly, rather than facing an insurmountable daily flood of hundreds to thousands of low-fidelity alerts, organizations using a “known good” model encounter a small volume of workable, high-fidelity alerts.
There are countless ways for an attacker to compromise a user account or commandeer a machine and land in a network. It will happen. The “known bad” model, though still necessary, simply does not address one of the most vexing problems in security today: it can only rarely uncover a post-intrusion active attacker or the signs of a disgruntled employee perpetrating an inside attack, leaving companies blind to these attackers until it is far too late. It is time to evolve our security practices and tools with a “known good” model that finds these attackers by their operational activities. Only then can we put the expectancy of a data breach behind us.