
Mid-market Threat and Incident Response Report: Our Methodology

Last month, we released our inaugural Mid-market Threat and IR Report on the types of threats we’re finding in customer- and partner-led threat assessments and incident response investigations. One of the most interesting and controversial data points was the enormous dwell time for malware and unauthorized access: over two years, well in excess of the figures in some other cybersecurity reports.

“Color me skeptical…”
– Kai Pfiester, Black Cipher Security on LinkedIn

Kai is a respected security industry leader known for keeping vendors honest on LinkedIn. Skepticism is to be expected for a cybersecurity report like this one, which strays from the majority position. Unfortunately, some of our data points were quoted in derivative articles without their qualifiers. In response, we promised a deeper look at the methodology and explanations behind that report.

Video: Our co-founder, Chris Gerritz, discussing our “Mid-market Threat and Incident Response Report” findings and methodology.

Why we released our Mid-market Threat and IR Report

We published our threat report because the results we found are not in line with common industry reporting. How are these malware infections persisting for years within networks?

Our IR partners often parachute into an active or recent ransomware case, deploy Infocyte as an incident response tool, and, lo and behold, discover the environment has pre-existing malware in addition to the ransomware.

We continually see cases like this.

Note: When Infocyte is used on a network for the first time, we’re proactively checking for active compromises and unauthorized remote access on their endpoints — specifically, threats that have bypassed their current endpoint protection (AV) solution. In other words, Infocyte is designed to find what other tools may have missed. The unusually long dwell times documented in our report are not an indication that we also missed those threats for that long.

Articles are summaries; read the primary source

Most research has nuance and qualifiers for statements made. This is true for cybersecurity reports, just as it is true for any research that gets disseminated for mainstream readership. Inevitably, findings will be taken out of context — not always with ill intent. 

For example, some have applied the conclusions drawn from our Mid-market Threat and Incident Response Report to all SMB environments…

The truth is, our data is representative of a cross section of mostly Small and Medium Enterprises in the 100 to 5,000 node range, with a few larger outliers (550k host inspections across a couple hundred networks in Q2 of 2019). Our largest network had 170k hosts, but networks like that are outliers, and more often than not we have ultra-stringent data protection requirements in those engagements, making the data unavailable for these types of reports.

Infocyte makes no statement as to whether our data is statistically representative enough to apply broadly. The report is best read as conclusions drawn from what we are seeing: our customer set is unique (mid-market enterprise vs. large enterprise) and our methodology for collecting this data is different (proactive vs. reactive). We hope our cybersecurity report inspires more research and/or partnerships that can add more rigor or data sources.

How are these compromises persisting for so long, undetected?

On one hand, it’s the state of the industry for many SMBs: most SMBs don’t have dedicated Security Operations Center (SOC) personnel proactively looking for this stuff. Most Managed Security Service Providers (MSSPs) they hire aren’t proactively looking either. Instead, SMBs typically install and monitor sensors or appliances, then wait for a rule to trip and generate an alert. Many SOCs, even in large companies, are stuck in this reactive-only pattern.

Below is an example of a cyber threat that broke through preventative controls; we flagged it almost immediately for a customer who used us to actively investigate after suspicious behavior was detected. (Most major protection platforms find this threat, but this environment happened to be running endpoint protection not on this list.)

The AV ranking here reflects classification usefulness, not efficacy… ML engines like CrowdStrike don’t categorize threats beyond good/bad, at least not in VirusTotal.

Below is an example of a Neshta.A worm we found that had been propagating through an org’s environment for eight years. Their AV engine wasn’t “bad”… it was simply one of the three engines that never shipped a signature for this specific type of malware (for the record, no engine has 100% coverage).

At some point, dwell time stops being correlated with business impact…

Some of the more extreme examples of long dwell times aren’t always significant to the infected organization. The world didn’t end for the org we found harboring decade-old malware sitting on their legacy Win2k3 servers. For them, it was like being told they had rats in an over-stuffed garage that had never been cleaned. Sadly, this is exactly how cybersecurity is perceived in some organizations.

Indeed, the longest-dwelling malware we’ve seen (we have a number of cases going back to 2009–2013, found in 2019) wasn’t materially impacting the infected orgs at the time we found it. Reviewing a couple of these cases, a few patterns popped out that push the dwell-time data into multi-year territory:

  • Some old malware was active but calling out to black-holed C2 domains, or the criminal networks behind it had been taken out by law enforcement action. When law enforcement takes down a botnet of millions of systems, they don’t magically uninstall the client-side malware; that stuff stays there until the AV vendor ships a remediation or someone removes it. Did anything happen to these networks prior to the C2 takedown? We may never know: logs generally don’t go that far back.
  • Some long-term persistent compromises showed no evidence of malicious action against the infected org. Perhaps the attackers used the access to attack other, more important orgs, redirecting through the victim to obfuscate the attack’s origin. Other infected systems are simply recruited for the occasional DDoS attack.
  • There are entire classes of attacks meant to minimize impact to the infected org so they can camp there forever. Malicious bitcoin mining, for instance, tries to play nice so it can go undetected longer. Not every hack has a popup or alert telling you “you’ve been hacked” and where to pay the ransom.

Attack Time Methodology

Infocyte’s primary data is forensic triage data and local host logs, which include MAC timestamps on disk, shimcache entries, event log entries, etc. One of our most common use cases is incident response: our IR platform is deployed after the fact, which means we didn’t see the attack in progress when it occurred. This requires us to infer attack time from forensic analysis and automated host time-lining. The scalability of the Infocyte platform in conducting forensic triage collection and analysis gives us one of the largest samplings of such data ever collected.
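To make that inference concrete, here is a minimal sketch in Python of the core idea (not Infocyte’s actual pipeline): gather every timestamp tied to a known-bad file from multiple artifact sources and treat the earliest one as the start of dwell time. The artifact names, paths, and dates below are hypothetical.

```python
# A minimal sketch of dwell-time inference from host artifacts.
# Not Infocyte's pipeline; all names, paths, and dates are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Artifact:
    source: str        # e.g. "mft_created", "shimcache", "event_log"
    path: str          # file the timestamp belongs to
    timestamp: datetime

def estimate_dwell(artifacts, found):
    """Earliest artifact timestamp = inferred compromise time; dwell in days."""
    earliest = min(a.timestamp for a in artifacts)
    return earliest, (found - earliest).total_seconds() / 86400

artifacts = [
    Artifact("mft_created", r"C:\Windows\svchost32.exe",
             datetime(2017, 3, 2, tzinfo=timezone.utc)),
    Artifact("shimcache", r"C:\Windows\svchost32.exe",
             datetime(2017, 3, 5, tzinfo=timezone.utc)),
    Artifact("event_log", r"C:\Windows\svchost32.exe",
             datetime(2019, 6, 1, tzinfo=timezone.utc)),
]
earliest, days = estimate_dwell(artifacts, datetime(2019, 6, 30, tzinfo=timezone.utc))
print(f"Earliest evidence: {earliest:%Y-%m-%d}, dwell ~{days:.0f} days")
```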

Forensic professionals know that this method, while often admissible in court, can be defeated by time-stomping techniques that manipulate these timestamps. We can’t say definitively how pervasive time stomping is in practice, but it does not appear to be ubiquitous. In fact, it’s much less common than I assumed it would be, and when it does occur, it sometimes doesn’t affect all files/stages of the attack.

This is a sample of the data points we’d expect from a recent attack with accurate timestamps; there are lots of examples of these in the dataset.

We made a concerted effort to compare multiple time sources (when available), detect the use of time stomping, and omit affected results, but these methods aren’t exhaustive. For instance, if a file’s timestamps were older than the OS install date… probably not real. Sometimes a file’s timestamp had been manipulated but conflicted with a shimcache timestamp stored in the registry. We could theoretically use other techniques, like comparing disk sequence numbers, but we don’t have a reliable or scalable method of doing this on more than one host at a time.
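As an illustration, here is a minimal sketch of the two sanity checks just described. The function name, fields, and tolerance are hypothetical, and real time-stomp detection involves far more than this.

```python
# A minimal sketch of the sanity checks described above; hypothetical,
# not Infocyte's implementation. Assumes timestamps were already extracted.
from datetime import datetime
from typing import Optional

def timestamp_plausible(file_mtime: datetime, os_install: datetime,
                        shimcache_mtime: Optional[datetime] = None,
                        tolerance_days: float = 1.0) -> bool:
    # Check 1: a file timestamp older than the OS install date is "probably not real".
    if file_mtime < os_install:
        return False
    # Check 2: shimcache records the file's modified time as seen at execution;
    # a large disagreement with the on-disk timestamp suggests manipulation.
    if shimcache_mtime is not None:
        drift = abs((shimcache_mtime - file_mtime).total_seconds()) / 86400
        if drift > tolerance_days:
            return False
    return True

# A file claiming to predate a 2012 OS install gets rejected:
print(timestamp_plausible(datetime(2009, 1, 1), datetime(2012, 6, 1)))  # False
```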

I am confident the bulk of our cybersecurity report data will hold up within an order of magnitude, but in the spirit of full transparency, it’s entirely possible the data could contain some tainted timestamps. The further back the compromise, the fewer data points we have to accurately verify these timelines. We are still making efforts to refine our methodology for verifying timestamps and omitting those that aren’t relevant so we can accurately and quickly measure dwell time in IR cases.

How do we define (classify) malware?

The textbook definition of malware: “Software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system.”

In practice, malware is more complex. For the purpose of our research, we chose a fairly conservative definition that was also easy to apply:

Malware events were included if at least five (5) anti-malware engines classified the sample as malicious or, if fewer than five engines were used, a malware analyst actively verified it was, in fact, malware. In other words, we only included known malware in these stats to avoid controversy.
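In pseudocode terms, that inclusion rule looks roughly like the sketch below; the field names are ours for illustration, not Infocyte’s schema.

```python
# A minimal sketch of the report's inclusion rule; field names are hypothetical.
def include_as_malware(engine_detections: int, engines_used: int,
                       analyst_verified: bool) -> bool:
    if engines_used >= 5:
        return engine_detections >= 5  # at least five engines flagged the sample
    return analyst_verified            # fewer than five engines: require a human

print(include_as_malware(engine_detections=12, engines_used=60, analyst_verified=False))  # True
print(include_as_malware(engine_detections=2, engines_used=3, analyst_verified=True))     # True
```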

Malware was also separated from riskware and other classifications that sometimes get lumped in with malware but aren’t considered large threats to most orgs. We looked for any samples that had the below classifiers and put them in their own “riskware” category:

not-a-virus|PUA|PUP|unwanted|adware|riskware|toolbar|search|RemoteAdmin|KMS|coupon|bundler|psexec|hacktool|winexec|angry|IPScan
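Applied to the detection labels AV engines return (VirusTotal-style strings, for example), that separation can be expressed as a simple case-insensitive pattern match. The sketch below is illustrative, not our production logic.

```python
# A minimal sketch of the riskware/malware split using the classifier list above.
# Illustrative only; real detection labels vary by engine.
import re

RISKWARE = re.compile(
    r"not-a-virus|PUA|PUP|unwanted|adware|riskware|toolbar|search|RemoteAdmin"
    r"|KMS|coupon|bundler|psexec|hacktool|winexec|angry|IPScan",
    re.IGNORECASE,
)

def categorize(detection_labels):
    """Bucket a sample as 'riskware' if any engine label matches, else 'malware'."""
    if any(RISKWARE.search(label) for label in detection_labels):
        return "riskware"
    return "malware"

print(categorize(["Trojan.Win32.Neshta.a"]))          # malware
print(categorize(["not-a-virus:RemoteAdmin.Win32"]))  # riskware
```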

Where to go from here

Our Q2 2019 Mid-market Threat and Incident Response Report was our first threat report, and much debate went into how to properly sanitize and present the data. In fact, we’re already incorporating feedback and methodology improvements into future versions of this report. Even so, the feedback received so far is interesting: for some, the data has confirmed anecdotal evidence seen by the teams we support in the field. It highlights the many challenges our industry faces, from categorization difficulties to the lack of threat information sharing.

Join us for a live webinar review of our Mid-market Threat and IR Report’s findings on September 18 at 1pm CT.

We’d love your feedback. Feel free to contact the author and Infocyte co-founder, Chris Gerritz, on LinkedIn or via Twitter @gerritzc.
