
How an MSSP successfully fought off a major cyber attack

This post was last updated on November 19th, 2021 at 03:27 pm

Here at Infocyte, we are helping our customers and partners respond to major attacks on almost a weekly basis. When I say attack, I don’t mean an antivirus notification about a bad file a user inadvertently downloaded. The attacks I am talking about are full-on, hands-on-keyboard (what red teamers call “interactive”) intrusions that have tunneled past all the network security controls and protections. While some of these attacks hit organizations that have under-invested in security, last month we saw one against what we would consider a secure organization with a FULL SentinelOne deployment. This particular threat actor was incredibly persistent, too; at one point I thought there might not be a way to save the network.

This is a tale of how one Infocyte partner, augmented by the Infocyte SOC, successfully beat back a major interactive attack.

Detection

In the second half of October, this partner, an MSP with a budding managed security practice, was investigating some activity on a customer network they managed. SentinelOne, their primary endpoint protection platform, had quarantined an odd file conspicuously named “C:\windows\spoolsv.exe” that had been dropped onto the filesystem of several servers they managed. Infocyte’s dashboard started lighting up like a Christmas tree with alerts for obfuscated PowerShell commands across those servers; there were a lot of alerts and indicators because the activity touched 60+ servers.

60+ similar alerts for Cobalt Strike’s first-stage PowerShell downloader were detected within minutes of each other
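
First-stage download cradles like these tend to share a few command-line tells. Infocyte’s actual alert logic is proprietary, so the sketch below is only an illustration of the kind of indicators involved: hidden-window and encoded-command flags, plus a download cradle inside the Base64 payload.

```python
import base64
import re

# Flags commonly seen in first-stage PowerShell download cradles.
# Illustrative heuristic only; not Infocyte's detection logic.
SUSPICIOUS_FLAGS = re.compile(
    r"-(nop|noprofile|w(indowstyle)?\s+hidden|enc(odedcommand)?|noni|noninteractive)",
    re.IGNORECASE,
)

def flag_powershell_cmdline(cmdline: str) -> list:
    """Return reasons a PowerShell command line looks like a download cradle."""
    reasons = []
    if "powershell" not in cmdline.lower():
        return reasons
    for m in SUSPICIOUS_FLAGS.finditer(cmdline):
        reasons.append("suspicious flag: " + m.group(0))
    # Try to decode an -enc payload and look for a download cradle inside it.
    enc = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.IGNORECASE)
    if enc:
        try:
            # PowerShell -EncodedCommand payloads are UTF-16LE before Base64.
            decoded = base64.b64decode(enc.group(1)).decode("utf-16-le", errors="ignore")
            if re.search(r"(DownloadString|IEX|Invoke-Expression|Net\.WebClient)",
                         decoded, re.IGNORECASE):
                reasons.append("encoded payload contains a download cradle")
        except Exception:
            pass
    return reasons
```

Seeing 60+ hits on a rule like this within minutes, as happened here, is itself a strong signal of scripted lateral movement rather than one-off admin activity.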

Infocyte’s SOC joined a conference call with the MSP’s SOC for an initial situation brief. Even though they had sufficient data from both Infocyte and SentinelOne to confirm the breach, they still had to answer the questions everyone needs answered FAST in situations like this:

Question: How many systems did they hit exactly? Could some of these alerts be duplicates or false positives?
Answer: 60+ servers were running multiple malicious commands. The attacker used more than one stolen Domain Administrator account to perform this lateral movement. They performed both highly suspect and benign administrator commands that required some context to correlate.

Question: SentinelOne stopped the spoolsv malware, so how are they maintaining access?
Answer: Cobalt Strike beacons are injected directly into the memory of a server. This type of memory-resident malware is nearly impossible for protection platforms to stop proactively. Infocyte’s proprietary memory scanning capability, however, is able to extract these beacons directly from memory.
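
One classic tell for injected code like a Cobalt Strike beacon is an executable memory region that is not backed by any file on disk. Real memory scanners (including Infocyte’s, whose internals are proprietary) do far more, such as parsing beacon configurations out of memory, but the following sketch shows the basic “unbacked executable region” heuristic; the `MemoryRegion` type is a stand-in for whatever your tooling returns.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class MemoryRegion:
    base: int
    size: int
    protection: str            # e.g. "rx", "rwx", "rw"
    mapped_file: Optional[str]  # path of the backing image, or None if private

def find_unbacked_executable(regions: List[MemoryRegion]) -> List[MemoryRegion]:
    """Flag executable regions with no backing file: candidates for injected code."""
    return [r for r in regions if "x" in r.protection and r.mapped_file is None]
```

Legitimately loaded DLLs and EXEs map back to files on disk, so private executable memory is rare on a healthy server and worth inspecting whenever it appears.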

Question: What system is the primary beachhead? How did they get in?
Answer: The customer’s Exchange Server appears to be the first system with malicious behaviors. Some of these initial indicators are prone to false positives, so they were tagged low severity by Infocyte and largely ignored by SentinelOne (we had to run an explicit search of the backing data to surface them). In context, though, these behaviors are gold for confirming the initial entry point, and the lateral movement alerts proved the Exchange Server was the primary origination point:

The Exchange server had multiple alerts for suspicious behaviors, from the Exchange web process running PowerShell to the server performing domain admin discovery.
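
The Exchange web worker process (w3wp.exe) spawning PowerShell is a classic web-shell and Exchange-exploitation indicator. The sketch below shows the kind of parent/child rule such an alert encodes; the rule table is illustrative, not a complete or vendor-specific list.

```python
# Parent processes that should essentially never spawn these children.
# Illustrative rule set; real products ship far larger behavioral rules.
SUSPICIOUS_PARENT_CHILD = {
    "w3wp.exe": {"powershell.exe", "cmd.exe", "net.exe"},
}

def is_suspicious_spawn(parent: str, child: str) -> bool:
    """True if this parent/child process pair matches a known-bad pattern."""
    children = SUSPICIOUS_PARENT_CHILD.get(parent.lower())
    return children is not None and child.lower() in children
```

Rules like this are individually noisy in some environments, which is why the low-severity tagging mentioned above still needed human context to become actionable.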

Additionally, Infocyte memory scans confirmed Cobalt Strike was loaded into memory of the Exchange Server:

By the end of the call the team had isolated several servers and blocked the IPs Cobalt Strike was communicating with. Round 1 complete… or so they hoped.

Head-to-head against an interactive threat

By this point it was obvious the attacker was interactive, because they noticed their executable payload and IPs had been blocked. Within the hour they got back in, switched IPs, and launched a script that attempted to disable several possible AV engines (a common technique). Oddly, they didn’t seem to know what security software was actually running, so they simply ran a script that targeted a long list of products.

Disabling antivirus with Cobalt Strike lets the actor drop ransomware that tools like MS Defender and SentinelOne are otherwise quite good at containing

The IP we blocked wasn’t their only C2 — as a competent attacker, they anticipated actions like the antivirus quarantining one of their malware payloads or a C2 IP being blocked.

The actor was ultimately able to retain administrative access through this fight for two primary reasons:

  1. The Exchange Server was vulnerable to a repeatable exploit — the actor could always return until it was patched or brought down.
  2. Endpoint Protection Platforms, even modern next-gen ones like SentinelOne, can’t stop customized Cobalt Strike injections and often miss them completely, hence the framework’s popularity among attackers.

This second flurry of activity saw the attacker double their efforts and switch up tactics. They proceeded to launch Cobalt Strike at EVERY server in the network using the following command:

This command sent the Cobalt Strike download stager to the list of all the customer’s servers found in all.txt
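
The attacker’s actual command isn’t reproduced here, but the fan-out shape it creates is itself detectable: one source host creating remote processes on many distinct targets within minutes. The hypothetical sketch below flags that pattern from (source, target, timestamp) process-creation events; the window and threshold values are illustrative defaults.

```python
from collections import defaultdict

def detect_fan_out(events, window_s: float = 300.0, min_targets: int = 10) -> set:
    """events: iterable of (source_host, target_host, epoch_seconds).
    Return source hosts that hit >= min_targets distinct targets within window_s."""
    by_source = defaultdict(list)
    for src, dst, ts in events:
        by_source[src].append((ts, dst))
    flagged = set()
    for src, items in by_source.items():
        items.sort()                      # order events by timestamp
        recent = []
        for ts, dst in items:
            recent.append((ts, dst))
            # keep only events inside the sliding time window
            recent = [(t, d) for t, d in recent if ts - t <= window_s]
            if len({d for _, d in recent}) >= min_targets:
                flagged.add(src)
    return flagged
```

In this incident the stager hit 60+ servers within minutes, which would clear a threshold like this many times over.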

During an intense OSM, Infocyte advised the team to perform a network purge: Take down all servers and reset all admin accounts at once.

This plan was advised because the only persistence mechanism we saw the attacker establish was a scheduled task on two servers. Isolating those two servers and initiating a network-wide reboot of the rest would hopefully purge the actor with minimal impact to services. If they tried to reconnect another way, they’d find the admin accounts disabled or carrying new passwords.

Purging a threat hurts… let it.

Things didn’t go exactly to plan: executives briefed on the plan asked their team to delay taking down a couple of critical servers, since bringing them down unannounced during the day would really hurt the business. One of those critical servers, the server providing VOIP phone service to the company, was the attacker’s primary beachhead… (smart attacker… smart…). Because of the care the team was taking to minimize service interruptions, the actor retained access through the purge.

This is, unfortunately, a common request incident responders hear, and treading softly while the threat has admin access can often be the difference between purging the threat and having to rebuild the entire network from scratch. In this case, the attacker simply responded by switching tactics again: they sent out multiple scripts to enable Remote Desktop with reduced authentication requirements. This was likely a fallback to ensure they didn’t lose the network before ransoming it.
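
On Windows, enabling Remote Desktop and weakening its authentication typically means flipping two well-known registry values: fDenyTSConnections (0 = RDP allowed) and UserAuthentication (0 = Network Level Authentication off). The registry paths below are the standard RDP settings; the audit logic itself is an illustrative sketch over a captured snapshot of values.

```python
# (key path, value name) -> the risky setting an attacker would flip to.
RISKY_RDP_SETTINGS = {
    (r"HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server",
     "fDenyTSConnections"): 0,       # 0 = inbound RDP connections allowed
    (r"HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp",
     "UserAuthentication"): 0,       # 0 = Network Level Authentication disabled
}

def audit_rdp(registry_values: dict) -> list:
    """Return findings for RDP settings matching the risky combination."""
    findings = []
    for key, risky_value in RISKY_RDP_SETTINGS.items():
        if registry_values.get(key) == risky_value:
            findings.append(key[0] + "\\" + key[1] + " = " + str(risky_value))
    return findings
```

Watching for unexpected writes to these two values across a server fleet is a cheap way to catch exactly the fallback the actor attempted here.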

It took another attempt at the above plan — this time with more commitment — before the team was finally able to purge the actor. In the end, over this two-day fight, the actor had:

  • Compromised an Exchange Server, a database server, a VOIP server, and many other critical services
  • Performed 20 separate lateral movement pushes to expand their footprint
  • Used 3 stolen administrator accounts and a service account
  • Created a rogue administrator account of their own

We found no evidence that they stole any data during their brief time on those systems. This is likely due to how hard they had to fight for access. We pored over every server log we could and watched our consoles for weeks to confirm the actor did not return. Mission successful.

Takeaways

Some people might tag this actor as “sophisticated,” but this scenario has become pretty common for organized ransomware groups. I dare say most of today’s ransomware groups would be considered “APT” by standards set 5 years ago.

A common mistake companies make when responding to attacks like this is to assume the antivirus quarantine of a file or script stopped the attacker. Multi-stage attackers anticipate a struggle and often have fallbacks that are hard to see on protection platforms lacking strong behavioral analytics. Even with a strong endpoint platform, it takes expertise to interpret and understand the behaviors it surfaces. This is where a Managed Detection & Response (MDR) service comes in.

Most companies face an incident like this at most once every couple of years. No incident response plan will ever be good enough for a team with that little experience. For that reason, we believe it’s essential that companies not rely on automated platforms alone to protect them. You need an MDR service with experienced responders who can be called on. We have some of the best at Infocyte — so much so that nearly all of our MSSP partners leverage our SOC as an elite augmentation for their team during incidents.

Test out Infocyte’s endpoint + Microsoft 365 detection and response platform for free. Sign up for our community edition here and get started in minutes.