WannaCry Response

WannaCry hit. There are a million excellent write-ups on the malware itself. How about one on how to respond when word arrives from the news rather than from a help desk ticket telling you you're infected?

Initial reports started coming out on the morning of Friday, May 12th that the NHS was experiencing a significant outage due to ransomware. I didn’t think anything of it; ransomware is common enough. It wasn’t until the malware was given a name, and the spread method was identified as MS17-010, that I took notice.

  • In previous incidents, email was the attack vector. Everyone seemed to initially assume there was an email.
  • Windows XP was assumed to be the weakness; instead, it is looking like Windows 7 was the weak link for most organizations.
  • Given SMB was the attack vector, how does an organization respond? Most rapid responses for WannaCry seem high risk.
    • I suspect an organization with SMB open at the edge could block it there with minimal impact. Throwing up ACL blocks to segment the internal network, or disabling SMBv1 under stress, seems like an extremely high-risk change.
    • Endpoint vendors (e.g. McAfee, Symantec, and Trend Micro) released guides for customers. These seem like lower-risk mitigations.
    • Given the attack was on a Friday, ‘malware Monday’ was my real concern. A device leaves a secure network missing a patch, spends the weekend on the open internet, then comes back. Even one device is an issue, let alone any file system it can access or the internal movement as it spreads. ACL blocks are high risk, and private VLANs would be absurd to implement as an emergency change.
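A quick way to answer the "is SMB open at the edge?" question is to probe TCP 445 from outside the network. A minimal Python sketch (the hosts you'd probe are your own edge addresses; nothing here names a real target):

```python
import socket

def smb_exposed(host, port=445, timeout=3.0):
    """Return True if a TCP connection to the SMB port succeeds.

    Run this from OUTSIDE the network edge; a successful connect means
    the port is reachable, not necessarily that SMBv1 is negotiable.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Sweeping your own edge ranges with something like this (or plain `nmap -p 445`) is a far lower-risk first step than emergency internal ACL changes.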

PS: Splunk finally started releasing dashboards around major security incidents. I’ve been asking for dashboards from them for years. The dashboard seems really well done. It has use cases and an export (with the Splunk logo) suitable for leadership. If I weren’t sure what to do, this would be a great start, and it’s a great advertisement for the benefit of Splunk come renewal time.

Finish Or Start New?

What’s better for security: finishing the deployment of a functional product or starting fresh with a modern one? It seems like it really depends on circumstances.

I had an alright MSSP. There were some problems though. Picking a new MSSP allowed me to fix some configuration issues I couldn’t resolve due to organizational inertia.

On the other hand, I had hardware for network segmentation, and additional hardware would have been pretty affordable. My issue was a lack of FTEs. When leadership buy-in was available, we had to use a hot new product and start over. I’m not sure we were any better off after the replacement. The new product wasn’t really doing anything that much better. It had the potential, but we lacked the FTEs to make it happen.

Similarly, I went from an integrated IDS/IPS that wasn’t managed to a newer, stand-alone profiling IDS/IPS. Given the previous product was integrated, I’d argue we clearly lost effectiveness, since the FTE cost was higher with the stand-alone product.

On to Monitoring.

I’ve got a SIEM-type product. It’s not fully deployed, but who can say their SIEM is really fully deployed and tuned? I’m looking at UEBA and analytics platforms, but I suspect my time would be better spent on one well-tuned SIEM than on three partially deployed platforms. The vendors mostly agree, or at least admit, that these platforms all have steep FTE requirements for success.

Microsoft ATA is extremely affordable and claims to be a UEBA-type product. It ignores the non-Microsoft user realms, but can an attacker get in and get data out of an environment without touching anything in Active Directory?

I couldn’t begin to write anything close to comparable to Anton Chuvakin’s blog post on the subject of security analytics.

It seems like money and FTE effort would be better devoted to furthering SIEM deployment completeness and maturity.

Holistic Enterprise Security

I participated in a brief discussion on Twitter regarding the Podesta email breach. I blame the DNC for the breach.

Clinton and Podesta were functionally using shadow IT. There really isn’t any excuse for not detecting it. So then what? Do you declare it unsupported or assess the risk and implement compensating controls?

How expensive is monitoring for a few dozen extra email addresses on HaveIBeenPwned?
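The answer is roughly "an API key and a cron job." A hedged sketch against the HIBP v3 API (the endpoint and header below follow the published v3 documentation, but verify them before relying on this; v3 requires a paid API key):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{}"

def breaches_for(email, api_key):
    """Return the list of breach names HIBP knows for one address.

    HIBP answers 404 when an address appears in no known breaches.
    """
    req = urllib.request.Request(
        HIBP_URL.format(urllib.parse.quote(email)),
        headers={"hibp-api-key": api_key, "user-agent": "vip-monitor"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return [b["Name"] for b in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no breaches for this address
            return []
        raise
```

Loop that over the few dozen VIP addresses nightly and alert on any non-empty result. The marginal cost is close to zero.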

These folks already have administrative assistants and likely physical security details. If your organization is willing to spend those resources, why can’t you spend some resources assisting with the secure configuration of personal devices?

To take it a step further, give them a SOHO router IT remotely manages. You can decide how comfortable the VIP and the organization are with the relationship. Are you just doing secure configuration? You can throw OpenDNS on it for ‘blind’ basic security.

In a Cisco environment, you can hand out Cisco 881 routers that automatically VPN back. The VIP can take home a hardware VoIP phone, and you can provision dedicated wireless for the VIP’s devices. The person won’t even have to worry about a VPN client in their home office.

Deception Technology?

Deception technology seems to be the latest buzzword. It was everywhere at Black Hat and now my favorite Gartner researchers are covering it.

I talked to the vendors at Black Hat, and I can’t figure out what they’re offering that I couldn’t hand to an intern to deploy over a summer. In fact, I worked with an intern over a summer to deploy the basic technology myself.

My deception environment:

  • I reserved random IP space in a couple of segments of the network and assigned likely machine names. If the administrative cost of deploying a sensor is a concern, null route the space at the local router/switch and log the traffic.
  • In the server space, I requested standard Linux and Windows systems built following the IT standards. Throw standard software on them with default configs and log the activity.
  • I created some honeypot DNS entries. Payroll, Mainframe, etc.
  • I had some honeypot accounts created in Active Directory following naming standards. I didn’t request that the IT staff log in to some systems; that’s a clear deficiency in my setup. I should have had the Oracle team create an extra account, the AD admins create an extra account, and so on.
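The payoff from a setup like this is mostly a log-watching exercise: any traffic touching the decoy space is interesting by definition. A toy sketch of the matching logic (the decoy ranges, names, and log format are placeholders, not my real environment):

```python
import ipaddress

# Placeholder decoy environment: a reserved subnet plus honeypot DNS names.
DECOY_NETS = [ipaddress.ip_network("10.9.40.0/28")]
DECOY_NAMES = {"payroll", "mainframe"}

def is_decoy_hit(dst):
    """True if dst (an IP string or hostname) touches the decoy environment."""
    try:
        addr = ipaddress.ip_address(dst)
        return any(addr in net for net in DECOY_NETS)
    except ValueError:  # not an IP; treat it as a hostname
        return dst.split(".")[0].lower() in DECOY_NAMES

def decoy_hits(log_lines):
    """Yield (src, dst) pairs for log lines of the form 'src -> dst'."""
    for line in log_lines:
        src, _, dst = line.partition(" -> ")
        if dst and is_decoy_hit(dst.strip()):
            yield src.strip(), dst.strip()
```

Feed it connection logs from the null-routing switch and every hit is worth a look, since nothing legitimate should ever resolve or route there.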

I didn’t get any confirmed malicious hits, but I did catch some unexpected scanners: printing services performed an unplanned scan of the user environment, a system admin incremented system numbering into my space, etc.

Clintonemail & Shadow IT

Note: The information in this post is from the Wikipedia article.

The Clinton email scandal is an interesting shadow IT failure. Based on the Wikipedia article, the domain and mail server were set up in 2008 at a residence before moving to a data center in 2013 for management.

In this situation, we have a user who wants mobile device access on her BlackBerry. I was a BlackBerry Enterprise Server admin at the time. At that point in my career, I had advocated for and implemented a mandatory password policy and a mandatory device encryption policy on BlackBerry devices for a global enterprise. I’d also implemented device health and status monitoring using BlackBerry’s tooling, and BlackBerry offered remote wipe capabilities that mostly worked. She was asking for the most secure device of its day; she wasn’t asking for a first-generation iPhone or an early Android. IT just said no, so the user went rogue and set up her own mail server. I’m going to guess her setup didn’t include a BlackBerry Enterprise Server.

There should be a history of security incidents / reports around this mail server usage. Why didn’t the security monitoring team notice anything? From a mail flow perspective, this should have stood out as odd. I’m assuming this would have been a relatively high volume external domain. Why didn’t the DLP team notice anything? There should have at least been alerts for classified information leaving the network. Why didn’t audit / compliance catch this? They should have caught the mail filter exemptions that surely existed.

Unless they all did and the reports were buried…
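From the mail-flow angle in particular, the check is almost trivial: aggregate recipient domains from the mail logs and look at the top external talkers. A toy sketch (the internal domain set is an assumption for illustration):

```python
from collections import Counter

INTERNAL_DOMAINS = {"state.gov"}  # placeholder for the real internal domains

def top_external_domains(recipient_addresses, n=10):
    """Count messages per external recipient domain, highest volume first."""
    domains = (addr.rsplit("@", 1)[-1].lower() for addr in recipient_addresses)
    counts = Counter(d for d in domains if d not in INTERNAL_DOMAINS)
    return counts.most_common(n)
```

A personal domain sitting near the top of that list for years is exactly the kind of thing a monthly mail-flow report should have surfaced.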

The final wrinkle in the story for me is she eventually moved the server to a cloud provider. Their website mentions they provide managed security services. What are their security practices like? Could any of their employees see her data? Did the investigation include reviewing data handling practices at this organization?


As a follow-up, I listened to the entire testimony via the Lawfare Podcast. Per the testimony, James Comey indicated the content on her mail server was improperly marked; a properly marked document should have a header and a footer. How did the DLP solution allow an improperly marked document to leave the network? Shouldn’t it have flagged the document as non-compliant?
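A banner check is about the simplest DLP rule there is: a properly marked document carries a classification banner at both the top and the bottom. A toy sketch (real marking rules are far more involved; this only inspects the banner lines):

```python
import re

# Matches a classification banner at the start of a line (simplified).
BANNER = re.compile(r"\s*(TOP SECRET|SECRET|CONFIDENTIAL)\b", re.I)

def marking_status(text):
    """Classify a document by whether its first and last lines carry a banner."""
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return "unmarked"
    head = bool(BANNER.match(lines[0]))
    foot = bool(BANNER.match(lines[-1]))
    if head and foot:
        return "marked"
    if head or foot:
        return "improperly marked"  # banner on one end only
    return "unmarked"
```

Anything short of "marked" should at least raise a flag for a document that actually contains classified content, rather than being waved through.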

UBA / User Behavior Analytics

I’m in the process of evaluating behavior analytics tools. UBA and UEBA seem to be the popular acronyms for the product space.

It appears the space has three separate platform styles. You’ve got your add-ons, such as CyberArk Privileged Threat Analytics, Microsoft Advanced Threat Analytics, and Rapid7 InsightUBA. You’ve got your ‘independent’ platforms from Exabeam and Gurucul. Finally, there are the network analytics platforms, such as Observable Networks and Pwnie Express. My prediction is Microsoft is going to absolutely crush the user-analytics competition.

If you’ve already got a heavy Microsoft environment, ATA’s cost is negligible compared to the enormous additional cost of expanding your CyberArk or Rapid7 environments. Those platforms can clearly cover more ground than just Microsoft platforms, but is it worth the additional cost and the proper environment configuration? It’s much easier to blindly stumble into a functional Microsoft environment than to properly build a Linux environment. Local account usage seems significantly less common in enterprise Microsoft environments than in enterprise Linux environments, from what I’ve seen and heard.

I like the network analytics platforms, but they’ve got a battle against Cisco’s Stealthwatch platform. Cisco has a really weak ability to deliver in the security space, but they’ve got integration benefiting them here. It’s tough to battle against Cisco security products in a Cisco environment. We’ll have to see how the other network analytics products perform.

DBIR – Exfil Time – Explore v Smash & Grab

Anton Chuvakin’s review of the DBIR is my favorite. It is super concise and to the point.

Document page 10 (PDF page 14) has a chart comparing compromise time and exfiltration time. Compromise time is typically measured in minutes, while exfiltration takes minutes for about 20% of breaches and days for about 70%. Both make sense given the data on hand.

If you’re getting in, you’re getting in relatively quickly: the message is delivered or the application vulnerability is found. If you fail, you change tactics, and your next attack likely won’t be correlated with the previous one. I’m not aware of many organizations doing any real threat intelligence. My MSSPs and threat data providers can never answer whether they’ve seen the same alerts from fellow customers.

My guess is the time to exfiltration depends on the compromise. The shorter exfiltrations are cases where the target had the data; the longer ones are cases where a pivot was required. If the attacker can pull off a smash and grab, the exfiltration will be quick.

It’s too bad the vulnerability section is so poorly sourced. Then again, patching and QA typically aren’t given the resources to function properly at most organizations anyway.