WannaCry hit. There are a million excellent write-ups on the malware itself. How about one on how to respond when the notification isn’t a help desk ticket?
Initial reports started coming out on the morning of Friday, May 12th that the NHS was experiencing a significant outage due to ransomware. I didn’t think anything of it; ransomware is common enough. It wasn’t until the malware was given a name, and the method of spread was identified as MS17-010, that I started paying attention.
- In previous incidents, email was the attack vector. Everyone seemed to initially assume there was an email.
- Windows XP was assumed to be the weakness; instead, it is looking like Windows 7 was the weak link for most organizations.
- Given SMB was the attack vector, how does an organization respond? Most rapid responses for WannaCry seem high risk.
- I suspect an organization with SMB open at the edge could block it with minimal impact. Throwing up ACL blocks to segment the network or disabling SMBv1 under stress seems like an extremely high-risk change.
- Endpoint vendors (ex: McAfee, Symantec, and Trend Micro) released guides for customers. These seem like lower risk mitigations.
- Given the attack was on a Friday, ‘malware Monday’ was my real concern. A device leaves a secure network missing a patch, spends the weekend on the open internet, then comes back. Even one device is an issue, let alone any file system it can access or internal movement as it spreads. ACL blocks are high risk, and private VLANing would be absurd to implement as an emergency change.
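Before reaching for ACL blocks, it helps to know the actual exposure. A minimal sketch of a TCP/445 inventory scan, assuming a hypothetical segment (192.0.2.0/30 is a placeholder; substitute the range under review):

```python
import socket
from ipaddress import ip_network

def smb_exposed(host: str, timeout: float = 0.3) -> bool:
    """Return True if TCP/445 accepts a connection on `host`."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    # Placeholder range; swap in the segment you actually care about.
    for addr in ip_network("192.0.2.0/30").hosts():
        if smb_exposed(str(addr)):
            print(f"{addr} exposes SMB")
```

An empty result from the edge is the low-risk case: blocking 445 there changes nothing for legitimate traffic.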
PS: Splunk finally started releasing dashboards around major security incidents. I’ve been asking them for dashboards for years. The dashboard seems really well done. It has use cases and an export (with the Splunk logo) suitable for leadership. If I weren’t sure what to do, this would be a great start, and a great advertisement for the benefit of Splunk come renewal time.
What’s better for security: finishing the deployment of a functional product or starting fresh with the modern product? It seems like it really depends on circumstances.
I had an alright MSSP. There were some problems though. Picking a new MSSP allowed me to fix some configuration issues I couldn’t resolve due to organizational inertia.
On the other hand, I had hardware for network segmentation, and additional hardware would have been pretty affordable. My issue was a lack of FTEs. When leadership buy-in became available, we had to use a hot new product and start over. I’m not sure we were any better off after the replacement. The new product wasn’t doing anything that much better. It had the potential, but we lacked the FTEs to make it happen.
Similarly, I went from an integrated IDS/IPS that wasn’t managed to a newer stand-alone profiling IDS/IPS. Given the previous product was integrated, I’d argue we clearly lost effectiveness, since the FTE cost was higher with the stand-alone product.
On to Monitoring.
I’ve got a SIEM type product. It’s not fully deployed, but who can say their SIEM is really fully deployed and tuned? I’m looking at UEBA and analytics platforms, but I suspect my time would be better spent on one well-tuned SIEM than on three partially deployed platforms. The vendors mostly agree/admit that these platforms all have steep FTE requirements for success.
Microsoft ATA is extremely affordable and claims to be a UEBA type product. It ignores the non-Microsoft user realms, but can an attacker get in and get data out of an environment without touching anything Active Directory?
I couldn’t begin to write anything close to comparable to Anton Chuvakin’s blog post on the subject of security analytics.
It seems like money and FTE effort would be better devoted to furthering SIEM deployment completeness and maturity.
I completed my SANS GMON / Continuous Monitoring certification. I’m pleased with the process.
My training history: a web app course at DerbyCon (amazing for the cost) in 2011 or so, plus Tao Security‘s old Black Hat course on security monitoring and the Black Hat version of Offensive Countermeasures, both at Black Hat 2010(?). Offensive Countermeasures was interesting, but I’ve never been in an environment where I could apply any of it.
SANS SEC511 felt like a five day version of Tao Security’s course. I wish this course had existed when I was new.
SEC511 covered a wide range of tools you’d likely encounter in an enterprise, and the labs covered some of the more functional open-source alternatives. It just seemed to make sense. Bro is a must for security monitoring, while ModSecurity is a bit of a bastard to work with and less likely to provide value in most environments.
The exam was tailored more toward understanding the tools and designing a functioning monitoring environment. I was concerned the test was going to be a memory game about the various flags in the tools.
If you’re reading this and are concerned about the test, @Hacks4Pancakes’ guide for SANS on her website is fantastic.
I participated in a brief discussion on Twitter regarding the Podesta email breach. I blame the DNC for the breach.
Clinton and Podesta were functionally using shadow IT. There really isn’t any excuse for not detecting it. So then what? Do you declare it unsupported or assess the risk and implement compensating controls?
How expensive is monitoring for a few dozen extra email addresses on HaveIBeenPwned?
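Cheap, as it turns out. A sketch of such a check against the HaveIBeenPwned v3 API (the API key, the user-agent string, and the address list would be yours; HIBP answers 404 when an address appears in no breach):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def new_breaches(payload: str, seen: set) -> list:
    """Breach names in an HIBP response body that are not yet in `seen`."""
    return [b["Name"] for b in json.loads(payload) if b["Name"] not in seen]

def check(email: str, api_key: str):
    """Return the HIBP response body for `email`, or None if unbreached."""
    req = urllib.request.Request(
        API + urllib.parse.quote(email),
        headers={"hibp-api-key": api_key, "user-agent": "vip-watch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:  # HIBP's "no breaches found" response
            return None
        raise
```

Run it daily from cron against a few dozen addresses, diff against the breaches you’ve already handled, and alert on anything new. That’s the whole cost.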
These folks already have administrative assistants and likely physical security details. If your organization is willing to spend those resources, why can’t you spend some resources assisting with the secure configuration of personal devices?
To take it a step further, give them a SOHO router IT remotely manages. You can decide how comfortable the VIP and the organization are with the relationship. Are you just doing secure configuration? You can throw OpenDNS on it for ‘blind’ basic security.
In a Cisco environment, you can hand out 881 devices that automatically VPN back. The VIP can take home a hardware VoIP phone. You can provision dedicated wireless for the VIP’s devices. The person won’t even have to worry about VPN in their home office.
Deception technology seems to be the latest buzzword. It was everywhere at Black Hat and now my favorite Gartner researchers are covering it.
I talked to the vendors at Black Hat. I can’t figure out what they are offering that I couldn’t have an intern deploy over a summer. In fact, I worked with an intern one summer to deploy the basic technology.
My deception environment:
- I reserved random IP space and assigned likely machine names in a couple of segments of the network. If the administrative cost of deploying a sensor is a concern, null route the space at the local router/switch and log the traffic.
- In the server space, I requested standard Linux and Windows systems following the IT standards. Throw standard software on them with default configs. Log the activity.
- I created some honeypot DNS entries. Payroll, Mainframe, etc.
- I had some honeypot accounts created in Active Directory following naming standards. I didn’t request that IT staff log in to some systems; that’s a clear deficiency in my setup. I should have requested the Oracle team create an extra account, the AD admins create an extra account, etc.
I didn’t get any confirmed malicious hits, but I did catch some unexpected scanners. Printing services performed an unexpected/unplanned scan of the user environment, a system admin incremented system numbering into my space, etc.
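A decoy sensor doesn’t need to be elaborate. A minimal sketch of a listener for one of the reserved decoy IPs: accept any TCP connection on a tempting port, record the source, send nothing back (the Oracle listener port is just an example; any contact at all is the signal):

```python
import datetime
import socket

def serve(port: int, count: int) -> list:
    """Accept `count` TCP connections on `port`; log and return sources."""
    hits = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))  # e.g. port 1521 on a decoy "Oracle" host
        srv.listen()
        for _ in range(count):
            conn, (src, sport) = srv.accept()
            conn.close()  # send nothing back; the touch itself is the alert
            stamp = datetime.datetime.now().isoformat()
            print(f"{stamp} touch from {src}:{sport} on tcp/{port}")
            hits.append(f"{src}:{sport}")
    return hits
```

In practice you’d daemonize it, run it on each decoy address, and ship the log lines to the SIEM; nothing legitimate should ever appear there.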
I participated in the Pros V Joes CTF competition at BSides Las Vegas this year. It was intense.
The setup on day one is that you are a member of a blue team entering a compromised environment. You have multiple tasks: respond to service requests, maintain uptime for a couple services such as WordPress and ftp, find artifacts left behind by the attackers, and repel new attacks. The attackers don’t attack for the first two hours.
The environment was very well designed. We had vSphere to see most systems. There was a nice mix of Windows endpoint and server versions, a few Linux systems, an Asterisk PBX, and a pfSense firewall. I’m happy it was a pfSense firewall. I’m told they used Cisco ASA in previous years. The ASA is extremely rough to manage; I’d prefer to never see one again.
We started by doing network discovery, patching, and checking configurations. We also started responding to customer requests via calls, tickets, and emails. The tickets were all pretty basic and represented real world requests in this situation. Through network discovery, we found a few systems the customer neglected to mention to us.
Our failing was our lack of experience with Asterisk. We focused on other systems as we knew those systems. The red team immediately hit it when they were able to attack and took it down. While the phone wasn’t under SLA, we couldn’t receive tickets via phone and started receiving email tickets asking us to fix the phone system.
Day two was a repeat of the day-one environment with some minor changes. A member of the red team would join us, and we would be battling the other blue teams. We could start attacking each other immediately. Given the results of day one, the PBX was the main target. Everyone’s PBX immediately went down. I’d suggested a strategy of immediately blocking the Internet from our environment and taking the SLA hit while we patched. We decided against the strategy, hoping we could remediate fast enough.
Would I participate again: Yes!
Note: The information in this post is from the Wikipedia article.
The Clinton email scandal is an interesting shadow IT failure. Based on the Wikipedia article, the domain and mail server were set up in 2008 at a residence before moving to a data center in 2013 for management.
In this situation, we have a user who wants mobile device access on her BlackBerry. I was a BlackBerry Enterprise admin at that time. At that point in my career, I had advocated for and implemented a mandatory password policy and a mandatory device encryption policy on BlackBerry devices for a global enterprise. I’d also implemented device health/status monitoring using the BlackBerry monitoring tools. BlackBerry also offered remote wipe capabilities that mostly worked. She was asking for the most secure device of that era; she wasn’t asking for a first-generation iPhone or an early Android. IT just said no, so the user went rogue and set up her own mail server. I’m going to guess her setup didn’t include a BlackBerry Enterprise Server.
There should be a history of security incidents / reports around this mail server usage. Why didn’t the security monitoring team notice anything? From a mail flow perspective, this should have stood out as odd. I’m assuming this would have been a relatively high volume external domain. Why didn’t the DLP team notice anything? There should have at least been alerts for classified information leaving the network. Why didn’t audit / compliance catch this? They should have caught the mail filter exemptions that surely existed.
Unless they all did and the reports were buried…
The final wrinkle in the story for me is she eventually moved the server to a cloud provider. Their website mentions they provide managed security services. What are their security practices like? Could any of their employees see her data? Did the investigation include reviewing data handling practices at this organization?
As a followup, I listened to the entire testimony via the Lawfare Podcast. Per the testimony, James Comey indicated the content on her mail server was improperly marked. A properly marked document should have a header and a footer. How did the DLP solution allow an improperly marked document to leave the network? Shouldn’t it have flagged the document as being non-compliant?
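A rough sketch of the banner check such a DLP rule implies, assuming the classification banner must appear as both the first and last line (the marking strings here are illustrative, not the actual marking standard):

```python
import re

# Illustrative banner pattern; real marking standards are more involved.
BANNER = re.compile(r"^(TOP SECRET|SECRET|CONFIDENTIAL|UNCLASSIFIED)\b")

def properly_marked(document: str) -> bool:
    """True if a classification banner opens and closes the document."""
    lines = [ln.strip() for ln in document.strip().splitlines()]
    if len(lines) < 2:  # can't carry both a header and a footer
        return False
    return bool(BANNER.match(lines[0])) and bool(BANNER.match(lines[-1]))
```

A document failing this check is exactly the improperly marked case: a DLP rule keyed on banners has nothing to match on, which may be why nothing fired. The harder, content-based detection would have had to catch it instead.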