We don’t yet know who has the Equifax data or the entirety of the information they have.
We know what we can see when we pull a credit report. We see our addresses, our phone numbers, and a list of every line of credit associated with our name.
The OPM data is in the hands of the Chinese government. They probably won’t leak / sell it.
What if the Equifax data gets sold or dumped? We could be looking at a collapse in remote verification. How do you verify someone when the answers to the standard questions are entirely public? I don’t know how.
Traditional banks could require in-person visits rather than online setup / reset. That’ll work for a good number of people. The transient (college students, travelers, armed services) will all be out of luck if their bank is regional and they’re out of the region. Online financiers like PayPal will be entirely out of luck.
What about healthcare providers? How do they handle their customers? If I have questions about my health insurance, do I have to travel to a participating practice? That seems burdensome.
Will we see authentication services spring up? Oh, you’d like to open a PayPal account? You’ll need to visit a USPS facility or AuthCo for a verification code to complete your signup / application.
WannaCry hit. There are a million excellent write-ups on the malware itself. How about one on how to respond when the first notice isn’t a help desk ticket telling you about the malware?
Initial reports started coming out on the morning of Friday, May 12th that the NHS was experiencing a significant outage due to ransomware. I didn’t think anything of it; ransomware is common enough. It wasn’t until the malware was given a name and its spread was attributed to MS17-010 that I took notice.
- In previous incidents, email was the attack vector, so everyone seemed to assume at first that there was an email.
- Windows XP was assumed to be the weakness; instead, it is looking like Windows 7 was the weak link for most organizations.
- Given SMB was the attack vector, how does an organization respond? Most rapid responses for WannaCry seem high risk.
- I suspect an organization with SMB open at the edge could block it with minimal impact. Throwing up ACL blocks to segment the network or disabling SMBv1 under stress seems like an extremely high-risk change. Knowing where SMB is actually exposed would help; see the sketch after this list.
- Endpoint vendors (e.g., McAfee, Symantec, and Trend Micro) released guides for customers. These seem like lower risk mitigations.
- Given the attack was on a Friday, ‘malware Monday’ was my real concern. A device leaves a secure network missing a patch, spends the weekend on the open internet, then comes back. Even one device is an issue, let alone any file system it can access or internal movement as it spreads. ACL blocks are high risk, and private VLANs would be absurd to implement as an emergency change.
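Before touching ACLs or SMBv1, I’d at least want an inventory of where SMB is actually reachable. Below is a minimal sketch in Python, assuming a made-up subnet; it only checks whether TCP/445 answers from wherever you run it, and does not test for MS17-010 itself.

```python
#!/usr/bin/env python3
"""Sketch: inventory hosts answering on TCP/445 (SMB) in a subnet."""
import ipaddress
import socket

SUBNET = "10.0.0.0/24"  # hypothetical range; substitute your own
TIMEOUT = 0.5           # seconds per host

def smb_open(host: str) -> bool:
    """Return True if TCP/445 accepts a connection."""
    try:
        with socket.create_connection((host, 445), timeout=TIMEOUT):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for ip in ipaddress.ip_network(SUBNET).hosts():
        if smb_open(str(ip)):
            print(f"{ip} answers on tcp/445")
```

Run it from outside the edge and again from a user segment; the difference tells you whether an edge block is even worth the change risk.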
PS: Splunk finally started releasing dashboards around major security incidents. I’ve been asking them for dashboards for years. The dashboard seems really well done. It has use cases and an export (with the Splunk logo) suitable for leadership. If I weren’t sure what to do, this would be a great start, and it’s a great advertisement for the benefit of Splunk come renewal time.
What’s better for security: finishing the deployment of a functional product or starting fresh with the modern product? It seems like it really depends on circumstances.
I had an alright MSSP. There were some problems, though. Picking a new MSSP allowed me to fix some configuration issues I couldn’t resolve due to organizational inertia.
On the other hand, I had hardware for network segmentation, and additional hardware would have been pretty affordable. My issue was a lack of FTEs. When leadership buy-in was available, it came with a requirement to use a hot new product and start over. I’m not sure we were any better off after the replacement. The new product wasn’t doing anything that much better. It had the potential, but we lacked the FTEs to make it happen.
Similarly, I went from an integrated IDS/IPS that wasn’t managed to a newer, stand-alone profiling IDS/IPS. Given the previous product was integrated, I’d argue we clearly lost effectiveness, since the FTE cost was higher with the stand-alone product.
On to Monitoring.
I’ve got a SIEM-type product. It’s not fully deployed, but who can say their SIEM is really fully deployed and tuned? I’m looking at UEBA and analytics platforms, but I suspect my time would be better spent on one well-tuned SIEM than on three partially deployed platforms. The vendors mostly agree/admit that these platforms all have steep FTE requirements for success.
Microsoft ATA is extremely affordable and claims to be a UEBA-type product. It ignores non-Microsoft user realms, but can an attacker get in and get data out of an environment without touching anything Active Directory?
I couldn’t begin to write anything close to comparable to Anton Chuvakin’s blog post on the subject of security analytics.
It seems like money and FTE effort would be better devoted to furthering a SIEM deployment’s completeness and maturity.
I completed my SANS GMON / Continuous Monitoring certification. I’m pleased with the process.
My training history: a web app course at DerbyCon (amazing for the cost) in 2011 or so, plus Tao Security’s old Black Hat course on security monitoring and the Black Hat version of Offensive Countermeasures, both at Black Hat 2010(?). Offensive Countermeasures was interesting, but I’ve never been in an environment where I could apply any of it.
SANS SEC511 felt like a five day version of Tao Security’s course. I wish this course had existed when I was new.
SEC511 covered a wide range of tools you’d likely encounter in an enterprise, and the labs covered some of the more functional open source alternatives. It just seemed to make sense. Bro is a must for security monitoring, while ModSecurity is a bit of a bastard to work with and less likely to provide value in most environments.
The exam was tailored more toward understanding the tools and designing a functioning environment for monitoring. I had been concerned the test was going to be a memory game of the various flags in the tools.
If you’re reading this and are concerned about the test, @Hacks4Pancakes’ guide to SANS exams on her website is fantastic.
I participated in a brief discussion on Twitter regarding the Podesta email breach. I blame the DNC for the breach.
Clinton and Podesta were functionally using shadow IT. There really isn’t any excuse for not detecting it. So then what? Do you declare it unsupported or assess the risk and implement compensating controls?
How expensive is monitoring for a few dozen extra email addresses on HaveIBeenPwned?
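Not very, as far as I can tell. Here’s a minimal sketch against the current HaveIBeenPwned v3 API; note that v3 requires a paid API key and a descriptive User-Agent, and the key, addresses, and sleep interval below are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: check a list of VIP addresses against HaveIBeenPwned."""
import time
import requests

API_KEY = "YOUR-HIBP-API-KEY"                         # placeholder
ADDRESSES = ["vip1@example.com", "vip2@example.com"]  # placeholders

def breaches_for(account: str) -> list:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": API_KEY, "user-agent": "vip-monitor-sketch"},
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means no known breach
        return []
    resp.raise_for_status()
    return resp.json()           # list of breach objects

if __name__ == "__main__":
    for addr in ADDRESSES:
        names = [b["Name"] for b in breaches_for(addr)]
        print(f"{addr}: {names or 'no known breaches'}")
        time.sleep(7)            # stay under the API rate limit
```

Schedule it daily and diff the output; that’s the whole monitoring program.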
These folks already have administrative assistants and likely physical security details. If your organization is willing to spend those resources, why can’t you spend some resources assisting with the secure configuration of personal devices?
To take it a step further, give them a SOHO router IT remotely manages. You can decide how comfortable the VIP and the organization are with the relationship. Are you just doing secure configuration? You can throw OpenDNS on it for ‘blind’ basic security.
In a Cisco environment, you can hand out 881 devices that automatically VPN back. The VIP can take home a hardware VoIP phone. You can provision dedicated wireless for the VIP’s devices. The person won’t even have to worry about VPN in their home office.
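As a sanity check on the OpenDNS piece: OpenDNS answers TXT queries for debug.opendns.com only when you’re resolving through its servers. A minimal sketch, assuming dnspython 2.x is installed and that it runs from behind the managed router:

```python
#!/usr/bin/env python3
"""Sketch: confirm a network actually resolves through OpenDNS."""
import dns.resolver  # third-party: pip install dnspython

def using_opendns() -> bool:
    try:
        answers = dns.resolver.resolve("debug.opendns.com", "TXT")
    except dns.resolver.NXDOMAIN:
        return False  # non-OpenDNS resolvers have no record for this name
    except Exception:
        return False  # treat any resolution failure as "not OpenDNS"
    return any("server" in str(rdata).lower() for rdata in answers)

if __name__ == "__main__":
    print("OpenDNS in path" if using_opendns() else "not using OpenDNS")
```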
Deception technology seems to be the latest buzzword. It was everywhere at Black Hat and now my favorite Gartner researchers are covering it.
I talked to the vendors at Black Hat, and I can’t figure out what they are offering that I couldn’t hand to an intern to deploy over the summer. In fact, I worked with an intern over a summer to deploy the basic technology.
My deception environment:
- I reserved random IP space and assigned likely machine names in a couple segments of the network. If the administrative cost of deploying a sensor is a concern, null route the space at the local router / switch and log the traffic (a minimal listener sketch follows this list).
- In the server space, I requested standard Linux and Windows systems following the IT standards. Throw standard software on them with default configs. Log the activity.
- I created some honeypot DNS entries. Payroll, Mainframe, etc.
- I had some honeypot accounts created in Active Directory following naming standards. I didn’t request that IT staff log in to some systems with them; that’s a clear deficiency in my setup. I should have requested the Oracle team create an extra account, the AD admins create an extra account, etc.
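For the reserved space and the decoy hosts, the ‘sensor’ can be nearly nothing. Here’s a minimal sketch of a do-nothing listener that logs every connection attempt; the ports and log path are illustrative, and binding ports below 1024 requires root. Since the decoys have no legitimate users, any connection at all is interesting.

```python
#!/usr/bin/env python3
"""Sketch: a do-nothing honeypot listener that logs connection attempts."""
import datetime
import socket
import threading

PORTS = [22, 445, 1433]        # illustrative decoy ports
LOG = "/var/log/honeypot.log"  # illustrative path

def listen(port: int) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (src, sport) = srv.accept()
        conn.close()  # accept and drop; the log entry is the whole point
        with open(LOG, "a") as f:
            f.write(f"{datetime.datetime.utcnow().isoformat()} "
                    f"tcp/{port} touched from {src}:{sport}\n")

if __name__ == "__main__":
    for p in PORTS:
        threading.Thread(target=listen, args=(p,), daemon=True).start()
    threading.Event().wait()  # keep the main thread alive
```

Ship the log to the SIEM and alert on any line.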
I didn’t get any confirmed malicious hits, but I did catch some unexpected scanners. Printing services performed an unexpected/unplanned scan of the user environment, a system admin incremented system numbering into my space, etc.
I participated in the Pros V Joes CTF competition at BSides Las Vegas this year. It was intense.
The setup on day one is that you are a member of a blue team entering a compromised environment. You have multiple tasks: respond to service requests, maintain uptime for a couple of services such as WordPress and FTP, find artifacts left behind by the attackers, and repel new attacks. The attackers don’t attack for the first two hours.
The environment was very well designed. We had vSphere access to see most systems. There was a nice mix of Windows endpoint and server versions, a few Linux systems, an Asterisk PBX, and a pfSense firewall. I’m happy it was a pfSense firewall; I’m told they used a Cisco ASA in previous years. The ASA is extremely rough to manage, and I’d prefer to never see one again.
We started by doing network discovery, patching, and checking configurations. We also started responding to customer requests via calls, tickets, and emails. The tickets were all pretty basic and represented real world requests in this situation. Through network discovery, we found a few systems the customer neglected to mention to us.
Our failing was our lack of experience with Asterisk. We focused on the systems we knew. The red team hit the PBX as soon as they were able to attack and took it down. While the phone system wasn’t under SLA, we couldn’t receive tickets via phone and started receiving email tickets asking us to fix it.
Day two was a repeat of the day-one environment with some minor changes. A member of the red team would join us, and we would be battling the other blue teams. We could start attacking each other immediately. Given the results of day one, the PBX was the main target, and everyone’s PBX immediately went down. I’d suggested a strategy of immediately blocking the Internet from our environment and taking the SLA hit while we patched. We decided against the strategy, hoping we could remediate fast enough.
Would I participate again: Yes!