Thinking back to BSides Las Vegas this past summer, there was an interesting talk in the Hiring Ground track, “The Commoditization of Security: Will You Be Replaced By A Script” by Nathan Sweeney.
Nathan mentioned trying not to get too focused on a specific product or technology. He recalled working with DHCP specialists at one time. I’ve never heard of a dedicated role for DHCP. Conversely, COBOL is old but isn’t going away. What technologies stick around, what technologies easily translate, and what is automated away?
Looking at my own resume, my network firewall skills are likely the first to fall out of demand. Everyone is buying ‘next-gen’ firewalls, but even the complexity of those is likely to fade as we move to cloud services and remote workers. We’ll still have them, but it’s looking more and more like they’ll go the way of DHCP: you’ve got it, but in the vast majority of environments it mostly self-configures based on input from associated technologies.
You plug in your Next-Gen Firewall. BGP participation sets up the routes for each interface. Cisco ISE applies the network layer controls. The CASB applies cloud access controls. Your threat data integration sets up the IPS type blocking policies. You’re done.
That should be the goal anyway.
We don’t yet know who has the Equifax data or the entirety of the information they have.
We know what we can see when we pull a credit report. We see our addresses, our phone numbers, and a list of every line of credit associated with our name.
The OPM data is in the hands of the Chinese government. They probably won’t leak / sell it.
What if the Equifax data gets sold or dumped? We could be looking at a collapse in remote verification. How do you verify someone if the common answers to questions are entirely public? I don’t know how.
Traditional banks could require in-person visits rather than online setup / reset. That’ll work for a good number of people. The transient (college students, travelers, armed services) will all be out of luck if their bank is regional and they’re out of the region. Online financiers like PayPal will be entirely out of luck.
What about healthcare providers? How do they handle their customers? If I have questions about my health insurance, do I have to travel to a participating practice? That seems obtrusive.
Will we see authentication services spring up? Oh, you’d like to open a PayPal account? You’ll need to visit a USPS facility or AuthCo for a verification code to complete your signup / application.
WannaCry hit. There are a million excellent write-ups on the malware. How about one on how to respond when you aren’t tipped off by a help desk ticket about the malware?
Initial reports started coming out on the morning of Friday, May 12th that NHS was experiencing a significant outage due to ransomware. I didn’t think anything of it; ransomware is common enough. It wasn’t until the malware was given a name and the method of spread was identified as MS17-010 that I took notice.
- In previous incidents, email was the attack vector. Everyone seemed to initially assume there was an email.
- Windows XP was assumed to be the weakness; instead, Windows 7 is looking like the weak link for most organizations.
- Given SMB was the attack vector, how does an organization respond? Most rapid responses for WannaCry seem high risk.
- I suspect an organization with SMB open at the edge could block it with minimal impact. Throwing up ACL blocks to segment the network or disabling SMBv1 under stress seems like an extremely high risk change.
- Endpoint vendors (ex: McAfee, Symantec, and Trend Micro) released guides for customers. These seem like lower risk mitigations.
- Given the attack was on a Friday, ‘malware Monday’ was my real concern. A device leaves a secure network missing a patch, spends the weekend on the open internet, then comes back. Even one device is an issue, let alone any file system it can access or internal movement as it spreads. ACL blocks are high risk, and private VLANs would be absurd to implement as an emergency change.
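Before deciding on any of those higher-risk changes, it helps to know exactly where SMB is actually reachable. A minimal sketch of a TCP 445 reachability check (the host list is illustrative; in practice you would feed it from inventory):

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the port succeeds (open),
    False if it is refused or times out (closed / filtered)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative host list; in practice, feed this from inventory or DHCP leases.
for host in ["127.0.0.1"]:
    state = "open" if smb_port_open(host, timeout=0.5) else "closed / filtered"
    print(f"{host} tcp/445: {state}")
```

Confirming the exposed population first turns “block SMB everywhere under stress” into a scoped change against a known host list.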
PS: Splunk finally started releasing dashboards around major security incidents. I’ve been asking for dashboards from them for years. The dashboard seems really well done. It has use cases and an export (with the Splunk logo) suitable for leadership. If I weren’t sure what to do, this is a great start and a great advertisement for the benefit of Splunk come renewal time.
What’s better for security: finishing the deployment of a functional product, or starting fresh with the modern product? It seems like it really depends on circumstances.
I had an alright MSSP. There were some problems though. Picking a new MSSP allowed me to fix some configuration issues I couldn’t resolve due to organizational inertia.
On the other hand, I had hardware for network segmentation, and additional hardware would have been pretty affordable. My issue was a lack of FTEs. When leadership buy-in was available, we had to use a hot new product and start over. I’m not sure we were any better off after the replacement. The new product wasn’t really doing anything much better. It had the potential, but we lacked the FTEs to make it happen.
Similarly, I went from an integrated IDS/IPS that wasn’t managed to a newer, stand-alone profiling IDS/IPS. Given the previous product was integrated, I’d argue we clearly lost effectiveness, since the FTE cost was higher with the stand-alone product.
On to Monitoring.
I’ve got a SIEM type product. It’s not fully deployed, but who can say their SIEM is really fully deployed and tuned? I’m looking at UEBA and analytics platforms, but I suspect my time would be better spent focused on a well tuned SIEM than 3 partially deployed platforms. The vendors mostly agree/admit that the platforms all have steep FTE requirements for success.
Microsoft ATA is extremely affordable and claims to be a UEBA type product. It ignores the non-Microsoft user realms, but can an attacker get in and get data out of an environment without touching Active Directory at all?
I couldn’t begin to write anything close to comparable to Anton Chuvakin’s blog post on the subject of security analytics.
It seems like money and FTE effort would be better devoted to furthering a SIEM deployment’s completeness and maturity.
I participated in a brief discussion on Twitter regarding the Podesta email breach. I blame the DNC for the breach.
Clinton and Podesta were functionally using shadow IT. There really isn’t any excuse for not detecting it. So then what? Do you declare it unsupported or assess the risk and implement compensating controls?
How expensive is monitoring for a few dozen extra email addresses on HaveIBeenPwned?
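Not very, mechanically speaking. Have I Been Pwned exposes an API for exactly this; a minimal sketch against the v3 breachedaccount endpoint (the API key and addresses are placeholders, and a 404 from the service simply means the address isn’t in any known breach):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

HIBP_API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def build_request(email: str, api_key: str) -> urllib.request.Request:
    """Build a v3 breachedaccount request; truncated responses only
    return breach names, which is enough for alerting."""
    url = HIBP_API + urllib.parse.quote(email) + "?truncateResponse=true"
    return urllib.request.Request(
        url, headers={"hibp-api-key": api_key, "user-agent": "vip-monitor"}
    )

def check_addresses(addresses, api_key):
    """Yield (address, [breach names]); a 404 means no known breach."""
    for addr in addresses:
        try:
            with urllib.request.urlopen(build_request(addr, api_key)) as resp:
                yield addr, [b["Name"] for b in json.load(resp)]
        except urllib.error.HTTPError as err:
            if err.code == 404:
                yield addr, []
            else:
                raise

# Usage sketch (placeholder key): run daily and alert on non-empty lists.
# for addr, breaches in check_addresses(["vip@example.com"], "YOUR-API-KEY"):
#     print(addr, breaches)
```

A scheduled job over a few dozen addresses is a rounding error next to the cost of an administrative assistant.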
These folks already have administrative assistants and likely physical security details. If your organization is willing to spend those resources, why can’t you spend some resources assisting with the secure configuration of personal devices?
To take it a step further, give them a SOHO router IT remotely manages. You can decide how comfortable the VIP and the organization is with the relationship. Are you just doing secure configuration? You can throw OpenDNS on it for ‘blind’ basic security.
In a Cisco environment, you can hand out 881 devices that automatically VPN back. The VIP can take home a hardware VOIP phone. You can provision dedicated wireless for the VIP devices. The person won’t even have to worry about VPN in their home office.
Deception technology seems to be the latest buzzword. It was everywhere at Black Hat and now my favorite Gartner researchers are covering it.
I talked to the vendors at Black Hat. I can’t figure out what they’re offering that I couldn’t give to an intern to deploy over the summer. In fact, I worked with an intern one summer to deploy the basic technology.
My deception environment:
- I reserved random IP space and assigned likely machine names in a couple segments of the network. If the administrative cost of deploying a sensor is a concern, null route the space on the local router / switch and log the traffic.
- In the server space, I requested standard Linux and Windows systems following the IT standards. Throw standard software on them with default configs. Log the activity.
- I created some honeypot DNS entries. Payroll, Mainframe, etc.
- I had some honeypot accounts created in Active Directory following naming standards. I didn’t request that IT staff log in to some systems; that’s a clear deficiency in my setup. I should have requested the Oracle team create an extra account, the AD admins create an extra account, etc.
I didn’t get any confirmed malicious hits, but I did catch some unexpected scanners. Printing services performed an unexpected / unplanned scan of the user environment, a system admin incremented system numbering into my space, etc.
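The sensor side of this setup can be equally minimal. A sketch of a decoy TCP listener that reports every connection attempt (addresses and ports are illustrative; no service is emulated, because the connection itself is the signal):

```python
import socket

def honeypot_listener(bind_addr, port, on_hit, max_hits=None):
    """Accept TCP connections on a decoy port and report each source.

    on_hit(src_ip, src_port) is called per connection -- in practice it
    would write to syslog / the SIEM. max_hits bounds the loop for testing.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen(5)
    hits = 0
    while max_hits is None or hits < max_hits:
        conn, (src_ip, src_port) = srv.accept()
        on_hit(src_ip, src_port)  # nothing legitimate should talk to this
        conn.close()              # no banner, no service emulation
        hits += 1
    srv.close()

# Usage sketch: one background thread per decoy address / port, e.g.
# threading.Thread(target=honeypot_listener,
#                  args=("10.0.0.99", 445, lambda ip, p: print("hit:", ip, p)),
#                  daemon=True).start()
```

This is roughly the scale of effort involved: an intern-sized script, plus the organizational legwork of reserving the space and wiring the alerts into monitoring.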
Note: The information in this post is from the Wikipedia article.
The Clinton email scandal is an interesting shadow IT failure. Based on the Wikipedia article, the domain and mail server were set up in 2008 at a residence before moving to a data center in 2013 for management.
In this situation, we have a user who wants mobile device access on her BlackBerry. I was a BlackBerry Enterprise Admin at that time. At that point in my career, I had advocated for and implemented a mandatory password policy and a mandatory device encryption policy on BlackBerry devices for a global enterprise. I’d also implemented device health / status monitoring using the BlackBerry tooling, and BlackBerry offered remote wipe capabilities that mostly worked. She was asking for the most secure device of that era; she wasn’t asking for a first-generation iPhone or an early Android. IT just said no, so the user went rogue and set up her own mail server. I’m going to guess her setup didn’t include a BlackBerry Enterprise Server.
There should be a history of security incidents / reports around this mail server’s usage. Why didn’t the security monitoring team notice anything? From a mail flow perspective, this should have stood out as odd; I’m assuming this was a relatively high volume external domain. Why didn’t the DLP team notice anything? There should at least have been alerts for classified information leaving the network. Why didn’t audit / compliance catch this? They should have caught the mail filter exemptions that surely existed.
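The mail-flow oddity in particular is cheap to surface from gateway logs. A sketch that counts messages per external counterparty domain and flags outliers (the internal domain set, input format, and threshold are all illustrative assumptions):

```python
from collections import Counter

INTERNAL_DOMAINS = {"state.gov"}  # assumption: the organization's own domains

def external_domain_volumes(messages):
    """Count messages per external domain.

    `messages` is an iterable of (sender, recipient) address pairs,
    as might be parsed from mail gateway logs.
    """
    counts = Counter()
    for sender, recipient in messages:
        for addr in (sender, recipient):
            domain = addr.rsplit("@", 1)[-1].lower()
            if domain not in INTERNAL_DOMAINS:
                counts[domain] += 1
    return counts

def outliers(counts, threshold=100):
    """External domains whose message volume exceeds the threshold --
    a single personal domain this busy should stand out immediately."""
    return [domain for domain, n in counts.most_common() if n > threshold]
```

Even this crude a report, run monthly, should have put one unusually chatty private domain near the top of the list.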
Unless they all did and the reports were buried…
The final wrinkle in the story for me is that she eventually moved the server to a cloud provider. Their website mentions they provide managed security services. What are their security practices like? Could any of their employees see her data? Did the investigation include reviewing data handling practices at this organization?
As a followup, I listened to the entire testimony via the Lawfare Podcast. Per the testimony, James Comey indicated the content on her mail server was improperly marked. A properly marked document should have a header and a footer. How did the DLP solution allow an improperly marked document to leave the network? Shouldn’t it have flagged the document as non-compliant?
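A marking check along those lines is easy to sketch: a compliant document carries the same classification marking on its first and last non-empty lines (the marking list here is illustrative, not the real standard):

```python
# Longer markings first, so "TOP SECRET" isn't matched as "SECRET".
MARKINGS = ("TOP SECRET", "SECRET", "CONFIDENTIAL", "UNCLASSIFIED")

def marking_compliant(text):
    """Return (ok, detail). A properly marked document has the same
    classification marking in both its header and footer line."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if not lines:
        return False, "empty document"
    header = next((m for m in MARKINGS if m in lines[0]), None)
    footer = next((m for m in MARKINGS if m in lines[-1]), None)
    if header is None or footer is None:
        return False, "missing header or footer marking"
    if header != footer:
        return False, "header / footer markings disagree"
    return True, header
```

The point is that a DLP rule can flag the absence of a valid marking just as easily as the presence of one, so “improperly marked” should itself have been an alert.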