I participated in the Pros V Joes CTF competition at BSides Las Vegas this year. It was intense.
The setup on day one is that you are a member of a blue team entering a compromised environment. You have multiple tasks: respond to service requests, maintain uptime for a couple of services such as WordPress and FTP, find artifacts left behind by the attackers, and repel new attacks. The attackers don’t attack for the first two hours.
The environment was very well designed. We had vSphere to see most systems. There was a nice mix of Windows endpoint and server versions, a few Linux systems, an Asterisk PBX, and a pfSense firewall. I’m happy it was a pfSense firewall. I’m told they used a Cisco ASA in previous years. The ASA is extremely rough to manage; I’d prefer to never see one again.
We started by doing network discovery, patching, and checking configurations. We also started responding to customer requests via calls, tickets, and emails. The tickets were all pretty basic and represented real world requests in this situation. Through network discovery, we found a few systems the customer neglected to mention to us.
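As a sketch of the kind of discovery sweep we started with, here is a bare-bones TCP connect scan in Python. This is illustrative only; on the day we used proper tooling, and the hosts and ports here are whatever you point it at:

```python
import socket

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A minimal connect scan -- in practice you'd reach for nmap, but the
    idea is the same: attempt a connection and record what answers."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Sweeping a subnet with something this simple is how the systems the customer forgot to mention tend to turn up.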
Our failing was our lack of experience with Asterisk. We focused on the other systems because we knew them. The red team hit the PBX the moment they were allowed to attack and took it down. While the phone system wasn’t under SLA, we couldn’t receive tickets via phone and started receiving email tickets asking us to fix the phone system.
Day two was a repeat of the day one environment with some minor changes. A member of the red team would join us, and we would be battling the other blue teams. We could start attacking each other immediately. Given the results of day one, the PBX was the main target, and everyone’s PBX immediately went down. I’d suggested a strategy of immediately blocking the Internet from our environment and taking the SLA hit while we patched. We decided against it, hoping we could remediate fast enough.
Would I participate again: Yes!
Note: The information in this post is from the Wikipedia article.
The Clinton email scandal is an interesting shadow IT failure. Based on the Wikipedia article, the domain and mail server were set up in 2008 at a residence before moving to a data center in 2013 for management.
In this situation, we have a user who wants mobile device access on her BlackBerry. I was a BlackBerry Enterprise Admin at that time. At that point in my career, I had advocated for and implemented mandatory password and device encryption policies on BlackBerry devices for a global enterprise. I’d also implemented device health / status monitoring using BlackBerry’s monitoring tools. BlackBerry also offered remote wipe capabilities that mostly worked. She was asking for the most secure device available at the time. She wasn’t asking for a first-generation iPhone or an early Android. She went rogue and set up her own mail server, and I’m going to guess her setup didn’t include a BlackBerry Enterprise Server. IT just said no, and the user went rogue.
There should be a history of security incidents / reports around this mail server usage. Why didn’t the security monitoring team notice anything? From a mail flow perspective, this should have stood out as odd. I’m assuming this would have been a relatively high volume external domain. Why didn’t the DLP team notice anything? There should have at least been alerts for classified information leaving the network. Why didn’t audit / compliance catch this? They should have caught the mail filter exemptions that surely existed.
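The mail-flow point is easy to make concrete. A monitoring team with access to gateway logs could have surfaced an unusually high-volume external recipient domain with something this simple (the domain names and threshold here are made up; a real baseline would be derived from historical per-domain volume):

```python
from collections import Counter

def heavy_external_domains(recipient_domains, internal_domain, threshold=100):
    """Count messages per external recipient domain in a batch of mail-log
    entries and return any domain at or above `threshold` messages.

    The fixed threshold is a placeholder -- real anomaly detection would
    compare against each domain's historical volume."""
    counts = Counter(d.lower() for d in recipient_domains
                     if d.lower() != internal_domain.lower())
    return {d: n for d, n in counts.items() if n >= threshold}
```

Even this naive cutoff would make a single external domain carrying a large share of traffic stand out as odd.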
Unless they all did and the reports were buried…
The final wrinkle in the story for me is she eventually moved the server to a cloud provider. Their website mentions they provide managed security services. What are their security practices like? Could any of their employees see her data? Did the investigation include reviewing data handling practices at this organization?
As a follow-up, I listened to the entire testimony via the Lawfare Podcast. Per the testimony, James Comey indicated the content on her mail server was improperly marked. A properly marked document should have a header and a footer. How did the DLP solution allow an improperly marked document to leave the network? Shouldn’t it have flagged the document as non-compliant?
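Comey’s point can be sketched in a few lines. A naive DLP rule that keys on header/footer markings will, by construction, pass a document that was never marked at all (the marking string and rule here are illustrative, not any vendor’s actual logic):

```python
def dlp_flags(document, marking="SECRET"):
    """Flag a document only when the marking appears in both the first line
    (header) and the last line (footer). An improperly marked document --
    one missing its header and footer -- sails straight through."""
    lines = document.strip().splitlines()
    return len(lines) >= 2 and marking in lines[0] and marking in lines[-1]
```

A marked document gets flagged, while the same content with the markings stripped does not, which is one plausible answer to how improperly marked content left the network.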
I’m in the process of evaluating behavior analytics tools. UBA and UEBA seem to be the popular acronyms for the product space.
It appears the space has three separate platform styles. You’ve got your add-ons such as CyberArk Privileged Threat Analytics, Microsoft Advanced Threat Analytics, and Rapid7 InsightUBA. You’ve got your ‘independent’ platforms from Exabeam and Gurucul. Finally, there are the network analytics platforms such as Observable Networks and Pwnie Express. My prediction is Microsoft is going to absolutely crush the user analytics competition.
If you’ve already got a heavy Microsoft environment, ATA’s cost is negligible compared to the enormous additional cost of expanding your CyberArk or Rapid7 environments. Those platforms can clearly cover more ground than just Microsoft platforms, but is it worth the additional cost and the required environment configuration? It’s much easier to blindly stumble into a functional Microsoft environment than to properly build a Linux environment. From what I’ve seen and heard, local account usage is significantly less common in enterprise Microsoft environments than in enterprise Linux environments.
I like the network analytics platforms, but they’ve got a battle against Cisco’s Stealthwatch platform. Cisco has a really weak ability to deliver in the security space, but they’ve got integration benefiting them here. It’s tough to battle against Cisco security products in a Cisco environment. We’ll have to see how the other network analytics products perform.
Anton Chuvakin’s review of the DBIR is my favorite. It is super concise and to the point.
Document page 10 / PDF 14 has a chart comparing compromise time and exfiltration time. Compromise time is typically in minutes while 20% of exfiltration is in minutes and 70% is in days. Both make sense given the data on hand.
If you’re getting in, you’re getting in relatively quickly. The message is delivered or the application vulnerability is found. If you fail, you’re changing tactics, and your new attack likely won’t be correlated with the previous one. I’m not aware of many organizations doing any real threat intelligence. My MSSPs and threat data providers can never answer whether they’ve seen alerts from fellow customers.
My guess is the time to exfiltration depends on the compromise. Exfiltration is short when the initially compromised system holds the data, and takes days when a pivot is required. If the attacker can pull off a smash and grab, the exfiltration will be quick.
It’s too bad the vulnerability section is so poorly sourced. Then again, patching and QA typically aren’t given the resources to properly function at most organizations anyway.
I gave a brief presentation on using Splunk for Enterprise Security after having used Splunk for a while. Here is a summary of my thoughts:
Splunk for Enterprise Security seems to primarily be two things: a ticketing system and an investigation system.
The ticketing system is alright. I’d already had Splunk integrated into an enterprise ticketing system. From the ticketing perspective, Splunk ES is relatively weak; alerting / notifications and metric tracking leave a lot to be desired.
The investigation system is built on two explorer dashboards, one for identities and one for assets. If I view an asset, for example, Splunk ES can show Windows events, IDS events, and firewall events all in a single dashboard. It is very polished. I’d suspect an organization further along in Splunk may already have a home-brew version of this. My previous environment had something in the infant stages of this.
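A home-brew version of that asset view can start out very simple: merge events from each source by host and sort them into one timeline. A sketch, assuming events are dicts with `host` and `time` keys (a stand-in for the field normalization ES’s data models do for you):

```python
from collections import defaultdict

def asset_timeline(*event_streams):
    """Merge events from several sources (e.g. Windows, IDS, and firewall
    logs) into one per-host, time-sorted timeline."""
    timeline = defaultdict(list)
    for stream in event_streams:
        for event in stream:
            timeline[event["host"]].append(event)
    for events in timeline.values():
        events.sort(key=lambda e: e["time"])
    return dict(timeline)
```

The hard part in practice isn’t the merge; it’s normalizing host and timestamp fields across sources, which is exactly what the data models handle.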
Splunk for Enterprise Security is heavily built on data models. I’d argue the Knowledge Object course is more useful to a security practitioner than the Using Enterprise Security or Administering Enterprise Security course. If you’ve already been using Splunk for a while, you likely already have dashboards and alerts with functionality similar to Splunk for Enterprise Security. The ES training seemed focused on understanding the dashboards more than understanding the underlying Splunk ES architecture.
If you’ve been using Splunk for a few years, Enterprise Security isn’t going to wow you. You’ve already got plenty of alerts and dashboards to compete with the package out of the box. It will however allow you to build much better searches going forward. Having used it for a few months, I have correlations in Splunk ES that I doubt I could have ever written as alerts in Splunk.
My organization is testing Geofiltering controls. I’m generally opposed to Geofiltering, but this is intriguing.
The easiest controls are the ones backed by policy. If your audit department has rules against international remote access, that’s mostly easy. Every modern firewall except for Cisco offers native Geofiltering. Apply the rules and walk away. The shortcoming is threat intelligence. If your remote access solutions (Cisco ASA VPN) can’t handle Geofiltering, you’re stuck. In an ideal world, I’d like to use dynamic access policies to block users after authentication.
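For systems without native support, the underlying check is just matching an IP against a country’s network ranges. A minimal sketch using the standard library (the block list here is a documentation range used as a placeholder, not a real country allocation; real deployments pull CIDRs from a GeoIP feed and refresh them regularly):

```python
import ipaddress

# Hypothetical per-country block list. 203.0.113.0/24 is TEST-NET-3,
# used here purely as a placeholder for a GeoIP feed's CIDRs.
BLOCKED_NETWORKS = {
    "XX": [ipaddress.ip_network("203.0.113.0/24")],
}

def is_blocked(ip):
    """Return True if `ip` falls inside any network on the block list."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net
               for nets in BLOCKED_NETWORKS.values()
               for net in nets)
```

This is essentially what a firewall’s native Geofiltering does; the value of the native feature is the maintained feed and the enforcement point, not the lookup itself.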
User / Customer / Shareholder system controls are the interesting ones. You can’t block those people just because they travel overseas. Do you implement captcha technology, email notifications, etc?
Are conference attendees getting older on average? A fellow attendee made this observation at DerbyCon. ShmooCon is probably combating this with their Shmooze-A-Student program as well as their guarantee of tickets for West Point. But what about the others?
The theory given was that DerbyCon and ShmooCon are hard to attend. Tickets sell out extremely quickly. The tickets are more likely going to people who have already attended and want to return.
Anyone else observe this or have any thoughts?