This is another failure turned opportunity.
I originally submitted this as a conference talk. That talk was rejected, but I was offered a webinar instead. Then I saw a call for a similar topic in another forum, so out of the one rejection, I gave the talk twice.
The good news for my schedule was a significantly later deadline.
My first draft was an absolute mess. I practiced it, and it felt off. Early feedback was mostly confusion. The only option was to rip it apart and put it back together in an order that made better sense. Naturally, I put this off as long as possible and ended up rushing to finish it … despite having plenty of time. Good job, me.
I wanted to address some great questions I received from those who attended.
Your example of mitigations has both defensive and offensive models. Have you seen blue teams use offensive mitigations?
My example has offensive mitigations with the oil slick, caltrops, and tire slicer. A Bond-like spy would certainly use offensive measures as a defense. This makes sense; if someone punches you in the face, you are usually justified in punching them right back in the name of self-defense.
There’s been “hack-back” legislation proposed a few times, and it has yet to pass into law in the United States. One central argument is the lack of attribution. Organizations know they’ve been the victim of an attack, but the attacker isn’t standing right in front of them, winding up for another haymaker. There can be indicators that an attack was a certain group, but it’s difficult to be 100% sure. If the attack was launched from another victim organization, any “hack-back” activities won’t even get to the attacker. Not ideal!
To answer the question, no, I have not seen blue teams engaging in offensive mitigations.
Does this Threat Analysis and Risk Assessment work for off-board? If so, how?
In broad strokes, yes. The process of threat modeling isn’t wholly different from traditional IT risk analysis. We’re still looking at impacts and probabilities to determine risk. This process outlines a quantitative framework for determining those values. Appendix G has several options for the feasibility analysis, including CVSS scores, which may be more applicable to off-board use.
The attack feasibility analysis simply looks at the resources required to execute a particular attack path. A zero-day vulnerability takes time to find and to craft an exploit for, and it usually requires some expertise in the particular technology stack. On the flip side, if an exploit is a point-and-shoot script anyone can run, the level of expertise required is much lower.
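The idea of combining impact with attack feasibility can be sketched in a few lines. This is a minimal illustration, not any standard's actual formula: the 1–5 scales, the factor names (expertise, elapsed_time, equipment), and the example values are all hypothetical.

```python
# Hypothetical effort factors for two attack paths, on illustrative
# 1-5 scales. Higher numbers mean more resources are required.
FEASIBILITY_FACTORS = {
    "zero_day_exploit": {"expertise": 4, "elapsed_time": 4, "equipment": 3},
    "public_script":    {"expertise": 1, "elapsed_time": 1, "equipment": 1},
}

def effort(factors: dict) -> int:
    """Sum the effort factors; a higher sum means a harder attack."""
    return sum(factors.values())

def risk(impact: int, factors: dict, max_effort: int = 15) -> int:
    """Combine impact with feasibility: low-effort attacks raise risk."""
    # Invert effort so that easy attacks contribute a high feasibility score.
    feasibility = max_effort - effort(factors)
    return impact * feasibility

# Same impact, very different risk depending on attack feasibility.
print(risk(5, FEASIBILITY_FACTORS["public_script"]))     # 60
print(risk(5, FEASIBILITY_FACTORS["zero_day_exploit"]))  # 20
```

The point-and-shoot script ends up with the higher risk score because almost anyone can execute it, while the zero-day path is discounted by the expertise and time it demands.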
If this process works for your organization, for on or off product analysis, then use it!
Many/most developers have not had training in secure programming techniques. How do you fill this gap?
That’s correct: most developers have not had this training. The only way to fill the gap is to train them. Then, as the software is updated during the normal course of the software lifecycle, more secure techniques will filter into the code.
I hope, anyway.
Can you post the slides?
Absolutely.