Why Should I Prioritize One Security Effort Over Another?

A "Priorities" list that is empty with a full coffee cup to the side giving the impression that someone is waiting to figure out what their priorities are.

Security is one of the foundational pillars of building well in the cloud.

While this has long been the view of security teams, it’s often lost in the implementation. Security teams are usually brought in at various project milestones in order to help “secure” the workload.

In a traditional environment, this lack of collaboration with the teams building the workload was mitigated by a strong security perimeter. Essentially, the view has been, “We have a strong set of walls, anything inside should be safe.”

Of course, headline after headline would beg to differ. Add the speed of change in the cloud, and you have a real set of challenges to deal with.

Distributed Security Work

The good news is that a lot of the work done previously by dedicated security teams can be done by the team building the workload.

Why?

Well, it turns out that a lot of security work is just asking the question, “What else can this be made to do?” over and over again. How does that help?

The goal of cybersecurity is to ensure that whatever you do works as intended and only as intended.

It’s that last bit “…only as intended” that causes all of the problems. But if you continue to ask the team, “What else can this be made to do?” you can find a number of issues before they become problems.

This is similar to the design work you do every day when trying to solve a specific problem in code. Yes, things can get tricky when it comes to the more esoteric problems, but in general, you can get a lot of mileage out of this simple question.

This is true especially when you can still make changes to the code and the overall design. A lot of security compromises come from the fact that the security controls are not built in and are instead bolted-on after the fact.

OWASP Top Ten

The Open Web Application Security Project, or OWASP, has tracked the top ten web application security risks for over a decade.

This list not only helps you prioritize your security efforts while building, but it also highlights how critical it is to address these issues while you’re building your solution.

Injection attacks have sat at or near the top of this list since the beginning. That is not a good sign. The culprit in these attacks is almost always input that was assumed to be “ok” somewhere (a.k.a. not sanitized) or a variable that has too broad a scope.
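
To make that concrete, here’s a minimal sketch of the difference using Python’s built-in sqlite3 module; the table, the column names, and the payload are made up for illustration.

```python
import sqlite3

# A throwaway in-memory database with one illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is pasted straight into the SQL string, so the
# payload rewrites the WHERE clause and the query returns every row.
query = f"SELECT name, role FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # leaks rows it was never meant to return

# Safer: a parameterized query treats the input as a value, not as SQL.
query = "SELECT name, role FROM users WHERE name = ?"
print(conn.execute(query, (user_input,)).fetchall())  # returns nothing
```

Asking “What else can this be made to do?” of that first query is exactly the kind of check that catches this class of problem while the code is still easy to change.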

This highlights again why it’s critical to think of security early and often…strike that: constantly. Think of security constantly.

Security Bugs and Features

How to integrate that level of security thinking into your development process is a topic for another day…and if you’d like to see a post on that, let me know in the discussion.

For now, let’s focus on what this change implies: security issues are now going to be development tasks like any other feature or bug.

Sadly, we don’t have unlimited time to work on things, so we need to figure out how to prioritize security work alongside other types of work.

In an ideal world, this would be simple: do it first!

Again, sadly, that’s not going to happen.

Security tasks are hard to prioritize because they rarely have a direct and obvious impact. Unlike a performance bug (this runs slower than it should) or a functionality bug (this button doesn’t work), security tasks lack that direct consequence that helps you to prioritize.

Security tasks are all about risk mitigation.

Risk Mitigation

“Risk mitigation” is a fancy term for addressing the possibility of something bad happening.

How likely is it to happen? How bad would it be if it did happen? These are critical parts of the risk equation.

The “equation” is simply the impact of a potential event combined with the likelihood that the event will occur. There are a number of risk frameworks that attempt to assign numbers to this equation and that can be useful.

At its core, all risk calculations are really just a simple combination of these two factors.

For example, advanced aliens coming to invade the planet from outside of the solar system would be very, very bad. It would have an extremely high impact. The likelihood of that happening is extremely low.

Similarly, if a vulnerability allows any random internet-based attacker to change the font color of your website from dark blue to slightly darker blue, that would be annoying but essentially inconsequential. It would be a low impact, high probability event.

Why high probability? Anything internet accessible tends to get scanned and triggered by continuously running cybercriminal scripts.

You could say that these two events have the same level of risk. One high impact, low probability. One low impact, high probability.
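
Here’s a back-of-the-napkin sketch of that scoring, assuming made-up 1-to-5 scales for both factors rather than any particular framework.

```python
# Deliberately simple: risk = impact x likelihood, each rated 1 (low) to 5 (high).
# The scales and the example events are illustrative, not from a real framework.

def risk_score(impact: int, likelihood: int) -> int:
    """Combine the two halves of the risk 'equation' into a single number."""
    return impact * likelihood

events = {
    "alien invasion": (5, 1),        # catastrophic impact, vanishingly unlikely
    "font color tampering": (1, 5),  # trivial impact, near-certain to be probed
}

for name, (impact, likelihood) in events.items():
    print(f"{name}: risk {risk_score(impact, likelihood)}")
# Both events land on exactly the same number.
```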

But it’s not that simple (despite what most risk frameworks would have you believe). That low-impact event violates our goal of the workload doing only what we intend.

Add to that the significant level of complexity involved in any workload and there’s a reasonable chance that a future change could open that vulnerability up much wider.

Despite “scoring” these two risks equally, we really should ignore the aliens and focus on the change in our workload.

Going On Instinct

Even in this contrived example, we have to ignore what little data we have and go with our “gut”.

The first step in properly evaluating these security risks is to gather more data. Often, when a vulnerability is publicly disclosed, a Common Vulnerability Scoring System or CVSS score is published with it.

These scores try to provide you with framing information around a vulnerability. The score takes in factors like attack complexity, the required privileges to exploit the vulnerability, the scope of the issue, and more.

It’s a reasonable metric but it does not cover the probability side of the risk equation. CVSS is solely designed to help you evaluate the impact of a possible event.

…and remember that CVSS scores are only available for public vulnerabilities. Your internal code vulnerabilities don’t get a score. Though you can assign one if you wish.
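
To see what a CVSS score does and doesn’t tell you, here’s a small sketch that pulls the base metrics out of a CVSS v3.1 vector string; the vector shown is an illustrative example, not tied to any real advisory.

```python
# Readable names for the abbreviated CVSS v3.1 base metrics.
BASE_METRICS = {
    "AV": "Attack Vector",
    "AC": "Attack Complexity",
    "PR": "Privileges Required",
    "UI": "User Interaction",
    "S": "Scope",
    "C": "Confidentiality Impact",
    "I": "Integrity Impact",
    "A": "Availability Impact",
}

def parse_cvss_vector(vector: str) -> dict:
    """Split 'CVSS:3.1/AV:N/AC:L/...' into {metric name: value}."""
    parts = vector.split("/")[1:]  # drop the leading 'CVSS:3.1' label
    metrics = dict(part.split(":") for part in parts)
    return {BASE_METRICS[key]: value for key, value in metrics.items() if key in BASE_METRICS}

example = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
for name, value in parse_cvss_vector(example).items():
    print(f"{name}: {value}")
```

Every one of those metrics describes the vulnerability itself; none of them tells you how likely it is to be exploited where you run it.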

It’s up to you to evaluate the probability of an attack or event occurring in your environment.

What Are The Chances?

The bad news? Generally, humans are really bad at predicting probabilities. 

The worse news? There isn’t enough data generally available to help you calculate the probability correctly.

This is the main reason that most cybersecurity risk decisions are based almost entirely on the potential impact of an event. Lacking the data to determine probability, we assume a 100% occurrence rate.

In cybersecurity, our tinfoil hats are well earned. 😉

But this isn’t a reasonable way to prioritize your security work. You have to do your best. The first step is to inventory the mitigations you already have in place that reduce the likelihood of the event occurring.

If a vulnerability can be exploited remotely, do you allow inbound traffic on that port? Do you have a security control inline that could look for the exploit’s signature and stop it?

If you do have a mitigation in place, that doesn’t mean you should ignore the issue and never fix the vulnerability. What it does mean is that you have time to fix it.

This will allow you to apply a reasonable prioritization instead of dropping everything and rushing to fix it.
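
One lightweight way to capture that thinking is to note, for each finding, its impact, whether it’s reachable from the internet, and whatever mitigations already buy you time. The fields and weightings below are made-up placeholders, not a formal methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A security issue sitting in the backlog. All fields are illustrative."""
    name: str
    impact: int                          # 1 (low) to 5 (high), your own judgement
    remotely_exploitable: bool
    mitigations: list[str] = field(default_factory=list)

    def priority(self) -> int:
        """Crude ranking: impact, bumped if internet-reachable, eased off a
        notch for every compensating control already in place."""
        score = self.impact
        if self.remotely_exploitable:
            score += 2
        score -= len(self.mitigations)  # a mitigation buys time, not a free pass
        return max(score, 1)

backlog = [
    Finding("SQL injection in search form", impact=5, remotely_exploitable=True,
            mitigations=["inline control matching the exploit signature"]),
    Finding("Verbose stack traces on error page", impact=2, remotely_exploitable=False),
]

for finding in sorted(backlog, key=lambda f: f.priority(), reverse=True):
    print(f"priority {finding.priority()}: {finding.name}")
```

Treat something like this as a conversation starter for backlog grooming, not as the answer.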

No Magic Here

I would love to have written a post that provided a magic formula for prioritizing security work within your builds, but honestly I don’t think there is one.

As mentioned above, the biggest challenge here is the lack of data to help complete the risk equation and put the vulnerability into perspective.

Is this why security work is always a rush? Does this situation contribute to the hoarding of security work by the security team?

What do you think? Let’s discuss in the forums…
