How to prioritise vulnerability remediation

In this blog post we look at the challenges organisations face in prioritising vulnerability remediation, at what causes these issues, and at the techniques, tools, and methodologies your team can leverage to deliver effective remediation prioritisation.

Many organisations struggle with how best to tackle vulnerability remediation, and with keeping on top of what can sometimes feel like an overwhelming barrage of new vulnerability reports landing almost daily, all competing for attention. The better an organisation’s vulnerability detection and assessment coverage, the more vulnerabilities will be uncovered. It is common for maturity in detection capabilities to “lead” maturity in wider vulnerability management, including remediation in general and managed remediation prioritisation in particular.

 

What is vulnerability remediation?

Acquiring a vulnerability assessment or scanning tool can often be seen as the end goal or “job done” for vulnerability management. However, simply discovering what vulnerabilities exist within a given technical estate is only the first piece of the puzzle: it is vital information, but it is only useful if the output is appropriately considered and used to guide action.

Vulnerability remediation (literally “fixing vulnerabilities”) is one possible outcome when a vulnerability is discovered. In teams that are addressing vulnerability management for the first time, or have only just acquired a vulnerability assessment tool such as a vulnerability scanner, it can seem as if it is the only possible action that can be taken in response to vulnerability detection – although as we shall see shortly, that is not always the case.

 

What is the purpose of vulnerability remediation?

The aim of vulnerability remediation is quite simply to stop the vulnerability from being exploited. Most commonly this is in order to reduce or eliminate the risk to the organisation from a vulnerability being exploited (although other motivations such as delivery on compliance objectives and requirements are also possible).

A vulnerability by itself is “passive” in that it represents a weakness in an organisation’s security, but it requires a realised threat in order to be taken advantage of. A threat consists of a combination of an exploit that is possible against a given vulnerability, as well as a threat actor who is willing and able to carry it out in an attack.

The aim of vulnerability remediation is to remove the vulnerability itself from this equation: other security programme elements may focus on identifying and denying access to threat actors, for example. As we shall see shortly, intelligence gathered about the current state of threats, threat actors and existing exploits can all be used in remediation prioritisation.

 

Why is the order of vulnerability remediation important?

Many factors can be used to determine an organisation’s remediation prioritization but, most commonly, the motivation is to minimise the attack window (the period of time during which a vulnerability is exploitable, represented by the time between vulnerability discovery and vulnerability remediation).

The appetite to reduce the attack window for a given vulnerability may be greater or lesser in comparison to another, based upon organisational requirements and objectives – most notably risk, as we shall see shortly. In addition to risk considerations, remediation order may also be guided by compliance requirements (such as PCI DSS, which mandates certain timescales for the remediation of vulnerabilities in environments that must comply with its requirements).
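As a rough illustration of the attack window concept (a hypothetical sketch in Python, with made-up vulnerability records), the window for each finding is simply the time between discovery and remediation – or between discovery and today, for findings still open:

```python
from datetime import date

# Hypothetical vulnerability records: (name, discovered, remediated-or-None)
vulns = [
    ("VULN-A", date(2023, 1, 10), date(2023, 1, 25)),
    ("VULN-B", date(2023, 2, 1), None),  # still open
]

def attack_window_days(discovered, remediated, today=date(2023, 3, 1)):
    """Days a vulnerability was (or has so far been) exploitable."""
    end = remediated if remediated is not None else today
    return (end - discovered).days

for name, found, fixed in vulns:
    status = "closed" if fixed else "open"
    print(name, status, attack_window_days(found, fixed))
```

Tracking this metric per vulnerability (and its average across the estate) gives a far more risk-relevant measure of remediation performance than a raw vulnerability count.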

 

Why can it sometimes be difficult to manage vulnerability remediation?

Many teams, particularly in smaller organisations, can find themselves faced with a volume of vulnerabilities that is orders of magnitude greater than they ever envisaged before embarking on their vulnerability management journey. They can find themselves initially under-prepared, under-resourced or unequipped to manage vulnerability remediation with existing, predominantly manual, workflows. Patching hosts manually, or in response to individual vulnerability reports, does not scale well once an implemented vulnerability assessment tool begins uncovering potentially thousands of vulnerabilities across hundreds of discrete systems. Organisations operate increasingly complex networks with thousands, tens of thousands or hundreds of thousands of endpoints split across campus (office), data centre/colocated and cloud platforms, and newly implemented vulnerability assessment tools can easily generate vulnerability reports that are hundreds of pages long.

Teams can therefore find themselves stuck in a seemingly endless game of whack-a-mole, struggling to keep on top of the remediation of vulnerabilities, with vulnerability reports showing steadily increasing counts of vulnerabilities discovered but not yet remediated.

At this point, teams can often find that they are overwhelmed and unable to address the issue. This journey is not unusual: many organisations, including the SANS Institute, have developed maturity models describing how vulnerability management programmes progress through different stages of development once initiated. In all such models, the early stages are typically characterised by a period of peak overload, in which the maturity of detection has accelerated faster than other elements of vulnerability management such as prioritization and remediation.

 

How can vulnerability prioritisation be decided?

The key element in being able to manage vulnerabilities in a more sophisticated manner involves taking a step back and examining vulnerability management processes and goals. During the early stages of vulnerability management, attempts to manage remediation can focus on fairly basic metrics such as “count” of total vulnerabilities, and it is by trying to achieve these – admittedly well-intentioned – goals that the problem of overload is introduced.

Although counter-intuitive, security can be increased – and risk reduced – by not attempting to resolve every vulnerability immediately, since there are never sufficient resources available to do so (as well as for other practical reasons). The key step in determining prioritization is an assessment stage for the vulnerabilities that have been discovered.

The word used for “prioritization and assessment” in the military and medical spheres – especially where there is an acknowledged lack of resources to immediately treat every casualty – is “triage”, and this same term is often applied to the step that is introduced between detection and remediation as a vulnerability management programme matures.

Triage combines rationing of active response towards that which needs it, and which benefits most from it, with identification of potential hazards in remediation. Triage can take many forms and use many different criteria, but these fundamental processes are common to most of them. It can be either qualitative, in that it simply determines the order of treatment/remediation, or quantitative, in that a scoring system is used, with requirements (such as treatment/remediation timescales) fixed for a given score level.
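A minimal sketch of the quantitative form, in Python. The score bands and remediation timescales below are purely illustrative examples, not drawn from any published standard:

```python
# Illustrative quantitative triage: map a 0-10 severity score to a fixed
# remediation timescale (SLA). Bands and day counts are hypothetical and
# would be set by organisational policy or compliance requirements.
def remediation_sla_days(score):
    if score >= 9.0:
        return 7      # critical: fix within a week
    elif score >= 7.0:
        return 30     # high
    elif score >= 4.0:
        return 90     # medium
    else:
        return 180    # low: fold into routine maintenance

print(remediation_sla_days(9.8))  # → 7
```

The value of the quantitative approach is that it turns triage output into a measurable commitment: any vulnerability still open beyond its SLA is immediately visible as a process failure, not just another line in a growing report.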

 

What factors can be used to guide vulnerability remediation assignment during the triage process?

As vulnerability management matures within an organisation, a key step in implementing effective prioritization is achieved when the triage process begins to align remediation priority with the concepts of response to risk (from not remediating a given vulnerability) as well as knowledge of threats (who may exploit a vulnerability, how, and why).

A key reason why we seek to prioritise the remediation of one vulnerability over another is to reduce the attack window, but for a slightly broader understanding of how we can guide the prioritization of remediating vulnerabilities we can draw an analogy with the military concept of threat prioritization. Threat prioritization is relatively simple to appreciate and is applied at every level, from the lowest tactical level (an individual soldier) up to overarching strategic objectives. The commonality is a prioritised response to so-called time-sensitive targets (TSTs): targets that require priority because they have been identified as posing a danger or threatening operational objectives.

This is perhaps most simply illustrated if you imagine a police officer armed with a taser and facing a room full of subjects that threaten him. If one of these subjects begins to charge towards the officer and raises a weapon in the air, then the officer is highly likely to prioritize that subject for action: there is a risk of harm, and an identified threat actor that has an opportunity (in vulnerability management, an available exploit) as well as a clear intent (observed to be presenting a threat).

Slightly more rigorously than this loose analogy, the US military was one of the first organisations to codify a method of threat prioritization: later models are more sophisticated, but the simplicity of the initial CARVER matrix makes it useful for highlighting key factors that can influence prioritization. The factors identified as important in determining the relative assessment of specific threats were:

  • Criticality – How severe would the consequences be if the threat were exploited and not prevented? This is the most important consideration.
  • Accessibility – Do we actually have the capability and access to remediate the issue, or is it out of our hands (e.g., a system under the control of a third-party cloud provider)?
  • Recoverability – What is the benefit from remediating this threat in terms of disruption of similar threats in future – i.e. are we able to fix just this instance, or prevent future similar instances?
  • Vulnerability – a confusing term in this context, but meant to consider whether we actually have the resources to effectively resolve the issue (is a patch even available to us?).
  • Effect – What is the overall effect of remediation, beyond resolving the threat? (e.g., are there additional supplementary benefits, such as bringing a system under patch management?)
  • Recognizability – How confident are we at the outset that it is actually possible to complete the task effectively? (e.g., a simple patch is more certain to resolve the issue than trying to debug an issue in our own code – we cannot be absolutely certain we will be able to find the issue or how long it might take).

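The CARVER factors above lend themselves naturally to a simple weighted scoring sketch. The following Python is a hypothetical illustration – the 1–5 rating scale and the weights are assumptions that would be tuned to organisational priorities:

```python
# Simplified sketch of a CARVER-style scoring matrix applied to a
# vulnerability. Each factor is rated 1-5; weights default to 1.0 but
# can be raised for factors the organisation cares most about
# (Criticality being the usual candidate).
CARVER_FACTORS = ["criticality", "accessibility", "recoverability",
                  "vulnerability", "effect", "recognizability"]

def carver_score(ratings, weights=None):
    """Weighted sum of the 1-5 ratings for each CARVER factor."""
    weights = weights or {f: 1.0 for f in CARVER_FACTORS}
    return sum(ratings[f] * weights[f] for f in CARVER_FACTORS)

# Hypothetical ratings for one vulnerability:
example = {"criticality": 5, "accessibility": 4, "recoverability": 2,
           "vulnerability": 3, "effect": 2, "recognizability": 4}
print(carver_score(example))  # → 20.0 with equal unit weights
```

Scoring each open vulnerability this way yields a single comparable number, so the remediation queue can simply be sorted by descending score.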

How can these principles be applied to vulnerability management?

As with anything in information technology, there are dozens of competing standards and methodologies that aim to provide guidance that can be used to steer remediation prioritization. However, by far the most common currently is the Common Vulnerability Scoring System (CVSS). The CVSS is a free and open industry standard for assessing the severity of computer system security vulnerabilities that allows the assignment of severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. CVSS attempts to apply many of the considerations we looked at in the CARVER matrix. It considers the impact should a vulnerability be exploited, as well as the likelihood that it may be exploited, based on the availability of exploits as well as the access vectors available to attackers who seek to exploit the vulnerability.

Criticisms of the CVSS scoring system include that it is too abstract and fails to provide sufficient contextual information about risk based upon realised threats (knowing which exploits are currently being used), fails to consider the popularity of the target systems globally (and hence which are attractive to attackers), and fails to account for organisation-specific exposure. However, these criticisms are largely based upon a lack of familiarity with the CVSS model: the CVSS scores issued for a vulnerability are typically only what are known as the base metrics, recorded at the time of its discovery. The CVSS model deliberately incorporates two further stages for score adjustment, as below:

  1. Base Metrics for qualities intrinsic to a vulnerability
  2. Temporal Metrics for characteristics that evolve over the lifetime of a vulnerability
  3. Environmental Metrics for vulnerabilities that depend on a particular implementation or environment

When used as intended, the CVSS system can provide organisations with real, threat-based understanding of vulnerability prioritization, that considers the exposure of the organisation at a given moment in time.
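To illustrate the temporal adjustment stage, the sketch below is loosely modelled on the CVSS v3.1 temporal equation, in which the base score is scaled down by multipliers for exploit code maturity, remediation level and report confidence. Treat the multiplier values here as illustrative; consult the CVSS specification for the authoritative figures:

```python
import math

# Illustrative multipliers, loosely modelled on CVSS v3.1 temporal metrics.
EXPLOIT_MATURITY = {"unproven": 0.91, "proof_of_concept": 0.94,
                    "functional": 0.97, "high": 1.0}
REMEDIATION_LEVEL = {"official_fix": 0.95, "temporary_fix": 0.96,
                     "workaround": 0.97, "unavailable": 1.0}
REPORT_CONFIDENCE = {"unknown": 0.92, "reasonable": 0.96, "confirmed": 1.0}

def temporal_score(base, maturity, remediation, confidence):
    """Scale a CVSS base score by temporal multipliers, rounding up
    to one decimal place (as the CVSS 'roundup' function does)."""
    raw = (base * EXPLOIT_MATURITY[maturity]
                * REMEDIATION_LEVEL[remediation]
                * REPORT_CONFIDENCE[confidence])
    return math.ceil(raw * 10) / 10

# A 9.8 base score drops once an official fix exists and the only
# public exploit is proof-of-concept:
print(temporal_score(9.8, "proof_of_concept", "official_fix", "confirmed"))  # → 8.8
```

The point is that the score is not static: as exploit code matures from proof-of-concept to weaponised, or a vendor fix is withdrawn, the same vulnerability climbs back up the remediation queue.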

 

What is the best approach and how can it be implemented?

We’ve seen how best practice around effective vulnerability prioritization relies on detailed intelligence about factors both internal and external to your organisation, in order to contextualise vulnerabilities beyond their base risk.

Factors internal to your organisation are based around a mature understanding of your organisational context, as well as the individual assets within it. In terms of organisational context, the industry that you operate in, your prominence and visibility, and other similar factors can all impact risk.

To identify the risk or threat posed by a vulnerability found on a given asset, it is also necessary to understand the asset’s context within the organisation. This includes its exploitability and potential threat vectors (for example, is the asset exposed at the network edge, or is it screened or even air gapped from remote network access?) as well as the impact should an attack against that asset via the vulnerability in question succeed. This normally involves establishing and maintaining an asset management database, which records the system’s criticality in terms of the data it stores or processes, as well as the impact of any threats to that system’s availability.
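As a hypothetical sketch of how that asset context might feed into prioritization (the field names, the 1–5 criticality scale and the exposure multipliers below are all assumptions, not a standard):

```python
# Combine a vulnerability's CVSS base score with asset context drawn
# from an asset management database to produce a contextual priority.
EXPOSURE_FACTOR = {"internet_facing": 1.5, "internal": 1.0, "air_gapped": 0.5}

# Hypothetical asset management records:
assets = {
    "web01": {"criticality": 5, "exposure": "internet_facing"},
    "hr-db": {"criticality": 4, "exposure": "internal"},
}

def contextual_priority(cvss_base, asset):
    """Scale a CVSS base score by asset criticality (1-5) and exposure."""
    return cvss_base * (asset["criticality"] / 5) * EXPOSURE_FACTOR[asset["exposure"]]

# Findings as (asset, CVSS base score), ranked by contextual priority:
findings = [("web01", 7.5), ("hr-db", 9.8)]
ranked = sorted(findings,
                key=lambda f: contextual_priority(f[1], assets[f[0]]),
                reverse=True)
print(ranked)
```

Note how a moderate base score on an internet-facing, business-critical asset can outrank a critical base score on an internal one – exactly the organisation-specific context that raw base scores lack.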

If CVSS or any similar system is used, it is then possible to generate threat-driven metrics that clearly identify remediation priority, and an approach to remediation prioritization that goes far beyond simply driving down vulnerability counts.

This threat-focused remediation can be partnered with processes that can help to reduce “noise” and volume of vulnerabilities, via improved techniques such as proactive patching (patching on a set schedule and using automated or semi-automated processes, rather than only in response to specific vulnerability discoveries), as represented by the highest levels in the vulnerability management maturity models.

Noise can also be reduced by robust workflow and risk treatment: you do not have to fix everything. Triage can be about more than just ordering – it can also ask whether a fix is actually needed at all. Remediation is only one possible risk treatment; others include risk transfer and risk acceptance. The decision should weigh not only the risk of leaving a vulnerability in place, but also the negative risk and potential harm of the fix itself – for example, taking down a database and losing revenue if a patch fails. Sometimes the cure can be worse than the disease.

A final point worth noting is that once a holistic vulnerability management process is put in place based on risk, it is worth adopting a formal and broader risk management programme. This is a specialist area that is far too broad to cover in this article on remediation prioritization, but one aspect worth mentioning is that in such models, remediation is only one of many potential risk treatments that can be applied to risks (including vulnerabilities). This may seem surprising if starting from the assumption that all vulnerabilities must be remediated, and the only question to consider is which ones to remediate first. However, vulnerability remediation is not always simple. Patching will remediate a significant number of vulnerabilities but is not always possible: a patch may simply not be available for a given vulnerability, or the vulnerability may relate not to a flaw in off-the-shelf software but to in-house code, environmental configuration, or simply business logic. Alternatively, a system with the vulnerability in place may be found to be out of vendor support, or a “black box” that cannot be modified once deployed. Although alternative remediation solutions to patching are sometimes possible – known as mitigations or compensating controls, such as configuration changes, changes to procedures or processes, or screening controls such as firewalls to block access to exposed services – sometimes even these are not possible.

Additionally, sometimes a fix is possible, but on balance of risk it is decided to be not worth applying. This can be either because the resource and manpower cost of remediation is greater than the estimated cost to the business if the vulnerability is exploited, or because the negative risk from applying the fix is too great: for example, if a patch to secure a database cannot be tested in an off-production environment and carries substantial risk of disrupting business service catastrophically if it does not apply successfully.

In these situations, and others, it is possible to consider a range of alternative options, some of which may seem surprising: including risk avoidance (such as simply shutting down the affected system completely and removing it from service), risk transfer (not resolving the risk, but taking out insurance against the possibility of its being exploited or simply outsourcing the entire system or project to another company along with penalty clauses if the system is breached), or risk acceptance (simply doing nothing, and continuing to leave the vulnerability exposed and in place).

 

How can AppCheck Help?

AppCheck helps you provide assurance across your entire organisation’s security footprint. AppCheck performs comprehensive checks for a massive range of web application vulnerabilities from first principles to detect vulnerabilities in in-house application code.

AppCheck provides services for both vulnerability prioritization in remediation, as well as asset grouping and management functionalities. AppCheck also offers an application programming interface (API) as well as integrations with systems including Atlassian JIRA, both of which offer methods to integrate into external systems for customers with existing asset management or risk management systems.

The AppCheck web application vulnerability scanner has a full native understanding of web application logic, including Single Page Applications (SPAs), and renders and evaluates them in the same way as a user’s web browser does.

The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.

 

About AppCheck

AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations’ websites, applications, network, and cloud infrastructure. AppCheck are authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).

 

Additional Information

As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please contact us: info@localhost

Get started with AppCheck

No software to download or install.

Contact us or call us 0113 887 8380

