How can I improve my vulnerability management programme?

Vulnerability management is at the core of any effective cybersecurity programme. However, vulnerability management efforts within an organisation are very often limited to a game of “whack-a-mole,” with teams acting purely reactively in response to detected vulnerabilities. At times, it can be hard to know whether vulnerability management efforts are proving effective or not: is the programme successful if there haven’t been any reported breaches? Or is it failing if there are unfixed vulnerabilities still in place? And what can be done to improve performance?

In this blog post we don’t provide a list of “quick tips” for how to improve a vulnerability management programme, since these will vary from one organisation to another, based on how vulnerability management is currently performed. Instead, we lay out how to go about assessing your current programme, and where to look for guidance on how to take it to the next level based on what you find.

 

Why is there sometimes uncertainty about how to improve vulnerability management?

Vulnerability management – and cybersecurity as a whole – is a relatively young area of specialisation, and one that until recently has lacked much in the way of formal education and training. Many universities have only recently begun to offer cybersecurity programmes at undergraduate level, and those typically focus on areas such as computer forensics rather than vulnerability management. Likewise, typical Computer Science undergraduate programmes contain little or no dedicated coverage of vulnerability management practices.

At the level of continuing professional development for those already within the industry, cybersecurity certification is relatively scattered, and few offerings relate specifically and exclusively to vulnerability management. The certifications that do exist tend to focus on areas such as security management (CISSP, CISM), auditing (CISA), ethical hacking and penetration testing (CEH, OSCP, GPEN), or entry-level security practices (Security+, GSEC).

It is no surprise therefore that vulnerability management efforts can sometimes be inconsistent, ad hoc, or unduly steered by factors such as vendor tooling rather than by strategy. Even when there is clear consensus within a security team as to how and why the vulnerability management programme should be expanded or improved, budget discussions with senior management and the board can be hampered by an inability to back up proposals with clear, objective justifications.

 

What problems can this lead to?

Many vulnerability management programmes are stalled, with tasking focused purely on tactical responses to detected vulnerabilities. Teams can be sucked into “firefighting” and come to believe that this in itself represents the pinnacle of vulnerability management practice.

The rapid rate of change in attack methods and techniques, and the seemingly endless publication of major new vulnerabilities, can mean that the work of keeping technical environments secure becomes focused purely and simply on reacting to new threats.

Without a strategic approach, or an awareness of how to improve vulnerability management practices beyond basic scan-and-patch cycles, vulnerability management can become a burden on development and operations teams. Members of the security team can become disillusioned by the apparent lack of progress as they fight new fires every day, and communications with other technology teams can sour under apparently never-ending demands for vulnerability remediation.

 

How can you tell if your current vulnerability management programme is effective?

At the same time as frictions start to mount, both within the team itself and with other areas of the business, the security team can find itself challenged by the board or senior management to evidence how vulnerability management is performing – whether to understand the risk to the business, to challenge the security team’s focus and effectiveness, or to resolve reports from technical teams that burdensome remediation requests are leading to missed delivery deadlines.

Security teams will often be aware that there are issues with the current vulnerability management practices, and actively want to improve them, but can be unaware of how to take the next step – or of where to turn to get appropriate guidance on what “good” looks like.

Often teams will begin to produce periodic metrics such as “number of vulnerabilities patched,” either to demonstrate the effectiveness of the security function or, conversely, to highlight technical units not keeping on top of patching. Very often these metrics, whilst well intended, can be counter-productive. Teams can begin playing a “numbers game,” trying to patch every single vulnerability, instead of focusing on the critical assets that are most at risk and most important to the organisation.
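
To illustrate the difference, here is a minimal Python sketch contrasting a raw patch count with a simple risk-weighted view. The asset names, severity scores, and criticality weightings are entirely hypothetical, and any real programme would use its own weighting scheme.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str              # hypothetical asset name
    severity: float         # e.g. CVSS base score, 0.0-10.0
    asset_criticality: int  # illustrative business weighting, 1 (low) to 5 (critical)
    remediated: bool

findings = [
    Finding("marketing-blog", 5.3, 1, True),
    Finding("marketing-blog", 4.1, 1, True),
    Finding("payments-api",   9.8, 5, False),
    Finding("hr-portal",      7.5, 3, False),
]

# The "numbers game": the raw patch count looks healthy...
patched = sum(f.remediated for f in findings)
print(f"Vulnerabilities patched: {patched}/{len(findings)}")

# ...but a risk-weighted view shows most of the exposure remains untouched.
def risk(f: Finding) -> float:
    return f.severity * f.asset_criticality

total_risk = sum(risk(f) for f in findings)
remaining_risk = sum(risk(f) for f in findings if not f.remediated)
print(f"Risk remaining: {remaining_risk / total_risk:.0%} of total")
```

Here two of the four findings are patched (a 50% “success rate”), yet almost 90% of the weighted risk is still outstanding because the unpatched findings sit on the most critical assets.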

 

How do you know what “good” looks like?

In order to know what steps to take to improve a vulnerability management programme, two things are needed: to know where you are currently, and to know where you need to be. Once you have these two pieces of information, you can produce a clear roadmap that provides strategic direction towards improved performance.

The question is, how best to determine “where you need to be”? The first port of call for many teams and organisations is to look to best practice, either by aligning with a formal standard such as those produced by ISO/IEC, or by trying to achieve an accreditation or certification. This is often paired with an audit, in which the current position is investigated against the required standard and a “gap analysis” produced of what needs to be done to move from the current position to an accredited or compliant one.

The problems with this approach are manifold. Firstly, by auditing a process and trying to achieve an accreditation or certification, you have – perhaps inadvertently – committed to a binary outcome: either you gain the accreditation, or you do not. Failing to achieve it may be seen as a personal or team failure with consequences, so there can be pressure to do everything that the accreditation requires, and to do it immediately.

This can lead to measures being implemented unquestioningly and out of line with business risk. They may not be directly relevant to the organisation, and the pressure to implement them quickly can result in rushed decisions and compromised, “paper-thin” solutions that gain accreditation without genuinely addressing risk as intended. Teams can also become swamped by attempting massive structural and organisational changes overnight, compressing many stages of progress into one and leading to failed implementations, poor understanding, and team burnout.

A better approach in many situations is to look at a less frequently considered option known as “maturity models,” which provide a less aggressive and more progressive form of guidance and assessment.

 

What is a maturity model?

“Maturity models” measure an organisation’s capacity for continuous improvement in a particular discipline, by assessing a range of factors such as the people, culture, processes, and technology involved.

Critically, a maturity model isn’t simply a benchmark of how well you’re performing a function right now, but of whether you’re constantly seeking to assess and improve that function rather than letting it stagnate – and it provides guidance on how to take those efforts to the next level. It is assumed that any team or organisation in the field in question will pass through all the levels in sequence as it becomes more capable.

 

What are the advantages of maturity models?

Critically, in contrast to standards and accreditations, maturity models offer prescriptive guidance on how to improve, with ongoing “guard rails” to guide you along that journey, rather than a simple “all or nothing” pass/fail bar of assessment. No matter what stage of maturity you’re at, you’ll be able to say “I’m at this level” – rather than being a “failure” because you don’t meet a certain standard, you have simply located your current position on the path and can identify the next steps to take to continue improving. Once you’ve conducted an assessment to determine your current maturity level, you use the level above your own as a “benchmark” to prioritise which capabilities to implement next.

Since there is no “certification,” there is no gamification or rush to accreditation, and it is OK not to be perfect: every organisation has passed through the level you are at now on its own journey. Since the recommendations for each level are drawn from best practice, a maturity model also makes it easier to communicate security performance and roadmap requirements to senior management, and to argue for investment on the basis of industry-recommended best practice, ensuring that security investment is optimised at a level appropriate to current organisational maturity, and in the right areas.

This helps greatly in ensuring that a security programme is balanced, and is not focused too heavily (or too lightly) on any one area due to personnel bias, historical decisions, personal experience or preference, or vendor selection.

 

How do maturity models work?

Maturity models allow you to assess processes rather than outcomes, against a scale that varies depending on the model selected but typically comprises increasing levels of maturity such as:

  1. Initial: Beginner stage
  2. Repeatable: Proficient stage
  3. Defined: Savvy stage
  4. Managed: Expert stage
  5. Optimizing: Mastery stage

Maturity models provide a useful roadmap and framework for strategic improvement. If you assess your vulnerability management programme against a maturity model and find that you are at level 2 out of 5, for example, the model will focus your efforts on the practices seen in organisations that have gained the “next level” of maturity (3), and ignore for the moment the requirements of the higher levels 4 and 5, since these are likely not realistically achievable in the short term. The model therefore acts as a guide to creating a programme roadmap for improvement, minimising the chance of being overwhelmed.
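
As a minimal sketch of how that “next level up” focus might be expressed in practice, the Python snippet below encodes some illustrative level names and practice lists; these are invented for the example and are not drawn from any specific maturity model.

```python
# Illustrative maturity levels; real models define their own names and criteria.
MATURITY_LEVELS = {1: "Initial", 2: "Repeatable", 3: "Defined", 4: "Managed", 5: "Optimizing"}

# Hypothetical practices expected at each level, purely for illustration.
PRACTICES_BY_LEVEL = {
    2: ["Scheduled scanning", "Basic patch cycle"],
    3: ["Documented remediation SLAs", "Risk-based prioritisation", "Asset inventory coverage targets"],
    4: ["Measured remediation performance", "Trend reporting to management"],
    5: ["Continuous process optimisation"],
}

def next_level_roadmap(current_level: int) -> list[str]:
    """Return the practices to focus on next: those one level above the
    currently assessed level, ignoring higher levels for now."""
    target = min(current_level + 1, max(MATURITY_LEVELS))
    return PRACTICES_BY_LEVEL.get(target, [])

current = 2
print(f"Assessed at level {current} ({MATURITY_LEVELS[current]}).")
print("Roadmap focus for the next level:")
for practice in next_level_roadmap(current):
    print(f"  - {practice}")
```

The point is not the specific practices listed, but that the roadmap is deliberately limited to the next level up rather than everything the highest levels demand.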

 

What maturity models are there for vulnerability management?

In the 1980s, there were few standard “best practice” approaches defined within IT, and as a result the growth in computing was accompanied by frequent project failures: it was common for projects to run massively over budget and late, or simply never be delivered at all. In an effort to determine why this was occurring and how to prevent it, work at the Software Engineering Institute (SEI) at Carnegie Mellon University in the US, prompted by the needs of the United States Air Force (USAF), began to formalise a process maturity framework that could be used to assess organisations and teams. The Capability Maturity Model (CMM) that resulted, published between 1991 and 1993, and its successor the Capability Maturity Model Integration (CMMI) have been used by many organisations since.

However, the CMM and CMMI are general process maturity models, generic enough to fit many areas within information technology. Three later models that provide more specific assessments for cybersecurity, and vulnerability management in particular, are the SANS Vulnerability Management Maturity Model; the Cybersecurity Capability Maturity Model (C2M2), published by the US Department of Energy; and the NIST Cybersecurity Framework (CSF).

Each of these is worth exploring to find which might be the best fit for your team. A model such as the CMMI Cyber Security Capability Assessment is of particular note because it allows teams to define their risk in each area, and this risk profile then establishes an initial target maturity for each capability area: it is therefore more granular than a single maturity level for the function or organisation as a whole, and it allows the roadmap to be prioritised based on risk.
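
As a rough sketch of that per-capability idea, the snippet below assigns each capability area an assessed risk and derives a target maturity from it; the capability areas, risk ratings, and the risk-to-target mapping are purely illustrative and are not taken from the CMMI assessment itself.

```python
# Hypothetical capability areas, each with an assessed risk (1 low - 5 high)
# and a current maturity level (1-5).
capabilities = {
    "Asset management":       {"risk": 4, "current": 2},
    "Vulnerability scanning": {"risk": 3, "current": 3},
    "Remediation management": {"risk": 5, "current": 2},
    "Threat intelligence":    {"risk": 2, "current": 1},
}

def target_maturity(risk: int) -> int:
    """Illustrative mapping: higher-risk capability areas warrant a higher target maturity."""
    return {1: 2, 2: 2, 3: 3, 4: 4, 5: 4}[risk]

# Prioritise the roadmap by the size of the maturity gap, then by risk.
gaps = []
for name, c in capabilities.items():
    target = target_maturity(c["risk"])
    gaps.append((target - c["current"], c["risk"], name, c["current"], target))

for gap, risk, name, current, target in sorted(gaps, reverse=True):
    if gap > 0:
        print(f"{name}: level {current} -> target {target} (assessed risk {risk})")
```

The output simply ranks the capability areas with the largest gap between current and target maturity, with higher-risk areas first among ties, giving a per-capability roadmap rather than a single organisation-wide level.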

Whichever model you pick, hopefully this article has highlighted a potential direction for you to take when looking to improve your vulnerability management programme.

 

How can AppCheck Help?

AppCheck helps you provide assurance across your entire organisation’s security footprint. AppCheck performs comprehensive checks for a massive range of web application vulnerabilities from first principles to detect vulnerabilities in in-house application code.

AppCheck provides both vulnerability prioritisation for remediation and asset grouping and management functionality. AppCheck also offers an application programming interface (API) as well as integrations with systems including Atlassian JIRA, both of which offer methods to integrate with external systems for customers with existing asset management or risk management platforms.

The AppCheck web application vulnerability scanner has a full native understanding of web application logic, including Single Page Applications (SPAs), and renders and evaluates them in the exact same way as a user web browser does.

The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.

 

About AppCheck

AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations’ websites, applications, network, and cloud infrastructure. AppCheck is authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).

 

Additional Information

As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please contact us: info@appcheck-ng.com
