How to maximise developer buy-in to security

Organisations often find themselves in a position where friction between security and development teams expresses itself as a lack of effective collaboration. This can cause both “hard” issues, such as inefficient development processes and a failure to address security effectively (leaving an organisation potentially vulnerable to attack), and “soft” issues, such as dissatisfied or frustrated employees.

In this blog post we look at the challenges that organisations face in integrating security effectively within development processes, and how these challenges can be met head-on to encourage developer buy-in to security concerns.


Why is there sometimes friction between security and development teams?

There are organisations in which security can exist relatively happily as a standalone function that is both responsible and accountable for ensuring organisational security, and in which the security team is largely able to exercise autonomous control over any decision relating to the procurement and operation of security controls. However, within many organisations, the remit and role of the security team is somewhat more nuanced and depends upon the efforts of other teams to deliver the security programme’s objectives.

One common example of this second scenario is an organisation that operates a web application that is developed and maintained by an in-house development team. Anyone who has worked within cyber security will know that there can sometimes be friction between security teams (or team members) and those tasked with development. The essential problem occurs when development teams see their goal as the delivery of business requirements – new product functionality deployed as code – and security comes to be seen culturally as a hindrance that introduces unnecessary hurdles and delays.

At best, this can lead to a degree of tension between the two teams, hindering effective and productive collaboration and fostering an inharmonious working environment for all involved. At worst, it can lead to the two teams developing an antagonistic relationship, with development teams attempting to circumvent the increasingly restrictive and inflexible checkpoints and processes that security introduces to try to force security controls upon them.

We will look at how and why this disconnect can sometimes occur and escalate, and how it can be avoided so that the two teams are able to collaborate more effectively with shared purpose.


Why might this disconnect occur?


Accountability Disconnect

Within project management, the concept of a RACI matrix is often used to define project responsibilities: those who are responsible for delivery of the work (i.e., who do the actual work and complete the task) are defined separately from those who are accountable (who own the task or process and must sign off that the objective has been met). The fundamental disconnect between security and development teams occurs when these two functions are completely discrete: security teams are held to account for any security shortcomings (but are powerless to address them directly), while development teams are the only party able to directly remediate any detected security issues via bugfixes to code, yet lack any accountability or motivation to do so.

This disconnect can be at best neutral, but it can also be actively counter-productive: within management theory, this is known as the “principal-agent problem” or “agency dilemma” – the situation in which one party (the agent) is able to make decisions or take actions on behalf of, or that impact, another party (the principal), but the two parties’ incentives differ.


Security “Silos”

This disconnect can be exacerbated when a security team operates as an isolated “functional silo” within a business. This can often occur when security teams are not incorporated into the development teams and functions but are an explicitly separate function, much like Human Resources or Finance, with different reporting lines and different objectives. The concept of a silo mentality is often used to describe organisations in which this is the case, with different business units characterised by divergent and often conflicting goals and objectives.


Conflicting Incentives

The problem of differing objectives is made worse the stronger the incentives each team is given to deliver on its own divergent objectives. Development teams are typically focused primarily on feature delivery. Within development methodologies such as scrum, this is explicitly recognised via the use of sprints, in which the focus is on delivering key features within a given timeframe. Scrum reviews and retrospectives can often focus on ensuring that delivery dates are met and that development velocity is maintained – that is, on enabling the maximum number of deliverables to be deployed to production within the sprint term. It is common to see teams tracking metrics such as the number of tickets delivered per sprint, incentivising teams to deliver functionality quickly above all other concerns, such as security.

In this worst-case scenario, the use of incentives that are blinkered or overly focused on one metric without consideration for others can mean that individuals and teams can be motivated to act in their own best interests, but contrary to those of their principals or the remit of other teams. The greater the degree of specialization or separation between the two teams, and the greater the number of different incentive mechanisms, the greater the risk that this issue will arise.


Adverse Selection

The use of metrics, whilst well intentioned in aiming to provide a quantitative and objective assessment of team productivity, can also lead to adverse selection if the metrics are not carefully chosen. In this scenario, even if security vulnerabilities are considered, a team overly focused on delivery velocity may be incentivised to select “quick fix” tickets for remediation – fixing what is easy to fix quickly rather than the items that represent the highest organisational risk. In this instance, the incentive isn’t simply in conflict with another team’s; it is directly harmful to the organisation’s best interests.


Lack of Agency & The Iron Law

Even if developers are aware of and concerned with security vulnerabilities, if they are not able to exercise any autonomy as individuals or a team on the selection and prioritization of development tickets or tasks, then they may simply lack the agency or freedom to dedicate time to the resolution or prevention of vulnerabilities.

This is typically less of an issue within teams that subscribe to development methodologies based upon collaborative decision-making and review, but it can often be the case within large enterprises that operate extremely rigid hierarchies and assignments of responsibility. The phrase the “iron law of oligarchy” describes the observation in political and management science that organisations, however democratic in intent, tend to develop increasingly oligarchical (rather than democratic) management, with decisions pushed down upon people. The term “stovepipe organisation” is used to describe a business that restricts the flow of information to up-down lines of control, inhibiting cross-organisational communication between teams.

Methodologies such as scrum explicitly encourage the use of collaborative decision-making in order to deliver greater agency to developers and development teams, in collaboration with product owners and other parties such as security teams.


Implied Criticism

If security teams are an external party to the central development process, then their perspective is “outside-in”, and their involvement can often be seen as antagonistic by those within development teams. Every security issue reported can be read as an implicit criticism – a suggestion of failure on the part of the development team – especially if prescriptive statements are made regarding remediation details and timescales. Even information provided in good faith by security teams, such as how best to remediate a security vulnerability, can be taken by development teams as both criticism and interference.


Security Comprehension

The issue of security teams making overly prescriptive recommendations on remediation to development teams can be made worse if the security teams believe (rightly or wrongly) that the development teams are not sufficiently familiar with security concerns to make remediation decisions in line with best practice.

This distrust and lack of confidence can stem from a security team believing, perhaps based on past experience, that development teams lack the necessary comprehension of security concerns and best practices to make sound remediation decisions.


Visibility & Asymmetric Information

An issue closely related to the existence of functional silos, the lack of agency, and the lack of security comprehension is that of a “visibility horizon” for security issues. If security teams maintain a database of known security vulnerabilities (such as the collated output of security scans and reports) but either do not provide access to it, or else provide only filtered portions of the data on request, then a problem of split horizons can occur: development teams have no direct access to vulnerability data and are simply parcelled out discrete details about individual vulnerabilities on a case-by-case basis.

Although it may initially appear that this is a sound security practice, in line with the principles of need-to-know and least privilege, from a cultural and management perspective it leads to a situation in which the two parties (security and development) have not only different interests but now also asymmetric information. It is well established in management science and economics that situations in which two parties hold asymmetric information can lead to poor decision-making and sub-optimal outcomes. The Business Dictionary describes a mindset in which certain departments or sectors do not wish to share information with others in the same company as a fundamental determinant of the silo mentality that we outlined above.


Development as a Risk

On the part of security teams, it is possible to view the development function as either a risk or an opportunity. When viewed as an opportunity, integration of security concerns directly into the development process can ensure that development becomes a positive asset within the security programme. If development is seen as a risk that must be constrained, however, then security teams may seek to impose increasingly arduous restrictions upon the development process, attempting to hold it in check via outside monitoring due to a lack of confidence in the development team itself.


Right-Shifted Security, Scope Creep & Overrun

The software development process is often modelled as an SDLC or software development lifecycle, with progressive stages from inception and business analysis through to architecture and design, development, testing, and deployment followed by operational maintenance and monitoring.

When security is “right-shifted”, it engages with development tickets or projects only at the very end of the process – such as immediately prior to deployment. If an application does not meet quality standards, or otherwise fails to meet security requirements, it is sent back into development for additional changes, causing significant bottlenecks.

This is often characterized by a security team that is seen as being an operational obstacle to deployment, a cost centre, and a team that “always says no.” It can lead to frustration, missed deployment deadlines, and even attempts to sidestep security team involvement completely.

If security is not involved early in the development process but is instead seen to drop its non-functional requirements on a development team only after the product has been developed without them, then this can be seen as a form of “scope creep” – moving the goalposts after the fact.


What principles can improve developer buy-in to security?


Agency & Integration

It is important that developers and development teams feel that they have the agency to prioritise and deliver on development tasks – including those needed for remediation of security vulnerabilities. This generally works best if the involvement of security is advisory and “inside-out” rather than prescriptive and “outside-in”. In the best-case scenario, an informed development team with a high-priority vulnerability on its backlog would proactively “pull” the ticket for remediation (as with any other task) rather than having it “pushed” upon them. In a squad-based or scrum-based development environment, this can involve security engaging with development teams directly – for example, attending developer stand-ups and making a positive case for the prioritisation of a given ticket.



Education & Comprehension

Mutual trust between security and development teams can only be assured if each is confident that the other understands its processes, requirements, and concerns. Developer security training needs to be managed carefully so as not to be seen as another form of implied criticism, but well-designed training can ensure that developers are less likely to introduce weaknesses, and naturally understand the consequences if a given vulnerability were to be exploited. Effective education can therefore make developers advocates for security, rather than standing in opposition to it.


Trust & Openness

It is important that security and development teams each have visibility of and access to the other’s information, so that the other team’s perspective is understood rather than alien and indiscernible. By understanding the challenges and workload facing the other team, its perspective can be better appreciated. This requires an environment in which there is no “split horizon” of visibility across development workloads and backlogs as well as security vulnerability reports. Aligning visibility of information helps to prevent a “split brain” situation in which the motivations of the other team are opaque or misunderstood.


Reward & Motivation

Both teams and individuals must feel motivated to address security concerns, and there are various means of incentivising the remediation of security issues. The reward can take many forms: the intangible satisfaction of resolving an issue, explicit reward and recognition among peers, or even financial motivation.



Collaboration

Lastly, the relationship between security and development teams must be collaborative in nature. There are many ways this can be achieved, including embedding security directly within development teams, but the key pillars of collaboration are the early involvement of security in the development process (whatever form that takes) and a flow of information that is lateral and democratic rather than hierarchical and vertical in nature.


How can these principles be applied in practice?

There is certainly no “one size fits all” approach to ensuring effective and productive collaboration between security and development teams. Every organisation faces unique challenges and will need to consider how best to address them within its own environment and constraints. However, we have listed below what we believe to be relatively universal ways in which security can ensure buy-in from development teams and overcome the most commonly encountered hurdles:


Provide Developer Security Training

Delivering security training to developers has numerous benefits: most obviously, it can ensure that developers are aware of certain risks and less likely to introduce security vulnerabilities into their code as a result. However, there are less immediately obvious benefits too: the more that developers understand security concerns, the greater the shift in perspective towards security being something that they can themselves consider when making decisions. Knowing about potential security vulnerabilities and how to avoid them is a way for developers to improve their craft, and one of the differences between a junior developer and a more senior one. Developers often find that they get the “bug” for security, and expend time and effort of their own accord learning more about security issues simply because it is challenging or represents new knowledge and opportunities within their coding.

Delivering security training effectively is hard. It is extremely easy to pitch security training at the wrong level for technical teams and lose developer engagement if it is seen that you are attempting to “teach your granny to suck eggs”. Some of the most effective training methods we have seen involve non-traditional, self-paced approaches, such as practical security challenges and simulations, rather than class-based or video instruction. This allows developers to learn whilst engaged in what they do best, and carries much less risk of disengagement or a misjudged pace or level of training delivery.

Challenges can involve tasking developers with “capture the flag” (CTF) puzzles, which can be accessed freely online. Rather than setting out the training as a series of pieces of knowledge, the training is presented as a challenge, and developers almost inadvertently absorb security-related knowledge whilst engaged in the task. CTF competitions simulate real-world scenarios in a gamified platform, enabling developers to learn new skills, gain firsthand experience of cyber security, and sharpen the tools they have acquired during any previous training.


Incentivise Security Concerns

Security training delivered as a challenge to be overcome, rather than a series of information items to be absorbed, is one form of gamification – which is itself just one way of incentivising development teams to be concerned with security. It is possible to set artificial incentives, such as taking teams out for a drink if they hit a set number of days without a security issue being discovered, but it is also possible to incentivise teams internally.

One example might be to run an internal, employee-only “bug bounty” programme in which developers are recognised for finding and reporting vulnerabilities in in-house code themselves. The reward can be financial, such as awarding vouchers or tokens to the person who finds the most issues in a given period, or can simply use gamification to provide a leaderboard and the associated kudos among peers.
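As a toy illustration, the scoring behind such a leaderboard can be as simple as tallying severity-weighted points per reporter. The reporter names, point values, and severity scale below are invented for the sketch, not a recommendation of any particular scheme:

```python
# Toy sketch of an internal bug-bounty leaderboard.
# Points per accepted report, weighted by severity (assumed scale).
from collections import Counter

POINTS = {"low": 1, "medium": 3, "high": 5, "critical": 8}

# (reporter, severity) pairs for accepted reports in the period
reports = [
    ("asha", "high"),
    ("ben", "low"),
    ("asha", "medium"),
    ("cai", "critical"),
]

def leaderboard(reports):
    """Tally severity-weighted points per reporter, highest total first."""
    scores = Counter()
    for reporter, severity in reports:
        scores[reporter] += POINTS[severity]
    return scores.most_common()

print(leaderboard(reports))
```

Publishing such a table each sprint or quarter provides the peer recognition described above without requiring any financial outlay.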


Use Intelligent Metrics

Where metrics are used to track development team productivity or excellence, it is important to ensure that they are carefully considered so that they drive the desired behaviour and do not inadvertently incentivise unwanted behaviour. The example of simply counting the “number of tickets delivered” in a development sprint is often seen: it is well-intentioned but can mean that development teams prioritise tickets that are trivial and quick to deliver rather than those that are most important or urgent. Similarly, if metrics are established for development teams, it is important to ensure that these don’t actively counter-incentivise work to remediate security vulnerabilities.

As Paul Holland wrote in Computer Weekly, “CISOs need to realise that developers should be granted time to develop securely and not judge their performance solely by the time to build.”
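To illustrate the contrast, the sketch below compares the naive “tickets per sprint” count with a hypothetical risk-weighted alternative that credits the amount of risk retired. The ticket fields and CVSS-style scores are invented for illustration, not any real tracker’s schema:

```python
# Sketch: scoring a sprint by risk retired rather than raw ticket count.
# "risk_retired" is an assumed CVSS-style score for illustration only.

tickets = [
    {"id": "APP-101", "type": "feature",  "risk_retired": 0.0},
    {"id": "APP-102", "type": "security", "risk_retired": 9.1},  # e.g. injection fix
    {"id": "APP-103", "type": "security", "risk_retired": 3.1},  # minor header tweak
]

def sprint_metrics(tickets):
    """Return the naive ticket count alongside a risk-weighted alternative."""
    naive = len(tickets)  # rewards quick, trivial fixes equally with hard ones
    risk_weighted = sum(t["risk_retired"] for t in tickets)
    return naive, risk_weighted

naive, weighted = sprint_metrics(tickets)
print(f"tickets delivered: {naive}, risk retired: {weighted:.1f}")
```

Under the naive metric the three tickets are interchangeable; under the risk-weighted one, the injection fix dominates, which is the behaviour the metric is meant to reward.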


Embed Security Within Development

One approach seen in recent years is the attempt to break down organisational silos by creating cross-functional teams within IT departments. Characterised by names such as “DevOps” (Development and Operations) or, more recently, “SecDevOps” (Security, Development and Operations), these aim to bring multiple disciplines together into a single squad or team. The idea is that not only are the concerns of all teams represented at every stage, but cross-disciplinary skills can speed up development by removing hand-off points and blockers. Embedding security team members in development squads may not be the optimal approach for every organisation, but it is well worth considering whether it could work within your own.


Shift Security Left

One of the benefits of a SecDevOps approach, or of embedding security within development teams and processes, is that security becomes a concern earlier in the development process, rather than an afterthought introduced at a later stage and seen simply as a barrier to code deployment that must be overcome.

Regardless of whether a SecDevOps approach is undertaken, the general principle of “shifting security left” within the development lifecycle is a positive one, however it is implemented. It is most effective when developers are empowered to use the same tools and benchmarks as are used later in the development lifecycle. For example, if vulnerability scanning is to be run against the production instance, then integrating scanning into CI/CD pipelines, or even scanning test or development instances, can help provide assurance that the code will pass any later quality gates or security checks prior to deployment.
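As a sketch of what such a pipeline check might look like, the following gates a set of scanner findings on severity and signals failure via a non-zero exit code. The findings structure and severity scale are assumptions for illustration, not any real scanner’s output format:

```python
# Illustrative CI/CD security gate: block the pipeline stage if any finding
# meets or exceeds a severity threshold. The findings format is invented.

SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return the findings severe enough to block deployment."""
    threshold = SEVERITY_ORDER[fail_at]
    return [f for f in findings if SEVERITY_ORDER[f["severity"]] >= threshold]

# Example findings, e.g. parsed from a scan-results artefact in the pipeline
findings = [
    {"severity": "low", "title": "Missing security header"},
    {"severity": "critical", "title": "SQL injection in login form"},
]

blockers = gate(findings)
for f in blockers:
    print(f"BLOCKER [{f['severity']}] {f['title']}")

# In a real pipeline script, exiting with this code would fail the stage:
exit_code = 1 if blockers else 0
```

Running the same gate against development and test instances as against production means developers see the blocking findings long before the final pre-deployment check.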

If security team members are not embedded directly within development squads to function as stakeholders, then developers themselves must be aware of security at all stages of the development lifecycle. This does require a high degree of familiarity with security risks and issues and underlines the importance of security training and education that extends well beyond the “awareness” level of typical training and into the “practitioner” level. Developers working without embedded security team members must be sufficiently familiar with security practices to consistently account for security risks and take appropriate, informed action to mitigate them before code is pushed to production.


Explicitly Budget for Security

It is important to recognise that incorporating security can add some overhead to development timescales. This can express itself both in the time taken to develop and deploy an individual feature, and in an acknowledgement that the overall volume of work delivered by a team or squad may reduce to some extent, particularly if developers themselves are tasked with performing some security functions. While shifting security left results in a more efficient process overall, time must still be allocated for tasks aimed at preventing the introduction of security defects, such as code reviews.

Frustration can be avoided by making this expectation known up-front and by ensuring that there is widespread support for the approach. Some organisations “budget” or pre-allocate up to 10% of development time to security remediation rather than new feature development, so that when urgent security remediation lands that requires development work, it does not have to compete with or detract from existing commitments and deliverables. Security can swiftly be pushed aside if it is presented as the reason a promised feature was not delivered on time.
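The arithmetic of such a budget is simple. The sketch below reserves an assumed 10% of a sprint’s developer-days for security work; the team size and sprint length are invented for illustration:

```python
# Sketch of a sprint capacity budget with a pre-allocated security reserve.
# Team size, sprint length, and the 10% reserve are illustrative assumptions.

def sprint_budget(developers, sprint_days, security_reserve=0.10):
    """Split sprint capacity (developer-days) between features and security."""
    capacity = developers * sprint_days
    security = capacity * security_reserve
    return capacity - security, security

features, security = sprint_budget(developers=6, sprint_days=10)
print(f"feature work: {features:.0f} dev-days, security reserve: {security:.0f} dev-days")
```

Because the reserve is carved out before sprint planning, urgent remediation consumes pre-committed capacity rather than displacing a promised feature.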


Merge Horizons & Viewpoints

Finally, it is important to consider visibility horizons and try to ensure that these are common to all teams, so that all parties are working from a common set of information and are more likely to share a common viewpoint. This can mean sharing visibility not just of current security vulnerability backlogs, but of the exact security assessment methodology used in any late (right-shifted) security gates or assessments. Rather than springing surprises on development teams, being open about the security checkpoint process allows development teams to prepare, and to screen code for various vulnerabilities before it reaches any final security gates prior to deployment.


How can AppCheck Help?

AppCheck can help you gain assurance across your entire organisation’s security footprint. AppCheck performs comprehensive checks for a massive range of web application vulnerabilities from first principles, detecting vulnerabilities in in-house application code.

AppCheck provides functionality for both vulnerability prioritisation in remediation and asset grouping and management. AppCheck also offers an application programming interface (API) as well as integrations with systems including Atlassian JIRA, both of which offer methods of integration for customers with existing asset management or risk management systems.

The AppCheck web application vulnerability scanner has a full native understanding of web application logic, including Single Page Applications (SPAs), and renders and evaluates them in the same way that a user’s web browser does.
The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.


About AppCheck

AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations’ websites, applications, networks, and cloud infrastructure. AppCheck is authorised by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).


Additional Information

As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please contact us:

Get started with AppCheck

No software to download or install.

Contact us or call us 0113 887 8380

Start your free trial
