“Daisy chaining,” “blended threats,” and “pivoting” are three closely related terms that describe how cybersecurity attackers can combine multiple weaknesses or exploits into a single, potentially crippling attack. The techniques are widely used in the real-world attacks launched against organisations daily, yet the terms remain relatively poorly understood and under-used within security circles compared to other fundamental concepts. Understanding the three – and the subtle differences between them – is critical to developing a richer understanding of how attackers deliver attacks in the real world, and hence how those attacks can best be protected against.
In this blog post we look at what daisy chaining, blended threats and pivoting mean within a cybersecurity context, the differences between them, and what they can tell us about how attackers go about exploiting weaknesses in an organisation’s attack surface. We wrap up by summarising how this knowledge can be used to better protect against the potentially devastating attacks these techniques can deliver.
The majority of security teams have a relatively strong grasp of managing individual vulnerabilities within a vulnerability management programme. Even relatively immature programmes – within smaller organisations, or those just starting to build out their security function – tend to cover the core vulnerability management lifecycle of identifying, prioritising and remediating vulnerabilities. This will often include rudimentary risk management in the form of prioritising remediation at the level of individual vulnerabilities, perhaps based on their severity, impact or CVSS score.
Unlike attackers, security teams face competing priorities and often limited time and resources, and must necessarily prioritise threats, so this approach is not unreasonable on the face of it. It allows for periodic cycles of vulnerability detection and remediation, as well as easily tracked metrics such as outstanding vulnerability counts. Vulnerabilities rated lower in criticality or severity might be backlogged to fix at a later date, and the busier or more under-resourced a team is, the more likely it becomes that these lower-criticality vulnerabilities remain unaddressed indefinitely. We’ll take a look below at whether this approach is safe, as well as optimal, within a broader risk context.
The approach outlined above might be appealing. It certainly appears risk-based, in that it prioritises the remediation of higher-severity findings, and it lends itself to the production of trackable remediation metrics for consumption by management and for gauging the effectiveness of the vulnerability management programme.
However, there is a major issue with such an approach that makes it far from optimal, especially as a security team matures: the wider context of each vulnerability is not considered. A “first step” towards risk-based vulnerability management often involves analysing the risk posed by a vulnerability by considering not only its criticality or severity, but also the impact of the specific host or system on which the vulnerability is located. A SQL injection weakness, for example, might be considered higher risk if the database receives queries from a public-facing web application rather than from an internal system only.
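As a very rough illustration of this “first step”, the sketch below (in Python, with entirely hypothetical identifiers, weights and scores – not a prescribed scoring model) weights a raw CVSS score by the criticality of the asset on which the vulnerability sits:

```python
# A rough, hypothetical sketch of contextual risk scoring: the same raw
# CVSS score is weighted by the criticality of the asset it sits on.
# All identifiers, weights and scores are illustrative placeholders.
ASSET_WEIGHTS = {
    "public-web-app": 1.5,  # internet-facing, high business impact
    "internal-tool": 0.7,   # reachable from the internal network only
}

vulnerabilities = [
    {"id": "VULN-001", "cvss": 6.5, "asset": "public-web-app"},
    {"id": "VULN-002", "cvss": 8.1, "asset": "internal-tool"},
]

for vuln in vulnerabilities:
    vuln["contextual_risk"] = vuln["cvss"] * ASSET_WEIGHTS[vuln["asset"]]

# Remediation order now reflects context, not raw severity alone:
# VULN-001 (9.75) outranks the nominally "more severe" VULN-002 (5.67).
for vuln in sorted(vulnerabilities, key=lambda v: v["contextual_risk"], reverse=True):
    print(f"{vuln['id']}: contextual risk {vuln['contextual_risk']:.2f}")
```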
However, this step alone still misses the mark compared to true risk-based vulnerability management. The problem is that this approach is “defender-focused” and optimised for organisational vulnerability management processes. Critically, it is not “attacker-focused.” Attackers do not consider individual vulnerabilities alone and out of context. Instead, they have a goal in mind, and are interested only in what they need to exploit along the way in order to achieve it. Imagine the analogy of a bank robber planning a heist. He might plan to dig a tunnel under the bank from a neighbouring property, silence the alarm by cutting its wires, and break into the safe using a welding torch. He is not looking at how to overcome a single obstacle; he is looking at how individual weaknesses can be used to overcome a string of challenges.
Similarly, in cybersecurity a full system breach is rarely reliant upon the exploitation of a single vulnerability. Rather, threat actors will assess and seek to exploit multiple vulnerabilities, in multiple systems, and via multiple means within the context of a single cyber-attack. If this possibility is not considered by security teams focused on remediation by criticality alone, then an attack path may be opened up by combining seemingly innocuous individual vulnerabilities to devastating effect.
Advanced Persistent Threat (APT) groups and skilled attackers have for years been conducting highly effective attacks via these exact methods, combining multiple lower-risk vulnerabilities into devastating attacks. To understand precisely how this occurs, we will look at the concepts of blended threats, pivot attacks and daisy chaining. Together, these concepts provide insight into how attacks are conducted in the real world, giving security teams the “attacker-eye view” of security vulnerabilities needed to defend against them more robustly.
Although blended threats, pivot attacks and daisy chaining are all related concepts, each has a specific meaning that individually provides insight into a different aspect of compound attacks.
The term “blended threat” (or “complex threat”) is usually used to describe a single “package” or tool that contains the means to perform multiple exploits against different vulnerabilities in order to achieve a common objective. These measures can be either concurrent or consecutive. An example of the concurrent form might be a tool that allows for multiple alternative routes of infection, such as malware that can make use of whichever propagation or distribution path is available: once it has infected a machine, it will begin trying multiple channels to spread itself, such as FTP, network shares and email. An example of the consecutive form would be a single tool able to take successive steps without intervention or direct control from its operator – for example, malware that, once it has established a foothold, is able to establish a remote connection, download further payloads, trigger them, and attempt privilege escalation.
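Purely as a schematic illustration of those two patterns – nothing below exploits anything; each “channel” is a stub that only prints, and all names are illustrative – the structure of a blended threat might be sketched as:

```python
# A purely schematic sketch of the two blended-threat patterns described
# above. Nothing here exploits anything: each "channel" is a stub that
# only prints, and all names are illustrative.

def try_ftp() -> bool:
    print("attempt to spread via FTP")
    return False

def try_network_shares() -> bool:
    print("attempt to spread via network shares")
    return False

def try_email() -> bool:
    print("attempt to spread via email")
    return True

# Concurrent form: multiple alternative infection routes are packaged
# together, and whichever is available on the victim network is used.
for channel in (try_ftp, try_network_shares, try_email):
    if channel():
        break

# Consecutive form: a fixed sequence of steps is taken autonomously,
# without intervention or direct control from the operator.
for step in ("establish remote connection", "download further payload",
             "trigger payload", "attempt privilege escalation"):
    print(f"step: {step}")
```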
Unlike blended threats, the term “pivoting” (or “pivot attack”) is most commonly used to describe specific, intentional acts by an attacker in order to perform horizontal privilege escalation across multiple systems. Sometimes also referred to as island hopping, lateral movement or multi-layered attacks, pivoting involves attackers using an initially compromised system to attack – and in turn seek to compromise – additional systems on the same network or on connected networks.
This approach can be used to bypass firewall restrictions. For example, a database host may be screened from direct access from the internet, being located on an internal network segment that is accessible only from other internal systems. By compromising a host that does have a public web presence – such as a web application server in the DMZ (“demilitarised zone”) at the network edge – an attacker may then be able to gain onward access to the database server by “pivoting” from the web host. Pivoting can be performed via a few different means: by compromising a host and establishing a reverse shell, by compromising a proxy host, or by compromising a VPN tunnel or service endpoint.
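To make the mechanism concrete for defenders, the sketch below shows how little is needed for a compromised host to act as a pivot: a simple TCP relay that listens on the compromised machine and forwards traffic onwards to an internal target that is unreachable from outside. Addresses and ports are hypothetical placeholders:

```python
import socket
import threading

# A minimal TCP relay sketch of the pivoting mechanism, shown here for
# defensive understanding: a compromised host listens locally and simply
# forwards traffic on to an internal target that is not reachable from
# outside. Addresses and ports are hypothetical placeholders.
LISTEN_ADDR = ("0.0.0.0", 8080)
TARGET_ADDR = ("db.internal.example", 3306)

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes the connection."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    """Bridge the inbound connection to the screened internal target."""
    upstream = socket.create_connection(TARGET_ADDR)
    # One thread per direction gives a full-duplex relay.
    threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
    threading.Thread(target=relay, args=(upstream, client), daemon=True).start()

with socket.create_server(LISTEN_ADDR) as server:
    while True:
        conn, _addr = server.accept()
        handle(conn)
```

Real-world tooling is of course far more capable, but even this toy example illustrates why a single compromised DMZ host can undermine an otherwise well-screened network segment.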
Daisy chaining is perhaps the most nuanced of the three terms we are looking at within this article, and is often confused or conflated with pivot attacks in particular. However, it does have a discrete meaning and is worth clearly separating as a concept.
The key to understanding daisy chaining is that it is a logical concept describing a technique, rather than a specific form of network traversal. Unlike pivot attacks, which relate to network-based traversal between multiple systems, daisy chaining involves performing a mix of horizontal and vertical privilege escalation and may not involve direct system access at all. It can be performed across a mixture of services (including cloud services) and can also involve attack vectors such as social engineering. It normally involves an attacker escalating access – typically via a sequence of loosely connected accounts or exploits – in order to build a “daisy chain” to an ultimate goal or target. The key principle is that each step, however it is performed, takes the attacker one step closer to compromising that ultimate target.
In case this seems somewhat nebulous, consider a scenario in which an attacker finds that they can exploit procedural or logical flaws in identity verification on an email system. Having compromised the email account, they can request a password reset on a separate system that does not verify identity beyond possession of that email address. Access to this second service yields further information and personal data, which can in turn be used to perform a password reset on a third, unrelated service with stronger identity verification requirements. Unlike a pivot attack, the chain links weaknesses rather than physical systems.
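The logic of that chain reduces to a very small sketch; each link below models “control of X lets the attacker reset Y”, with all service names hypothetical:

```python
# A minimal sketch of the daisy-chain logic described above: each link
# models "control of X lets the attacker reset the password for Y".
# All service names are hypothetical.
reset_chain = {
    "webmail-account": "shopping-account",   # reset relies only on email access
    "shopping-account": "banking-account",   # leaked personal data enables reset
}

foothold = "webmail-account"
compromised = [foothold]
while foothold in reset_chain:
    foothold = reset_chain[foothold]
    compromised.append(foothold)

# Prints the full chain of weaknesses to the ultimate target:
# webmail-account -> shopping-account -> banking-account
print(" -> ".join(compromised))
```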
We’ve seen how individual vulnerabilities – even low-criticality ones – can be combined by attackers into complex, compounded attacks. But how does this understanding inform our response to the problem as security professionals tasked with protecting an organisation’s technical estate and defeating these attacks?
The first key point to note is that attacks conducted in the real world are not based upon technical vulnerabilities alone. Although remediating technical vulnerabilities is a core and fundamental requirement of any effective vulnerability management programme, it is not alone enough: it is a necessary but not sufficient condition.
Technical remediation needs to be partnered with effective defence against other weaknesses, physical security and social engineering in particular. This has been known for some time: in a 2012 article in Wired magazine, Mat Honan described the multiple stages of an attack conducted against him, in which attackers used little more than social engineering to achieve eventual compromise of his accounts.
Vulnerability scanning, whilst essential, should therefore be partnered with robust defences against social engineering attacks. This can include user training, as well as review of administrative processes and workflows – such as password recovery systems – to ensure that they are resistant to bypass or compromise.
Human security is messy, complex and immune to simple metrics, which can make it both unappealing to tackle and difficult to measure for effectiveness, yet it remains an essential component of an effective security programme.
Closely related to human security is the enforcement of access control best practices and principles, including least privilege and dual control. Least privilege is the principle that, when provisioning user accounts for access to systems, each account is granted the minimum set of permissions possible for it to fulfil its assigned role or function, and no more. This is often implemented via a Role-Based Access Control (RBAC) authorisation model, in which users are assigned roles, each role carries a defined set of functions, and those functions map to individual system permissions. Avoiding the direct assignment of privileges to individual users, and ensuring that user roles are updated under a robust SML (“Starters, Movers, Leavers”) process to capture requirement changes, prevents the “permission creep” in which users gradually accrue excessive and unnecessary permissions over time. Least privilege can also be applied as a principle to system access – for example, in ensuring that firewall rules provide access between systems on a “need” basis only.
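A minimal sketch of that users-to-roles-to-permissions mapping, with purely illustrative role and permission names, might look as follows:

```python
# A minimal RBAC sketch of the users-to-roles-to-permissions mapping
# described above; role and permission names are purely illustrative.
# Permissions attach to roles, never directly to users, so the SML
# process only ever needs to manage role membership.
ROLE_PERMISSIONS = {
    "finance-clerk": {"invoices:read", "invoices:create"},
    "finance-approver": {"invoices:read", "invoices:approve"},
}

USER_ROLES = {
    "alice": {"finance-clerk"},
    "bob": {"finance-approver"},
}

def is_authorised(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_authorised("alice", "invoices:create")
assert not is_authorised("alice", "invoices:approve")  # least privilege holds
```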
Dual control is a principle in which key operations are protected by ensuring that they cannot be performed by a single individual alone, but instead require discrete actions by two parties – perhaps in a “request, approve” split, or as a “dual approve” requirement. An analogy might be the nuclear missile launch systems seen in films, requiring two different parties to synchronise key turns to authorise a launch. Originating as a financial control, the same technique is used in many security applications. It can help prevent a breach in the event of the compromise of a single account, but also provides some additional protection against social engineering, since even if one party fails to question a transaction, the other may be more wary and do so – especially if they see their role as a reviewing and checking function.
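A minimal sketch of the dual control principle, with illustrative names, might look as follows:

```python
# A minimal dual-control sketch: a sensitive operation only executes once
# a second, distinct party has approved it. All names are illustrative.
class DualControlAction:
    def __init__(self, description: str, requested_by: str):
        self.description = description
        self.requested_by = requested_by
        self.approved_by = None

    def approve(self, approver: str) -> None:
        # The "request, approve" split: self-approval is rejected outright.
        if approver == self.requested_by:
            raise PermissionError("requester cannot approve their own action")
        self.approved_by = approver

    def execute(self) -> None:
        if self.approved_by is None:
            raise PermissionError("action requires second-party approval")
        print(f"executing: {self.description}")

transfer = DualControlAction("release payment batch", requested_by="alice")
transfer.approve("bob")  # a different individual signs off
transfer.execute()
```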
The ability of attackers to step through multiple weaknesses and traverse multiple systems laterally is made much easier by a reliance on single-factor authentication using passwords alone. Once one system is compromised, an attacker has many options to gather further credentials, such as installing a key-logger, or cracking password hashes from a user database table using rainbow tables or a dictionary attack.
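To see why harvested hashes are so valuable to an attacker, consider this minimal sketch of a dictionary attack against unsalted hashes (all values illustrative) – and one reason why salted, slow password hashing and multi-factor authentication both matter:

```python
import hashlib

# A minimal sketch of why leaked, unsalted password hashes fall quickly
# to a dictionary attack: the attacker hashes each candidate word once
# and compares. All values are illustrative; real attacks use far larger
# wordlists, and salting plus a slow hash makes this dramatically harder.
leaked_hashes = {
    "carol": hashlib.sha256(b"summer2024").hexdigest(),
}
wordlist = ["password", "letmein", "summer2024", "qwerty"]

for user, digest in leaked_hashes.items():
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == digest:
            print(f"cracked {user}: {candidate}")
            break
```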
Although multi-factor authentication is not a silver bullet against all issues, it can provide additional complexity and resistance to attackers and slow down their efforts or thwart onward access from a single compromised host.
A key learning from an understanding of the attacker-eye view of security breaches is that reliance on a single, hardened network perimeter is not sufficient to prevent attacks from occurring. Nor is it of any use once a single host has been compromised: an attacker positioned within that boundary has relatively unrestricted onward access and can run rampant across the entire technical estate.
Defence in depth is a security principle that involves ensuring that systems and networks are protected by multiple, overlapping controls and protection measures, so that the failure of any one control is not sufficient in itself to deliver compromise. The layering of defences may be literal, prescribing physical layers of defence via measures such as network segmentation; it may also be logical, prescribing multiple overlapping forms of control to protect a single service or resource.
The concept of defence in depth has been extended in recent years into a more restrictive and comprehensive model known as zero trust. The model involves establishing a “default deny” pattern of access control at multiple logical and physical layers throughout the technical estate. As in defence in depth, the controls implementing this are often overlapping and delivered via multiple, discrete technologies. The basic principle is that no-one and nothing – humans and computer systems alike – has access to anything by default.
For example, a new device that is connected to the network may be denied network access at the data link layer until its MAC address is registered and authorised; further denied access at the transport and routing layers until its registered IP address is authorised for access to other network segments via firewall rules applied at routers and gateways; and denied access at the application layer to any of the organisation’s operated network applications and services if it lacks a required digital certificate.
At the same time, the human making use of the device may find that they have no login access to any system whatsoever – perhaps even to traditionally “open by default” portals such as intranet pages – unless they can provide valid authentication credentials once provisioned, typically backed up by strong identity verification mechanisms built on multi-factor authentication requirements.
Additionally, the zero-trust model prescribes that authentication and verification are ongoing – that is, enforced at a transaction level, rather than granted for the duration of a session or for an indefinite period. This is typically enforced via some form of token-issuing service, where an authentication provider is explicitly queried with details of the requesting user, target system and exact transaction or request, and an access token is issued if the request is granted (JSON Web Tokens are often used for this purpose).
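A minimal sketch of such per-transaction token issuance, using the PyJWT library with a hypothetical shared signing secret standing in for a real key-management service:

```python
import time
import jwt  # PyJWT; a hypothetical shared secret stands in for a key service

SIGNING_KEY = "example-shared-secret"

def issue_access_token(user: str, target_system: str, action: str) -> str:
    """Mint a short-lived token scoped to exactly one transaction."""
    claims = {
        "sub": user,
        "aud": target_system,
        "act": action,                 # the single action being granted
        "exp": int(time.time()) + 60,  # short lifetime forces re-verification
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_access_token(token: str, target_system: str, action: str) -> bool:
    """Accept the token only for the exact system and action it names."""
    try:
        claims = jwt.decode(
            token, SIGNING_KEY, algorithms=["HS256"], audience=target_system
        )
    except jwt.InvalidTokenError:
        return False
    return claims.get("act") == action

token = issue_access_token("alice", "payments-api", "invoices:approve")
assert verify_access_token(token, "payments-api", "invoices:approve")
assert not verify_access_token(token, "payments-api", "invoices:create")
```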
The supposition is that by explicitly granting access, and enforcing access control decisions at the transaction level, any unauthorised access becomes highly unlikely and complex threats are neutralised. However, implementing a zero-trust regime is every bit as exacting and onerous as it sounds, especially within a “brownfield” infrastructure with multiple existing systems and services. For many organisations, it is best built out as a capability initially, with existing systems and services brought into scope and integrated gradually over time – on a risk basis, or as supported by client systems and applications – rather than attempting to enforce the model abruptly. Many of the benefits can be gained from partial implementation, rather than abandoning the effort because complete integration seems unattainable.
Effective vulnerability management can only be performed if the risks to each asset are fully understood. As we saw above, simply enumerating the attack vectors for critical systems can be key to understanding the potential lateral routes that could be used in a pivot attack. Understanding the attack surface and system interconnections helps to prioritise both control selection and vulnerability remediation, by focusing on key systems with open attack surfaces.
In order to effectively drive prioritisation, it is necessary first to map all critical assets across the estate and understand their context. By assessing the interconnectedness of systems and the relationships between key assets, it is possible to produce a simple visualisation in the form of a graph. The topological graph produced by this attack path analysis provides an intuitive and comprehensive visualisation of an environment, and can be used within a threat modelling process to identify likely attack vectors – indicating how attackers could potentially gain unauthorised access to assets, and highlighting vectors that need to be screened against as a priority.
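As a minimal sketch of this kind of attack path analysis – here using the networkx graph library and an entirely hypothetical topology – candidate pivot routes from an external foothold to a crown-jewel asset can simply be enumerated:

```python
import networkx as nx  # assumes the networkx graph library is available

# A minimal attack path analysis sketch: assets are nodes, reachability
# (network access or trust relationships) forms directed edges, and routes
# from an external foothold to a crown-jewel asset are then enumerated.
# The topology is entirely hypothetical.
estate = nx.DiGraph()
estate.add_edges_from([
    ("internet", "web-server"),
    ("web-server", "app-server"),
    ("app-server", "database"),
    ("internet", "vpn-gateway"),
    ("vpn-gateway", "database"),
])

# Each printed path is a candidate pivot route to screen as a priority.
for path in nx.all_simple_paths(estate, source="internet", target="database"):
    print(" -> ".join(path))
```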
Since it is possible to group or classify threat actors and targets, these maps or graphs do not have to be unnecessarily complex, and can provide highly intuitive visibility into potential attack vectors.
Vulnerability scans are essential in providing fast and ongoing intelligence into the state of vulnerabilities across even the largest of technical estates. However, rather than seeing scanning as supplanting or usurping the place of penetration tests, we strongly believe that the two are essential and complementary controls.
The advantage of penetration tests, particularly in the context of blended threats and daisy chaining attacks, is that penetration testers are allowed to be “evil”: to think creatively and act freely, operating with the “gloves off” and exploring every available avenue of attack. A full, broadly scoped penetration test of an organisation, for example, may deliver best value if the penetration testers are allowed to perform social engineering attempts via phone and email, or to attempt to gain building access to test physical security.
This is not a gimmick; rather, it permits penetration testers to attempt whatever option is most effective, without constraint, operating under exactly the same conditions that a genuine external attacker would enjoy. Penetration tests may only be conducted annually or infrequently, and lack many of the advantages of vulnerability scanning in areas such as ongoing coverage, but they can provide unique insights into exploitable attack vectors that chain multiple systems or techniques.
Operating a SIEM (“Security Information and Event Management”) system allows organisations to centralise log collection from multiple sources and store the logs securely. It also provides techniques that offer insight into potential security events and aid in the detection and investigation of security incidents. One of the key features of an effective SIEM is event correlation: a technique for logically connecting individual, discrete events that may be separated in origin either spatially or temporally, yet can be connected to indicate a single, larger event such as an ongoing attack. The technique is far more powerful than simple log aggregation alone, and is highly relevant to detecting and responding to the complex, interwoven attack forms we have looked at – including APTs and other multi-phase attacks involving complex threats across multiple systems or forms of attack.
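A minimal sketch of the correlation idea, linking individually unremarkable events by source address within a time window to surface one larger incident (all events illustrative):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# A minimal event-correlation sketch: individually unremarkable events
# (failed logins across several hosts, then a success) are linked by
# source address within a time window to surface one larger incident.
# All events are illustrative.
events = [
    {"time": datetime(2024, 5, 1, 9, 0), "src": "203.0.113.7",
     "host": "web-1", "event": "login_failure"},
    {"time": datetime(2024, 5, 1, 9, 2), "src": "203.0.113.7",
     "host": "web-2", "event": "login_failure"},
    {"time": datetime(2024, 5, 1, 9, 5), "src": "203.0.113.7",
     "host": "db-1", "event": "login_success"},
]

WINDOW = timedelta(minutes=10)
by_source = defaultdict(list)
for event in sorted(events, key=lambda e: e["time"]):
    by_source[event["src"]].append(event)

for src, seq in by_source.items():
    failures = [e for e in seq if e["event"] == "login_failure"]
    for success in (e for e in seq if e["event"] == "login_success"):
        recent = [f for f in failures if success["time"] - f["time"] <= WINDOW]
        # Failures on two or more distinct hosts followed by a success
        # elsewhere suggests credential stuffing or lateral movement.
        if len({f["host"] for f in recent}) >= 2:
            print(f"ALERT: correlated attack from {src}, ending on {success['host']}")
```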
Hopefully, this article has provided useful insight into how attackers conduct attacks, and how attacks performed in the real world are often complex, multi-phase and interwoven, compositing multiple systems, techniques, steps and forms. Understanding this “attacker-eye view” allows organisations to better understand the threats that they face, and to ensure that vulnerability management addresses real-world risk by anticipating and thwarting the approaches taken by attackers.
AppCheck can help you gain assurance across your entire organisation’s security footprint, by detecting vulnerabilities and enabling organisations to remediate them before attackers are able to exploit them. AppCheck performs comprehensive checks for a massive range of web application and infrastructure vulnerabilities – including missing security patches, exposed network services, and default or insecure authentication in infrastructure devices.
External vulnerability scanning helps secure the perimeter of your network against external threats, such as cyber criminals seeking to exploit or disrupt your internet-facing infrastructure. Our state-of-the-art external vulnerability scanner can assist in strengthening your external networks, which are most prone to attack due to their ease of access.
The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.
AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations’ websites, applications, network, and cloud infrastructure. AppCheck are authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).
As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please contact us: info@localhost