We need to talk about SSL & TLS

The standard widely known as “SSL”, but more correctly termed “TLS” or “Transport Layer Security”, is one of a number of protocols used to cryptographically secure connections made across a computer network. TLS is the de facto standard for encrypting web traffic, which uses the HyperText Transfer Protocol (HTTP) for its underlying data transfer. SSL is therefore almost unique among encryption standards in that it is encountered by, and familiar to, a broad spectrum of non-technical users in their daily lives via the “lock symbol” in their web browser. Originally deployed sparingly, to encrypt only highly sensitive transactions such as logins or financial transfers, the technology has become so easy to implement – and so relatively inexpensive in terms of computing resources as computers have become more powerful – that it is now nearly ubiquitous, following a drive in the 2010s by the Electronic Frontier Foundation (EFF) and Google in particular under the “HTTPS Everywhere” initiative.

In this blog post we look at whether the current ubiquity of SSL/TLS has led to any drawbacks relating to either its specification or applied usage, how seriously each of these issues impacts TLS’ overall usefulness, and whether these warrant concern or necessitate action or changes in practice by either end users or website operators.

 

Context and Background

Websites make use of the HyperText Transfer Protocol (HTTP), part of the wider internet protocol suite. It is well suited to webpage delivery and is based upon a request-response mechanism within a client-server model. Web browsers, acting as requesting clients, submit HTTP requests to a given server via its URL/address, and are returned a response – typically containing the requested content.
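
A minimal sketch of this request-response exchange, using Python's standard http.client module; “example.com” is assumed purely as an illustrative host.

```python
# Illustrative only: issue one HTTP request and read the server's response.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")                 # the client's HTTP request
response = conn.getresponse()            # the server's HTTP response
print(response.status, response.reason)  # e.g. "200 OK"
print(response.read()[:200])             # first bytes of the returned content
conn.close()
```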

 

What is SSL?

HTTP does not natively encrypt either the request or the response content, and is therefore vulnerable to eavesdropping and man-in-the-middle (MITM) attacks, which can let attackers gain access to user accounts and sensitive information, or modify webpages to inject malware.

However, the Hypertext Transfer Protocol Secure (HTTPS) extension implements encryption functionality to secure HTTP communications. Originally this used SSL (Secure Sockets Layer) as the encryption standard; it has since been superseded by the newer TLS standard, but the term “SSL” is still widely, if strictly inaccurately, used in a generic way to refer to either SSL or TLS encryption. TLS is a flexible protocol that is used to secure many communication and application layer protocols, including SMTPS, SIPS and POP3S, but we will focus on its use with HTTPS in this article.

Two other methods for establishing an encrypted HTTP connection do exist – Secure HTTP (S-HTTP) and the HTTP/1.1 “Upgrade” header – however browser support for both alternatives is extremely limited, and TLS is the near-ubiquitous solution used to deliver encryption for website communications.

 

Why SSL?

Most people are aware that encryption technologies such as SSL/TLS provide confidentiality – that is, they prevent eavesdropping on communication between two parties. However, SSL/TLS additionally provides integrity (ensuring that data sent between client and server has not been modified by an intermediary and is received exactly as the remote party issued it) as well as authentication (ensuring that the site a user is looking at is the site they think they are looking at!). HTTPS delivers this authentication using a system of digital certificates which (in theory) have been verified by a trusted third party.

Once the client and server have agreed to use TLS, they negotiate a connection using a handshake procedure to establish the cipher settings and other parameters for the connection, as well as a session-specific shared key with which further communication is encrypted using a symmetric cipher. When the handshake is complete, the secured connection is used for further communication, with traffic encrypted and decrypted using the exchanged or generated session key until the connection closes.
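
As a hedged illustration of what the handshake yields, the short sketch below uses Python's built-in ssl module to connect to an arbitrary host (“example.com” is assumed purely for illustration) and print the protocol version and cipher suite that were negotiated.

```python
# Illustrative client-side TLS handshake using Python's ssl module.
import socket
import ssl

context = ssl.create_default_context()  # sensible defaults: verification on, modern ciphers

with socket.create_connection(("example.com", 443), timeout=10) as sock:
    # wrap_socket performs the handshake: protocol version and cipher suite are
    # negotiated, the server certificate is checked, and a session key is agreed.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())   # (cipher suite name, protocol, secret bits)
```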

 

What potential issues are there with SSL?

SSL/TLS is not without complications, technical flaws, and weaknesses, both in its design and in its commonly implemented forms. The list below summarises a few of the factors worth bearing in mind when implementing or using SSL/TLS to secure or access a website:

 

Complex and ever-changing cipher sets

TLS is not one single protocol but a series of protocol versions (currently at version 1.3 at the time of writing), and many websites implement multiple versions of TLS in parallel to provide backwards compatibility with older browsers. Even more confusingly, TLS is an extremely flexible standard that allows websites to deliver encryption, authentication, and key exchange using a widely varying and constantly evolving set of ciphers and algorithms with names such as “Ed25519”, “X448”, “ChaCha20”, “Poly1305” and “SHA384”. It can be hard for website administrators to stay on top of which ciphers represent the best choices to implement on their website, let alone for most users to know whether the ciphers a website implements are secure or vulnerable to known weaknesses that would render their connection insecure.
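
To give a feel for the scale of the problem, the sketch below (assuming Python with OpenSSL behind it) simply lists the cipher suites that a single, default client configuration enables; the exact list will vary by OpenSSL version.

```python
# List the cipher suites enabled by a default Python/OpenSSL client context.
import ssl

context = ssl.create_default_context()
for suite in context.get_ciphers():
    # each entry describes one suite: protocol version and suite name, among other fields
    print(suite["protocol"], suite["name"])
```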

 

Cipher Negotiation

Because of the widely varying ciphers and algorithms available and implemented on both the client and server side, the browser and server establish a secure connection only after a handshake during which they determine the “highest common denominator” – the strongest cipher suite mutually supported by both client and server – before establishing a connection using those parameters. In practice, websites often continue to support weaker and older ciphers to ensure that their content remains available to a broad range of clients, including those on older hardware. As a result, individual client connections may be of wildly varying encryption strength, and some connections may be highly vulnerable.
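
Administrators can narrow what the negotiation is allowed to agree on. The sketch below shows one hedged example using Python's ssl module for a server context; the certificate and key file names are hypothetical, and the exact policy should reflect the clients an organisation genuinely needs to support.

```python
# Illustrative server-side policy: refuse legacy protocol versions and
# restrict TLS 1.2 negotiation to modern AEAD suites.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2      # no SSLv3, TLS 1.0 or 1.1
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")    # TLS 1.2 suites offered
# (TLS 1.3 suites are configured separately by OpenSSL and are strong by default.)
context.load_cert_chain("server.crt", "server.key")   # hypothetical file names
```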

 

Traffic Opacity

By design, the contents of an SSL/TLS connection cannot be read by an intermediary. Similar to a “diplomatic pouch” used by ambassadors, diplomats and consular entities, SSL-encrypted traffic is faithfully transported by the network without the network being able to read its contents. Although this is generally useful, it can also present a real challenge for several other typical security measures within an organisation if the traffic is encrypted within its own network. In particular, the contents of the messages cannot be examined for malicious payloads – the payload cannot be checked to see whether it is being used as a “tunnel” to exfiltrate data out of the organisation, contains malware, or contains special characters or other parameters that would trigger a buffer overflow or injection attack.

 

Protocol Weaknesses

Like any other protocol, SSL/TLS is not immune to security weaknesses and vulnerabilities. These can stem from flaws in its core specification, from the coding or implementation errors common to all software, or – due to the nature of cryptography – from workarounds found for the highly complicated mathematical methods used within cryptographic algorithms: it is not possible to positively prove that a given cryptographic algorithm is secure, and an algorithm can be in widespread use before a serious weakness in it is discovered and published. TLS is particularly exposed to such weaknesses precisely because it is not pinned to a single algorithm but offers dozens of variants, each of which may have its own flaws. The history of SSL/TLS is peppered with serious security flaws, with names such as FREAK, BEAST, Logjam, DROWN, CRIME, POODLE and BREACH. The Heartbleed bug, for example, was an implementation flaw that allowed anyone on the Internet to read the memory of systems protected by vulnerable versions of the OpenSSL software widely used to provide SSL/TLS encryption. This compromised the private keys associated with the public certificates used to encrypt traffic, allowing attackers to eavesdrop on communications, steal data, and impersonate services and users.

 

Side Channel Attacks & Traffic Analysis

Side channel attacks are those in which there is no weakness in the encryption protocol itself, but information about the connection can nevertheless be calculated or observed via some other, more indirect means derived from observing participant system behaviour, such as variations in request timings or fluctuating power levels. HTTPS has been shown to be vulnerable to a range of traffic analysis attacks – a type of side-channel attack that relies on variations in the timing and size of traffic to infer properties about the encrypted traffic itself. Traffic analysis of SSL/TLS is possible because, whilst the encryption changes the contents of traffic, it has minimal impact on the size and timing of that traffic. In May 2010, researchers from Microsoft published a paper showing that sensitive user data can be inferred via this method.

 

Threats from Quantum Computing

In addition to specific individual vulnerabilities and protocol weaknesses, there is a looming threat of a more existential nature against the entire class of public-key encryption protocols. The actual encryption of traffic across SSL/TLS connections makes use of a symmetric key, which is highly performant and secure. However, the exchange or generation of the symmetric key needs to occur across a connection that is initially in an insecure (unencrypted) state. The mechanism used to do this relies on a highly complicated branch of mathematics, and the protocols used are referred to as asymmetric or public-key. The mathematics underpinning public-key cryptography (PKC) is essentially based upon certain “hard” problems: calculations that are practical for parties with access to the cryptographic key, but impractically difficult or time-consuming to solve without it.

The PKC systems used within SSL/TLS, including RSA, Diffie-Hellman, and ECDSA, rely on a class of “hard problems” known as integer factorisation and discrete logarithm problems. It is not too important to understand exactly what these are, but the key threat to this model is that this class of problem is only “hard” for classical computers to solve: quantum computers running algorithms such as Shor’s algorithm can, in principle, solve these problems in a practical amount of time, and other, potentially more powerful algorithms may yet be designed and discovered. The net effect is that an attacker with access to such technology (still in its nascent stages as of 2022) could potentially crack most PKC forms in use and access the data exchanged using them.
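
As a toy illustration (not real cryptography), the sketch below shows why factorisation is the linchpin: once an RSA modulus is factored, the private key follows immediately. Here the numbers are tiny enough to brute-force; real keys use moduli of 2048 bits or more, which is what a sufficiently large quantum computer running Shor’s algorithm threatens to make tractable.

```python
# Toy RSA example: recovering the private key once the modulus is factored.
n, e = 3233, 17              # tiny "public key": n = 53 * 61

# Brute-force factorisation - trivial here, infeasible classically at real key sizes.
p = next(i for i in range(2, n) if n % i == 0)
q = n // p

phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent recovered from the factors

message = 65
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n)) # prints 65: the attacker recovers the plaintext
```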

 

Partial Path Encryption & HTTPS Interception

SSL/TLS connections are described as providing point-to-point encryption, in that they have specific offload points at which encryption/decryption occurs at either end of a connection, with the contents essentially tunnelled between these two points. In the context of web security, this is a perfectly reasonable model in a simple web application infrastructure in which the web server and client act as the two respective endpoints. However, as networks and web services have scaled, application infrastructures have become increasingly complex, and this assumption no longer holds true – SSL connections established by clients are now likely to be terminated well short of the actual target (origin) server. This is because various elements within a network stack explicitly decrypt the client connection in order to provide functions such as SSL offloading, content delivery (CDNs), denial-of-service or traffic filtering, Web Application Firewall (WAF) functionality, and load balancing, among many others.

From a client perspective, this means that the naïve expectation that an SSL connection has been established directly with a webserver endpoint may not actually hold true, and traffic may in fact be decrypted (and potentially re-encrypted) at one or more intermediary points along the path between server and client. Not only is this not quite in line with user expectations, but this TLS/HTTPS interception also introduces new security risks: any point where network traffic is available unencrypted provides attackers with the potential to access otherwise secure content if that point can be accessed or compromised. In addition to “hacking” by external third parties, any decryption along the path also permits the network operator, or other persons in a privileged position along the network path, to perform man-in-the-middle attacks against network users. A 2017 study found that HTTPS interception products as a class – despite in many cases being introduced as part of a security programme – can have a net negative impact on connection security. In the case of content delivery networks (CDNs), this decryption can happen very near the network “edge” – that is, close to the originating client and some number of hops away from the intended origin server.
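
One way to see where a connection actually terminates is to look at the certificate the client is presented with, since it belongs to whichever system performs the decryption. The hedged sketch below (again assuming an illustrative host) prints the subject and issuer of the certificate received; for sites fronted by a CDN or similar intermediary, this may not be a certificate for the origin server at all.

```python
# Inspect the certificate presented by whatever terminates the TLS connection.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        cert = tls.getpeercert()
        print(cert["subject"])  # who the certificate was issued to
        print(cert["issuer"])   # which certificate authority issued it
```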

 

Technical Protocol, Non-Technical Audience

The web is not limited to a technical audience but is now embraced and used daily by a nearly universal cross-section of the general population. Research investigating how well end users understand HTTPS security status and messages has found that most users don’t understand – and in many cases simply ignore – security warnings when they are presented with them. End users often fail to check whether the standard “green lock” symbol appears in their browser before entering sensitive information such as passwords or financial data. They can, likewise, fail to notice if the same symbol is missing even if they are familiar with the general scheme, or else believe that it confers security assurances that it does not in fact provide – such as indicating that they are on a genuine website and have not fallen victim to a phishing attack or similar.

 

Performance Impact

The mathematics used in the encryption process under SSL/TLS, as well as the extra network overhead of establishing SSL connections, was historically relatively resource-intensive for both clients and servers. The exact impact depends greatly on the encryption cipher being used and on the server implementing the SSL; however, it was not uncommon ten years ago for servers to experience significantly higher connection times (and hence to be able to support far fewer client connections per second). One study by Percona in 2013 found, for example, that some SSL connections using the common OpenSSL library were 4 orders of magnitude slower to establish.
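
A rough illustration of this overhead is to time a bare TCP connection against a full TLS handshake to the same host, as in the hedged sketch below; the absolute numbers depend entirely on the host, network and negotiated ciphers, and “example.com” is assumed purely for illustration.

```python
# Rough comparison of plain TCP connection setup versus a full TLS handshake.
import socket
import ssl
import time

HOST = "example.com"

start = time.perf_counter()
with socket.create_connection((HOST, 80), timeout=10):
    pass
print(f"TCP connect only: {time.perf_counter() - start:.3f}s")

context = ssl.create_default_context()
start = time.perf_counter()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST):
        pass
print(f"TCP + TLS setup:  {time.perf_counter() - start:.3f}s")
```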

This became far less of an issue over the last decade as CPU speeds increased, as CPUs began to integrate hardware support for some of the encryption methods used within SSL/TLS, as new ciphers designed for fast execution on lower-power mobile processors became available (such as lightweight stream ciphers and those based on elliptic curve cryptography, or ECC), and as techniques such as session resumption, multiplexed connections and OCSP stapling were introduced to allow faster connection setup and certificate checking.

However, quantum computing and other emerging challenges present potential issues on the horizon, since ECC is described as quantum-unsafe and the secure alternatives are more expensive in resource terms, due to longer key sizes and longer signatures. Performance may yet become a more serious problem once again for lower-power mobile devices, depending on how SSL/TLS standards and ciphers are updated to address this looming issue.

 

Certificate Issuer Assurance

The public-key cryptography used within SSL relies upon the trust placed in the certificate authorities who issue SSL certificates to requesting organisations. If certificate authorities are unknown, untrusted or permit forged certificates, then this trust cannot be relied upon. In 2011, a certificate authority known as DigiNotar discovered that it had been hacked and that a number of fraudulent certificates had been issued – certificates that appeared valid and correctly signed, and that would assure a user that they had established a connection with a legitimate website. Over 500 such certificates were issued for various commonly used sites including Google, Yahoo, and the Tor Project.

 

Typo Squatting & DV Certificates

There are three types of SSL certificate that can be issued, with Domain Validation (DV) certificates being the easiest to obtain and requiring the least rigorous validation checks prior to issue. Specifically, DV certificates do not assure that any particular legal entity is connected to the certificate, even if the domain name may imply that.

The problem with DV certificates is that internet criminals can easily obtain SSL certificates for phishing sites that use misspellings of a legitimate domain name. For example, if they were targeting people visiting example.com, they could register examp1e.com (notice the numeral “one” substituted for the “l”) and easily get a certificate issued. When a visitor is tricked into visiting the phishing site – or in some cases simply makes a mistake typing the URL, if the attacker has registered a common misspelling – the site will appear secure to the visitor, displaying the “SSL padlock”.

In a variant that is even more difficult to spot, attackers can register domains containing characters known as homographs. Since most of us provide input to our computers using relatively standard keyboards, we are used to seeing a standard (fixed) set of characters – however, when these are transmitted to a computer they are translated into a code such as Unicode, and the “alphabet” of characters available in such a character set may contain thousands of characters, not just the ones visible on our keyboards. Common examples you may be familiar with include characters with umlauts or accents, as used in various European languages. However, many glyphs (such as some Cyrillic characters) are, when rendered on screen, indistinguishable from their Latin counterparts. Since the code points assigned to them are different, separate domain entries can be registered using domain names consisting of these alternative glyphs, and valid certificates issued for them. A website may, to all intents and purposes, appear to read www.google.com and belong to Google/Alphabet – but in fact use different underlying glyphs and be controlled by an attacker.
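
The short sketch below illustrates the underlying issue: the Cyrillic character renders much like a Latin “a”, but is a different code point, so a domain containing it is a different domain (with its own registrable punycode form and its own certificate). The spoofed name shown is purely hypothetical.

```python
# Why homograph domains are hard to spot by eye.
import unicodedata

latin = "a"          # U+0061
cyrillic = "\u0430"  # U+0430, visually near-identical in many fonts

print(latin == cyrillic)           # False: different characters
print(unicodedata.name(latin))     # LATIN SMALL LETTER A
print(unicodedata.name(cyrillic))  # CYRILLIC SMALL LETTER A

# A mixed-script look-alike is registered under a distinct punycode ("xn--") name.
spoofed = "ex" + cyrillic + "mple.com"  # renders like "example.com"
print(spoofed.encode("idna"))           # e.g. b'xn--...' - a separate domain entirely
```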

 

Certificate Expiry

A major headache for organisations attempting to manage their SSL certificates is that every certificate is issued with a limited lifetime, after which it must be renewed or replaced. The issue is that it is perfectly possible for an organisation simply to miss an SSL certificate renewal date, meaning that the certificate lapses and the (genuine) site is flagged by browsers as insecure.
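
This is exactly the kind of check that can be automated. The hedged sketch below retrieves a site's certificate and reports how many days remain until it expires; “example.com” stands in for the site being monitored.

```python
# Report how long a site's TLS certificate has left before expiry.
import socket
import ssl
from datetime import datetime, timezone

HOST = "example.com"

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
remaining = expires - datetime.now(timezone.utc)
print(f"{HOST} certificate expires in {remaining.days} days")
```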

 

Lack of Perfect Forward Secrecy

The public-key cryptography used in the key exchange/negotiation when setting up an SSL/TLS connection can optionally have (or lack) a property known as perfect forward secrecy (PFS). If it has this property, the compromise of one message cannot lead to the compromise of other messages, and there is no single secret value whose compromise exposes multiple messages. The risk this protects against is an attacker simply collecting/harvesting all the (encrypted) messages sent between a client and server: they cannot (yet) read them, but if the algorithms used do not support PFS, then obtaining or cracking the server's long-term private key at any point in the future will allow the attacker to decrypt every past message ever transmitted between the two.

SSL/TLS implementations can provide forward secrecy by using ephemeral Diffie–Hellman key exchange to establish session keys; however, many implementations do not have this in place, or fall back to non-ephemeral equivalents for clients that do not support it.
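
From the client side, it is possible to check whether a given connection's key exchange provides forward secrecy: TLS 1.3 always does, and TLS 1.2 does when an ephemeral (EC)DHE suite was negotiated. A minimal sketch, again assuming an illustrative host:

```python
# Check whether the negotiated connection provides forward secrecy.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        suite, _, _ = tls.cipher()
        # TLS 1.3 suites always use ephemeral key exchange; for TLS 1.2 look for (EC)DHE.
        forward_secret = tls.version() == "TLSv1.3" or "DHE" in suite
        print(suite, tls.version(), "forward secrecy:", forward_secret)
```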

 

Is SSL pointless then?

Not at all! It may seem at this point that this is a “counsel of despair”, and that SSL/TLS is a highly flawed and problematic protocol. However, that is not our intention at all – rather, it is our belief that by understanding the potential hurdles with SSL/TLS, individuals and organisations can better avoid potential pitfalls and ensure that issues are understood and avoided wherever possible. SSL/TLS encryption remains a fundamental underpinning of web application security, and understanding its potential weak points enables individuals and organisations to make better-informed decisions and ultimately better protect their customers, employees, partners and investors.

 

How AppCheck can help

AppCheck helps you gain assurance across your entire organisation's security footprint. AppCheck performs comprehensive checks for a massive range of web application and infrastructure vulnerabilities – including SSL protocol and certificate issues, as well as other infrastructure security weaknesses.

The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.

 

About AppCheck

AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations' websites, applications, network, and cloud infrastructure. AppCheck are authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).

 

Additional Information

As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please contact us: info@localhost

Get started with AppCheck

No software to download or install.

Contact us or call us 0113 887 8380
