Databases, whether in the traditional relational form or the increasingly common NoSQL/NewSQL variants, are the near-universal storage mechanism underpinning the handling of data within modern dynamic web application platforms. Typically providing methods supporting the remote creation, deletion and updating of data across the network, they offer an attractive target for remote, network-based attackers. Despite a long history of database design and development stretching back over fifty years, common patterns of security weakness continue to be seen in database design, deployment, configuration, and maintenance. Breaches of database security that impact data confidentiality can hand attackers data sets that include highly sensitive or personal data. Breaches can also enable attackers to compromise the integrity of the stored data for personal or financial gain – either by directly erasing or modifying it in a malicious manner, or, increasingly, by encrypting it and then ransoming it back to its owner in a ransomware attack.
In this blog post we take a look at the wider context of database security by briefly surveying the various scenarios surrounding database configuration, deployment and maintenance that can lead to security weaknesses. We then consider what measures organisations can take to harden their database systems so that they better resist attack or exploitation by adversaries.
Data breach statistics drawn from industry surveys, such as the annual Verizon Data Breach Investigations Report, repeatedly show that insider threats from sources such as reckless or malicious employees remain an ongoing danger to organisational security. Robust cybersecurity therefore requires a holistic approach consisting of a broad range of controls, including internal administrative procedures and policies such as employee background checks. In this article, however, we will focus largely on the predominantly technical measures an organisation can take to harden database systems against attack from sources both internal and external.
It is also possible (and advisable) for a database security programme to incorporate elements that deliver increased security assurance without preventing attacks: these include detective controls designed to detect when an attack is underway (such as Intrusion Detection Systems (IDS) and logging/monitoring and SIEM solutions), as well as corrective controls that can be applied in order to recover from a security incident. However, the focus of this article will be specifically on preventative controls – those designed to stop an attack from succeeding in the first place, by hardening the database system.
The concept of hardening – or target hardening, to give it its full name – originates in military and security-services practice, where it refers to strengthening the security of a building or other physical installation. It often includes modifications to the building itself (such as upgraded doors and windows), environmental alterations such as removing bushes or other ground cover that could offer hiding places or a screened approach to the installation, and the addition or improvement of gates, fences, or other barriers.
Hardening in this physical sense can serve several purposes, including deterring would-be attackers, delaying an attack in progress long enough for a response, and limiting the damage caused should an attack succeed.
Although the techniques and materials have changed in modern security environments, the same approach to hardening physical installations remains very common in physical security when designing real-world compute facilities such as data centres. The same principles were applied for centuries in building castles with strong walls, and surrounding structures such as moats, ditches, fences, and walls are all still in use, albeit with different materials and technologies.
However, in this article we are not going to look at physical security and how hardening applies to data centres, but rather at how analogous techniques can be used at the software level to improve the security of database systems against electronic forms of attack, typically performed across the network. This form of hardening is sometimes also described as security auditing or compliance testing.
Various factors combine to make database systems highly appealing targets for criminals and other attackers: the compromise of a database system or server can be lucrative in numerous ways, whether by permitting the extortion of money via ransomware, or more directly via the exfiltration (theft) of valuable data such as credit card numbers.
The theft of data at scale from organisations has become so commonplace that the term data breach has entered popular culture, and a breach can often be attributed to a failure to maintain the confidentiality of data stored within a database system. In the last decade or so, the number of data breaches has risen almost exponentially. In addition to the significant damage that breaches pose to a company’s reputation, there are direct costs, including forensic investigations, loss of customers, and financial penalties under regulations such as the General Data Protection Regulation (GDPR).
Effective database security is therefore key for remaining compliant, protecting an organisation’s reputation, and – in the worst-case scenario – ensuring that a business remains solvent and operable following a significant data breach and the surrounding fallout.
Database security encompasses all of the tools, processes, and methodologies that establish security within a database environment. Holistic database security programmes are designed to protect not only the data within the database, but also the data management system itself and every application that accesses it, from misuse, damage, and intrusion. Database hardening in particular refers to the largely technical measures used to make a database system or ecosystem more directly resistant to exploitation or attack.
The challenge for database hardening is the tension between securing a database and keeping it useful: if a database system is designed for ease of access, then almost unavoidably it becomes less secure, since its attack surface generally increases; but if it is made watertight (for example, by disconnecting it from the network entirely), then it becomes impossible to use. Hardening is about striking a balance that delivers the maximum security that is practicable whilst ensuring that the database remains usable and useful for its given purpose.
Hardening is often described in the context of computer systems as the reduction of the attack surface of the system in question. The attack surface of a system or network is normally defined as the sum of the different points (“attack vectors”) via which the system is exposed to attack. It can alternatively be viewed as the combination of ways in which actions can be performed on a system remotely. This may span several components, from externally facing services to internal components such as an associated database or host operating system.
Intuitively, the more actions available to a user, or the more components accessible through those actions, the larger the exposed attack surface. The larger the attack surface, the more likely it is that the system can be successfully attacked, and hence the more insecure it is. If we can reduce the attack surface of each component, we can decrease the likelihood of attack and make a system more secure. Before looking at what measures can be taken to harden a database, it is important to understand the attack surface of databases and the kinds of exploits to which they are vulnerable.
The threat vectors to databases are generally very well understood, although some (such as the threat of SQL injection) receive significantly more attention (and therefore awareness) than others (such as the inherent risks in stored procedures). The aim of the list below is to provide a brief summary of the main threat vectors within the context of database hardening, so that hardening measures can be placed in context; we will not be doing a “deep dive” into any one of them within this article:
As with any other local or network service, a database has an intended user or service base: the set of clients authorised to interact with the database, either as end users (connecting directly to the database) or via authorised applications and web applications that utilise the database. In the majority of cases, these authorised clients are located at fixed, static source IP addresses, so it is not necessary to expose the database across the network to anything other than these sources. Failing to restrict the network sources that can connect to a database simply serves to drastically increase the attack surface of the database and makes the database system far more vulnerable to exploitation by a remote attacker.
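Restriction can be applied at the network level (see the firewall measures later in this article), but many platforms also support it at the account level. As a minimal hedged sketch in MySQL syntax (the host address and account name are illustrative assumptions):

```sql
-- Create an application account that may only connect from the
-- application server's static IP address; connection attempts from
-- any other source are rejected at authentication time.
CREATE USER 'app_user'@'10.0.10.5'
    IDENTIFIED BY 'use-a-strong-generated-password';
```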
In addition to the database services themselves (those that execute Data Query Language (DQL) and Data Manipulation Language (DML) statements to perform CRUD (Create, Read, Update, Delete) operations), databases will typically offer some form of metaservice – services designed to allow the modification of metadata (data about the database) such as the schema, views, triggers and stored procedures – as well as additional functions such as auditing, storage and indexing constraints and configuration. This metadata is typically stored in a discrete set of tables and views often referred to as a system catalogue or data dictionary. Just as with the database services for end users, this metadata is often exposed and available as a network service too, for administrative purposes. It is as important, or perhaps even more important, to carefully review any exposed administrative interfaces and programming interfaces (APIs) that permit access to these metaservices in the same way as the database services themselves. It may be possible to disable network access to the metaservices entirely, or else to restrict them to a known-trusted set of administrative workstation IP addresses via a network whitelist.
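To illustrate why exposed metaservices matter, the system catalogue allows any suitably privileged connected client to map the database before launching a more targeted attack. Using the standard information_schema views found in MySQL and several other platforms:

```sql
-- Enumerate every schema and table visible to the current account:
SELECT table_schema, table_name
FROM information_schema.tables;

-- Enumerate the privileges granted to database accounts:
SELECT grantee, privilege_type
FROM information_schema.user_privileges;
```

An attacker who can reach the catalogue can chart the entire database structure, which is why access to it should be restricted as tightly as access to the data itself.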
Where an attacker does not have direct connectivity to a database (as above), they are likely to still have an attack vector via a mediating or intermediate application-layer service (such as a web application) that acts as a proxy between themselves and the database. Although more complex to execute, an attacker can in many instances still perform an attack through this mediating application layer.
Perhaps the most commonly exploited weakness in database security is in fact SQL injection, an attack vector that involves inserting malicious code into SQL statements via input data sent from the client to the intermediary application. An attacker can “inject” code into their HTTP request to the web application, which in turn relays the malicious code into the SQL query it performs against the database on their behalf. The injection, typically performed by escaping the data context via the use of special characters, subverts the intended SQL and either alters, expands or replaces it. If an attacker is able to have the server invoke their own SQL query in place of the intended one, they can instruct the server to execute a query that returns all other customers’ data, or that deletes all data from the database.
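As a brief sketch of the mechanics (the table and column names are purely illustrative), consider a query assembled by naively concatenating user input into the SQL string:

```sql
-- The intended query, with the user-supplied value 'smith' inserted:
SELECT * FROM customers WHERE surname = 'smith';

-- If the application concatenates input directly into the SQL string,
-- a malicious input value such as:   smith' OR '1'='1
-- escapes the quoted data context and changes the query's meaning:
SELECT * FROM customers WHERE surname = 'smith' OR '1'='1';
-- The OR condition is always true, so every customer row is returned.
```

Parameterised queries (prepared statements) are the standard defence, since they keep user-supplied data separate from the SQL structure rather than splicing it into the statement text.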
Databases typically do not exist in isolation but as part of a broader data management solution. In many cases, the majority of an organisation’s data security efforts are focused upon securing the primary database server, with less attention given to additional repositories that may hold either partial or full replicas of the master data set. If these secondary data sources are neglected from a security perspective, or situated in a less heavily protected resource sphere, then they can offer a wider (and less hardened) attack surface for attackers to target. Examples include backup data sets and servers, replicated data sets, data exports to test or staging environments, manual data extracts, data warehousing and reporting tools, and connecting interfaces, feeds, and batch processing services.
Mixed mode authentication refers to database configurations in which users (and administrators) are able to authenticate to a database using more than one method or authentication system. It is typically contrasted with the use of a centrally managed Single Sign On (SSO) solution, but this is not a necessity – single mode authentication can exist using the local database authentication engine only, for example.
The issue with mixed mode authentication is that it introduces complexity to authentication. It becomes more difficult to ensure that authentication requirements are evenly applied across systems; it increases the dependencies and attack surface of the database; it makes it harder to enforce non-repudiation (and attribution) of actions and to ensure that user accounts are unique; and it complicates processes such as user permission reviews or user deactivation – an administrator may deactivate a user account, believing that they have blocked that user from access, whilst inadvertently leaving a second, valid account in place via, for example, Active Directory.
Stored procedures are essentially logical flows or sequences of commands that exist within the database itself rather than the application layer, consolidating and centralising some of the logic that would typically be implemented in remote connecting applications and executed as a discrete series of queries. Rather than issuing those queries individually, an application makes a single call to the database to execute the stored procedure. Although stored procedures offer some security advantages (such as partial protection against SQL injection attacks), they also introduce new potential security weaknesses. One common example is application or database confusion over the permissions assigned at execution time – essentially, does a given stored procedure execute within the definer’s or the invoker’s security context? If mistakes are made, stored procedures can give attackers the ability to execute queries against the database with highly elevated privileges, granting them access to data that they are not authorised to access.
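This distinction can be made explicit when the procedure is defined. In a hedged MySQL sketch (procedure and table names are hypothetical), the SQL SECURITY clause controls whose privileges apply at execution time:

```sql
DELIMITER //

-- SQL SECURITY INVOKER runs the procedure with the caller's own
-- privileges; SQL SECURITY DEFINER (the MySQL default) runs it with
-- the privileges of the account that created it, which can hand
-- callers unintended elevated rights if the definer is privileged.
CREATE PROCEDURE get_customer(IN cust_id INT)
SQL SECURITY INVOKER
BEGIN
    SELECT name, email FROM customers WHERE id = cust_id;
END //

DELIMITER ;
```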
Although not unique to database security, Man-in-the-Middle (MitM) attacks are very simple to execute for a suitably positioned network attacker and can permit attackers to observe and capture any data that flows across a network segment to which they have access, if that data is sent in plaintext rather than encrypted. Although access to the data itself is a concern, the greater risk is that database credentials will be sent across the network in plaintext if the database fails to enforce transport security. An attacker who is able to “sniff” an unencrypted (plaintext) database connection from an application server or administrative user may be able to intercept the credentials in use, and then use them in turn to establish a trusted connection to the database. Depending on the credentials captured, they may then have full access to all data within the database.
Data at rest encryption is the encryption of data stored in the database once it has been written, rather than data currently being transmitted across the network. It can be applied at various levels: at the database itself; at specific tables, views, or documents within the database; or at the underlying filesystem that provides storage for the database.
If a database is not stored in an encrypted form, then an attacker who is able to gain access to the underlying host system, or to the storage system if network-based, is able to access any and all data within the database, and to exfiltrate it in a data breach attack.
The principle of least privilege is a key guiding principle in information security: users should have the permissions needed to access data and services to the extent necessary to perform their given function and role, but no privileges beyond or in excess of that. A common antipattern is simply to give all users flat or blanket access to all data or methods, without basing access upon need. There are a number of risks in such an approach: a disgruntled employee is able to do more harm than they otherwise could; an attacker who is able to intercept or steal a user’s credentials gains much broader system and data access; and the potential for accidental (as opposed to malicious) damage is also greatly increased.
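In SQL terms, least privilege means issuing narrow grants rather than blanket ones. A minimal hedged sketch in MySQL (database and account names are illustrative):

```sql
-- Antipattern: blanket access to every database and table.
-- GRANT ALL PRIVILEGES ON *.* TO 'app_user'@'10.0.10.5';

-- Least privilege: the account can read and write application data,
-- but cannot alter the schema, manage users, or touch other databases.
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'10.0.10.5';
```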
Lastly, the binary files that constitute the database application as provided by a vendor may themselves contain logical or code flaws that leave the database open to one or more vulnerabilities exploitable by an attacker. If a database system is not patched regularly under an effective vulnerability management programme, then it may become increasingly susceptible to exploitation over time as security weaknesses within the executables are discovered, published, and targeted by attackers.
Although there are many other weaknesses potentially present within databases, the above is a summary of the most commonly observed, and we can now look at how these may best be protected against.
This article is designed to lay out general best practices at a conceptual level and a high level of abstraction, in order to increase awareness of the kinds of measures that can be taken to harden a database environment against attack. However, it is not intended to act as a complete resource in isolation, and cannot cover in detail, given the space available, how to implement database hardening at the low level of specific configuration changes for every database system variant.
Detailed product-specific guidance is generally published by the database system vendors, but perhaps the most detailed and low-level guidance is available via organisations that produce compiled “checklists” of hundreds of individual configuration options, recommended on a per-platform or per-technology basis. Perhaps the best known and most respected of these are the “CIS Benchmarks” published by the Center for Internet Security, but the United States Defense Information Systems Agency (DISA) also produces a series of Security Technical Implementation Guides (STIGs) that are available upon request. Specific platforms may also offer scripts and other tools (such as the mysql_secure_installation script for MySQL) that allow automated application of secure configuration options.
The drawback with these tools and guidelines is that they serve a very different purpose to this article, since they provide guidance (or, in the case of the MySQL script, simply change parameters automatically) relating specifically to database configuration parameters only, without providing wider context or awareness of the risks they address or of the wider environmental concerns that impact database security. In the list below, in contrast, we highlight the key hardening measures that you should expect to include in order to harden your database against attack; it is nevertheless recommended to use compliance tools from CIS, DISA or others to implement the many changes that require system-level configuration.
Databases are able to produce various logs recording the actions taken on them. These include binary or relay logs that record every action performed on the database, in sequence; these can be used to restore a database from scratch if “played back” in full order, to “rewind” the database to a given point, or to replicate changes to a slave database server in a replicated setup. However, there are also more standard application logs, as seen in other system types, that are designed primarily for logging and monitoring purposes and record key events only, such as security events. For database systems in particular, it is recommended to turn on auditing for high-risk statement types such as GRANT (used in permission assignment for users) and DROP (which removes a database table and its data entirely).
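The exact mechanism is platform-specific; as one hedged example, MariaDB’s server_audit plugin can be configured entirely through SQL to record connection events and high-risk statements:

```sql
-- Load the audit plugin and switch logging on (MariaDB server_audit):
INSTALL SONAME 'server_audit';
SET GLOBAL server_audit_logging = ON;

-- Record connections, DDL statements (which include DROP) and DCL
-- statements (which include GRANT and REVOKE):
SET GLOBAL server_audit_events = 'CONNECT,QUERY_DDL,QUERY_DCL';
```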
It is sometimes said that capturing audit data is easy, but using it is not. In order to deliver on security, audit data must be transformed into actionable information that teams can respond to, so monitoring and alerting must be built on top of any logging put in place, and an organisation must commit to providing the resources for the appropriate vigilance required to support effective auditing and incident response. Auditing changes to your databases enables you to track and understand how data is accessed and used, and gives you visibility into any risks of misuse or breaches.
One of the simplest and most robust measures to put in place is an effective firewall solution. Packet filtering firewalls operate by screening database services and ports, restricting access to certain source hosts and IP addresses in a simple and robust manner, and discarding or dropping any other requests to access the database. This can be used both to restrict access to database services to authorised application servers, and to restrict access to administrative interfaces and metaservices to authorised workstations belonging to database administrators (DBAs). Conversely, packet filtering firewalls can also be used to restrict outbound access from the database server, limiting an attacker’s options for data exfiltration in certain exploit types.
In addition to packet filtering firewalls, it is also possible to deploy application-level gateway devices (also known as proxy firewalls) that screen the database at the application layer. These “database firewalls”, such as MySQL Enterprise Firewall, operate at the application rather than the packet layer and have a native understanding of the product being screened and the actual queries being performed. Given this application-specific context they are fairly resource-intensive, but they permit a different kind of security to be enforced: rather than applying blanket allow or deny decisions based on source IP, they enable the configuration of fine-grained directives that place more subtle restrictions on traffic from given sources or users. They can, for example, allow a database administrator to configure whether a SQL statement sent to the database server from the application server is permitted to execute, based on one or more rules matching against lists of accepted statement patterns known as signatures. Signatures can be either standard vendor-provided defaults that help to harden the server against attacks such as SQL injection, or custom signatures added by administrators within an organisation, designed to resist attempts to exploit applications by executing queries outside the designed system function and query workload characteristics. Protection can be applied and tailored based on various factors, such as the account being used, the application source, the time, and the query contents.
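As a hedged sketch using MySQL Enterprise Firewall (this assumes the firewall plugin is installed and enabled; the account name is illustrative), an allow-list profile is typically trained against known-good traffic and then switched into enforcement mode:

```sql
-- Record the application's normal query patterns into an allow-list
-- of accepted statement signatures for this account:
CALL mysql.sp_set_firewall_mode('app_user@10.0.10.5', 'RECORDING');

-- ...run the application through its normal workload, then enforce:
CALL mysql.sp_set_firewall_mode('app_user@10.0.10.5', 'PROTECTING');
-- Statements that do not match a recorded signature are now rejected.
```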
At the risk of turning this blog post into a “firewall varieties” listing rather than a guide on database hardening, there is a third firewall type that, whilst not specific to databases, is a key measure in hardening the environment as a whole and protecting the database, and that is a Web Application Firewall (WAF). Also known variously as Application Delivery Controllers (ADCs), Application Security Managers (ASMs) and Application Gateways, these devices are similar in many ways to database firewalls, in that they are application-aware and are used as proxies inserted in the path between the requesting client and the database. However, they are placed in front of (screening) the web application server that accesses the database, rather than behind it. The requests that they are used as a proxy for are therefore the client HTTP requests, rather than the SQL queries sent to the database. As with database firewalls, signatures can be provided by the vendor against generic attack types, and then extended by an organisation based on its own requirements.
The reason that this is an effective database security measure is that SQL injection and other attacks ultimately originate with the requesting client, and it is possible to intercept and block malicious requests upstream, before they even reach the application server where their payload would be unpacked and passed on to the screened database system.
Transport layer encryption can be applied to ensure that data sent between the database and requesting clients (such as application servers) is encrypted in transit and hence not subject to interception by an intermediary. Most database servers can be configured to operate in a mode that permits only encrypted connections, which may involve the database service listening on a different port.
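As a brief hedged example in MySQL (the account and host are illustrative; other platforms offer equivalent settings), encrypted transport can be required both server-wide and per account:

```sql
-- Refuse any client connection that is not encrypted (MySQL 8):
SET PERSIST require_secure_transport = ON;

-- Additionally require TLS for a specific account:
ALTER USER 'app_user'@'10.0.10.5' REQUIRE SSL;
```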
In the introduction to this section, we mentioned that organisations such as CIS produce standard configuration baselines for database systems that can be used to ensure that systems are built and deployed securely, and this remains a recommended practice. However, it is not sufficient to rely on baseline configuration or golden image deployment alone: over time, adjustments to database configurations and functionality will be required, and the older the database, the more changes will have occurred, in a process known as configuration drift. It is therefore recommended to perform ongoing or periodic compliance testing of deployed systems to detect variance from the secure baseline that may represent exploitable security concerns. Vendor-specific solutions such as Integrigy’s AppSentry (for Oracle databases) do exist, but a more common and broader set of tools is available from CIS alongside their Benchmark suite, allowing comparison of the current configuration against “known good” secure baseline policy sets such as the DISA STIGs or CIS’s own Benchmarks.
In addition to encrypting data that is in transport to and from the database server, encryption can also be applied to data that is “at rest” (that is, stored within the database). The encryption process is performed on the database server before the data is written to disk and is completely transparent to the applications accessing the database. Known as Transparent Data Encryption (TDE) on some platforms, both data and log files can be encrypted in this way. It is also possible to apply storage/volume encryption to the underlying host or storage platform as a whole, to further ensure that the data is not accessible in a readable format to an attacker under various attack scenarios.
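As a hedged sketch of TDE on one platform (SQL Server syntax; the database name, certificate name and password are placeholders):

```sql
USE master;
-- Create a database master key and a certificate to protect the
-- database encryption key:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong-password-here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

USE app_db;
-- Create the encryption key for this database and switch TDE on;
-- data and log files are encrypted transparently from this point.
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE app_db SET ENCRYPTION ON;
```

The certificate (and its private key) must itself be backed up securely, since an encrypted database cannot be restored without it.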
A general principle, as opposed to a specific technical measure, is to ensure that when putting a restriction in place – whether at the network level or relating to database table or user permissions – the default (baseline) position is to deny all access, and then to apply specific “allow” exceptions. This is in contrast to an approach where the default position is for all access to be allowed, except for certain specific blocked or denied instances. This matters both because it makes permissions for a given user or system explicit and easy to review and understand, and because any newly created access vector or user has no access by default, forcing administrators to carefully consider and grant only the access that is required – typically applying the principle of least privilege and making use of Role Based Access Control (RBAC) via user-role assignment in the case of user permissions.
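As an illustrative sketch using MySQL 8 roles (role, account and table names are hypothetical), a default-deny posture means that a new account starts with no privileges at all and receives only a narrowly scoped role:

```sql
-- Define a role carrying only the access a reporting analyst needs:
CREATE ROLE 'reporting_ro';
GRANT SELECT ON app_db.sales TO 'reporting_ro';

-- A new user has no privileges by default; grant them the role only:
CREATE USER 'analyst'@'10.0.20.%' IDENTIFIED BY '<strong-password-here>';
GRANT 'reporting_ro' TO 'analyst'@'10.0.20.%';
SET DEFAULT ROLE 'reporting_ro' TO 'analyst'@'10.0.20.%';
```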
It is often necessary to store sensitive data within a database, and for this data then to be exported in some form for further use: especially within large enterprises, production databases are commonly copied or cloned to create test, support, and development environments. Rather than simply storing the sensitive data in raw (readable) format, there are a few options for modifying the data so that it remains fit for purpose (in terms of form or volume) yet does not disclose its contents. This typically involves transforming the data using obfuscation or perturbation techniques – the three most common options being data masking, encryption, and tokenisation. Data masking substitutes realistic but fake data for the original values, whereas data tokenisation substitutes sensitive data with random (meaningless) surrogate values, referred to as tokens. Modifying the data in this way allows its safe storage within less secure resource spheres, such as testing environments, without the risk of data exfiltration leading to a data breach involving sensitive data.
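Implementations vary considerably by platform; as one hedged example, SQL Server’s dynamic data masking can rewrite sensitive columns on read for unprivileged accounts (the table and column names are illustrative):

```sql
-- Mask the email column so unprivileged readers see aXXX@XXXX.com:
ALTER TABLE customers
    ALTER COLUMN email ADD MASKED WITH (FUNCTION = 'email()');

-- Expose only the last four digits of a stored card number:
ALTER TABLE customers
    ALTER COLUMN card_number
    ADD MASKED WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)');
```

Note that dynamic masking of this kind protects data as it is read in place; for data sets exported to test or development environments, static masking or tokenisation of the exported copy itself is the more appropriate control.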
Certain attack types such as command injection can permit an attacker to execute ad hoc commands within the context of the database system and its executing owner. By configuring the database services to run under a low-privileged user account, an administrator can help to minimise the impact of such exploits should they occur.
In addition to the risks posed by an attacker assuming control of an authorised user’s credentials or account, there is also a risk posed by problematic configuration of authorised accounts, specifically in relation to configuration drift – most importantly, access to a database is not something that can be set once and then never amended. Employees join and leave an organisation, and change roles, and access that may once have been necessary for a business function may no longer be appropriate, presenting a business risk. It is therefore important to consider the authorised user list as another type of configuration relating to database security, and one that requires periodic review and the culling/de-provisioning of accounts in cases where access is no longer required or appropriate.
Whilst the primary focus of securing access to the database is typically on access via the connecting application or applications, there are a number of other channels that need to be considered and appropriately secured. In some cases, this may be local access (access from the host operating system command line itself), which brings its own potential security issues – such as the common problem of administrators saving database access credentials in plaintext configuration files on disk, or passing the password as a command-line parameter, where it is stored in the host’s command history file (and sometimes visible in the process listing too). Network-based access may also offer a number of alternative channels for database access that can easily be overlooked when hardening a database server, such as permitted connections for backups or replication, for batch processing, by data warehouses or reporting tools, and by third-party interfaces and data feeds. These all offer alternative attack vectors and come with their own security risks.
Although the primary focus of hardening within a database environment will understandably be on the database system itself, in the context of considering likely attack vectors it is also worth considering the security of client environments – that is, machines such as administrator workstations. These may have privileged access to administrative interfaces and metaservices that permit modification of the database itself, and are therefore tempting targets for attackers. An attacker compromising an administrative workstation can often then pivot this attack into an exploit against the database system or systems to which it has privileged access. These attacks can be purely technical, or rely on social engineering such as phishing attacks that permit an attacker to install malware onto administrative machines. Measures that can be useful in hardening client environments include the use of anti-malware and anti-virus solutions on administrative machines, and clear enforcement of unprivileged account usage for day-to-day work, with separate “high privilege” administrative accounts used only when required. It is also possible to introduce measures such as bastion hosts for administrative access, as well as 2FA/MFA for authentication, in order to add an additional “hurdle” for attackers – these measures mean that even if an operator/administrator host machine is compromised, access to the database server is not guaranteed for the attacker.
As with any other computer system, database systems can have vulnerabilities inherent in their binaries as shipped from the vendor. It is therefore important to register for security advisories from software vendors and to apply security patches within a reasonable timeframe – with shorter timescales for higher criticality vulnerabilities.
Database administrators and others can often be hesitant to commit to applying updates to database servers, given the requirements for service availability and the significant issues if databases cannot be restored to service following an update. Complicating the overall situation is the need to schedule business downtime in which to test changes. Downtime interrupts operations and can also have an adverse impact on company revenue. If the business perceives little-to-no benefit to testing and scheduling downtime to apply security patches, security vulnerabilities can easily accumulate over time. These obstacles can be overcome, however, by means such as testing patches in non-production environments, and the use of multiple master database servers in a replica set, permitting individual hosts to be patched one at a time without impacting service delivery.
An often-overlooked measure that can be taken to harden databases is the careful review and optimisation of code. This can involve manual or automated code review aimed at identifying queries that are either dangerous or highly resource-intensive. By identifying such queries, the application code can either be optimised to provide more efficient query execution, or else the queries in question can be moved, where possible, to a batch processing system rather than being executed in real time, where they compete for database resources against other queries. The risk of leaving unoptimised queries in place is that they offer an easy amplification method for attackers seeking to perform a Denial of Service (DoS) attack. If an attacker is able to identify an expensive query that can be triggered by a single HTTP request (for a webpage), for example, then they can potentially trigger a complete database outage simply by making a handful of concurrent HTTP requests, at minimal cost to themselves but with significant impact on database availability.
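Identifying such queries need not be purely manual; in MySQL, for instance, the slow query log and the EXPLAIN statement can surface likely candidates (the threshold and the query shown are illustrative):

```sql
-- Log any statement taking longer than one second (MySQL):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- Inspect the execution plan of a suspect query to see whether it
-- performs full table scans or large joins (here, the function call
-- on the indexed column forces a full scan):
EXPLAIN SELECT o.*, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id
WHERE YEAR(o.created_at) = 2021;
```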
Applicable to both system privileges and object privileges (those applying to specific database columns, tables, or views), it is possible to place access restriction controls on data to ensure that users and clients can only perform queries relating to data to which they are permitted access. If access is restricted to specific tables or columns only, then even in the case of a SQL injection exploit an attacker may be unable to completely compromise the database server, since the execution context of any SQL commands they are able to inject will lack the privileges to perform queries on the tables they wish to target.
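Object and column privileges can be expressed directly in SQL. In a hedged MySQL sketch (account, table and column names are hypothetical), a support account can be allowed to look up customers without ever being able to read their card numbers:

```sql
CREATE USER 'support'@'10.0.30.%' IDENTIFIED BY '<strong-password-here>';

-- Column-level grant: the support account can read identifying fields
-- but not the card_number column, even if SQL is injected through an
-- application running as this account.
GRANT SELECT (id, name, email) ON app_db.customers TO 'support'@'10.0.30.%';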
Lastly, it is important to have a strong password policy in place for both user and administrative access, and to pair this with multi-factor authentication (MFA) where possible, at least for administrative connections. Passwords remain one of the most common weak points leveraged by attackers to gain unauthorised access. A good password policy will include requirements for password length, password strength and character set, and password re-use and uniqueness.
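Many of these requirements can be enforced by the database itself; in MySQL 8, for instance (the values shown are illustrative policy choices, not recommendations for every environment):

```sql
-- Enforce server-wide password complexity rules:
INSTALL COMPONENT 'file://component_validate_password';
SET PERSIST validate_password.length = 14;
SET PERSIST validate_password.policy = STRONG;

-- Enforce rotation and prevent re-use for a given account:
ALTER USER 'app_user'@'10.0.10.5'
    PASSWORD EXPIRE INTERVAL 90 DAY
    PASSWORD HISTORY 5;
```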
AppCheck can help you gain assurance across your entire organisation’s security footprint. AppCheck performs comprehensive checks for a massive range of web application vulnerabilities – including SQL injection and other database security weaknesses – from first principles, to detect vulnerabilities in in-house application code. Our custom vulnerability detection engine delivers class-leading detection of database vulnerabilities and includes logic for multiple detection methods (including Time Delay Detection, Error Detection, Out of Band Detection and Boolean Inference) as well as a range of database products and platforms (including Oracle, PostgreSQL, SQLite, MSSQL, MySQL and Azure).
The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.
AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations’ websites, applications, network, and cloud infrastructure. AppCheck are authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).
As always, if you require any more information on this topic or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications then please get in contact with us: info@localhost