A recent revelation indicates that the dark web hosts a trove of 1.4 billion credentials with cleartext (unencrypted) passwords that are not only accurate, but stored in a searchable database, enabling anyone who knows of its existence to look up a credential and learn its password in seconds. The contents of this database represent breached data from some of the best-known companies on the internet.
Even though one of the strongest authentication technologies for eliminating passwords has been available to protect users and companies for over two years, ill-informed decisions on the part of websites have left users and data exposed, with adverse consequences.
On the one hand, it is hard to fault business managers making technological risk-management decisions. Technology is evolving so rapidly that it leaves professionals gasping for breath as they assimilate change. Add to this a phalanx of consultants and industry analysts who don’t “eat their own dog-food”, and you are left with a perfect storm of vulnerabilities that aids attackers rather than protecting consumers and companies.
Is there a playbook managers might use to understand current risks and learn how to effectively protect systems, data and users? We believe there is. While every system is unique, there are fundamental tenets useful in making risk-management decisions.
Given that information systems exist to receive, store and process data, information must possess the following three properties to be useful; it must be:
- Authentic,
- Confidential, where appropriate, and
- Trustworthy.
What does it mean for information to be authentic? The dictionary definition claims it must be “worthy of acceptance or belief as conforming to or based on fact”.
In a real-world context, for a doctor to be able to use the results of a diagnostic test to treat a patient effectively, the information in the report must pertain to the specific patient, originate from sources that are authoritative, and be accurate. These attributes are essential to the authenticity of the information within this context. The same attributes must hold true for a power company billing a customer for power consumption: the meter reading must pertain to the specific customer, originate from the meter that is authoritative and be accurate.
Even in environments where devices – such as the sphygmomanometer or power-meter – are not used to measure a quantitative metric, similar attributes are required for information to be deemed authentic. An investor making a judgement on a company’s stock or bond offering must depend on reports for the company where revenue and expenditures pertain to the specific entity, originate from authoritative sources within the entity, and are accurate.
When data is accepted as being “authentic”, it establishes an initial level of trust in the data. However, we will show that an initial level of trust does not necessarily make data trustworthy at a different moment in time.
The information technology industry is at a point in time where not all devices that produce data have components built into them to guarantee data-authenticity – the cost is still too high for general-purpose computing. (However, the industry is slowly marching in that direction, as evidenced by the efforts of standards groups like the Trusted Computing Group and the FIDO Alliance.)
As a result, the world has learned to use proxies to attest to the authenticity of information: the laboratory clinician is trained and certified to identify patients, perform tests, accept readings and record them within information systems. The meter-reader is trained to identify customers’ premises, read meters and record information into information systems. Managers of companies releasing financial information are, similarly, trained and authorized to summarize data from a variety of sources and report results in a prescribed manner.
The vulnerability in such a proxy-based system is the technology used to authenticate humans to information systems. If an information system can be tricked into accepting a masquerader as the “authentic source” – which most current systems can – then assumptions that data are authentic fall apart.
While rules and regulations to ensure authenticity exist, rule-makers understand neither the complexity of modern information systems nor that of authenticating humans to a computer system. Most are familiar only with userids and passwords, an authentication technology created more than half a century ago. A smaller subset is familiar with biometric authentication, and an even smaller group has used strong cryptographic authentication schemes such as smartcards with digital certificates1. However, it is this author’s estimation that more than 99% of systems continue to use userids and passwords to authenticate humans. This technology remains the single largest vulnerability of systems on the modern internet.
Defensive Play: Until systems are designed and deployed to rely upon themselves to deliver authentic information, aside from policies and practices to train, certify and trust humans, passwords and secret-based authentication schemes must be eliminated. The work of the FIDO Alliance is crucial in this respect since the standards-based protocols and their implementations make it possible to eliminate the vulnerabilities inherent in current authentication schemes.
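The essence of the FIDO approach can be sketched in a few lines. In the hypothetical flow below, the server stores only a public key, so a breach of its database yields nothing an attacker can replay – in contrast to a password database. This sketch uses Ed25519 from the third-party `cryptography` package purely as an illustration; real FIDO2/WebAuthn implementations add attestation, origin binding and signature counters that are omitted here.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator generates a key pair; only the
# public key is sent to, and stored by, the server.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Authentication: the server issues a fresh random challenge ...
challenge = os.urandom(32)

# ... the authenticator signs it with the private key, which never
# leaves the device ...
signature = device_key.sign(challenge)

# ... and the server verifies the response against the stored public
# key. No shared secret exists for an attacker to steal or phish.
try:
    server_stored_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because the challenge is unique to each login, a captured response cannot be replayed the way a stolen password can.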
Since 2003, when California’s seminal data-breach disclosure law was passed, more than 8,000 breaches have been publicly disclosed, affecting more than 10 billion records containing sensitive information. Some of the wealthiest and best-known companies in the world have been affected by data-breaches, including:
- Apple – 7 publicly known data-breaches affecting more than 1 million records;
- Facebook – 5 publicly known data-breaches affecting more than 88 million records;
- Google – 6 publicly known data-breaches affecting more than 1 million records;
- LinkedIn – 3 publicly known data-breaches affecting more than 117 million records;
- Yahoo – 4 publicly known data-breaches affecting more than 3 billion records;
- Zappos (Amazon) – 1 publicly known data-breach affecting more than 24 million records;
… and many more.
The reasons and circumstances of the data-breaches vary; the result, however, is the same: sensitive data affecting customers and/or employees is breached.
Data-breaches have a corrosive effect on the economy – their negative effects are hard to quantify beyond class-action settlements and fines, if any. Consumers, however, are not only left with evidence that companies are putting their profits ahead of consumer safety, but also bear the hidden costs of breaches in lost time (cleaning up the aftermath of identity-theft, if affected) and higher prices for goods and services.
Some industries – such as the payment card industry – have created data-security standards mandating the protection of payment card information, and going so far as to prescribe detailed requirements on how to protect sensitive information. Despite such stringent regulation, companies have been breached due to poor implementations2 of security controls.
The healthcare industry has had its own share of large data-breaches3 despite being regulated in the US by the Centers for Medicare & Medicaid Services and the Federal law – Health Insurance Portability & Accountability Act (HIPAA) – mandating the security and privacy of sensitive healthcare data.
Most data-breaches occur because of the mistaken assumption that it is easier to deter “barbarians at the gate” than to actually protect sensitive data in the application. As a result, companies over-invest in network-based security tools – firewalls, anti-virus, malware detection, intrusion prevention, etc. – rather than invest in the control mechanism that provides the highest level of data-protection: application-level encryption!
Defensive Play: Companies attempting to deter a data-breach using anything other than application-level encryption face a high probability of suffering one. While some risk can be reduced by using FIDO-based strong-authentication controls, reasonable data-security requires multiple controls to deter attackers – a practice the security industry terms “defense in depth”. Short of eliminating sensitive data from a system, encrypting and decrypting data within authorized applications (combined with a hardware-backed cryptographic key-management system) is the strongest data-protection control a company can hope to implement. When combined with FIDO-based strong-authentication, the risk-mitigation becomes formidable.
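The boundary matters: data is encrypted and authenticated inside the application before it ever reaches the database, so the stored blob is opaque to network sniffers and database administrators alike. The sketch below is a deliberately simplified, toy illustration using only the Python standard library (a hash-based keystream with an HMAC tag); a real system would use a vetted authenticated cipher such as AES-GCM, with keys held in a hardware-backed key-management system rather than in application memory.

```python
import hashlib, hmac, os, secrets

# Hypothetical application-held keys; in practice these would live in
# an HSM or hardware-backed key-management system, never in the database.
ENC_KEY = secrets.token_bytes(32)
MAC_KEY = secrets.token_bytes(32)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy hash-based keystream for illustration only; real systems
    # would use AES-GCM or ChaCha20-Poly1305 from a vetted library.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(ENC_KEY, nonce, len(plaintext))))
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # this opaque blob is what the database stores

def decrypt_field(blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext failed integrity check")
    return bytes(c ^ k for c, k in zip(ct, _keystream(ENC_KEY, nonce, len(ct))))

stored = encrypt_field(b"4111-1111-1111-1111")  # opaque to DBAs and snoopers
assert decrypt_field(stored) == b"4111-1111-1111-1111"
```

Note that the database never sees a plaintext or a key: a breach of the database alone yields ciphertext that is useless without the application's key-management system.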
A little-known fact about standard database management systems is that, due to the way they are designed, it is always possible for a privileged user to modify data-at-rest directly, without the knowledge of the application or the users who created the record.
Even when controls are in place to ensure the database system records changes and stores audit logs that privileged users cannot access, this risk is not easily mitigated. This is because database management systems use userids and passwords to authenticate users and applications; as a result, the probability of an attacker using a legitimate user’s compromised password to modify information in the database is very high.
This author has documented4 that, even when a database stores encrypted data, it is possible to attack such a system, substitute information in the database and cause the system to use incorrect information in processing the record. While this is a targeted attack requiring more knowledge and skill, it is, nonetheless, feasible.
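A simplified illustration of such a substitution attack, assuming a scheme in which each protected field carries an integrity tag: if the tag covers only the field's value, a privileged user can swap protected fields between records and every tag still verifies. The sketch below uses HMAC over plaintext values as a stand-in for the full encrypted-record case documented by the author; the fix is to bind the tag to the record it belongs to.

```python
import hashlib, hmac, secrets

MAC_KEY = secrets.token_bytes(32)

# Naive scheme: the integrity tag covers only the field value.
def tag_value_only(value: bytes) -> bytes:
    return hmac.new(MAC_KEY, value, hashlib.sha256).digest()

db = {
    "alice": (b"salary=50000", tag_value_only(b"salary=50000")),
    "bob":   (b"salary=90000", tag_value_only(b"salary=90000")),
}

# A privileged attacker swaps the two rows' protected fields wholesale.
db["alice"], db["bob"] = db["bob"], db["alice"]

# Every tag still verifies: the substitution goes undetected.
for user, (value, tag) in db.items():
    assert hmac.compare_digest(tag, tag_value_only(value))

# Fix: bind the tag to the identity of the record it belongs to.
def tag_bound(record_id: str, value: bytes) -> bytes:
    return hmac.new(MAC_KEY, record_id.encode() + b"|" + value, hashlib.sha256).digest()

db2 = {u: (v, tag_bound(u, v)) for u, v in
       [("alice", b"salary=50000"), ("bob", b"salary=90000")]}
db2["alice"], db2["bob"] = db2["bob"], db2["alice"]

# Now the swap is caught when the record is read back.
assert not hmac.compare_digest(db2["alice"][1], tag_bound("alice", db2["alice"][0]))
```

The same binding principle applies to encrypted fields (for example, as "associated data" in an authenticated cipher): integrity protection must cover not just the value but its context.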
The vast majority of current applications operate on the assumption that information stored within their databases is accurate. Even application programmers and system administrators who design and operate such systems are constrained in protecting the integrity of data for a variety of reasons: lack of knowledge, lack of resources, lack of business imperative, etc. Consequently, it is possible to implement FIDO-based strong-authentication and application-level encryption, but still remain vulnerable to integrity attacks on data, unless additional controls are placed on the system.
Defensive Play: In order to create a comprehensive data-security strategy, companies must implement digital signatures for user-transactions and stored database records. Transaction digital signatures using FIDO-based protocols are one of the strongest risk-mitigation controls to ensure only authorized users are capable of modifying previously stored data. Similarly, transactions stored in databases must be additionally secured using digital signatures generated by the applications themselves; the cryptographic key performing application-level signatures must be inaccessible to any human user – privileged or otherwise – other than the application. Upon reading a database record, the application must also verify the signature of the retrieved record before attempting to use it. Only when the signature verifies successfully can the application be sure it is using the same data it stored previously, thus ensuring trustworthiness.
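The sign-on-write, verify-on-read pattern described above can be sketched as follows. Since Python's standard library lacks asymmetric signatures, this sketch substitutes an HMAC with an application-held key; the article's actual recommendation is a true digital signature with the key held in hardware, inaccessible to any human. All names here are hypothetical.

```python
import hashlib, hmac, json, secrets

# Hypothetical application signing key; in practice it would be generated
# and used inside an HSM, inaccessible to any user, privileged or otherwise.
APP_SIGNING_KEY = secrets.token_bytes(32)

def _sign(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True).encode()
    # HMAC stands in for a real digital signature (e.g. ECDSA via an HSM).
    return hmac.new(APP_SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

def store(db: dict, record_id: str, record: dict) -> None:
    # Sign on write: the signature is stored alongside the record.
    db[record_id] = {"data": record, "sig": _sign(record)}

def load(db: dict, record_id: str) -> dict:
    # Verify on read: use the record only if the signature still matches.
    row = db[record_id]
    if not hmac.compare_digest(row["sig"], _sign(row["data"])):
        raise ValueError(f"integrity failure on record {record_id}")
    return row["data"]

db = {}
store(db, "txn-1001", {"payee": "ACME", "amount": 250.00})
assert load(db, "txn-1001")["amount"] == 250.00

# A privileged user edits the row directly, bypassing the application ...
db["txn-1001"]["data"]["amount"] = 999999.99
# ... and the tamper is detected on the next read.
try:
    load(db, "txn-1001")
except ValueError:
    print("tampering detected")
```

The design choice to verify on every read is what converts the signature from an audit artifact into an active control: tampered data never reaches business logic.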
What about …?
Some may question if data authenticity, confidentiality and trustworthiness (ACT) are sufficient. They may ask “What about firewalls, patching, intrusion prevention and all the other security technologies on the market that claim to stop data-breaches?” This is a legitimate question.
Information security has become specialized enough to create a fair amount of confusion. There was a time when a firewall and a reasonable password-policy were sufficient to deter most attacks. However, times have changed. As the complexity of web-applications grew, so did the number of vulnerabilities. As a result, tools to deter security breaches have proliferated. Since most security technology providers attempt to build a business model that does not depend on modifying customer applications, the tools are generally focused on network- or host-protection.
However, it is important to keep sight of the fact that the ultimate goal of information security is to protect data. While firewalls and patching are essential to cover vulnerability gaps that system designers and operators cannot control, attackers have proven themselves adept at getting past most known network- and/or host-security controls.
Protecting data through strong-authentication, encryption and digital signatures provides extraordinarily high levels of security because it assumes an attacker may already be within the network and/or host. When designed with appropriate cryptographic key-management, ACT controls create formidable barriers to attackers. While they are not infallible, they are the strongest risk-mitigation technologies available on the market today. What has been challenging in implementing them so far is the cost and complexity of integrating such controls into business applications; this is no longer true.
Information systems operate under extraordinary constraints today: attackers from the far corners of the earth are capable of compromising systems as easily as an attacker next door. Businesses have invested enormous amounts of money in information systems to deliver better products and services, faster and cheaper to markets. Without adequate information security, these investments are at great risk; consequential damages to customers, employees and shareholders are incalculable.
The guidelines defined in this article ensure the authenticity, confidentiality and trustworthiness of data, while creating formidable controls to protect information, users and investments.
1 The problem is actually more nuanced and is detailed in a 2008 paper comparing the security of different authentication schemes. Suffice it to say that “a multi-factor authentication scheme using something you know, something you have and something you are” does not necessarily provide the right levels of risk-mitigation.