A Tale of Two Breaches

(With apologies to Charles Dickens)

Last week brought news of settlements and fines for two US firms related to cybersecurity breaches. Neither of these breaches represented the best of times, nor of an age of wisdom. Neither should have happened, but they did. Yet, they were treated very differently. The uneven treatment US government authorities meted out to violating companies sends a disturbing message to executives in the boardroom.

Uber, a privately held firm in the “sharing economy”, suffered a breach of 57 million passenger and driver records in 2016. The story: an Uber software developer embedded a service credential in application code to access sensitive information from the company’s database, and stored the code in a private repository on GitHub. Embedding service credentials – a “shared secret” – inside software would, by itself, have been a violation of security best practices, since such credentials can be compromised in many places besides the GitHub repository: testing environments, staging machines and, of course, the production infrastructure itself. But the software developer’s errors did not stop there.
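To make the anti-pattern concrete, here is a minimal Python sketch contrasting a hard-coded credential with one resolved at runtime. The variable names, the placeholder value and the fallback behaviour are illustrative assumptions, not Uber’s actual code:

```python
import os

# Anti-pattern (what the Uber developer reportedly did): a service credential
# embedded directly in source code. Anyone who can read the repository, or any
# machine the code ships to, learns the secret.
DB_PASSWORD = "s3cr3t-hardcoded"  # hypothetical value, for illustration only

# Safer pattern: resolve the credential at runtime from the environment
# (ideally populated by a vault or secrets manager), so the secret never
# appears in the repository at all.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail loudly rather than silently falling back to an embedded secret.
        raise RuntimeError("DB_PASSWORD not set; refusing to use a hard-coded value")
    return password
```

Externalizing the secret does not make it strong – it is still a shared secret – but it at least removes the repository from the list of places it can leak.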

GitHub deployed FIDO-based strong-authentication specifically so that software developers could protect their repositories from unauthorized access. Yet, despite deploying one of the strongest authentication protocols the industry has produced, GitHub neither encourages its users to sign up, nor to sign in, with FIDO technology. As a result, the software developer used one or more shared secrets – username/password, one-time passcodes, etc. – to authenticate to the Uber repository.

The next mistake was that Uber automatically deployed its applications into Amazon Web Services (AWS) using yet another shared secret: an application programming interface (API) key paired with a secret key – a euphemism for username/password, dressed in geekier terminology to imply sophistication: hashed message authentication codes (HMAC), a variation of the algorithms used by password authentication systems.
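For readers unfamiliar with the mechanism: the core of HMAC-based API authentication is just a keyed hash over the request, as this simplified Python sketch shows. AWS’s real Signature Version 4 scheme adds key derivation and date/region scoping; the key and request string below are made up for illustration:

```python
import hashlib
import hmac

def sign_request(secret_key: str, request_string: str) -> str:
    """Simplified sketch of HMAC request signing. The real AWS scheme
    (Signature Version 4) derives the signing key through several rounds
    and scopes it to a date, region and service; this shows only the
    core idea: a keyed SHA-256 digest over the request."""
    return hmac.new(secret_key.encode(), request_string.encode(),
                    hashlib.sha256).hexdigest()

# Anyone holding the secret key can produce a valid signature, which is
# exactly why an API secret is still a shared secret, whatever it is called.
signature = sign_request("example-secret-key", "GET /bucket/object")
```

However cryptographically respectable the digest looks, the scheme’s security rests entirely on keeping that one string secret everywhere it is stored.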

This disastrous chain of weak links – passwords to store software in a repository, containing passwords to access a protected database, using passwords to automatically deploy applications into the public cloud – eventually led to the compromise of sensitive data. The compromise could have happened anywhere in the chain; that it could happen at all was a risk that should have been anticipated at every level within Uber.

To make matters worse, Uber’s Chief Information Security Officer not only failed to disclose the breach for a year, but paid the hackers USD 100,000 in an attempt to have the stolen data deleted. The attempt did not succeed, and the officer lost his job in the process.

Last week, Uber settled with the 50 US States and the District of Columbia for USD 148 million for violating data breach laws.

The same week, the Securities and Exchange Commission (SEC) announced an agreement by Voya Financial Advisors, Inc. (VFA), a publicly traded financial services firm, to pay USD 1 million to settle charges related to failures in cybersecurity policies and procedures, which led to compromised information of 5,600 – that’s right, five thousand six hundred – VFA customers. The attackers called VFA’s support line, impersonated VFA contractors over a one-week period in 2016, and requested that their passwords be reset. Using the new passwords, they created new customer profiles and obtained unauthorized access to account documents for three – you read that correctly – three customers.

Now, comes the part that simply doesn’t make sense.

VFA is currently a USD 8 billion public company, and paid a fine averaging USD 178.57 per breached customer record. Uber, on the other hand, estimated to be worth USD 72 billion, paid a fine of USD 2.59 per breached customer record.

A recent report indicates that the global average cost of a breached record was USD 148. Using this average cost, while VFA should have paid a fine of USD 828,800 for its lapses, Uber’s settlement should have been a whopping USD 8.436 billion!
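The arithmetic is easy to verify; a quick Python sketch using the figures quoted above:

```python
# Reproducing the per-record arithmetic from the article's figures.
uber_fine, uber_records = 148_000_000, 57_000_000  # USD 148M settlement, 57M records
vfa_fine, vfa_records = 1_000_000, 5_600           # USD 1M settlement, 5,600 records
avg_cost = 148                                     # global average cost per breached record (USD)

print(round(uber_fine / uber_records, 2))  # 2.6 (the article truncates to 2.59)
print(round(vfa_fine / vfa_records, 2))    # 178.57
print(vfa_records * avg_cost)              # 828800: what VFA's fine would be at the average
print(uber_records * avg_cost)             # 8436000000: roughly USD 8.436 billion for Uber
```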

Did the states egregiously devalue consumers’ data, given independent research on the value of a breached record? How did the SEC arrive at a valuation closer to the global average than the 50 states did? Are the Attorneys General not coordinating their information resources with the SEC to ensure that data breaches caused by companies’ negligence are punished consistently? Did they not review these independent reports before deciding on the settlement amount?

These are questions that policy makers must answer if we intend to take control of our cybersecurity lapses. Without uniform laws and consequences, there can only be inconsistency in risk-mitigation approaches. The lesson boardroom executives are likely to take away from these two incidents is that, per record, it is far less expensive to breach millions of records than just a few – an incentive for practices that result in more data collection and fewer cybersecurity controls.

Shades of “too big to fail” seem to be resurgent.

Secure your data by ACT-ing now

A recent revelation indicates that the dark web maintains a trove of 1.4 billion credentials with cleartext (unencrypted) passwords that are not only accurate, but stored in a searchable database, enabling those who know of its existence to look up any credential’s password in seconds. The contents of this database represent breached data from some of the best-known companies on the internet.

Even though one of the strongest authentication technologies to eliminate passwords has been available to protect users and companies for over two years, ill-informed decisions on the part of websites have left users and data exposed, with adverse consequences.

On the one hand, it is hard to fault business managers making technological risk-management decisions. Technology is evolving so rapidly that it leaves professionals gasping for breath while assimilating change. Add to this a phalanx of consultants and industry analysts who don’t “eat their own dog food”, and you are left with a perfect storm of vulnerabilities that aids attackers rather than protecting consumers and companies.

Is there a playbook managers might use to understand current risks and learn how to protect systems, data and users effectively?  We believe there is.  While every system is unique, there are fundamental tenets useful in making risk-management decisions.

Given that information systems exist to receive, store and process data, information must possess the following three properties to be useful; it must be:

  • Authentic
  • Confidential, where appropriate, and
  • Trustworthy


What does it mean for information to be authentic?  The dictionary definition claims it must be “worthy of acceptance or belief as conforming to or based on fact”.

In a real-world context, for a doctor to be able to use the results of a diagnostic test to treat a patient effectively, the information in the report must pertain to the specific patient, originate from sources that are authoritative, and be accurate. These attributes are essential to the authenticity of the information within this context.  The same attributes must hold true for a power company billing a customer for power consumption: the meter reading must pertain to the specific customer, originate from the meter that is authoritative and be accurate.

Even in environments where devices – such as the sphygmomanometer or power-meter – are not used to measure a quantitative metric, similar attributes are required for information to be deemed authentic.  An investor making a judgement on a company’s stock or bond offering must depend on reports where revenue and expenditures pertain to the specific entity, originate from authoritative sources within the entity, and are accurate.

When data is accepted as being “authentic”, it establishes an initial level of trust in the data.  However, we will show that an initial level of trust does not necessarily make data trustworthy at a different moment in time.

The information technology industry is currently at a moment in time where not all devices that produce data have components built in to guarantee data-authenticity – the cost is still too high for general-purpose computing.  (However, the industry is slowly marching in that direction, as evidenced by the efforts of standards groups such as the Trusted Computing Group and the FIDO Alliance.)

As a result, the world has learned to use proxies to attest to the authenticity of information: the laboratory clinician is trained and certified to identify patients, perform tests, accept readings and record them within information systems.  The meter-reader is trained to identify customers’ premises, read meters and record information into information systems.  Managers of companies releasing financial information are, similarly, trained and authorized to summarize data from a variety of sources and report results in a prescribed manner.

The vulnerability in such a proxy-based system is the technology used to authenticate humans to information systems. If an information system can be tricked into accepting a masquerader as the “authentic source” – which most current systems can – then assumptions that data are authentic fall apart.

While rules and regulations to ensure authenticity exist, rule-makers understand neither the complexity of modern information systems nor that of authenticating humans to a computer system.  Most are familiar only with userids and passwords, an authentication technology created more than half a century ago.  A smaller subset are familiar with biometric authentication, and an even smaller group have used strong cryptographic authentication schemes such as smartcards with digital certificates1. However, it is this author’s estimation that more than 99% of systems continue to use userids and passwords to authenticate humans.  This technology remains the single largest vulnerability of systems on the modern internet.

Defensive Play: Until systems are designed and deployed to rely upon themselves to deliver authentic information, passwords and other secret-based authentication schemes must be eliminated – alongside policies and practices to train, certify and trust humans.  The work of the FIDO Alliance is crucial in this respect, since its standards-based protocols and their implementations make it possible to eliminate the vulnerabilities inherent in current authentication schemes.


Since 2003, when California’s seminal data-breach disclosure law was passed, more than 8,000 breaches have been publicly disclosed, affecting more than 10 billion records containing sensitive information.  Some of the wealthiest and best-known companies in the world have been affected by data-breaches, including:

  • Apple – 7 publicly known data-breaches affecting more than 1 million records;
  • Facebook – 5 publicly known data-breaches affecting more than 88 million records;
  • Google – 6 publicly known data-breaches affecting more than 1 million records;
  • LinkedIn – 3 publicly known data-breaches affecting more than 117 million records;
  • Yahoo – 4 publicly known data-breaches affecting more than 3 billion records;
  • Zappos (Amazon) – 1 publicly known data-breach affecting more than 24 million records;
  • … and many more.

The reasons and circumstances of the data-breaches vary; the result, however, is the same: sensitive data affecting customers and/or employees is breached.

Data-breaches have a corrosive effect on the economy – their negative effects are hard to quantify beyond class-action settlements and fines, if any.  Consumers, however, are not only left with evidence that companies are putting profits ahead of consumer safety, but also bear the hidden costs of breaches in lost time (cleaning up the aftermath of identity-theft, if affected) and higher prices for goods and services.

Some industries – such as the payment card industry – have created data-security standards mandating the protection of payment card information, and going so far as to prescribe detailed requirements on how to protect sensitive information.  Despite such stringent regulation, companies have been breached due to poor implementations2 of security controls.

The healthcare industry has had its own share of large data-breaches3 despite being regulated in the US by the Centers for Medicare & Medicaid Services and the Federal law – Health Insurance Portability & Accountability Act (HIPAA) – mandating the security and privacy of sensitive healthcare data.

Most data-breaches occur because of the mistaken assumption that it is easier to deter “barbarians at the gate” than to actually protect sensitive data in the application. As a result, companies over-invest in network-based security tools – firewalls, anti-virus, malware detection, intrusion prevention, etc. – rather than in the control mechanism that provides the highest level of data-protection: application-level encryption!

Defensive Play: Companies attempting to deter a data-breach using anything other than application-level encryption face a high probability of suffering one.  While some risk can be reduced with FIDO-based strong-authentication controls, reasonable data-security practice mandates multiple controls to deter attackers – what the security industry terms “defense in depth”.  Short of eliminating sensitive data from a system, encrypting and decrypting data within authorized applications (combined with a hardware-backed cryptographic key-management system) is the strongest data-protection control a company can hope to implement. Combined with FIDO-based strong-authentication, the risk-mitigation becomes formidable.


A little-known fact about standard database management systems is that, due to the way they are designed, it is always possible for a privileged user to modify data-at-rest directly, without the knowledge of the application or the users who created the record.

Even when controls are in place to ensure the database system records changes and stores audit logs that privileged users cannot access, this risk is not easily mitigated: database management systems use userids and passwords to authenticate users and applications, so the probability of an attacker using a legitimate user’s compromised password to modify information in the database is very high.

This author has documented4 that, even when a database stores encrypted data, it is possible to attack such a system, substitute information in the database and cause the system to use incorrect information when processing the record.  While this is a targeted attack requiring more knowledge and skill, it is nonetheless feasible.

The vast majority of current applications operate on the assumption that information stored within their databases is accurate. Even application programmers and system administrators who design and operate such systems are constrained in protecting the integrity of data for a variety of reasons: lack of knowledge, lack of resources, lack of business imperative, etc.  Consequently, it is possible to implement FIDO-based strong-authentication and application-level encryption, but still remain vulnerable to integrity attacks on data, unless additional controls are placed on the system.

Defensive Play:  To create a comprehensive data-security strategy, companies must implement digital signatures for user-transactions and stored database records. Transaction digital signatures using FIDO-based protocols are one of the strongest risk-mitigation controls to ensure only authorized users can modify previously stored data.  Similarly, transactions stored in databases must be additionally secured with digital signatures generated by the applications themselves; the cryptographic key performing application-level signatures must be inaccessible to any human user – privileged or otherwise – and available only to the application.  Upon reading a database record, the application must verify the signature of the retrieved record before attempting to use it.  Only when the signature verifies successfully can the application be sure it is using the same data it stored previously, thus ensuring trustworthiness.
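A minimal sketch of this verify-on-read pattern in Python. A production system would use asymmetric signatures with the signing key held in an HSM; the standard library’s HMAC, the key and the sample record below are stand-ins so the example is self-contained:

```python
import hashlib
import hmac
import json

# Hypothetical application-only key; in production it would live in an HSM
# and be unreachable by any human user, privileged or otherwise.
SIGNING_KEY = b"application-only-key"

def canonical(record: dict) -> bytes:
    # Canonicalize so the same logical record always yields the same bytes.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def store(record: dict) -> dict:
    # Sign the record at write time; the signature travels with the row.
    sig = hmac.new(SIGNING_KEY, canonical(record), hashlib.sha256).hexdigest()
    return {"data": record, "sig": sig}

def load(row: dict) -> dict:
    # Verify before use; a direct edit of data-at-rest invalidates the signature.
    expected = hmac.new(SIGNING_KEY, canonical(row["data"]), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, row["sig"]):
        raise ValueError("record integrity check failed: data modified outside the application")
    return row["data"]

row = store({"account": "12345", "balance": 100})
assert load(row) == {"account": "12345", "balance": 100}
row["data"]["balance"] = 1_000_000  # a privileged user edits data-at-rest directly...
# load(row) would now raise ValueError: the application refuses the record
```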

What about …?

Some may question if data authenticity, confidentiality and trustworthiness (ACT) are sufficient. They may ask “What about firewalls, patching, intrusion prevention and all the other security technologies on the market that claim to stop data-breaches?”  This is a legitimate question.

Information security has become specialized enough to create a fair amount of confusion. There was a time when a firewall and a reasonable password-policy were sufficient to deter most attacks.  However, times have changed.  As the complexity of web-applications grew, so did the number of vulnerabilities, and tools to deter security breaches have proliferated.  Since most security technology providers build business models that do not depend on modifying customer applications, the tools generally focus on network- or host-protection.

However, it is important to keep sight of the fact that the ultimate goal of information security is to protect data. While firewalls and patching are essential to cover vulnerability gaps that system designers and operators cannot control, attackers have proven themselves adept at getting past most known network- and/or host-security controls.

Protecting data through strong-authentication, encryption and digital signatures provides extraordinarily high levels of security because it assumes an attacker may already be within the network and/or host.  When designed with appropriate cryptographic key-management, ACT controls create formidable barriers to attackers. While they are not infallible, they are the strongest risk-mitigation technologies available on the market today.  What has been challenging in implementing them so far is the cost and complexity of integrating such controls into business applications; this is no longer true.


Information systems operate under extraordinary constraints today: attackers from the far corners of the earth are capable of compromising systems as easily as an attacker next door.  Businesses have invested enormous amounts of money in information systems to deliver better products and services, faster and cheaper to markets.  Without adequate information security, these investments are at great risk; consequential damages to customers, employees and shareholders are incalculable.

The guidelines defined in this article ensure the authenticity, confidentiality and trustworthiness of data, while creating formidable controls to protect information, users and investments.

1 The problem is actually more nuanced, and is detailed in a 2008 paper comparing the security of different authentication schemes. Suffice it to know that a multi-factor authentication scheme using “something you know, something you have and something you are” does not necessarily provide the right level of risk-mitigation.

2 TJ Maxx (TJX) stores’ breach of 100M records in 2007 and Target Corporation’s breach of 110M records in 2013

3 Anthem’s approximately 16 breaches affecting more than 159M sensitive records

4 Pages 33-35 of “An Introduction to FIDO” to the National Renewable Energy Laboratory, November 2016


Mitigating e-Commerce Fraud


Assuming the Pareto Principle applies to electronic commerce, most companies likely derive 80% of their profits from just 20% of their customers. While merchants surely value these customers highly, the customers’ credentials, credit-card numbers and personally identifiable information are just as valuable to cyber-attackers.

On an internet awash with data-breaches, what can merchants do to protect their customers and themselves? While the cyber-security industry has created a litany of technologies to address the problem, fraud rates continue to climb.

The principal reason current anti-fraud technologies do not work effectively is that they rely on secrets – secrets stored at merchant sites, which are susceptible to compromise through scalable attacks (where a single attack can compromise large numbers of customers). Here are some examples of vulnerable secrets:

  • When customers are asked to authenticate themselves using passwords – a secret;
  • When customers are asked to authenticate using one-time-passcodes (OTP) – a secret – typically sent to their e-mail or mobile phones;
  • When customers are asked to confirm their identities using answers – a secret – to questions they were asked as part of account registration;
  • When merchants “fingerprint” a customer’s computer and match the stored machine-fingerprint – a secret – when customers come back to shop again.

Another trend is to analyse customers’ shopping behaviour and use algorithms to make real-time decisions about the risk that a transaction is being executed by a bad actor. While this “artificial intelligence” is intended to automate human risk-management, it tends to become expensive as more and more shopping data must be stored and processed to make real-time decisions.

It is this author’s contention that merchants can dramatically reduce the risk of fraud by simply eliminating secrets – starting with the most obvious one: the customer’s password.

Using a strong-authentication protocol from the FIDO Alliance, merchants can offer their top 20% of customers a free FIDO Authenticator (aka Security Key) – available for as little as USD 10 – to protect their accounts. By using FIDO technology, merchants enable one of the strongest authentication protocols in the industry to ascertain their customers’ identity.  FIDO protocols and Authenticators based on them:

  • Require a hardware-based Authenticator so they are not susceptible to attacks from the internet as file-based credentials are;
  • Require the customer to prove their presence in front of the computer originating the purchase, with possession of the FIDO Authenticator;
  • Are unphishable – attackers cannot compromise the protocol’s cryptographic messages and use them to masquerade as the legitimate customer;
  • Are privacy-protecting. Even with a stolen or lost Authenticator, attackers cannot learn a customer’s identity and use it to compromise the customer’s account.

The National Cybersecurity Center of Excellence (NCCoE) at the US National Institute of Standards and Technology (NIST) recently initiated a project to show how multi-factor authentication using FIDO protocols can help mitigate e-commerce fraud. As one of the Technical Collaborators chosen by NIST to assist with this effort, StrongAuth modified the popular open-source e-commerce platform, Magento, to integrate FIDO protocols into the purchasing process as a proof-of-concept.

StrongAuth will be presenting the modified Magento flow during an NCCoE webinar on November 14th 2017 at Noon EST, and subsequently releasing the Magento modifications to the open-source community. I encourage interested parties to join us on the webinar and learn how the simple step of FIDO-enabling an e-commerce application has the potential to eliminate fraud while strengthening the relationship between merchants and their customers.

Analysts, heal thyselves!

Last week, I received an invitation from a new cloud-related community forum:

We’ve identified you as a thought leader and would like to give you a chance to join us and start benefiting from the collective insights of over 15k other IT decision makers while building your professional network and strengthening your reputation.

The embedded link led me to a web-page which boldly stated “Powered by Gartner” under their logo. The Sign Up page, however, led me to LinkedIn’s password-login page. The site was, obviously, using LinkedIn as an Identity Provider (IDP) to authenticate users and learn about them from LinkedIn’s trove of user-data. My response to the Community Manager who sent the invitation:

As much as possible, I am restricting myself from signing up at new sites that do not use FIDO protocols for authenticating users. When your site has a FIDO-based sign-up on its landing page, please let me know.

P.S. I’m confident you’re aware of LI’s breaches in 2012, 2016 and last week?

There was no response from the Community Manager. I forwarded the e-mail thread to 10-12 people at Gartner who I’m professionally connected with, and asked the following:

Considering that Gartner is used by thousands of companies for advice on many things related to IT, should Gartner not be setting an example to the industry-at-large?

But then again, one has to wonder whether the IT industry is lost, and is itself causing one of the biggest problems on the internet today. Some of the world’s largest companies that do support FIDO fail to mention it on their sign-up pages; what hope, then, for those who are either ignorant of FIDO or choose not to support it for their own reasons?

There was no response from anyone at Gartner.

Doing an ad hoc survey of the registration/login pages of 5-6 of the best-known IT analyst websites, I saw the same technology in use to protect access to their sites: userids and passwords (shared secrets).

We are in the 21st century. We’ve left radio valves, rabbit-ear antennas, rotary phones and crank starters behind in the 20th century where they belong. We have rockets that go into space and come back intact; self-driving cars; more computing power in our pockets than the mainframes of 50 years ago – and yet the very companies on whom thousands of enterprises and government agencies rely for advice on navigating the complexity of information technology choose to protect their internet-facing sites with a 50-year-old technology that can, potentially, be hacked by script-kiddies.

FIDO protocols have been a standard for two years. More than 300 companies have gotten behind this resurgence of public-key cryptography for strong-authentication. 125 products have been certified – including an enterprise-scale open-source server from yours truly – and yet the market – and the analyst companies – hesitate to implement the one control that can stop password-breaches, phishing attacks and account hijacks dead in their tracks. Excuses abound – but I’ll spare you those.

I’ve recently realized that analysts may be responsible for some of the confusion and hesitancy in the marketplace.

When this author started building Public Key Infrastructures in 1999, the accepted definition of strong-authentication in the security community was:

  • A public-private key pair;
  • Generated and stored on a cryptographic hardware device (preferably FIPS 140 certified);
  • Protected by a password or PIN known only to the user; and
  • All combined for client authentication over the Secure Sockets Layer (SSL) protocol to a website or application.

Recent conversations suggest the industry is under the misconception that possession of any two of the following represents strong-authentication:

  • What you know (a shared-secret: such as a password, phrase or a PIN);
  • What you are (a shared secret: the biometric template);
  • What you have (a device, most likely embedded with a shared secret, or an OTP sent to your mobile phone)

This author presented a peer-reviewed paper at the NIST IDTrust Conference in 2008, attempting to quantify the protection levels of different forms of authentication (much like the Mohs scale of mineral hardness). On a scale of 0 to 10, a shared secret – or a combination of shared secrets – came in at 2 or, at best, 4 if external hardware was used to store the shared-secret. FIDO protocols, when used with hardware containing secure elements, come in between 7 and 8 – higher than smartcards with digital certificates, similar to those issued by the US Department of Defense, the US Federal Government and dozens of countries with national ID cards in the EU and Asia.

To the extent the analyst industry – and their enterprise and government customers – are under this misconception, it is unlikely that FIDO deployments will permeate the internet – leaving them vulnerable to embarrassing breaches on a regular basis.

To the extent the world’s largest websites that currently support FIDO protocols for strong-authentication (the 1999 definition) choose not to mention it on their Sign Up/Login pages, consumers are unlikely to learn they can protect themselves with authentication technology for as little as ten (10) US dollars – little more than the price of a latte at a coffee shop.

Deterring transaction-fraud … and ransomware


In a recent panel discussion on financial fraud prevention, a question was raised to the panelists – a credit-card issuer, a prepaid-card payment processor, and others: Were any of them using FIDO protocols to prevent transaction fraud?

There was silence for a full 15 seconds before the panelists slowly responded, one by one, to state they were not; no reasons were provided. The responses surprised this author. Not only because FIDO protocols are well known in the security industry and have been available as standards for over two years, but also because the card-issuer on the panel is a Board member of the FIDO Alliance! They had to be aware of the transaction-fraud prevention capabilities of FIDO protocols.

FIDO standard protocols – the Universal Authentication Framework (UAF) and the Universal 2nd Factor (U2F) – are the result of an industry alliance of over 300 companies worldwide to eliminate shared secret authentication: mechanisms such as userid-passwords, one-time password tokens, etc., which are at the heart of numerous data-breaches over the last decade. Since the protocols were standardized two years ago, more than 150 products have been FIDO Certified® – including StrongAuth’s own CryptoEngine FIDO Server – and are available to serve risk-mitigation needs.

In addition to strong-authentication – an industry term implying the use of cryptographic digital signatures emanating from hardware authenticators to confirm a user’s identity – FIDO protocols also support acquiring digital-signatures from end-users to confirm transactions. The most important facets of FIDO-based digital signatures are their mandates that:

  • End-users be physically present in front of the computer/device initiating the transaction;
  • End-users possess a FIDO Authenticator with a private-key to authenticate themselves; and optionally
  • End-users confirm transactions with a digital signature using their FIDO Authenticator.

While FIDO protocols were primarily designed to enable strong-authentication to web-applications, their ability to support transaction-authorization is icing on the cake. Yet applications, apparently, are not using this feature to stem multi-billion-dollar losses to the industry. This is a shame, because FIDO protocols not only have the potential to strengthen transaction security, they eliminate the password hell end-users are subjected to, while protecting them and web-applications from many attacks on the internet.

Notwithstanding the availability of a patch, the most recent ransomware attack in May 2017 could have been mitigated if applications accessing sensitive files required such digital-signature authorization to modify and/or delete files. Technically, from an application’s point-of-view, writing to a file is analogous to an electronic transaction such as buying a book at an e-commerce site: both modify the state of a file or database upon the conclusion of the transaction.

Currently, ransomware attacks work because applications allow authenticated users to modify files (encrypting them and deleting the originals) without secondary authentication and/or authorization. Consequently, malware on a user’s computer executes with the full privileges of that user. FIDO digital signatures change that paradigm, leading to higher levels of security.
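To illustrate the paradigm shift, here is a Python sketch in which a destructive file operation must carry a per-operation authorization signature. A FIDO deployment would verify an asymmetric signature produced on a hardware authenticator after a user-presence test; the stdlib HMAC, the key and the helper names below are illustrative stand-ins:

```python
import hashlib
import hmac

# Hypothetical stand-in for the key held inside a user's hardware authenticator.
AUTHORIZER_KEY = b"hardware-token-key"

def authorize(operation: str) -> str:
    # In a real system this signature would come from the user's FIDO
    # Authenticator (after a user-presence check), not from a key the
    # application itself holds.
    return hmac.new(AUTHORIZER_KEY, operation.encode(), hashlib.sha256).hexdigest()

def overwrite_file(path: str, data: bytes, signature: str) -> None:
    # Treat the write as a transaction: verify an authorization signature
    # over this specific operation before touching the file.
    operation = f"overwrite:{path}"
    expected = hmac.new(AUTHORIZER_KEY, operation.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        # Malware running with the user's OS privileges cannot produce this
        # signature, so blanket encrypt-and-delete sweeps are refused.
        raise PermissionError(f"unauthorized modification of {path}")
    with open(path, "wb") as f:
        f.write(data)
```

The point is architectural, not cryptographic: once each destructive operation needs its own fresh authorization, possessing the user’s login session is no longer enough to rewrite every file the user can reach.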

This author has written about the benefits of application-level encryption and strong-authentication in the past. Building on that foundation, companies would do well to add transaction-level authorization to not only deter transaction fraud, but also inoculate themselves against ransomware. The protocols to support this exist; the tools to enable it are available today; all that remains is the resolve to get the job done.

Not all biometric authentication is equal


Recently, the Chief Executive Officer of Wells Fargo Bank spoke about biometric authentication at an Economic Outlook Conference.  He stated that we may be the last generation to use user names and passwords, and that Wells Fargo (amongst other banks) was testing biometric technology – fingerprint matching, voice recognition, iris matching, etc. – as a means of authenticating users to websites and systems.

While biometric technology is not new (it has been commercially available for at least two decades), the fingerprint readers in Apple’s iPhone and newer Android-based devices have done more to bring biometrics to the masses than all previous efforts combined.  Notwithstanding this surge in adoption, it would be a fallacy to assume all biometric technologies are equal – or secure.  While biometrics certainly make the user’s authentication experience easier, they do not necessarily make it more secure; as the adage goes: the devil is in the details.

Given the variability of biometrics, how does a user tell whether a given biometric technology is secure?  The quick answer: they can’t. It would be futile to educate users about the security factors associated with biometrics when it is difficult enough for professionals to keep up with them in a rapidly evolving landscape.

There is one concern, however, that most users have – or are likely to have – with biometric authentication: is their biometric data being sent to a site’s servers?   If so, is it stored there?  What protections surround that storage?  Given the breaches we have witnessed over the years, and the propensity of site operators to profit from customer data, this is a very legitimate concern.  Unlike healthcare data, US law says precious little about the security and privacy of biometric data.

One solution with the potential to address this concern is biometric technology combined with a Fast Identity Online (FIDO) protocol for strong-authentication.

When a device, a site and the application between them use a FIDO protocol to strongly authenticate users, they are following an industry standard designed with the user’s security and privacy in mind.  FIDO protocols do not require biometric data to be sent to the site; the FIDO cryptographic keys used to authenticate a user are unique to each site.  Finally, devices and applications using FIDO with biometrics typically use the biometric data to verify a user’s identity locally, on the device.  When that verification succeeds, they use FIDO keys on the device to authenticate the user to the site.  This two-step process ensures that a FIDO-enabled site can respect its users’ privacy: the site never needs biometric data to authenticate them.
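The two-step flow can be sketched as follows. A toy Schnorr signature over a tiny prime group (p = 2039, q = 1019) stands in for a real FIDO authenticator’s cryptography; parameters this small, and the class and method names, are illustrative assumptions, not part of any FIDO specification. What the sketch shows accurately is the data flow: the biometric template and private keys never leave the device, a fresh key pair is created per site, and the site stores only a public key.

```python
import hashlib
import secrets

P, Q, G = 2039, 1019, 4   # p = 2q + 1; G generates the order-q subgroup (toy sizes)

def _h(*parts: bytes) -> int:
    # Hash challenge material into the exponent group.
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big") % Q

class Device:
    """The user's device: biometric template and private keys never leave it."""
    def __init__(self, biometric_template: bytes):
        self._template = biometric_template
        self._keys = {}                      # origin -> per-site private key

    def register(self, origin: str) -> int:
        x = secrets.randbelow(Q - 1) + 1     # fresh private key for this site
        self._keys[origin] = x
        return pow(G, x, P)                  # only the public key is shared

    def sign(self, origin: str, challenge: bytes, biometric_sample: bytes):
        if biometric_sample != self._template:   # biometric match is local-only
            raise PermissionError("biometric verification failed")
        x = self._keys[origin]
        k = secrets.randbelow(Q - 1) + 1
        r = pow(G, k, P)
        c = _h(str(r).encode(), challenge)
        return r, (k + c * x) % Q            # Schnorr signature (r, s)

class Site:
    """The relying party: stores a public key, never biometric data."""
    def __init__(self, origin: str):
        self.origin, self._pubkeys = origin, {}

    def enroll(self, user: str, pubkey: int):
        self._pubkeys[user] = pubkey

    def verify(self, user: str, challenge: bytes, r: int, s: int) -> bool:
        c = _h(str(r).encode(), challenge)
        return pow(G, s, P) == (r * pow(self._pubkeys[user], c, P)) % P

device = Device(biometric_template=b"alice-fingerprint")
bank = Site("bank.example")
bank.enroll("alice", device.register("bank.example"))

challenge = secrets.token_bytes(16)
r, s = device.sign("bank.example", challenge, b"alice-fingerprint")
print(bank.verify("alice", challenge, r, s))   # True: no biometric data crossed the wire
```

Note that `Site` holds nothing a thief could replay as a biometric: a breach of its database yields only public keys.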

Biometric devices not using FIDO protocols may legitimately claim they do not send biometric data to sites for authentication; however, only FIDO protocols currently provide standardized cryptographic proof that biometric data is neither needed by, nor sent to, the site to authenticate users.

Given the potential for confusion as biometrics are used by more applications and sites, what is needed is a standard identification mark that:

  1. Distinguishes ‘plain-vanilla biometric’ devices from ‘FIDO-enabled biometric’ devices;
  2. Distinguishes FIDO-enabled applications from non-FIDO-enabled applications (much as the SSL/TLS lock identifies the security protocol in browsers); and
  3. Identifies sites that use FIDO protocols for strong-authentication.

While the FIDO Alliance has such identifying marks for devices, it is uncertain whether Android/iOS, browsers (the most common FIDO-enabled applications) and sites will choose to highlight the marks even if devices are so labelled.  This assurance from mobile operating systems, browser manufacturers and sites may be necessary to give consumers confidence that they are not being dumped from the frying-pan of passwords into the fire of biometrics.

What is ALESA?


Question:  Aside from eliminating sensitive data from your business process, what are two things you can do to eliminate much of the risk of a data-breach?

Answer:  Application Level Encryption and Strong Authentication.


Longer Answer:  While we all recognize that encrypting sensitive data can protect you, most people – even in the security business – don’t realize that not all encryption is equal.  Even if using NIST-approved algorithms with the largest key-sizes available, data can still get breached.  How is that possible?

When encrypting data, all else being equal from a cryptographic point-of-view, two design decisions matter:  1) Where is data being cryptographically processed? and 2) How are cryptographic keys managed?

If data is encrypted/decrypted in any part of the system – the hard-disk drive, operating system, database, etc. – other than the business application using that data, significant residual risks remain despite the encryption.  An attacker need only compromise a software layer above the encrypting-layer to see unencrypted (plaintext) data.  Since the application layer is the highest layer in the technology stack, this makes it the most logical place to protect sensitive data as it affords the attacker the smallest target.  This also ensures that, once data leaves the application layer, it is protected no matter where it goes (and conversely, must come back to the application layer to be decrypted).
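The placement argument can be made concrete with a short sketch. The SHA-256-keystream XOR below is a toy stand-in for a real authenticated cipher (such as AES-GCM), and the function and class names are my own; the point being illustrated is *where* encryption happens, not the cipher itself: the application encrypts a sensitive field before it ever reaches the storage layer, so a compromise of the database yields only ciphertext.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 -- illustration only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def app_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encryption happens in the application layer, before storage.
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def app_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

class Database:
    """A lower layer: stores whatever bytes it is given, verbatim.
    It never sees plaintext and holds no decryption keys."""
    def __init__(self):
        self.rows = {}
    def put(self, row_id: str, value: bytes):
        self.rows[row_id] = value

key = secrets.token_bytes(32)       # held by the application layer only
db = Database()
ssn = b"078-05-1120"                # illustrative sensitive field
db.put("user:42:ssn", app_encrypt(key, ssn))

# An attacker who dumps the database sees only ciphertext; the data must
# return to the application layer to be decrypted.
print(app_decrypt(key, db.rows["user:42:ssn"]))
```

Compare this with disk- or database-level encryption: there, any query through the database layer returns plaintext, so an attacker one layer up sees everything.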

The second design decision is how you protect cryptographic keys.  If you use a general-purpose file, keystore, database or device to store your keys, that is the equivalent of leaving company cash in an ordinary desk drawer.  Just as you need a safe to store cash in a company, you need a purpose-built “key-management” solution, designed to hardened security requirements, to protect cryptographic keys.  These solutions have controls to ensure that, even if someone gains physical access to the device, gaining access to the keys will be extremely difficult, if not impossible.  If the key-management system cannot present sufficiently high barriers, even billion-dollar companies can fail to protect sensitive data – as many did this year and continue to do even as I write this!
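The “safe” analogy translates into a simple interface discipline, sketched below. The class and method names are hypothetical; real deployments would put an HSM or a managed key-management service behind a similar boundary. What matters is the shape of the API: the application receives an opaque handle and asks the key manager to perform operations, so raw key bytes never enter application memory. An HMAC over a ledger entry stands in here for whatever operation the key protects.

```python
import hashlib
import hmac
import secrets

class KeyManager:
    """Models a purpose-built key-management boundary (an HSM/KMS stand-in).
    Applications get opaque handles; raw keys stay inside this class."""
    def __init__(self):
        self._keys = {}                       # handle -> raw key, internal only

    def create_key(self) -> str:
        handle = "key-" + secrets.token_hex(8)
        self._keys[handle] = secrets.token_bytes(32)
        return handle                         # the application only sees this

    def mac(self, handle: str, data: bytes) -> bytes:
        # The cryptographic operation runs inside the key manager's boundary.
        return hmac.new(self._keys[handle], data, hashlib.sha256).digest()

    def verify(self, handle: str, data: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.mac(handle, data), tag)

kms = KeyManager()
handle = kms.create_key()
tag = kms.mac(handle, b"ledger-entry-9913")   # illustrative payload

print(kms.verify(handle, b"ledger-entry-9913", tag))   # True
print(kms.verify(handle, b"tampered-entry", tag))      # False
```

Storing keys in a general-purpose file or database collapses this boundary: whoever reads the file owns the keys, no matter how strong the cipher.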

While cryptography tends to get complex and the details might seem burdensome, it is important to recognize that an encryption solution provides the last bastion of defence against determined attackers; it is well worth a company’s time to give it the proper attention and not attempt to invent it themselves.

If encryption is the last bastion, strong-authentication should be the first line of defence.  Strong-authentication is the use of cryptographic keys, combined with secure hardware in the possession of the user, to confirm that the user is who they claim to be.  While digital certificates on smartcards have provided this capability for over two decades, they are expensive and difficult to use and support, even in highly technical environments.  A standards group (fidoalliance.org) is attempting to simplify this problem; some early solutions have already reached the market this year, with successful deployments under way.

Between application-level encryption on the back end and strong-authentication on the front end, even if attackers manage to slip past network defences – as they always seem to do – they will have little wiggle-room to compromise sensitive data.  While no security technology is absolutely fool-proof, implemented correctly, ALESA raises the bar high enough to “encourage” the vast majority of attackers to move on to easier targets.