Deterring transaction-fraud … and ransomware


In a recent panel discussion on financial fraud prevention, a question was posed to the panelists – a credit-card issuer, a prepaid-card payment processor, and others: Were any of them using FIDO protocols to prevent transaction fraud?

There was silence for a full 15 seconds before the panelists slowly responded, one by one, to state that they were not; no reasons were provided. The responses surprised this author – not only because FIDO protocols are well known in the security industry and have been available as standards for over two years, but also because the card-issuer on the panel is a Board member of the FIDO Alliance! They had to be aware of the transaction-fraud prevention capabilities of FIDO protocols.

FIDO standard protocols – the Universal Authentication Framework (UAF) and the Universal 2nd Factor (U2F) – are the result of an industry alliance of over 300 companies worldwide working to eliminate shared-secret authentication: mechanisms such as user IDs and passwords, one-time-password tokens, etc., which have been at the heart of numerous data breaches over the last decade. Since the protocols were standardized two years ago, more than 150 products have been FIDO Certified® – including StrongAuth’s own CryptoEngine FIDO Server – and are available to serve risk-mitigation needs.

In addition to strong-authentication – an industry term implying the use of cryptographic digital signatures from hardware authenticators to confirm a user’s identity – FIDO protocols also support acquiring digital signatures from end-users to confirm transactions. The most important facets of FIDO-based digital signatures are their mandates that:

  • End-users be physically present in front of the computer/device initiating the transaction;
  • End-users possess a FIDO Authenticator with a private-key to authenticate themselves; and optionally
  • End-users confirm transactions with a digital signature using their FIDO Authenticator.
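The server-side pattern implied by these mandates can be sketched as follows. This is a simplified illustration, not the FIDO wire protocol: the function names are hypothetical, and an HMAC over a shared key stands in for the asymmetric signature a real FIDO Authenticator would produce with its private key, so the sketch runs on the Python standard library alone.

```python
import hashlib
import hmac
import json
import secrets

def make_challenge(transaction):
    """Server side: bind a random nonce to the exact transaction text,
    so the signature covers this specific transaction and nothing else."""
    nonce = secrets.token_bytes(16)
    payload = json.dumps(transaction, sort_keys=True).encode()
    return nonce + hashlib.sha256(payload).digest()

def authenticator_sign(device_key, challenge):
    """Authenticator side: sign only after the physically present user
    confirms the displayed transaction (the user-presence step).
    HMAC stands in here for the authenticator's private-key signature."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key, challenge, signature):
    """Server side: accept the transaction only with a valid signature
    over the challenge derived from this transaction."""
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Usage: a confirmed purchase verifies; a forged signature does not.
key = secrets.token_bytes(32)   # provisioned at FIDO registration time
txn = {"amount": "49.99", "payee": "Example Books"}
chal = make_challenge(txn)
sig = authenticator_sign(key, chal)
assert server_verify(key, chal, sig)
```

Because the challenge hashes the transaction contents, malware cannot silently substitute a different payee or amount: any change invalidates the signature the user authorized.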

While FIDO protocols were primarily designed to enable strong-authentication to web-applications, their ability to support transaction-authorization is icing on the cake. Yet applications, apparently, are not using this feature to stem multi-billion-dollar losses to the industry. This is a shame, because FIDO protocols not only have the potential to strengthen transaction security, but they also eliminate the password hell end-users are subjected to, while protecting them and web-applications from many attacks on the internet.

Notwithstanding the availability of a patch, the most recent ransomware attack in May 2017 could have been mitigated if applications accessing sensitive files had required such digital-signature authorization to modify and/or delete files. Technically, from an application’s point of view, writing to a file is analogous to an electronic transaction such as buying a book at an e-commerce site: both modify the state of a file or database upon the conclusion of the transaction.

Currently, ransomware attacks work because applications allow authenticated users to modify files (encrypting them and deleting the originals) without secondary authentication and/or authorization. Consequently, malware on a user’s computer executes with the full privileges of that user. FIDO digital signatures change that paradigm, leading to higher levels of security.
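The changed paradigm can be sketched as a guard around destructive file operations: the application refuses to delete (or overwrite) a protected file unless it is handed a per-operation authorization signature. This is a hypothetical illustration – in a real system the signature would come from the user’s FIDO Authenticator after on-device confirmation, whereas here an HMAC over a placeholder key stands in so the sketch is self-contained.

```python
import hashlib
import hmac
from pathlib import Path

# Placeholder for key material provisioned at FIDO registration time.
AUTHORIZER_KEY = b"provisioned-at-registration"

def authorize(operation, path):
    """Stand-in for the user confirming the operation on their FIDO
    Authenticator; returns a signature bound to this exact operation."""
    msg = "{}:{}".format(operation, path).encode()
    return hmac.new(AUTHORIZER_KEY, msg, hashlib.sha256).digest()

def guarded_delete(path, signature):
    """Delete the file only if a valid authorization signature for
    *this* delete is presented; malware running with the user's OS
    privileges alone cannot produce one."""
    msg = "delete:{}".format(path).encode()
    expected = hmac.new(AUTHORIZER_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False   # refuse: no valid per-operation authorization
    Path(path).unlink()
    return True
```

Under this design, a ransomware process that has compromised the user’s session can still *request* deletions, but each destructive operation stalls waiting for a confirmation the malware cannot forge.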

This author has written about the benefits of application-level encryption and strong-authentication in the past. Building on that foundation, companies would do well to add transaction-level authorization to not only deter transaction fraud, but also inoculate themselves from ransomware. The protocols to support this are available today; the tools to enable this are available now; all that is required is the resolve to get the job done.

Not all biometric authentication is equal


Recently, the Chief Executive Officer of Wells Fargo Bank spoke about biometric authentication at an Economic Outlook Conference.  He stated that we may be the last generation to use user names and passwords, and that Wells Fargo (amongst other banks) was testing biometric technology – fingerprint matching, voice recognition, iris matching, etc. – as a means of authenticating users to websites and systems.

While biometric technology is not new (it has been commercially available for at least two decades), the fingerprint readers in Apple’s iPhone and newer Android-based devices have done more to bring biometrics to the masses than all previous efforts combined.  Notwithstanding this deluge, it would be a fallacy to assume all biometric technologies are equal – or secure.  While biometrics certainly appear to make the user’s authentication experience easier, they do not necessarily make it more secure; as the adage goes: the devil is in the details.

Given the variability of biometrics, how does a user tell whether a given biometric technology is secure?  The quick answer: they can’t. It would be futile to educate users about the security factors associated with biometrics when it’s difficult enough for professionals to do so within a rapidly evolving landscape.

There is one concern, however, most users have – or are likely to have – with biometric authentication: are the biometric data being sent to a site’s servers?   If so, are they stored there?  What are the protections around that storage?  Given the breaches we’ve witnessed over the years, and given the propensity of site operators to profit from customer data, this is a very legitimate concern.  Unlike healthcare data, US law says precious little about biometric data security and privacy.

A solution that has the potential to address this concern is the use of biometric technology when combined with a Fast Identity Online (FIDO) protocol for strong-authentication.

When a device, a site and the application between them use a FIDO protocol for strongly authenticating users, they’re following an industry standard designed with the user’s security and privacy in mind.  FIDO protocols do not require biometric data to be sent to the site; the FIDO cryptographic keys used to authenticate a user are unique for each site.  Finally, devices and applications using FIDO with biometrics typically use biometric data to verify a user’s identity locally on the device.  When successful, they use FIDO keys on the device to authenticate the user to the site.  This two-step authentication process ensures that a FIDO-enabled site respects its users’ privacy by not requiring biometric data on the site to authenticate the user.
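The two-step process above can be sketched from the device’s side. The class and method names are illustrative, not part of any FIDO specification, and an HMAC stands in for the public-key signature a real authenticator would produce; the point of the sketch is that the biometric template never leaves the device and each site gets its own key.

```python
import hashlib
import hmac
import secrets

class FidoDevice:
    """Hypothetical device-side model of FIDO-with-biometrics."""

    def __init__(self, biometric_template):
        self._template = biometric_template   # never leaves the device
        self._site_keys = {}                  # a distinct key per site

    def register(self, origin):
        # A fresh key per site: sites cannot correlate one user across
        # origins by comparing key material.
        self._site_keys[origin] = secrets.token_bytes(32)

    def authenticate(self, origin, challenge, biometric_sample):
        # Step 1: verify the user locally; nothing biometric is sent.
        if not hmac.compare_digest(self._template, biometric_sample):
            return None
        # Step 2: sign the site's challenge with that site's own key.
        return hmac.new(self._site_keys[origin], challenge,
                        hashlib.sha256).digest()
```

Only the signature in step 2 crosses the network; the site can verify who the user is without ever holding a fingerprint, voiceprint or iris image.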

Biometric devices not using FIDO protocols may legitimately claim they do not send biometric data to sites for authentication; however, only FIDO protocols currently provide standardized cryptographic proof that biometric data is neither needed by, nor sent to, the site to authenticate users.

Given the potential for confusion as biometrics are used by more applications and sites, what is needed are standard identification marks serving three purposes:

  1. To distinguish ‘plain-vanilla biometric’ devices from ‘FIDO-enabled biometric’ devices;
  2. To distinguish FIDO-enabled applications from non-FIDO-enabled applications (much as the SSL/TLS lock identifies the security protocol in browsers); and
  3. For sites to identify when they’re using FIDO protocols for strong-authentication.

While the FIDO Alliance has such identifying marks for devices, it’s uncertain whether Android/iOS and browsers (the most common FIDO-enabled applications) and sites will choose to highlight the marks even if devices are so labelled.  This assurance by mobile operating systems, browser manufacturers and sites may be necessary to give consumers confidence that they are not being dumped from the frying-pan of passwords into the fire of biometrics.

What is ALESA?


Question:  Aside from eliminating sensitive data from your business process, what are two things you can do to eliminate much of the risk of a data-breach?

Answer:  Application Level Encryption and Strong Authentication.


Longer Answer:  While we all recognize that encrypting sensitive data can protect you, most people – even in the security business – don’t realize that not all encryption is equal.  Even when using NIST-approved algorithms with the largest key-sizes available, data can still be breached.  How is that possible?

When encrypting data, all else being equal from a cryptographic point of view, two design decisions matter:  1) Where is the data being cryptographically processed? and 2) How are the cryptographic keys managed?

If data is encrypted/decrypted in any part of the system – the hard-disk drive, operating system, database, etc. – other than the business application using that data, significant residual risks remain despite the encryption.  An attacker need only compromise a software layer above the encrypting layer to see unencrypted (plaintext) data.  Since the application layer is the highest layer in the technology stack, it is the most logical place to protect sensitive data, as it affords the attacker the smallest target.  This also ensures that, once data leaves the application layer, it is protected no matter where it goes (and conversely, must come back to the application layer to be decrypted).
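A minimal sketch of the placement decision follows – it illustrates *where* encryption happens, not *how*. A toy SHA-256 keystream stands in for a real authenticated cipher such as AES-GCM (do not use this construction in production); the function names and the dict-as-database are hypothetical.

```python
import hashlib
import secrets

def _keystream(key, nonce, length):
    """Toy keystream (counter-mode over SHA-256) standing in for a
    real cipher; for illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def app_encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def app_decrypt(key, blob):
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

def store_customer(db, key, card_number):
    # The application encrypts BEFORE handing data downward: the
    # database, OS and disk layers below only ever see ciphertext.
    db["card"] = app_encrypt(key, card_number.encode())
```

An attacker who compromises the database or the disk beneath this application sees only the `app_encrypt` output; the plaintext exists solely inside the application layer.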

The second design decision of encryption is how you protect cryptographic keys.  If you use a general-purpose file, keystore, database or device to store your keys, this is the equivalent of leaving company cash in a general-purpose desk or drawer.  Much as you need a safe to store cash in a company, you need a purpose-built “key-management” solution designed with hardened security requirements to protect cryptographic keys.  These solutions have controls to ensure that, even if someone gains physical access to the device, gaining access to the keys will be very hard, if not near impossible.  If the key-management system cannot present sufficiently high barriers, even billion-dollar companies can fail to protect sensitive data – as many did this year and continue to do even as I’m writing this!
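The distinguishing property of a purpose-built key manager can be sketched as an interface: callers receive opaque handles and ask the manager to perform cryptographic operations, but raw key bytes never leave it – mirroring how an HSM or key-management appliance keeps keys inside hardened storage. The class and method names below are hypothetical.

```python
import hashlib
import hmac
import secrets

class KeyManager:
    """Illustrative interface: keys live only inside this object;
    callers hold handles and request operations, never key bytes."""

    def __init__(self):
        self._keys = {}   # raw key material stays in here

    def generate_key(self):
        handle = secrets.token_hex(8)
        self._keys[handle] = secrets.token_bytes(32)
        return handle     # the caller gets a handle, not a key

    def mac(self, handle, data):
        # The operation happens inside the manager, by handle.
        return hmac.new(self._keys[handle], data,
                        hashlib.sha256).digest()
```

Contrast this with a general-purpose keystore file: there, any process that can read the file holds the keys themselves; here, compromising a caller yields only handles that are useless outside the manager.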

While cryptography tends to get complex and the details might seem burdensome, it is important to recognize that an encryption solution provides the last bastion of defence against determined attackers; it is well worth a company’s time to give it the proper attention and not attempt to invent it themselves.

Conversely, the first line of defence should be strong-authentication. Strong-authentication is the ability to use distinct cryptographic keys combined with secure hardware (in the possession of the user) to confirm that the user is who they claim to be.  While digital certificates on smartcards have provided such capability for over two decades, they are expensive, and not easy to use or support even in highly technical environments.  A standards group (the FIDO Alliance) is attempting to simplify this problem; some early solutions have already made it to market this year, with successful deployments under way.

Between application-level-encryption on the back-end and strong-authentication on the front-end, even if an attacker manages to slip past network defences – as they always seem to do – they will have little wiggle-room to compromise sensitive data.  While no security technology is absolutely fool-proof, implemented correctly, ALESA raises the bar sufficiently high to “encourage” the vast majority of attackers to move on to easier targets.