Multi-factor authentication is less about security and more about spying on you.
Multi-factor authentication (MFA), simplistically, means using two or more routes to prove you are who you say you are. The underlying concepts (again, I am simplifying here) are that something you know, something you have, or something you are can each serve as a factor identifying you. The concept is nifty, but in practice it is not much more secure than not using MFA at all. For the user, it also increases risk.
A common two-factor authentication (2FA) scheme, heavily promoted by Google and others, has the user provide a phone number to the authenticating service. When the user needs to prove their identity, the service sends an SMS message to the phone containing a token, which the user then types back to authenticate. This implements a “something you have” factor – you have a phone device which can receive SMS messages.
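The server side of that flow is trivially simple: generate a short random token, text it, and compare it against whatever the user types back. A minimal Python sketch (the function names are mine, purely illustrative – no real provider's API):

```python
import secrets

def make_sms_token(digits=6):
    # What the service generates and sends over SMS: a short random
    # string of digits -- effectively a temporary password.
    return "".join(secrets.choice("0123456789") for _ in range(digits))

def verify_sms_token(sent, typed):
    # Constant-time comparison of what was sent vs. what the user typed.
    return secrets.compare_digest(sent, typed)

token = make_sms_token()
assert len(token) == 6 and token.isdigit()
assert verify_sms_token(token, token)
```

Note that nothing in this exchange requires the service to know your phone number permanently, yet that is exactly what it collects.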
The unfortunate side effect is that it also – completely unrelated to the authentication transaction – tells the authentication provider where you are on the planet right at that moment. It probably also tells the provider who you are, what kind of device you are using, and what services you use, and it can be correlated with other data to reveal much more about you: your credit score, the available balance on your credit card – and let’s not even begin to get into all the other information about you that Google knows. In short, it severely violates your privacy.
But it increases your security, right? Maybe.
The mobile device/SMS connection shows only that the person attempting to authenticate has access to your SMS text messages, as well as to whatever other method triggered the 2FA – probably logging in with an account id and password. But at the very least that SMS message was sent to you in plain text from the nearest cellphone tower, so anyone with an appropriately tuned scanner now has that token, too. And of course there are other ways to capture an SMS-delivered token as well, such as a malicious (or compromised) app with SMS permissions. It may seem far-fetched to believe you are important enough for someone to target you, follow you around with a scanner, and nip into your account. But then, surely you have never received a text message or email saying “…if you did not request this, just ignore this message.”
I imagine I can hear people readying arguments, but stop and ask yourself (again): if something is free on the internet (like Google’s 2FA), what is the product being sold? Never fear, it is you.
What about those other forms of MFA? What about iris scans, and fingerprints? Once something is digitized – like all those fingerprints device makers have been collecting from people who want to instantly unlock mobile devices – it is a password. It may be complex, long, and nearly unguessable, but it is a password. Yes, the fingerprint is something you are, but its digitized representation is something you know, and once digitized it can be copied, or taken without your permission or knowledge from a picture you uploaded to Snapchat or Facebook. In fact, it probably already has been. And now it is a password.
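To make the point concrete, here is what “a digitized biometric is a password” looks like in code. Real biometric matchers use fuzzy template comparison rather than exact equality, but assuming a simplified exact-match scheme, storage and verification are indistinguishable from password handling:

```python
import hashlib

def store_template(fingerprint_bytes):
    # The digitized scan is just bytes; hash and store it,
    # exactly as a server stores a password.
    return hashlib.sha256(fingerprint_bytes).hexdigest()

def verify(candidate_bytes, stored_hash):
    # Whoever can reproduce the bytes authenticates --
    # whether or not the finger came along.
    return hashlib.sha256(candidate_bytes).hexdigest() == stored_hash

stored = store_template(b"scanned-fingerprint-data")
assert verify(b"scanned-fingerprint-data", stored)  # an exact copy passes
assert not verify(b"someone-else", stored)
```

Nothing in `verify` can tell a live finger from a copy of its data.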
It is a password stored on someone else’s server.
Which brings us to those nifty “keys” certain people like to carry around, often a USB device which generates a token to serve as something you have. The USB stick holds an algorithm, and sometimes a clock, which it uses to produce a token – a password, something you know – which the authenticating server can also compute for comparison. So one program can do what your key can do. Would that not tell you that yet a third program can do the same thing, with exactly the same accuracy? Again, it is unlikely you are important enough to target individually. But if millions of people are using those keys, someone is going to automate the process.
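That is not hand-waving: the algorithm most of these tokens run is public, standardized as TOTP in RFC 6238. Any program holding the same shared secret computes the same token – which is the whole point, and the whole problem. A self-contained sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    # RFC 6238: HMAC the current 30-second interval count with a
    # shared secret, then dynamically truncate to a short code.
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: at T=59s this secret yields 287082.
assert totp(b"12345678901234567890", at=59) == "287082"
# Any second (or third) program with the secret gets the same answer.
```

Because the secret must live on both the token and the server, compromising either end – or the provisioning QR code – hands over the password-generating machine itself.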
But there are other USB keys carrying big, long RSA certificates. Yes, you guessed it: this is just a static password. Whoever has possession of the USB has the password, and can probably copy it, although that may be genuinely hard to do. Even if they cannot – even if the act of copying causes the key to self-destruct – the fact that you no longer possess the key means you are now locked out of whatever your MFA was supposed to give you access to. And how is this an improvement over a physical key, anyway? After all, physical keys are already a technology balanced between complexity, portability (wait ’til you see how quickly a lock barrel can be replaced), replaceability, security, and cost.
It seems to me that any model of security must first protect the individual’s privacy, and only then provide consistent authentication. Without completely protecting the individual, the identification method may itself be used to improve the specificity of an attempt to defeat it. It also seems to me that a variety of security measures, rather than a few centralized models, does in fact allow a form of security through obscurity: attackers must first determine which model(s) you are using in order to figure out a way to circumvent them, and rarer, more unusual models will likely receive less attention than common ones – even when the common ones are more robust.
The final word on MFA is that, so far, every factor comes down to something you know. Yes, the sound of your voice may unlock your front door, but only because it is compared to a recording of your voice – and that means another recording, something you know, can fool your front door. When someone designs a system which can provably, physically interact with something and only then digitize it to compare for authentication, then and only then will there be a ‘pure’ MFA factor for something you are or have. Of course someone will likely develop a hack for that, too.