AI Brings Need for Robust Security to the Next Level
Published: 26 Jun 2023
By: Dr. Rolf Lindemann
New technologies often enable new business models. That is not a new observation. The latest such technology is Artificial Intelligence, or “AI” (or, more precisely, the latest technological breakthroughs have been made using AI). Whenever the first signs of such new business models become visible, they trigger discussions about the potential benefits and threats of the new technology. The latest discussion that caught my attention was the potential (mis-)use of AI for running large-scale attacks. I think it is worth putting that in perspective, because there is an underlying pattern here. If that catches your attention, you are invited to follow me…
Years back, the internet brought us search engines that made information available in an instant. Collecting all that information and keeping it up to date is an enormous and expensive effort (a fixed cost), but it pays off: the cost per query is negligible, while every search generates (advertising) revenue [1]. This is typically called a scalable business model [2].
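To make that economics argument concrete, here is a toy sketch with entirely made-up numbers: a large fixed cost, a near-zero marginal cost per query, and a small revenue per query.

```typescript
// Toy model of a scalable business: huge fixed cost, negligible marginal
// cost per query. All numbers are invented for illustration only.
const fixedCost = 50_000_000;   // building and maintaining the index
const costPerQuery = 0.0001;    // marginal cost of serving one search
const revenuePerQuery = 0.01;   // advertising revenue per search

function profit(queries: number): number {
  return queries * (revenuePerQuery - costPerQuery) - fixedCost;
}

console.log(profit(1_000_000));      // -49,990,100: deep in the red at small scale
console.log(profit(10_000_000_000)); // +49,000,000: profitable at large scale
```

The same arithmetic is what makes the attacks described below attractive: once the fixed cost of the tooling is paid, each additional victim is nearly free.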
First phase of scalable attacks. What works “for good” often also works “for bad”, meaning that people with bad intentions can create scalable business models based on scalable attacks. We have seen those already. Back in 2013, hundreds of millions of passwords were stolen [3], adding up to more than a billion stolen passwords at that time [4]. In a way, that was the first phase of scalable attacks, focused on authenticating returning users. Nok Nok was one of the founding members of the FIDO Alliance, which published the FIDO Authentication specifications that protect against scalable attacks [5] [6] and hence bring robust security to authentication.
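Why does this resist scalable attacks? Roughly, because the server stores only a public key, so a server breach yields nothing an attacker can replay. The sketch below shows the bare challenge-response principle using the standard WebCrypto API; it illustrates the idea, not the actual FIDO/WebAuthn message formats.

```typescript
// Bare-bones public-key challenge-response (WebCrypto; runs on Node 19+
// or any modern browser). Illustrates the principle behind FIDO only.
const subtle = globalThis.crypto.subtle;

// Registration: the device creates a key pair; only the PUBLIC key is
// sent to and stored by the server.
async function register(): Promise<CryptoKeyPair> {
  return subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // private key is non-extractable: it never leaves the device
    ["sign", "verify"],
  );
}

// Authentication: the server sends a fresh random challenge, the device
// signs it, and the server verifies with the stored public key.
async function authenticate(keys: CryptoKeyPair): Promise<boolean> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server side
  const signature = await subtle.sign(
    { name: "ECDSA", hash: "SHA-256" }, keys.privateKey, challenge); // device side
  return subtle.verify(
    { name: "ECDSA", hash: "SHA-256" }, keys.publicKey, signature, challenge); // server side
}

register().then(authenticate).then((ok) => console.log("authenticated:", ok));
```

A breached server database here contains only public keys and old challenges; unlike a stolen password list, nothing in it can be replayed, which breaks the “steal once, reuse at scale” economics.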
The second phase of scalable attacks was focused on knowledge-based authentication [7], i.e. asking for your mother’s maiden name, the name of your first pet, your first car model, your social security number, etc., and then assuming that only you could correctly answer those questions. Unfortunately, search engines on the internet (see above) can often find the answers to such questions, making it easy for attackers to create new accounts on behalf of a user. This is not really surprising, as that type of information was never considered a secret by any party and as a result was shared frequently. This essentially broke identity verification. As a result, the Federal Financial Institutions Examination Council (FFIEC) stated that identity verification generally shall not solely depend on knowledge-based questions [8]. Document-centric identity proofing evolved as a response to this attack [9]: users scan their photo ID and their face and let a remote service verify that the two match.
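At its core, that remote verification is a face-matching decision. The snippet below sketches only that final step, comparing a face embedding extracted from the ID portrait with one from the live selfie; the toy embeddings and the 0.8 threshold are placeholders, and real services wrap this step in OCR, face detection, and liveness checks.

```typescript
// Core matching step of document-centric proofing (illustrative only).
// Real systems derive embeddings with a face-recognition model and tune
// the threshold empirically; all values here are made up.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy stand-ins for embeddings of the ID portrait and the live selfie.
const idPortrait = [0.12, 0.85, -0.33, 0.41];
const liveSelfie = [0.10, 0.88, -0.30, 0.45];

const MATCH_THRESHOLD = 0.8; // placeholder; vendors tune this
console.log("match:", cosineSimilarity(idPortrait, liveSelfie) > MATCH_THRESHOLD);
```

Note that the decision rests entirely on the assumption that the “live” selfie really is live; that is exactly the assumption the deepfake attacks described below undermine.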
Third phase of scalable attacks. When your mom calls you on the phone, how do you know it is her? First, you might see her phone number on the display of your phone. Second, you will recognize her voice, and third, during the call you will recognize the way she interacts with you. Now, maybe there is a number four: you will recognize her face when she uses a video call. So that makes four independent methods, which looks pretty secure. Let’s not rely on the phone number, though, as we know about “Caller ID” spoofing attacks [10]. That is yet another method lacking robust security: the Caller ID is not cryptographically tied to the phone line.
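To see why, consider SIP, the protocol behind most VoIP calls: the number a phone displays comes from a From header that is ordinary text chosen by the sender. The abridged message below (hypothetical hosts, most mandatory headers omitted) shows how little it takes to present an arbitrary number; newer standards such as STIR/SHAKEN add cryptographic attestation, but deployment is still uneven.

```typescript
// Abridged SIP INVITE (hypothetical hosts; Via/Call-ID/CSeq etc. omitted).
// The From header -- the source of the displayed Caller ID -- is plain
// text set by the sender, with no cryptographic binding to the real line.
function spoofedInvite(displayedNumber: string, callee: string): string {
  return [
    `INVITE sip:${callee}@carrier.example SIP/2.0`,
    `From: "Mom" <sip:${displayedNumber}@carrier.example>`, // what the callee's phone shows
    `To: <sip:${callee}@carrier.example>`,
    `Contact: <sip:caller@attacker.example>`, // where the call actually originates
    ``,
  ].join("\r\n");
}

console.log(spoofedInvite("15551234567", "15559876543"));
```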
So, what about her voice? This is where AI comes in. You might have heard of DALL-E [11], an engine that uses a large language model to create images based on your instructions (and on the hundreds of millions of images used for training). There is a similar engine called VALL-E, which can simulate anyone’s voice given just 3 seconds of that person’s audio [12]. When I first saw this kind of technology used in a Bond movie [13], I considered it “science fiction”; now it is real and might soon be easily available to anyone on the darknet.
What about mom’s face on video? The publicly used term for that is “deepfake”. There are very convincing examples available on the internet [14]. The Chaos Computer Club (CCC), Europe’s largest association of hackers, demonstrated such an attack against a prominent document-centric identity proofing service [15]. Again, these types of tools might soon be easily available to anyone on the darknet (or already are, but I have not checked).
So, the only security method that remains is the way she interacts with you. That interaction does not change very frequently, so someone who observes it once will likely be able to replicate it using the tools mentioned above. Again, neither my voice nor a video recording of me is considered a secret; both are often even available on the internet. Methods that are only secure as long as no one can replicate such public information at the right time are not robust.
What are the “lessons learned” here? When it comes to security that is robust against scalable attacks, we cannot rely on the difficulty of creating voices or videos of other people. Instead, we really must use cryptographic methods that are directly backed by something the user possesses, like electronic ID cards and hardware-backed wallets, or indirectly backed by something the user possesses, like cloud wallets with strong, proven security that are protected by strong user authentication such as FIDO security keys, passkeys, or similar [16].
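In practice, passkeys are exposed to web applications through the standard WebAuthn browser API. The sketch below shows the sign-in half of that flow; the relying-party ID "example.com" is a placeholder, the challenge must be freshly generated by your server, and the returned assertion must be verified server-side.

```typescript
// Minimal passkey (WebAuthn) sign-in using the standard browser API.
// Server-side challenge generation and assertion verification are
// out of scope here; "example.com" is a placeholder rpId.
async function signInWithPasskey(serverChallenge: Uint8Array) {
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: serverChallenge,   // fresh random bytes from the server
      rpId: "example.com",          // the site the credential is bound to
      userVerification: "required", // e.g. local biometric or PIN
      timeout: 60_000,
    },
  })) as PublicKeyCredential;

  // The authenticator signed the challenge with a private key that never
  // left the device; the server verifies it against the stored public key.
  const response = assertion.response as AuthenticatorAssertionResponse;
  return {
    credentialId: assertion.rawId,
    signature: response.signature,
    authenticatorData: response.authenticatorData,
    clientDataJSON: response.clientDataJSON,
  };
}
```

Nothing in this exchange is a shared secret: observing it, or even breaching the server, yields nothing that can be replayed, which is what makes the approach robust against the scalable attacks discussed above.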
The recent advances in AI make it very clear that we have to accelerate the shift towards robust security for all remote interactions, as methods that were previously only theoretically attackable (but not attacked in practice) are now attacked in practice (or soon will be). This shift towards robust security will then help us keep fraud under control, accelerate business, and provide peace of mind to users, getting us closer to the ePromiseland [17].