AI and the War on Trust


The AI Revolution

A revolution in Artificial Intelligence has taken place over the past few years, leapfrogging previous machine learning systems, with LLMs (Large Language Models) front and center. LLMs learn by training on large volumes of text scraped from the internet and other sources and, essentially, attempt to predict what text should come next when presented with a prompt from a human (or another computer or AI). In addition, LLMs can be paired with powerful diffusion-based image tools that conjure up pictures from the text they are given. In just a few years these systems have gone from fringe curiosities to powerful mainstream tools that let people code, create art, or perform tasks in which they have no experience, with nothing more than a prompt.
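
To make the “predict what comes next” idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (chosen purely as an illustrative stand-in; the commercial LLMs discussed in this article work on the same principle at a vastly larger scale):

```python
# Minimal sketch of "predict the next text" with a small open-source model.
# Assumes the Hugging Face `transformers` package is installed; GPT-2 is used
# only as an example, but far larger commercial LLMs work the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "To reset a forgotten password, the first step is"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The model simply continues the prompt with the tokens it judges most likely.
print(result[0]["generated_text"])
```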

Whenever humanity is handed a new and powerful tool, some people will always seek to use it for profit, for disruption of existing paradigms and, of course, for crime. From deep-fakes that let you create pornographic or embarrassing videos of celebrities (or victims) simply by training a model on their face, to malware that uses LLMs to rewrite itself to avoid detection, to life-like chat bots that even the most observant humans have a difficult time detecting, there are ample and deep ways this technology can be abused. This technological revolution is going to change and disrupt many of the existing systems we rely on, both in cyber security and in society.

Spear-Phishing Powered by AI

Many of us in the information security industry have had to deal with spear-phishing attacks in our time. I personally have seen companies lose large amounts of money to targeted attacks against their executives or business leaders. When developing a playbook for incident response to phishing attacks like this in the past, it was perfectly acceptable to assume that if you were looking at a person over a Zoom call, or hearing their familiar voice, it really was them.

Now, with deep-fake AI able to replicate voices and faces in real time, there is no guarantee that the person who sounds or even looks like someone you know is actually that person. With many public executives, and even ordinary employees, maintaining public LinkedIn or social media presences, the material available for training such a model is abundant and easy to come by, so you can no longer assume anyone is safe from this sort of attack.

This is one example of the war on the trust systems we depend on today, with ramifications for both information security and civil society that need to be met head on to prevent the erosion of those systems. No longer can we treat what we see and hear as a single source of truth. We must adapt to this new reality, and quickly, as the potential for abuse and damage is only just beginning to rear its head.

Criminals and Technology

I have always said that criminals are the ultimate early adopters of technology, and they are already using these technologies to commit crimes. We have to assume that the methods and tools they use will only become cheaper, better, and more widespread. As this technology proliferates, what we see today is just the tip of the iceberg. While many of the legitimate companies offering these AI services are attempting to block illicit content or use cases, the technology is easy to replicate and relies on open-source projects, making it difficult, if not impossible, to stop its spread.

Already there are entire dark web and underground industries offering everything from DIY deep-fake porn to programs designed for criminal enterprise. With modern cyber-criminal organizations operating more like enterprises every day, complete with their own help desks and cyber-crime-as-a-service offerings, we can assume this will be one more suite of services available on demand to anyone with the means or the will to use it.

LLMs and the Dark Web 

When ChatGPT was first released, many, including myself, immediately realized it was a powerful tool for hackers and criminals. OpenAI has tried many times to implement controls that remove the ability to write malicious code or assist in criminal activity, but enterprising “prompt hackers” quickly realized you could bypass many of those controls and jailbreak ChatGPT with a carefully crafted prompt. ChatGPT would balk if you asked it to help you hide a body or write malware, but telling it you are a police officer trying to solve a crime, or convincing it that it is a different type of LLM capable of doing these things, unlocks the malicious behavior.

In addition, even if ChatGPT is able to prevent malicious use, there is nothing stopping people from creating crime-specific LLMs. Researchers created DarkBERT, trained exclusively on the dark web, and while it is not available to the public, it proves that dark web content can be used as training data for criminally focused LLMs. WormGPT, a tool crafted specifically to aid in creating malware, is already available.

Script Kiddies on Steroids

A senior pentester friend of mine recently joked to me, “we are all script kiddies now”, while explaining a new open-source automation tool he was trying out on a pen-test. “Script kiddie” is a term often applied derogatorily to lower-skilled hackers who rely on existing scripts or software to perform attacks. Twenty years ago, anyone not writing their own exploits would have been considered a script kiddie, but these days powerful exploitation frameworks, automation tools and interception proxies do much of the work that would previously have required serious programming skills. The barrier to entry for an aspiring hacker has never been lower, and ChatGPT and other LLMs are going to take this to the next level.

Previously, you needed at least a passable knowledge of programming languages and experience with how these automation tools work to begin exploitation as a “script kiddie”, but now, with LLMs to write the code for you and guide you while using these tools, even unskilled hackers can leverage powerful tooling, vastly increasing the likelihood of known vulnerabilities being exploited. You can tell an LLM to completely rewrite a script you found online to fit your purposes, or have it create a tool or script for you, all with very little understanding of the underlying programming language.

Because of the advent of AI, you can no longer assume that a technically complex exploit or vulnerability will have a lower likelihood of exploitation. If ChatGPT or other LLMs can understand it (and they probably can), then they can write code to exploit it and automate it. The time window between a vulnerability being made public and an exploit appearing in the wild, already perilously short, will become even shorter as a result. In the future, it is easy to imagine LLMs automatically creating exploits for newly announced vulnerabilities, writing the code and pushing it to malware already in use, ensuring the malware always has the latest exploits ready, all with minimal human interaction and oversight. Malware could even leverage LLMs to rewrite its own code on the fly to avoid detection by traditional EDR (endpoint detection and response) software. To top it off, all this power can be deployed by someone with minimal technical background, making the script kiddies of the future able to make even seasoned pentesters of the past seem ineffective by comparison.

Biometrics

Another victim of AI’s war on trust is biometrics, which has long formed a core of the authentication and trust on which many systems in information security rely. There is growing evidence that the attack on biometrics is just getting warmed up, and many forms of biometrics must now be considered under threat. Signature matching as a physical form of verification, long the target of talented forgers, can now be replicated by AI. Face scanning, widely used to unlock devices, can now be bypassed with the same deep-fake technology used by scammers. Even fingerprint-based biometrics are not safe from these attacks, with researchers using machine learning to create fake “master key” fingerprints.

Society's Readiness and Challenges

It’s clear that the war against biometrics using AI is happening much faster than companies and regulators are able to adapt. Even as I write this, many financial institutions still use and advertise voice identification, likely without any specific protections against deep-fakes. Just a few months ago, six banks were questioned by lawmakers after their voice authentication systems were shown to be vulnerable, exposing customers to the potential for fraud. Many of us in information security have had to struggle just to get basic controls, such as MFA, enabled, or to get the adoption of NIST standards taken seriously in an enterprise. Based on that experience, I cannot imagine that most companies are ready for the proliferation of AI, and if you asked me what most companies are doing to combat deep-fake-based scams, I would hazard a guess that they haven’t even considered it yet, in spite of the current threats. Regulators and lawmakers are similarly behind the curve and unable to keep up with the rapid rate of proliferation.

In addition, civil society is not ready to process and account for the use of these tools. We have been living in a post-truth world for years now, with extreme ideologies pushing fake news and disinformation. We have already seen a massive spread of misattributed videos and photos used to push untrue narratives, and the outright dismissal of fact-checkers and anyone else who disrupts the propaganda. Deep-fakes, as they become more difficult for the average person to identify, will only make this worse. Midjourney has already stopped free access to its service after abuse resulted in life-like fake images of Donald Trump being arrested propagating across the internet.

The unfortunate truth is that many of those who will be most affected by this new reality are people who are already the most vulnerable. Elderly people, for example, already the target of a large share of cyber-crime, will struggle to understand this new reality. We may talk about the financial ramifications of an executive becoming the target of these technologies, but as we design solutions to these problems, we cannot forget the vulnerable people who may not have the resources or ability to combat them on their own.

Possible Solutions

All this brings us to the question, what do we do? How do we continue to use biometrics knowing that they are under sustained attack? How do we adjust as a society to the proliferation and use of these tools? How do companies deal with these threats? 

There is no magical technical silver bullet to solve these issues. We must adjust our thinking, deploy new technologies, and most importantly, change our underlying assumptions about how the world works and how we authenticate and establish truth.

On the technical front, there are new AI-based techniques, such as Intel’s FakeCatcher, designed to detect deep-fake videos. These solutions can be integrated into social media, video sites, and conferencing software as a first line of defense. If Facebook, YouTube or Zoom can tell you in real time that the video or audio you are seeing is fake, or better yet block it altogether, that would go a long way toward resolving some of these issues. There is no reason similar tools could not be deployed for other methods of biometric authentication, such as fingerprint and face scanners.
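
As a rough illustration of how such a detector might be wired into a video pipeline, consider the sketch below. The deepfake_score() function is a hypothetical placeholder for a real detection model (FakeCatcher’s actual API is not assumed here); the frame handling uses the standard OpenCV library.

```python
# Sketch of screening an uploaded or streamed video with a deep-fake detector.
# `deepfake_score()` is a hypothetical placeholder for a real detection model;
# frame extraction uses the standard OpenCV (cv2) library.
import cv2


def deepfake_score(frame) -> float:
    """Hypothetical detector: probability (0-1) that a frame is synthetic."""
    raise NotImplementedError("plug in a real detection model here")


def screen_video(path: str, threshold: float = 0.8, sample_every: int = 30) -> bool:
    """Return True if the sampled frames look synthetic on average."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:  # sample roughly one frame per second
            scores.append(deepfake_score(frame))
        index += 1
    capture.release()
    return bool(scores) and sum(scores) / len(scores) >= threshold
```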

That said, these technical solutions are not perfect, and the arms race between the AI used to make deep-fakes and the AI used to detect them will inevitably continue. In addition, false positives will happen, hampering legitimate use of these tools. From a security perspective, even with large-scale deployment of anti-deep-fake tools, we cannot rely on them alone as a security measure.

Training people to spot behavior that raises a red flag, regardless of who appears to be asking, is another low-tech but effective solution. Red flags include someone in a hurry trying to rush something important, like the transfer of large sums of money or the reset of a password. Irate behavior, a sudden change of circumstances, or someone acting out of character should all be things we train people to watch for. We should be actively training people to trust these red flags over the reassurance of hearing a familiar voice or even seeing the person on a Zoom call.

In addition, good cyber hygiene has never been more important. Businesses that ignore the boring nuts and bolts of security will continue to be the most fertile ground for script kiddies armed with these new tools. Patching and vulnerability management, often deprioritized in smaller companies in favor of availability, will need to be bulletproof and timely. A business cannot assume it is too small to attract notice, or that it has ample time to react to a vulnerability, when even a non-technical actor armed with an LLM can perform a complicated attack against its infrastructure.

As a society, we need to understand that the days when simply seeing a video or hearing audio of someone was proof enough of authenticity are over. Instead, we should rely on the same tools good journalists use, attribution and chain of custody, to establish where a video or audio clip came from and how much trust we can put in it. We also need to be more patient: not jumping to conclusions from a single source of data, and trusting the organizations that expose fakes to do their work.

Conclusion

We have entered a new age. Simply banning these tools will not stop their spread, as many are available as open-source projects that anyone with cloud computing or even end-user hardware can deploy. Banning them would likely only slow their spread, and hamper those working to develop tools that combat their negative effects. We need to adjust the way we think and the tools we use to accommodate the reality of their existence, much as we did with other disruptive technologies in the past. In a way, this is a new-old problem. Image, video, and audio manipulation have been problems for as long as the technology to capture them has existed. We forget that when technologies like Photoshop or digital video editing were first deployed, they also strained existing trust systems and created societal challenges. When exploitation frameworks like Metasploit were first introduced, there were many of the same concerns about enabling threat actors, and every automation tool or technical advance since has empowered both defenders and attackers. While the problem facing us today with AI is orders of magnitude larger, looking to the past can give us hope that these are challenges we can overcome if we are willing to confront them and acknowledge their effects.

Opinions expressed are solely my own and do not express the views or opinions of my employer.

September 14, 2023