
Is AI-Related Fraud on the Rise?

Cybersecurity | Legal/Litigation | Regulatory / August 24, 2023

A Q&A with Hart Brown of Future Point of View

A year ago, almost no one was talking about ChatGPT, and as of this writing, it is being investigated by the FTC. Developments in the world of AI are happening at breakneck speed, including the ways these technologies are being leveraged for fraudulent ends.

To get a better handle on the most imminent threats associated with AI, we sat down with Hart Brown, CEO and Security Practice Lead of Future Point of View.

Can you give us some background on your career in cybersecurity?

I’ve spent about 25 years now in this space. The first part of my career was with the US government, working with the Department of Justice, the Department of Defense, and the Department of State, both here in the US and in about 50 countries around the world.

Then I transitioned into the corporate sector where I built a number of different insurance programs and platforms on the cyber side of the house and did some work in incident response as well. I’m now with a company called Future Point of View.

The intent is that we map out the future state, both from a market perspective and in terms of where our clients want to be in ten years. Then we work backward into an investment strategy to help those clients achieve that future state.

What interests you most about this industry at this particular moment?

We’re at a fascinating point in time. It’s very similar to 1950 in the world of farming. In 1950, only about half of farmers had a tractor in the US, and only about a third had a phone. Over the next 40 years, there was a massive transition from being heavily manually focused to much more automated. You had a 75% drop in the number of people needed to farm, and productivity went up about 120% while expenses stayed roughly the same.

If you fast forward to 2023, you’re starting to see something similar with the first AI wave. We’re going to need potentially many fewer people to do the same types of tasks, but productivity is going to go up. Obviously business is going to change. For many businesses right now, the majority of their spend is on human capital. If we look ahead ten years, it is highly likely we will see the majority of spend going into technology. That means more digital risks and a greater need for cybersecurity.

So far, what are the most common instances of AI-related fraud that you’ve seen?

As of right now, it’s hard for people to comprehend what AI is and what it’s not. As it exists today, the majority of the AI tools that people use are roughly 85% accurate, meaning they’re 15% wrong. That creates a risk if you’re not fully engaged and don’t understand when the system is actually providing you with good information.

The challenge over the next couple of years is that AI fraud is about to become a $1 trillion problem. Global disasters cost about $2.5 trillion a year, so global fraud alone would amount to roughly 40% of the cost of the disasters we see each year. We are already seeing AI-related crime, and in certain cases arrests as well.

What attacks should businesses be most concerned about?

In the world of crime, we’re starting to see an increase in synthetic businesses that look and feel legitimate from a digital angle. Criminals then use those businesses to build false business models, whether that means stealing brand identity or luring clients away by posing as a third-party provider or vendor.

There are lots of different angles being played with synthetic companies. We also see synthetic people applying for jobs, especially in classified or defense contractor spaces. The thing we fear most is AI systems themselves attacking corporate infrastructure in some form or fashion.

Now, why is that scary? In an AI-enabled attack, you’ll send malware to a company. If it doesn’t work, the system learns, recreates the malware, and sends something completely new out to another company. If that comes back again, it reworks the malware until it gets through. Each attack then becomes unique. It becomes much more difficult from an intelligence angle to say, “We’re seeing X type of attack in the market and we need to do Y about it.”

What does this mean for the prevention and detection of attacks?

These attacks are likely to be somewhat unique, or what we call polymorphic, meaning they change with each iteration. We’ve already started to see some low-level things happening with ChatGPT and other large language models: attackers have been using them to test code and look for vulnerabilities. The time it takes to mount an attack with these tools is now much lower.

I’m a certified ethical hacker, and there is generally a process we go through to target a company, from a fraud angle or anything else, that takes a bit of time and research. The challenge now is that I can feed all of that research into an AI, or show it samples of code and have it write the malware for me and send it, all within an hour. We no longer have the time to gather indicators of compromise that would let us pick some of these attacks up.
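
To make that point concrete, here is a minimal Python sketch of why signature-style indicators of compromise struggle against payloads that are rewritten on every attempt. The blocklist and payload strings are entirely hypothetical; the only takeaway is that an exact-match indicator misses even a trivially altered variant.

```python
# Minimal sketch (illustrative only): hash-based IoC matching vs. a rewritten payload.
import hashlib

# Hypothetical blocklist of SHA-256 hashes for previously observed payloads.
known_bad_hashes = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def matches_known_ioc(payload: bytes) -> bool:
    """Signature-style check: flags only exact, previously seen payloads."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

original = b"malicious payload v1"
mutated = b"malicious payload v2"   # trivially rewritten variant, same intent

print(matches_known_ioc(original))  # True  -- the known sample is caught
print(matches_known_ioc(mutated))   # False -- the rewritten variant slips past
```

This is why, as the sidebar below also notes, detection increasingly has to lean on behavioral signals rather than exact matches.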

What are some other ways AI attacks might show up?

We’re seeing them in the social engineering space, and by that I mean phishing attempts. Programs like WormGPT and FraudGPT are already being developed. Historically there were classic telltale indicators: suspicious hyperlinks, awkward translations, or misspelled words. Now you can tell the AI tool, “I want you to write an email. I want you to sound like a CEO. I want you to sell something.” All of a sudden you have a perfectly written email that sounds like an executive and has no errors, so phishing attempts become much more sophisticated.
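
As a rough illustration of what those classic indicators look like, here is a minimal Python sketch of a telltale-sign filter of the kind described above. The misspelling list, domains, and sample messages are hypothetical; the point is that a fluent, error-free AI-written message sails past checks built around sloppy wording.

```python
# Minimal sketch (illustrative only): a "classic indicator" phishing filter.
import re

MISSPELLINGS = {"acount", "verfy", "pasword", "urgnet"}   # hypothetical list
SUSPICIOUS_TLDS = (".top", ".xyz")                        # hypothetical list

def looks_like_classic_phish(email_text: str) -> bool:
    """Flag messages with misspelled keywords or links to suspicious domains."""
    text = email_text.lower()
    words = set(re.findall(r"[a-z]+", text))
    if words & MISSPELLINGS:
        return True
    links = re.findall(r"https?://\S+", text)
    return any(link.endswith(SUSPICIOUS_TLDS) for link in links)

clumsy = "Please verfy your acount at http://login-update.xyz"
polished = ("Hi team, as discussed on our call, please wire the Q3 vendor "
            "payment today and confirm once complete. Thanks, the CEO")

print(looks_like_classic_phish(clumsy))    # True  -- old-style phish is caught
print(looks_like_classic_phish(polished))  # False -- fluent AI-written text passes
```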

What can businesses be doing to mitigate those risks?

Make sure you have the governance in place to meet the regulatory environment that’s coming. Ensure that everybody in the company is using these tools appropriately and responsibly so as not to increase liability. On the external side, it’s going to be a matter of monitoring for concerns that matter to the brand and the culture of the company. The nature of PR is in many cases going to shift because of these very public fraud attempts.

We are also going to need to get better about processes. When people call in to change their account or ask questions over a customer service line, how quickly can we authenticate them? That’s going to become expensive, much more so than it is today.

Four AI Criminal Activities to Watch:

  • Deep fakes: The technology is becoming easier to use; we are headed into election cycles; and the risk of company leadership being impacted by fraudulent misrepresentation is increasing. Organizations should consider a strategy of creating a source of truth and include this in incident response plans.
  • AI-powered attacks at scale: Once a vulnerability is identified or published on a system, there is generally a race to implement a patch before an incident occurs. An AI-powered event would allow for scale and speed against known vulnerabilities. Many organizations have a policy for patches that allows for a certain amount of time to complete the action based on risk. Organizations will need to review and potentially increase the speed of patches, updates, and interim remediations if a patch is not available.
  • AI-designed malware: While AI malware that can modify itself is still in the concern stage, the level of concern is rising. Organizations will need to monitor this area and begin to rely more on digital behavioral indications of suspicious activity, rather than immediate signature recognition such as anti-virus protection (a simple sketch of this idea follows the list).
  • AI-powered DDoS: Rather than an on/off type automated process to use bots to flood a network with requests, AI can be used in combination with botnets for attack problem-solving and predictive defenses. As the Internet of Things (IoT) world grows, the number of devices that can be leveraged for attacks increases. Organizations will need to conduct a new risk assessment with these elements in mind and increase the level of DDoS protections.
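
As a simple illustration of the behavioral-indicator idea from the third bullet above, here is a minimal Python sketch that flags a host when its outbound-connection count spikes far above its own recent baseline. The threshold and sample numbers are hypothetical, and real monitoring relies on much richer telemetry, but the principle of comparing current activity to learned normal behavior is the same.

```python
# Minimal sketch (illustrative only): flag activity that departs from a host's own baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, sigma: float = 3.0) -> bool:
    """Flag `current` if it sits more than `sigma` standard deviations
    above the host's historical average."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sd = mean(history), stdev(history)
    return current > mu + sigma * max(sd, 1.0)

# Hypothetical outbound connections per hour for one workstation over the past day
baseline = [12, 9, 15, 11, 14, 10, 13, 12, 11, 16, 14, 12]

print(is_anomalous(baseline, 13))   # False -- within normal behavior
print(is_anomalous(baseline, 240))  # True  -- sudden spike worth investigating
```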

Are there any helpful resources you’d like to share with business leaders who want to stay informed about AI technology and its implications?

We put out AI Risk Review, which is a way to test your understanding of AI. AI is a divisive topic right now, so you really do have to decipher which audience a piece of content is written for. Was it written to create some level of concern and fear, or is it urging you to go full steam ahead with these tools? That context is going to be increasingly important.

We also have a few events that can be helpful to everyone. On 9/13/23 from 1-2 PM ET, we will be doing a joint presentation on AI risks with Gallagher’s Cyber Liability practice. Then we are doing an event on 9/14/23, where we are demonstrating live deep fakes to show what the technology can do and how to mitigate the risk.

To learn more about Hart Brown and Future Point of View, visit their website.


