
Deepfakes: A Rising Cyber Threat

Threats / June 24, 2020

A Q&A with John Farley of Gallagher

One of the most dangerous cyber threats emerging on the landscape is also among the most difficult to detect or prevent: deepfakes. Deepfake technology enables perpetrators to mimic the voices and images of real people, with significant consequences for individuals, companies, and democratic processes.

Recent examples include a deepfake video of Mark Zuckerberg bragging about “being in control of billions of people’s stolen data,” as well as a video of Game of Thrones actor Kit Harington delivering a “sincere apology for the show’s poorly received ending.”

The technology is advancing rapidly, and vulnerable institutions, businesses, and political parties alike are scrambling to get ahead of it. Don’t scramble. Instead, turn to NetDiligence. We have been helping a wide range of businesses stay protected and prepared since 2001, taking a holistic approach to cybersecurity and providing a one-stop shop for clients preparing for and responding to a cyberattack.

To get an insider’s perspective on this controversial technology, NetDiligence’s own Mark Greisiger sat down with John Farley, Managing Director of Gallagher’s Cyber Liability Practice, for a Q&A on the deepfake threat.

What are deepfakes and how are they created?

JF: Deepfakes are videos and audio snippets created to make it appear as if a person did or said something they never actually did or said. Artificial intelligence and deep learning techniques are used to analyze real video and audio of these individuals and then imitate them in a believable fashion.
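
To make the mechanism John describes a bit more concrete, here is a minimal, illustrative sketch of the shared-encoder, two-decoder autoencoder idea behind many early face-swap tools. Everything in it, from the network sizes to the 64x64 image resolution and the toy training loop, is an assumption chosen for illustration rather than a real deepfake pipeline:

```python
# Illustrative sketch only: one shared encoder learns a common face
# representation, and a separate decoder is trained per identity.
# Swapping decodes person A's encoded face with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
            nn.Linear(512, dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Stand-in data: real systems train on thousands of aligned face crops per person.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct only its own person's faces.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode with person B's decoder,
# yielding B's likeness driven by A's expression and pose.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design choice is the shared encoder: because both decoders read from the same latent space, decoding one person’s encoding with the other person’s decoder transfers identity while preserving expression and pose, which is what makes the output look believable.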

How are you seeing deepfakes being used?

JF: So far, the vast majority of deepfakes have been found on pornography sites—with prominent women from a wide range of professions being targeted and mimicked. Lately we’ve begun to see this technology used in a more mainstream way, and often with a political agenda.

These deepfakes make it appear as if someone from an opposing party did something that would alienate voters or sway their opinions. With a presidential election approaching, that’s a worrying prospect. There hasn’t been wide-scale damage so far, but the general public is not equipped to tell whether a piece of footage is real or fake.

Who creates deepfakes?

JF: They can come from different sources. As with ransomware, there were initially only a handful of threat actors who knew how to use the technology, but over time it has become more widespread. Deepfakes could come from nation-states, organized crime groups, or anyone who knows how to navigate the dark web and hire someone to carry out these crimes for them.

Most often, they are distributed through social media, which is the easiest and fastest way to disseminate this disinformation. Recently, an actor/director [Jordan Peele] made a deepfake video of Obama saying something negative about President Trump. In that particular instance, it wasn’t done to be harmful, but rather to create awareness about the technology and its risks.

How might a company incur cyber risk or liability from deepfakes?

JF: There is a real risk of financial crime associated with deepfakes. It has already happened in the UK, where an energy company executive’s voice was mimicked in a voicemail, leading an employee to transfer $243,000 to an account controlled by the criminal. This is just one example of how deepfakes can be used in conjunction with social engineering. They can cost a company reputational harm, lost funds, business interruption, and litigation expenses.

What can be done about this cyber threat?

JF: Right now, the challenge is that no single person, organization, or technology can really control the creation and dissemination of digital content on an end-to-end basis. The trick is to figure out how to monitor this in a way that won’t stop the flow of real information.

Last year, a group called the DeepTrust Alliance convened the best and brightest minds from all walks of life to create solutions. There are many issues to figure out—not just the technology needed to combat the threat, but also how we think about deepfakes in the context of free speech. If they’re used for art, are they protected as free speech, or will they be treated as a crime?

Where does cyber insurance come into play?

JF: As I mentioned, deepfakes can lead to losses, such as reputational harm to the company or brand, loss of funds, and business interruption. Cyber insurance has always risen to the challenges posed by evolving cyber risks, but deepfake technology has only been around since 2017, so it’s not explicitly accounted for yet.

While some of these losses could be paid for by cyber insurance, a deepfake incident might not trigger the policy. For example, a loss claim often requires a network intrusion, but making a deepfake in most cases would not require penetrating the network. While we haven’t seen any direct solutions proposed by the cyber insurance industry, we know these companies need to stay competitive, and we expect policy language to be clarified soon.

What advice do you have for risk managers anticipating the proliferation of deepfakes?

JF: I would just say that this is a very different type of threat, and you really can’t implement preventive controls the way you might for other cyber threats such as ransomware. What you can do is implement an incident response plan now that addresses a deepfake attack scenario, for example by devoting PR resources to issuing a public statement that could help mitigate the damage.

Deepfakes: Where Do We Go From Here?

In this discussion, John gave us a well-rounded overview of emerging deepfake technology, but its future, and the defenses that will develop against it, remain unknown.

Often, it takes the first class-action lawsuit for the industry to wake up and start to understand the theories of liability unfolding. Looking into the cyber crystal ball is always challenging, and new forms of cyber risk can be difficult to forecast until they’re all around us.

The ransomware avalanche perfectly exemplifies the inherent difficulty in getting ahead of an emerging threat. As such, John’s willingness to speak and sound the warning bell about this new peril is commendable.

If you have any questions or are ready to schedule a consultation, please contact us at 610.525.6383 or send us a message.

