
The Truth About Deepfakes

Ransomware/Malware / August 26, 2020

A Q&A with Kathryn Harrison of the DeepTrust Alliance

The rise of deepfake technology has the capacity to destabilize markets, sow public distrust and undermine the credibility of organizations and their leadership. As these tools become more accessible, the risks to insurers and their insureds loom on the near horizon. We spoke with Kathryn Harrison, founder and executive director of the nonprofit DeepTrust Alliance, about why these threats need to be addressed from multiple angles.

What inspired you to create the DeepTrust Alliance?

KH: I was working in payments and blockchain product management at IBM, and I started to have questions about how to verify where data comes from, whether it’s in the supply chain or a different type of asset. That led me to think about the 2020 election and how knowing what’s true and what’s not is vitally important for our society. While individuals and large companies were working on these questions, no one was pulling the threads together in the same direction. This is a problem that requires human leadership as much as technology.

What is the DeepTrust Alliance doing to combat disinformation?

KH: We’ve been holding FixFake convenings, bringing together experts on disinformation from policy, journalism, technology, academic and business perspectives. One of the first objectives was to look at current threats and potential solutions, and we summarized our findings in a report released last month. With the advent of COVID-19, this has become an even more urgent process: we’ve been conducting a survey on whether the disinformation and misinformation around the pandemic differs from or reinforces what stakeholders have been seeing in other contexts. We’ve also been prioritizing working groups on key interventions: creating fundamental standards and a shared taxonomy for talking about deepfakes; creating a certification for those using synthetic media to ensure it’s applied in an ethical way; and finally, developing policy that can address these issues.

Why should a risk manager or cyber security officer be concerned about deepfakes? What risks do they pose for insureds and insurers?

KH: We put the risks into three major buckets. First and foremost, social engineering. People can be impersonated through images or audio, and that creates all sorts of challenges not only for companies but also for their employees. This can affect everything from cyber insurance to employment insurance. Second, market manipulation. It’s now so easy to create false information about a person, product or company, and that information can have a significant impact on share price and market performance. Third, extortion and harassment. This may be of less concern to insurance companies, but it’s become very easy to mimic celebrities and public figures and mislead people, with real consequences.

How easy is it to access this technology?

KH: The technology has accelerated at an incredible rate; in the time since the Alliance was formed, it has materially improved. The two principal ways people can access deepfake technology are through open source tools, which are still relatively rough and require technical capabilities, or through communities such as Reddit. There are many technologies that are commercially available, but these companies are more focused on the ethics of their use. Nevertheless, there was a recent case where a thief used commercially available deepfake technology to mimic the voice of a CEO and steal €220,000 from the company’s bank account. We only know of this because of the insurance report that followed.

What can be done to mitigate this risk?

KH: Most important is to create more awareness. There’s a lot of work needed to adapt and adjust to these risks. As more machine learning models emerge, we need to make sure there’s a mix of human intervention and automation to foil them. We also need to think about how these threats will impact workflows and business processes: where are the opportunities for exposure, and how can these threats affect organizations in new ways? That might mean reviewing all media images before they are published, for example, or rethinking how organizations execute their social media campaigns. Technological solutions, such as a tool that tracks an image from the camera through every subsequent edit, would help us forensically determine where deepfakes are created (a provenance approach sketched below). This is also a threat that should be accounted for in incident response plans: communications and PR teams need to be ready to respond quickly and effectively to deepfakes and disinformation when they emerge.
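
To make the camera-to-edits tracking idea concrete, here is a minimal illustrative sketch in Python of a hash-chained provenance log. The `ProvenanceLog` class and its methods are hypothetical names invented for this example, not any real product’s or standard’s API; actual provenance systems add cryptographic signing and trusted hardware, but the core idea of chaining a record of each edit is the same.

```python
import hashlib
import json
import time

# A hypothetical sketch of provenance tracking, not a real standard or product API.

def sha256(data: bytes) -> str:
    """Hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """Append-only record of an image's journey from capture through edits.

    Each entry hashes the current image bytes and references a hash of the
    previous entry, so any undisclosed alteration breaks the chain.
    """

    def __init__(self, original_bytes: bytes):
        self.entries = [{
            "step": "capture",
            "image_hash": sha256(original_bytes),
            "prev_hash": None,
            "timestamp": time.time(),
        }]

    def record_edit(self, edited_bytes: bytes, description: str) -> None:
        # Link the new entry to a deterministic hash of the previous entry.
        prev = self.entries[-1]
        self.entries.append({
            "step": description,
            "image_hash": sha256(edited_bytes),
            "prev_hash": sha256(json.dumps(prev, sort_keys=True).encode()),
            "timestamp": time.time(),
        })

    def verify(self) -> bool:
        """Check that every entry correctly references its predecessor."""
        for prev, curr in zip(self.entries, self.entries[1:]):
            expected = sha256(json.dumps(prev, sort_keys=True).encode())
            if curr["prev_hash"] != expected:
                return False
        return True

# Usage: log the original capture, then each edit.
log = ProvenanceLog(b"raw sensor data from camera")
log.record_edit(b"cropped image bytes", "crop")
log.record_edit(b"color-corrected image bytes", "color correction")
assert log.verify()  # an unrecorded manipulation would fail this check
```

An image arriving without a verifiable chain of this kind would warrant extra forensic scrutiny, which is the workflow change the provenance approach implies.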

What concerns you the most?

KH: The loss of credibility of institutions is deeply concerning, and what we haven’t really talked about yet is the so-called liar’s dividend: by sowing doubt, deepfakes let bad actors dismiss real facts and truths as fake. Insurance companies play a trusted-broker role in the economy, and that role may be at risk if their credibility comes under fire. We need better verification pipelines for information so that insurers can help their clients and partners. In the long run, everything in the digital ecosystem is likely to be edited or manipulated by AI, and that is a very different way to communicate.

What is being done by government agencies and technology companies in the private sector to address deepfakes?

KH: The Defense Advanced Research Projects Agency (DARPA) in the Department of Defense is working on developing forensics to distinguish real images and audio from fake ones. There have been some quiet legislative proposals, but only one so far has been passed by Congress, and it advances the leadership of NIST on this issue. Some states have implemented legislation addressing deepfakes in elections and pornography. All of the major technology and social media companies have recently announced deepfake policies with rules about what information can be shared on their platforms. This is all promising, but it’s going to take a mix of policy, education and technology to mitigate this risk, and it’s likely something that can’t be solved completely.

In summary…

We would like to thank Ms. Harrison for her thoughtful and insightful discussion of this emerging risk. Several years ago, nobody paid attention to a thing called ransomware, and now it’s a leading cause of loss within the cyber risk insurance industry. Ms. Harrison cited a striking example in which a CEO’s voice was impersonated, resulting in financial theft. One can easily see this exploit growing, because we are already witnessing the lower-tech version in BEC (Business Email Compromise) claims, in which social engineering via email leads to wire fraud. A clever perpetrator impersonating an executive can make millions of dollars vanish in an instant. In cyberspace, things change so rapidly that it’s hard to predict the next threat, but we suspect our insurance industry partners will pay closer attention to this issue as deepfake technology matures and emboldened threat actors opt to leverage it.

