The Future of AI and Cyber Response

Incident Response / September 17, 2024

The “set it and forget it” mindset has, in the opinion of Trend Micro’s senior global incident response program manager Chris Lafleur, been the bane of cybersecurity. When it comes to AI cybersecurity tools, technology has not advanced to the point where it can replace human vigilance—at least not entirely. While AI solutions can help automate incident detection and response, they should be part of a larger cybersecurity toolkit. He explained why in this Q&A.

How are companies leveraging AI to improve cybersecurity?

The top five ways we see it being used now are:

  • Cyber threat detection and response
  • Predictive analytics for threat prevention
  • Enhancing incident response times
  • Bridging the cybersecurity skills gap
  • Automating routine security tasks

AI can enhance cybersecurity by identifying and mitigating threats faster than traditional methods. AI cybersecurity tools analyze vast amounts of data, recognize patterns, and detect anomalies that could indicate a security breach. AI can also help bridge the cybersecurity skills gap by accelerating the learning process for individuals, providing them with enough foundational knowledge to ask the right questions and seek appropriate solutions.
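The anomaly-detection idea above can be illustrated with a toy statistical sketch (not a production detector, and not a specific vendor's method): flag time windows whose event counts deviate sharply from the baseline, the same pattern-recognition task that AI models perform at much larger scale.

```python
# Toy illustration of anomaly detection in security telemetry:
# flag hours whose failed-login counts sit far above the baseline.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices whose value lies more than `threshold`
    standard deviations above the mean of the series."""
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Hourly failed-login counts; hour 5 looks like a brute-force spike.
failed_logins = [4, 6, 5, 7, 5, 480, 6, 4]
print(flag_anomalies(failed_logins))  # -> [5]
```

Real AI-driven tools apply far richer models to many signals at once, but the principle is the same: learn a baseline, then surface deviations for a human to review.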

What are AI’s limitations for cybersecurity?

While vendors are jumping on the AI bandwagon, it’s not a total security solution for analysts and it doesn’t replace the insight of senior analysts with years of experience. There’s still a lot of data out there that’s wrong on the response side—and the funny thing about AI bots is that when they’re wrong, they are very confident that they are right.

What, if anything, can or should be automated in your cybersecurity/IR program?

There are so many benefits to cybersecurity automation. Humans have other jobs to do, especially in the cybersecurity and IT realm. AI helps us with this. Recent advancements in AI for cybersecurity include improved threat detection algorithms, automated incident response, and predictive analytics to foresee potential attacks.

AI models can provide actionable insights to improve security postures and detect impacts of specific events, as demonstrated by tools developed by teams like those at the Trend Micro AI summit.

If you see something of concern in your logs, you can use AI knowledge and tooling to basically analyze the command structure and explain to you in simple language what’s going on and how severe the problem is.
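As a sketch of that workflow: wrap the suspicious log entry in a prompt that asks a language model to explain it in plain terms and rate its severity. The `llm` client and the encoded PowerShell string below are placeholders for illustration, not references to a specific product or real incident; substitute whichever model API your organization uses.

```python
# Hedged sketch: ask a language model to triage a suspicious command.
# The model call itself is left as a placeholder comment.
def build_triage_prompt(log_line: str) -> str:
    """Compose a plain-language triage request for an LLM."""
    return (
        "You are assisting a security analyst. Explain in simple language "
        "what the following command does, and rate its severity "
        "(low/medium/high):\n\n"
        f"{log_line}"
    )

# Hypothetical example of an encoded command pulled from a log.
suspicious = "powershell -enc SQBFAFgA..."
prompt = build_triage_prompt(suspicious)
# response = llm.complete(prompt)  # placeholder: call your model API here
print(prompt)
```

The value is in the framing: the analyst gets a plain-language explanation and a severity estimate to act on, while the human still makes the final call.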

AI can also validate that a vulnerability has been exploited in your environment via the internet. Or you can train it to conduct internal assessments on your endpoints, your operating systems, and their functionality.

What kind of liability are orgs opening themselves up to by deploying AI/automation?

AI systems, if not properly managed, can compromise data privacy. They can create risks such as unauthorized data access, misuse, and breaches. There have already been reports of organizations opening AI systems to their data and inadvertently giving unauthorized users full access. The concern is not just data exposure for individual users; there are also major concerns for regulatory compliance.

Businesses must understand the structure and data usage of the AI tools they deploy. Public AI models may not ensure privacy and could be susceptible to data poisoning or unauthorized access. Awareness of how input data is utilized on a broader scale is crucial. This knowledge helps businesses make informed decisions about their AI tools and data protection strategies.

Regardless of the industry and usage, data must be siloed—if you’re asking AI a question you need to make sure you are not making sensitive data vulnerable to people who can access the AI logs. Data sovereignty will be an ongoing issue: Who owns the data, and can it be overwritten by AI?

AI is also being exploited by threat actors, necessitating continuous development and vigilance in AI applications. We will be making an announcement at Black Hat highlighting the evolving nature of AI in cybersecurity.

AI adoption in cybersecurity must consider the potential for threat actors to exploit AI models. Red team tools with AI functionalities may emerge, aiming to bypass security measures. Understanding that AI tools can be abused, like other network tools, is key. Defense strategies should incorporate AI to counteract these threats.

What are your top tips for using AI cybersecurity solutions?

  1. Ensure continuous human oversight and involvement with AI cybersecurity tools
  2. Understand and control data input and usage within AI models
  3. Regularly update and train AI models to keep pace with evolving threats
  4. Implement strict access controls and data privacy measures
  5. Stay informed about the latest developments and best practices in AI cybersecurity and automated threat response

What’s your opinion about the future of AI?

I’m a big, big supporter of AI. I like that where it’s going to take us will make the last 20 years of technological progress seem slow. We’re going to see human behavior change drastically, like when the automobile, electric power, and computers were introduced. I would say organizations that are early adopters are setting themselves up for the future.

Want to learn more about AI and cybersecurity?

Download our tips sheet below!
