Industry Experts React to DARPA’s AI Cyber Challenge


At Black Hat USA 2023, the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) unveiled the two-year “AI Cyber Challenge” (AIxCC), a competition aimed at driving innovation in AI and the creation of new cybersecurity tools. The competition is intended to catalyze industry-wide collaboration in an increasingly complex threat landscape in which cybercriminals already use AI to identify and exploit vulnerabilities.

Reflecting on the announcement of AIxCC, three security experts shared their thoughts on the future use of AI in cybersecurity and the impacts the new DARPA initiative will have on our understanding of AI and its uses in cybersecurity.

Javed Hasan, CEO and co-founder, Lineaje

“An organization’s digital estate consists of all the software products it buys and builds. Over the last several years, we’ve seen an increase in adversaries exploiting this software unbeknownst to developers and security teams, leaving massive breaches on the scale of SolarWinds, 3CX, and Log4j in their wake. The dramatic increase in the frequency of these attacks has left many security experts asking, ‘Where is the disconnect?’ The answer is that an enterprise can have a wide array of cybersecurity products in its security stack, but if it is missing tools to identify and remediate issues in the software supply chain, all of its investments will go to waste.

The problem is that many vulnerability approaches today fail to do two critical things:

  1. Consider all of the components that make up the software in the first place, so they never get deep enough to discover which layer is the problem.
  2. Weigh the importance of executability for developers, focusing too much on security urgency.

Fortunately, advances in generative AI can enable organizations to optimize maintenance and security at the deepest layers of software to help both developers and security teams. We are in full support of DARPA’s new AI competition and commend them for recognizing the importance of finding new solutions that can be used to identify and remediate gaps in securing the software supply chain.”

Dane Sherrets, Senior Solutions Architect at HackerOne

“DARPA’s AI Cyber Challenge (AIxCC) is a great example of public-private sector collaboration to help reduce cybersecurity threats from a defensive perspective. We’ve already seen how applications of AI can reduce time to contain a breach, and this new dedicated focus will supercharge these ongoing efforts.

But it’s important to remember that artificial intelligence is a double-edged sword. While AI will undoubtedly supercharge defenders, it can also supercharge adversaries. Malicious applications of AI are getting better at identifying vulnerabilities and augmenting traditional attack techniques, leading to an expanded attack surface that challenges the expertise of both offensive and defensive security teams.

So, while defensive tools are an essential component of every organization’s security strategy, it is important to recognize that adversarial testing (i.e., red team operations, pentesting, or bug bounty programs) conducted by humans, aided by AI, is necessary for navigating this evolving threat landscape.”

Steve Povolny, director of security research, Exabeam

“We’re in the midst of an AI race between cybersecurity professionals and adversaries. Generative AI has lowered the barrier of entry for cybercriminals, making it easier to automate widespread phishing campaigns, write malicious code, and more. Rather than fearing generative AI, I encourage security professionals to fight fire with fire and use the technology to their advantage — which is why the DARPA competition is so important.

The best innovation happens during moments of competition and collaboration. Seeing the entire security community come together to develop cutting-edge technology is exciting. AI is an incredible tool that is revolutionizing the way we all approach cybersecurity. Through a combination of human and machine effort, opportunities proliferate for the good guys to gain the upper hand in the ongoing global cyber conflict. I look forward to seeing the results of the DARPA contest and how it influences the security community’s approach to AI moving forward.”

As continued software development expands the attack surface available to bad actors, AIxCC will be an excellent opportunity to spur the development of the next generation of cybersecurity tools. If approached correctly, AIxCC can show cybersecurity experts and non-specialists alike that AI can be used to better society by protecting its critical underpinnings. With the final phase of the competition set to be held at DEF CON 2025, we are all eager to see what teams will develop over the next two years.


Image by DCStudio on Freepik
