
Bank built its own threat hunting agent because vendors can’t keep pace with new threats

Australia’s Commonwealth Bank built its own agentic AI threat hunting tools, because vendors are too slow to develop tools that can cope with emerging AI-powered threats, according to General Manager of Cyber Defence Operations Andrew Pade.

Speaking at analyst firm Gartner’s Security & Risk Management Summit in Sydney on Tuesday, Pade said he joined the bank six years ago when it logged 80 million threat signals a week. That figure now tops 400 billion, and he said AI is one reason for the growth.

Pade told the event that the bank investigated attacks such as phishing emails and sites, and found the same code – sometimes including clear artefacts of AI coding tools – in many different attacks.

“The lure changed, but the backend was the same,” he said. Since the advent of AI, the volume of attacks the bank detects has also increased.

“When I joined [six years ago], we ingested 80 million signals a week,” Pade said. “Last week it was 400 billion.”

“You cannot manage that with traditional cyber defences.”

Pade worried that the sheer scale of threats is also a career-killer. He said the bank now hires graduates with cybersecurity skills, a change from his own career path that saw early career IT workers start on a help desk and learn infosec on the job. He said cybersecurity graduates now walk into a high-pressure environment that represents a mental health challenge.

“One of the things that really concerns me is taking that off the table,” Pade said.

“I wanted our first-level analysts to access the same knowledge our senior people have, in the fastest way,” he added. “That was the tipping point: How do I take scale off the table, and how do I ensure all our agents are working in cyber in 20 years’ time” instead of burning out?

The bank’s response was to build its own agentic AI tool that ingests threat information from sources such as new research, analyses it using the bank’s own data, and identifies threats that could pose a risk to its sprawling estate of legacy systems, on-prem infrastructure, SaaS, and cloud-hosted workloads.

Pade said building that tool was necessary because infosec vendors can’t keep up with emerging threats and the bank can’t wait for a product. He said the bank previously required two days to assess the seriousness of emerging threats and prepare a hypothesis about the risks they pose. The agent does it in 30 minutes and prepares reports.

AI also created problems for his team when the bank used the tech to conduct red team security assessments. Pade said human-authored red team reports include detailed evidence to satisfy a lawyer, but AI-generated documents may not report the same threat the same way twice.

“AI is non-deterministic,” Pade said. “So we had to find a way to put deterministic points in a non-deterministic flow. It was a real mind shift for our red teams.”

The bank now tries to assign deterministic outcomes to attacks, so its agents can make more repeatable predictions.

Developing agents proved tricky. Pade said his team asked the bank’s data scientists for help, as they are already skilled at creating AI applications that he said represent “real AI” rather than “automation on steroids.”

Their first attempt at creating tools for the bank’s infosec teams “didn’t solve the problem,” Pade admitted. Once frontline security staffers worked alongside data scientists, a useful tool emerged.

“Throwing the problem over the fence and waiting for a solution was not the answer,” Pade said. “They knew the AI, we knew the outcome. The people closest to your problem are best to solve it.”

The security chief said the bank is now “learning how to integrate AI to take the monotony out of our day” and suggested every organization needs to do the same given AI will mean cyber-criminals can scale the volume of their attacks to new heights.

“You will see attacks like we do, like it or not,” he said. “I would be asking your teams: ‘How are we solving that problem?’” ®

Source: The Register
