Senior US cybersecurity official reveals use of AI to counter hackers targeting critical infrastructure

A senior U.S. cybersecurity official has disclosed that intelligence agencies are using artificial intelligence (AI) to detect and counter hackers targeting critical infrastructure. The technology can identify signs that hackers are themselves employing AI in their attacks, while machine learning tools are helping U.S. security agencies catch operations that rely on so-called ‘living off the land’ techniques. The disclosure comes at a time when cyber expertise is in short supply and AI could fill gaps and improve efficiency.

Rob Joyce, cybersecurity director at the National Security Agency (NSA), spoke Tuesday at the International Conference on Cyber Security at Fordham University in New York, where he said that machine learning and artificial intelligence are helping cybersecurity investigators track digital incursions that would otherwise be very difficult to see. He noted that cybersecurity leaders at the conference discussed the growing use of AI by hackers, as well as by law enforcement.

Specifically, Chinese hackers are targeting U.S. transportation networks, pipelines, and ports using stealthy techniques that blend in with normal activity on infrastructure networks, Joyce revealed. These methods are ‘really dangerous’ because their aim is societal disruption rather than financial gain or espionage, Joyce said. The hackers don’t use malware that common security tools can pick up, he added.

“They’re using flaws in the architecture, implementation problems, and other things to get a foothold into accounts or create accounts that then appear like they should be part of the network,” Joyce said, referring to the living-off-the-land techniques. 

He further detailed that recent Chinese operations deviate from conventional methods and do not rely on easily detectable, signature-based malware. Instead, the hackers exploit architectural flaws, misconfigurations, or default passwords to gain access to networks. They create seemingly legitimate accounts or users, which they then use to move through the networks and carry out activities that regular users do not typically perform.

AI tools are helping the NSA catch these operations. “Machine learning, AI, and big data help us surface those activities,” Joyce said, because the models are better at detecting the anomalous behavior of supposedly legitimate users.

Recent advances in AI and machine learning have raised concerns among researchers and security officials that they might provide an advantage to offensive cyber operations, but Joyce said Tuesday that he’s encouraged by the defensive dividends offered by the technology.

“You’re going to see that on both sides, people that use AI/ML will do better,” Joyce said.

He added: “We already see criminal and nation-state elements utilizing AI. We’re seeing intelligence operators, we’re seeing criminals on those platforms. They’re much better at English-language content today.”

“One of the first things they’re doing is they’re just generating better English-language outreach to their victims, whether it’s phishing emails or something much more elaborate in the case of malign influence,” he said.

Joyce didn’t name any specific AI company, but he said the issue is widespread. “They’re all subscribed to the big-name companies that we would expect, all of the generative AI models out there,” he said.

In May, Microsoft had discovered stealthy malicious activity targeting U.S. critical infrastructure organizations, largely focused on post-compromise credential access and network system discovery. The attacks, which use living-off-the-land techniques and hands-on-keyboard activity, are carried out by Volt Typhoon, a state-sponsored hacking group based in China that typically focuses on espionage and information gathering. The targets span critical infrastructure sectors including communications, manufacturing, utilities, transportation, maritime, and government.

NBC News obtained responses on Tuesday from OpenAI and Google, the makers of ChatGPT and Bard.

In a statement, a Google representative said: “We have policies and protections in place against the use of generative AI for deceptive or fraudulent activities like phishing. While the use of generative AI to produce negative results is an issue across all LLMs, we’ve built important guardrails into Bard that we’ll continue to improve over time.”

An OpenAI spokesperson said in an emailed statement that “We have studied cyber applications of LLMs, and are funding research and development toward an evaluation suite for LLM cybersecurity capabilities.”
