UK NCSC report warns of increased ransomware threat with rise of AI affecting cyber operations

The U.K. National Cyber Security Centre (NCSC) has released a report highlighting the potential impact of artificial intelligence (AI) on the global ransomware threat. The report underscores that the rise of AI is expected to amplify both the frequency and severity of cyber attacks in the coming years. As a result, the NCSC urges organizations and individuals to take proactive measures to protect themselves. The report also outlines how AI will influence cyber operations, including areas such as social engineering and malware, further emphasizing the need for heightened vigilance in the face of evolving cyber threats.

Additionally, the assessment assumes no significant breakthrough in transformative AI in this period. However, this assumption should be kept under review, as any breakthrough could have significant implications for malware and zero-day exploit development and therefore the cyber threat.

Titled ‘The near-term impact of AI on the cyber threat,’ the NCSC assessment concludes that AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyber-attacks – including ransomware – in the near term. “The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design. It does not address the cyber security threat to AI tools, nor the cyber security risks of incorporating them into system architecture. More work is required to understand the extent to which AI developments in cyber security will limit the threat impact,” it added.

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” Lindy Cameron, NCSC CEO, said in a media statement. “The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”

Cameron added that as the NCSC does all it can to ensure AI systems are secure-by-design, “we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks.”

James Babbage, director general for threats at the National Crime Agency, said “ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals.”

Babbage emphasized that the NCA’s primary focus is safeguarding the public and mitigating the risk of serious crime in the U.K. This includes actively addressing the misuse of Generative AI (GenAI) by criminals and promoting the responsible adoption of the technology within its own operations, prioritizing safety and effectiveness.

The NCSC assessment finds that effective preparation is central to preventing ransomware attacks. Implementing the NCSC’s advice, such as the simple protective measures outlined in its ransomware guidance, will help U.K. organizations reduce their likelihood of being infected. Most ransomware incidents result from cyber criminals exploiting poor cyber hygiene rather than sophisticated attack techniques.

The report identifies that the threat through 2025 will come from the evolution and enhancement of existing tactics, techniques and procedures (TTPs). All types of cyber threat actors – state and non-state, skilled and less skilled – are already using AI to varying degrees. AI provides a capability uplift in reconnaissance and social engineering, almost certainly making both more effective, efficient, and harder to detect. However, more sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realized before 2025.

The report added that AI will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyze exfiltrated data faster and more effectively and use it to train AI models. AI lowers the barrier for novice cybercriminals, hackers-for-hire, and hacktivists to carry out effective access and information-gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.

Moving towards 2025 and beyond, commoditization of AI-enabled capability in criminal and commercial markets will almost certainly make the improved capability available to cybercrime and state actors.

The NCSC report said that AI will primarily offer threat actors capability uplift in social engineering. GenAI can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling, and grammatical mistakes that often reveal phishing. This will highly likely increase over the next two years as models evolve and uptake increases. Additionally, AI’s ability to summarize data at pace will also highly likely enable threat actors to identify high-value assets for examination and exfiltration, enhancing the value and impact of cyber attacks over the next two years.

Threat actors, including ransomware hackers, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing, and coding. This trend will almost certainly continue up to 2025 and beyond. Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network access that cyber criminals need to carry out ransomware attacks or other cybercrime. It is therefore likely that cybercriminal use of available AI models to improve access will contribute to the global ransomware threat in the near term.

The NCSC report recognizes that AI is likely to assist with malware and exploit development, vulnerability research, and lateral movement by making existing techniques more efficient. “However, in the near term, these areas will continue to rely on human expertise, meaning that any limited uplift will highly likely be restricted to existing threat actors that are already capable. AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data. There is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose.”

Cyber resilience challenges will become more acute as the technology develops. By 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing, or social engineering attempts. 

The NCSC report also pointed out that the time between the release of security updates to fix newly identified vulnerabilities and threat actors exploiting unpatched software is already reducing. This has exacerbated the challenge for network managers to patch known vulnerabilities before they can be exploited. AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise.

The NCSC report said that by 2025, training AI on quality data will remain crucial for its effective use in cyber operations. “The scaling barriers for automated reconnaissance of targets, social engineering, and malware are all primarily related to data. But to 2025 and beyond, as successful exfiltrations occur, the data feeding AI will almost certainly improve, enabling faster, more precise cyber operations.”

Increases in the volume, complexity, and impact of cyber operations will indicate that threat actors have been able to effectively harness AI. This will highly likely intensify cyber resilience challenges in the near term for the UK government and the private sector.

Last November, the NCSC pointed to the emergence of state-aligned hackers as a new cyber threat to critical national infrastructure (CNI), the continuation of Russia’s illegal invasion of Ukraine, and concerns around the potential risks from AI – all of which drive the need for NCSC interventions and support. The agency stated that critical sectors in the U.K. are facing an ‘enduring and significant’ threat, which is partly attributed to the emergence of state-aligned groups and a rise in aggressive cyber activity.
