Managing cyber risk challenges from emerging technologies, including generative AI, across the OT sector


With the introduction of AI and automation tools into enterprises, addressing the cyber risks these new technologies bring, including generative AI in OT environments, requires a comprehensive approach. These emerging technologies can play a significant role in OT (operational technology) infrastructure, as they have the potential to improve efficiency, productivity, and safety across industrial environments. Some of the same tools, however, have also been shown to help researchers and attackers find vulnerabilities in source code, write exploits from scratch, and craft queries to find vulnerable devices online.

AI-powered cybersecurity systems are invaluable to the industrial sector as they can detect potential threats, such as abnormal network activity or suspicious behavior, in real time. These systems use machine learning algorithms to learn from past attacks and adapt to new threats. By leveraging these capabilities, manufacturers can proactively prevent cyberattacks and minimize the impact of security breaches, gaining a reliable defense mechanism and greater peace of mind in the face of evolving cyber threats.
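
As a rough illustration of how such anomaly-based detection works, the sketch below fits an unsupervised model to a baseline of past network flows and flags new flows that deviate from it. The features, values, and threshold are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of ML-based anomaly detection on network flow records.
# Feature columns (bytes sent, packet count, distinct destination ports) and
# all values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

baseline_flows = np.array([
    [1200, 10, 1],
    [1500, 12, 1],
    [1100,  9, 2],
    [1300, 11, 1],
    [1250, 10, 1],
    [1400, 12, 2],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_flows)          # learn "normal" behaviour from past traffic

new_flow = np.array([[250000, 900, 40]])   # e.g. a sudden scan or exfiltration burst
if model.predict(new_flow)[0] == -1:
    print("Anomalous flow detected - raise alert for analyst review")
```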

Generative AI (artificial intelligence) systems fall under the broad category of machine learning; they use deep-learning models that take raw data and ‘learn’ to generate statistically probable outputs when prompted. Thanks to recent breakthroughs in the field, generative AI can now create content rather than merely analyze or act on existing data, a capability available to both organizations and their adversaries.

By analyzing data from sensors and other sources, generative AI can be used to generate insights, boost productivity, optimize industrial processes, reduce downtime, optimize energy consumption, simplify the execution of activities, and enable predictive maintenance. However, careful consideration of cybersecurity risks is necessary to ensure the safe and secure implementation of these technologies in industrial settings.

For instance, ChatGPT, a generative AI system that can be used to create new content, is built on a model with billions of parameters trained on vast general-purpose datasets; datasets of comparable size do not exist for industrial environments. When such a pre-trained model is pointed at raw data in a data lake, the patterns are not readily identifiable to the model. For the model to understand industrial environments, it would either need to be retrained on industrial data, which can be costly given the variety of industrial data that gets generated, or be driven through significant ‘prompt engineering.’ Applying such models to uncontextualized, unstructured industrial data significantly increases the risk of incorrect and untrustworthy answers, referred to as ‘AI hallucinations.’
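
One way to reduce that risk, hinted at by the ‘prompt engineering’ point above, is to ground prompts in contextualized plant data rather than asking a general-purpose model about raw records. The sketch below is a minimal illustration of that idea; the tag names, records, and the placeholder model call are assumptions for illustration only.

```python
# A minimal sketch of grounding a prompt in contextualized plant data so the
# model is told to answer only from supplied records (or admit it cannot).
# `llm_complete` is a placeholder for whatever model interface is actually used.

def build_grounded_prompt(question: str, context_records: list[str]) -> str:
    context = "\n".join(f"- {r}" for r in context_records)
    return (
        "Answer ONLY using the plant context below. "
        "If the context is insufficient, say so instead of guessing.\n\n"
        f"Plant context:\n{context}\n\nQuestion: {question}"
    )

records = [
    "Pump P-101 vibration averaged 2.1 mm/s over the last 24h (normal < 2.8).",
    "Compressor C-3 discharge temperature trending up 4% week over week.",
]
prompt = build_grounded_prompt("Which assets show early signs of wear?", records)
# answer = llm_complete(prompt)  # placeholder call; verify any answer before acting on it
print(prompt)
```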

Increased use of AI and automation across the cyber kill chain can enable hackers to advance farther and faster, significantly accelerating steps such as reconnaissance, initial access, lateral movement, and command and control. That added speed matters most in the industrial sector, which is still heavily reliant on human input for defense.

Another notable dynamic is that AI can explain its output in terms that an attacker unfamiliar with a specific environment can easily follow; describe which assets in a network are most valuable to attack or most likely to lead to critical damage; suggest next steps to take in an attack; and link these outputs together in a way that automates much of the intrusion process.

With advanced AI and automation technologies becoming available to malicious actors, Industrial Cyber asked cybersecurity executives how industrial companies can prepare for AI-powered cyberattacks and what emerging AI cyber threats they see.

Daniel dos Santos, head of security research at Forescout’s Vedere Labs

Daniel dos Santos, head of security research at Forescout, outlined to Industrial Cyber that there are two main emerging AI cyber threats: first, using generative AI to create very convincing social engineering artifacts, such as phishing e-mails, BEC scams or deepfake audio and video, which has been reported in real attacks and can be used to dupe legitimate users into granting access to malicious actors; second, automatically generating or improving malicious code for exploits and malware, which is currently being explored by threat actors.

“Although the artifacts used in these attacks are generated in a new way, the attack techniques using them remain largely the same, which also means that security best practices do not change significantly,” dos Santos said. “The main differences are that the volume of attacks will probably increase, since the barrier to entry will be lower, and the pace at which the attacks happen will probably be faster, since much of the time spent by human actors after initial access can be reduced with the use of automated tools.”

Therefore, to prepare for AI-assisted cyberattacks, dos Santos recommends that organizations “stick to the basics, such as maintaining a complete inventory of every asset on the network, understanding their risk, exposure and compliance state and being able to automatically detect and respond to advanced threats targeting these assets.”
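
As a simple illustration of that ‘basics first’ advice, the sketch below walks a hypothetical asset inventory and flags entries whose exposure, patch level, or freshness needs attention. The field names and values are illustrative assumptions, not any product's schema.

```python
# A minimal sketch of checking an asset inventory for risk and exposure issues.
# All assets and fields below are made-up examples.
inventory = [
    {"asset": "HMI-01",  "last_seen_days": 0,  "internet_exposed": False, "patched": True},
    {"asset": "PLC-07",  "last_seen_days": 1,  "internet_exposed": True,  "patched": False},
    {"asset": "ENG-WKS", "last_seen_days": 45, "internet_exposed": False, "patched": True},
]

for a in inventory:
    issues = []
    if a["internet_exposed"]:
        issues.append("exposed to the internet")
    if not a["patched"]:
        issues.append("missing patches")
    if a["last_seen_days"] > 30:
        issues.append("not seen recently - stale inventory entry")
    if issues:
        print(f"{a['asset']}: " + "; ".join(issues))
```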

Vytautas Butrimas, an industrial cybersecurity subject matter expert

“There are two possible measures that can be taken,” Vytautas Butrimas, industrial cybersecurity consultant and member of the International Society of Automation (ISA), told Industrial Cyber. “One is for the asset owner to have in place a capability to monitor for anomalous process flows, data flows, and equipment performance. I would call this capability the industrial cybersecurity operations center (ISOC). This is easier said than done, for setting up such a capability requires hiring and training professionals knowledgeable enough about the enterprise’s operations to be of help to the senior plant engineer. Both would need to work well together to be effective.”

Butrimas added that A.I. can be one of the tools used to detect anomalous behavior, but only someone with intimate knowledge of the operation, such as the senior plant engineer, can make the final call.
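
A minimal sketch of that division of labor follows: a simple statistical check flags a deviation in process telemetry, but the alert is only escalated for the engineer's judgment, never acted on automatically. The tag names, readings, and threshold are illustrative assumptions.

```python
# A minimal sketch of flagging anomalous process telemetry for human review.
# Tag names, readings, and the 3-sigma threshold are illustrative assumptions.
import statistics

def check_tag(history: list[float], latest: float, n_sigma: float = 3.0) -> bool:
    """Return True if the latest reading deviates strongly from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(latest - mean) > n_sigma * stdev

flow_history = [42.1, 41.8, 42.3, 42.0, 41.9, 42.2]   # e.g. m3/h on a feed line
latest_reading = 55.7

if check_tag(flow_history, latest_reading):
    # Escalate for human judgement; the senior plant engineer makes the final call.
    print("Anomalous process flow on FT-101 - notify senior plant engineer")
```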

The second measure that Butrimas suggests is to limit connectivity to only what is required to monitor and control the process. “The temptations of using the cloud or applying Industry 4.0 to the enterprise need to be resisted. If that is done successfully, much of the risk from hostile A.I. entering the operation can be reduced. The final decision on increasing connectivity should be left to the senior plant engineer, not the IT department, CISO, accounting, or other management who spend their time in marketing or administration,” he added.

“As far as evaluating the emerging threats from A.I., we do not have many real examples to judge from,” Butrimas points out. “I have heard of ‘swarms’ in reference to military use, and automated systems that scan the Internet for openings in control operations have been around for years.”

Ryan Heartfield, chief technology officer at Exalens

AI has garnered a lot of attention for supposedly revolutionizing the capabilities of hackers, Ryan Heartfield, chief technology officer at Exalens, told Industrial Cyber. “However, it is crucial to recognize that the fundamental tactics, techniques, and procedures employed by threat actors have not changed significantly due to AI. Instead, AI has made advanced techniques more accessible, allowing even novice hackers to leverage them without deep programming knowledge. This democratization of advanced tools is expected to increase the scale of sophisticated threats,” he added.

“One area where AI poses emerging threats is reinforcement learning. By utilizing AI, malware can autonomously make decisions and adapt based on its environment, eliminating the need for external command and control from the original authors,” Heartfield pointed out. “This autonomous malware presents a challenge for threat actors in concealing malicious intent, as traditional indicators like file/process size, CPU, and RAM utilization might give it away. Consequently, we can anticipate a rise in the trojanization of legitimate software, where attackers hide resource-intensive code and execution requirements within software that would normally utilize significant computing resources on the targeted device. This tactic allows them to blend in and avoid detection more effectively.”
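
The resource-utilization indicators Heartfield mentions can be monitored fairly directly. The sketch below compares per-process CPU and memory usage against assumed baselines for a fixed-function OT workstation; the process names and ceilings are illustrative assumptions, not measured values.

```python
# A minimal sketch of flagging processes whose resource footprint exceeds an
# expected baseline, one way hidden resource-intensive code can give itself away.
import psutil

expected_max = {                      # per-process ceilings (percent); assumed values
    "historian.exe":   {"cpu": 20.0, "mem": 10.0},
    "hmi_runtime.exe": {"cpu": 35.0, "mem": 15.0},
}

for proc in psutil.process_iter(attrs=["name", "cpu_percent", "memory_percent"]):
    name = proc.info["name"]
    limits = expected_max.get(name)
    if not limits:
        continue
    if proc.info["cpu_percent"] > limits["cpu"] or proc.info["memory_percent"] > limits["mem"]:
        print(f"{name}: resource usage above baseline - investigate for hidden code")
```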

The experts also analyze how defenders can take advantage of AI advances to help automate defensive measures. They examine what safeguards need to be in place and the proactive measures organizations can take to manage the risks associated with implementing AI and automation technologies.

“AI can also facilitate and accelerate many defensive use cases, such as extracting information from threat intelligence reports, creating code for threat hunting based on adversary techniques or indicators of compromise, explaining reverse-engineered code in natural language, and finding vulnerabilities in systems before attackers do,” dos Santos explained.
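
As a toy example of the threat-hunting use case dos Santos describes, the sketch below matches known-bad IP addresses and file hashes against log lines. The indicators and log entries are made-up examples; a real hunt would draw them from an actual threat intelligence feed.

```python
# A minimal sketch of hunting for indicators of compromise (IoCs) in logs.
# All indicators and log lines below are fabricated examples.
bad_ips = {"203.0.113.45", "198.51.100.7"}
bad_hashes = {"44d88612fea8a8f36de82e1278abb02f"}   # example hash value

log_lines = [
    "2024-05-02 10:14:01 conn src=10.0.0.12 dst=203.0.113.45 port=443",
    "2024-05-02 10:15:30 file created hash=44d88612fea8a8f36de82e1278abb02f",
]

for line in log_lines:
    if any(ip in line for ip in bad_ips) or any(h in line for h in bad_hashes):
        print("IoC hit:", line)
```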

He added that there are currently two main challenges when trying to apply generative AI to defensive use cases: first, private or sensitive data may be leaked to a third party providing the AI services, depending on how prompts are crafted and which services are used; second, the answers given by the generative AI tool may be very convincing but entirely wrong (known as a ‘hallucination’).

So, according to dos Santos, “There are two crucial steps to take for organizations considering the use of generative AI for cyber defense: first, ensure that no sensitive data is used as part of a prompt that will be processed by an external entity; second, ensure that every answer received is verified for accuracy by a human.”
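
The first of those steps can be partly mechanized. The sketch below is a minimal illustration of redacting obviously sensitive values, here IP addresses and hostnames on an assumed internal domain, before a prompt leaves the organization; a real deployment would need a far more complete redaction policy, and every model answer would still require human verification.

```python
# A minimal sketch of scrubbing sensitive values from a prompt before it is sent
# to an external AI service. The internal domain below is an assumed example.
import re

def redact(prompt: str) -> str:
    prompt = re.sub(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", "<REDACTED_IP>", prompt)
    prompt = re.sub(r"\b[\w-]+\.corp\.example\b", "<REDACTED_HOST>", prompt)  # assumed internal domain
    return prompt

raw = "Why is hist01.corp.example sending traffic to 10.20.30.40 over port 502?"
print(redact(raw))
# -> "Why is <REDACTED_HOST> sending traffic to <REDACTED_IP> over port 502?"
```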

“Using A.I. to monitor for anomalous process flows, data flows and equipment performance is a possibility. However, care should be taken that the senior plant engineer makes the final call when an action is suggested by an A.I.,” Butrimas said. “The stakes are much higher than using A.I. to detect and guide response to an A.I. intrusion in the office IT environment. The industrial environment is where technologies are used to monitor and control processes governed by the laws of physics and chemistry. The consequences are quite different,” he added.

Heartfield said that AI has shown its potential in cybersecurity by significantly enhancing the speed and scalability of analysis and response. “While AI detection systems have made strides in detecting threats, security analysts still spend considerable time verifying alerts and reducing false positives. Unfortunately, AI has not fundamentally addressed this challenge. However, AI can play a role in automating and expediting detection analysis and response processes,” he added.

“By deploying emerging AI cybersecurity analysts, organizations can leverage models trained to follow consistent detection analysis and hunting workflows. These AI analysts can improve detection confidence and determine appropriate responses based on the assessed criticality and potential impact of threats,” according to Heartfield. 

However, Heartfield added that organizations must be cautious in managing the risk associated with automated decision-making by AI systems, ensuring that automated responses align with their policies and incident response playbooks. “It is essential to have a balance between deterministic and automated responses permitted by the organization and human verification when critical decisions based on risk and impact are required.”
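
That balance between automated and human-verified responses can be expressed as a simple policy gate, sketched below. The impact scale, threshold, and actions are illustrative assumptions rather than any vendor's implementation.

```python
# A minimal sketch of a human-in-the-loop gate: low-impact responses run
# automatically, higher-impact ones are held for human approval.
AUTO_APPROVE_MAX_IMPACT = 3   # assumed policy threshold on a 1-10 impact scale

def dispatch(action: str, impact: int) -> str:
    if impact <= AUTO_APPROVE_MAX_IMPACT:
        return f"AUTO: executing '{action}' per incident response playbook"
    return f"HOLD: '{action}' (impact {impact}) requires human verification"

print(dispatch("quarantine office laptop", impact=2))
print(dispatch("isolate PLC segment", impact=9))
```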

The experts also address how organizations urgently need to adapt their cybersecurity strategies to incorporate AI-powered detection and response to keep pace with these new threats.

Organizations need to start by understanding what parts of their security and incident response processes can be automated or accelerated with AI, dos Santos said. “AI will probably not, at first, replace any human analysts, but allow them to focus on the investigation of relevant cases, while it takes care of providing the needed information for the analysts to be successful.”

He added that after understanding these use cases, “they need to balance the potential speed provided by AI against the risks mentioned above (leaking private data and getting false answers) and determine whether the end result is advantageous.”

Butrimas said that “organizations must first not feel that they have to ‘urgently adapt’ their strategies to incorporate A.I. detection and especially ‘Response.’ The response concerns me.”

“Organizations must recognize the need for rapid detection and response to combat modern cybersecurity threats like advanced ransomware campaigns,” Heartfield said. “In today’s fast-paced threat landscape, relying solely on human resources for root-cause analysis may no longer be efficient or effective. Additionally, in the industrial sector, organizations should adopt a holistic approach to detection and response that considers both cyber and physical aspects.”

Heartfield also said that with increasing connectivity and automation between IT and OT systems, it is essential to break down silos between IT and OT security monitoring. “By seamlessly monitoring network activities and physical process behavior together, organizations can achieve a comprehensive approach called ‘Cyber-Physical Detection and Response’ (CPDR). CPDR enables the determination of whether an incident is caused by a cybersecurity threat, fault, or failure. Although these three factors may have the same impact on industrial operations, the response required for each is distinct,” he added.
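
As a highly simplified illustration of the CPDR idea, the sketch below correlates a physical deviation with network observations in the same time window to suggest whether an incident looks like a cyber event or an equipment fault. The event fields and the rule itself are illustrative assumptions, not Exalens' implementation.

```python
# A minimal sketch of cyber-physical triage: decide whether a process deviation
# coincides with suspicious network activity. The rule below is illustrative only.
def triage(physical_anomaly: bool, unexpected_write_from_new_host: bool) -> str:
    if physical_anomaly and unexpected_write_from_new_host:
        return "Likely cyber: process change follows a write from an unknown host"
    if physical_anomaly:
        return "Likely fault/failure: no correlated network anomaly observed"
    return "No physical deviation detected"

print(triage(physical_anomaly=True, unexpected_write_from_new_host=True))
print(triage(physical_anomaly=True, unexpected_write_from_new_host=False))
```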

Notably, physical indicators of disruption often serve as the initial warning signs for issues, whether related to machine failure or cybersecurity compromise, Heartfield pointed out. “Therefore, organizations should expand their detection and response strategies to encompass the cyber-physical domain. This approach also fosters collaboration and strengthens the relationship between IT and OT teams.”

The experts analyze the role regulations play in preparing industrial companies for AI-based cyberattacks. They also examine whether existing regulations take into account the threat of AI in different parts of the cyberattack chain, which could allow hackers to move more quickly.

After the sudden popularity of ChatGPT, there has been an increasing push for regulations on the training and use of generative AI and large language models more specifically, dos Santos recognized. “The same organizations developing these technologies are currently advocating for some form of supervision that keeps the use of AI ethical.”

Although these discussions are important and at some point, there may be regulations that slow down the malicious use of AI, “the tools that are currently available to malicious actors can probably already cause enough damage. Malicious actors will also continue to try to find ways to bypass whatever restrictions are imposed on AI tools by upcoming regulations,” dos Santos outlined.

“I think back to a recent government hearing in the US where a director of an A.I. company practically asked for regulations,” Butrimas said. “With any new technology, the regulations come later. Think back to when the first automobiles were introduced on roads dominated by horses and carts. Much chaos occurred and then the regulations and traffic lights appeared later.” 

He added that regulations can be useful, but regulation of technology is only as good as the knowledge of the regulator. “If the legislator has an MBA or a degree in Political Science, then he or she will not likely make a smart regulation about A.I. The regulators, if they are wise, will learn to make friends with technology professionals to get a better feel for the environment they will be regulating.”

Butrimas also suggests reading some science fiction and inviting sci-fi writers over for coffee for an informal discussion and to the hearings. “This would help the regulator who is not likely to have this kind of experience to fill the knowledge gap and make a more useful regulation,” he added.

Regulating AI to mitigate the risk of AI-assisted cyberattacks is a challenging task, similar to the struggles faced in regulating cryptographic algorithms, Heartfield highlighted. “While there are cybersecurity compliance frameworks, specific regulations for cybersecurity controls, including AI-enhanced attacks and defense, are limited outside of GDPR. It’s important to understand that regulations alone cannot completely prevent threat actors from leveraging AI to enhance their operations.”

He also added that AI systems and technologies are widely accessible, similar to the Internet, and regulations may not significantly reduce the capabilities of threat actors. “However, regulation could potentially impede the widespread adoption of sophisticated threats by both novice and expert hackers, at least temporarily. While regulation may not be a foolproof solution, it can have a deterrent effect on the increasing scale and prevalence of such threats,” according to Heartfield.

The experts also address the question of whether the use of AI-based defense, especially in OT/ICS environments, should be governed by regulations.

dos Santos said that he does not “believe that specific regulations for the use of AI-enabled cyber defense are coming any time soon. There is already a large amount of work being done on regulating cyber defense in general, such as defining measurable goals, increasing vendor liability, and requiring reporting of incidents.” 

Until these basic items are well defined and regulated, adding discussions about the role of AI in these situations will probably create more confusion, he pointed out. “Nevertheless, it is certainly crucial for both industry representatives and policymakers to be aware of ongoing AI advancements and to try to understand which parts of these developments could or should be regulated for cyber defense.”

“First, one should understand that no regulation will stop a hostile actor (especially a military-oriented one) from developing and executing an A.I.-based cyberattack on industrial environments,” Butrimas outlined. “One of the common features of Stuxnet, the attack on safety systems at a petrochemical plant, and the discovered Pipedream tools is an interest in the denial or disruption of a physical process. The temptation of engaging in an activity that is effective, cheap (for a state), and most importantly deniable is hard to resist when considering effective ways to achieve a policy objective.”

Modern industrial operations, from national power grids and water supply systems to petrochemical plants, are complex, Butrimas said. “Some call them ‘systems of systems.’ Any change made by an A.I. (there is still the problem of determining whether it is authorized), especially when in ‘learning mode,’ can have serious consequences in terms of people, property, and the environment. Better perhaps to try these new capabilities in the office IT environment first and do the trial, error, and learning there (of course, making sure there are no connections to the industrial operation),” he added.

Heartfield said that regulations play a crucial role in governing AI-enabled defense systems to ensure their response actions do not cause more harm than good. “For instance, in OT/ICS environments, if an AI defense system decides to isolate a PLC device from the network to address a potential threat, it could inadvertently disrupt the entire process dependent on that PLC. In such cases, the AI defense system may create a more significant and disruptive impact, potentially leading to safety incidents, than the threat it intended to mitigate,” he added.
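
A crude way to express that kind of ‘impact-aware’ guard in code is sketched below: before the response engine is allowed to isolate a device, it consults an assumed mapping of devices to the processes they control and defers to engineering review when the impact would be high. The device names, mapping, and rule are illustrative assumptions.

```python
# A minimal sketch of an impact-aware guard for automated isolation decisions.
# The dependency map and criticality ratings are assumed example values.
process_dependencies = {
    "PLC-07": {"process": "boiler feedwater control", "criticality": "high"},
    "CAM-02": {"process": "perimeter CCTV",           "criticality": "low"},
}

def can_auto_isolate(device: str) -> bool:
    dep = process_dependencies.get(device)
    return dep is not None and dep["criticality"] == "low"

for device in ("PLC-07", "CAM-02"):
    if can_auto_isolate(device):
        print(f"{device}: automatic isolation permitted")
    else:
        print(f"{device}: isolation blocked - requires engineering review (impact too high)")
```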

“Another concern arises from the prevalence of false positives in detection systems, where AI-enabled responses may struggle to accurately differentiate low-threat alerts from legitimate behavior,” according to Heartfield. “Regulation can address this issue by holding organizations accountable for their incident response workflows and policies, ensuring specific AI-enabled responses are implemented in a controlled and safe manner. Deploying AI-enabled defense without considering potential risks can pose significant dangers, such as disrupting supply chains or critical services.”

Developing ‘impact-aware’ AI-enabled defense, incorporating methodologies like Consequence-Driven Cyber-Informed Engineering (CCE), and enforcing regulations can help address these challenges effectively, Heartfield concluded.
