US DHS delivers safety and security guidelines to secure critical infrastructure from AI-related threats

The U.S. Department of Homeland Security (DHS), in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), released new safety and security guidelines to address cross-sector artificial intelligence (AI) risks impacting the safety and security of U.S. critical infrastructure systems. The DHS developed these guidelines with the Department of Commerce, the Sector Risk Management Agencies (SRMAs) for the 16 critical infrastructure sectors, and relevant independent regulatory agencies.

The guidelines begin with insights drawn from the CISA cross-sector analysis of sector-specific AI risk assessments completed by SRMAs and relevant independent regulatory agencies in January 2024. The CISA analysis includes a profile of cross-sector AI use cases and patterns in adoption, in addition to establishing a foundational analysis of cross-sector AI risks across three distinct types: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.

The first category, attacks using AI, covers the use of AI to enhance physical or cyber attacks on critical infrastructure. The second, attacks targeting AI systems, addresses attacks directed at the AI systems that support critical infrastructure. The third, failures in AI design and implementation, covers deficiencies in the planning, structure, or execution of AI tools that lead to malfunctions affecting critical infrastructure operations.

The key findings, drawn from SRMAs’ sector submissions, highlight commonalities in cross-sector AI risks to U.S. critical infrastructure. SRMAs consistently highlighted the potential of AI as a transformative technology for many critical infrastructure functions; however, they also noted the tension between the benefits of AI and the risks introduced by a complex and rapidly evolving technology. To date, SRMAs reported that their sectors have adopted AI primarily to support functions that were already partially automated, and they envision the application of AI to more complex functions as a future advancement.

Additionally, SRMAs noted that AI could support solutions for many long-standing challenges, such as logistics, supply chain management, quality control, physical security, and cyber defense. At the same time, SRMAs consistently viewed AI as a potential means for adversaries to expand and enhance current cyber tactics, techniques, and procedures.

SRMAs have outlined methods to mitigate and minimize risks to critical infrastructure operations. These include established risk mitigation practices, such as ICT supply chain risk management, incident response planning, and continuous workforce development through awareness and training. Mitigation strategies specific to AI include dataset and model validation, human oversight of automated processes, and the implementation of AI usage policies.

DHS drew upon this analysis, as well as existing U.S. government policy, to develop specific safety and security guidelines to mitigate the identified cross-sector AI risks to critical infrastructure.

The guidelines also incorporate the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), including its four functions that help organizations address the risks of AI systems: Govern, Map, Measure, and Manage.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” Alejandro N. Mayorkas, Secretary of Homeland Security, said in a media statement. “When President Biden tasked DHS as a leader in the safe, secure, and reliable development of AI, our Department accelerated our previous efforts to lead on AI. In the 180 days since the Biden-Harris Administration’s landmark EO on AI, DHS has established a new AI Corps, developed AI pilot programs across the Department, unveiled an AI roadmap detailing DHS’s current use of AI and its plans for the future, and much more.” 

He added that the DHS is more committed than ever to advancing the responsible use of AI for homeland security missions and promoting nationwide AI safety and security, building on the unprecedented progress made by this Administration. “We will continue embracing AI’s potential while guarding against its harms.”

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” Jen Easterly, CISA Director, said. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

As part of the sector-specific AI risk assessments, SRMAs identified more than 150 beneficial uses of AI across their respective sectors, the DHS document disclosed. Critical infrastructure owners and operators should use the guidelines to implement AI safely and securely. CISA developed and applied 10 AI use categories for ease of interpretation and discussion. These AI use categories are likely to evolve in future summaries as more complex applications are introduced to critical infrastructure. 

The 10 categories are:

  • Operational Awareness – using AI to gain a clearer understanding of an organization’s operations.
  • Performance Optimization – using AI to improve the efficiency and effectiveness of processes or systems.
  • Automation of Operations – using AI to automate routine tasks and processes in an organization, such as data entry or report generation.
  • Event Detection – using AI to detect specific events or changes in a system or environment.
  • Forecasting – using AI to predict future trends or events based on current and historical data.
  • Research and Development (R&D) – using AI in the development of new products, services, or technologies.
  • Systems Planning – using AI in the planning and design of systems, such as IT infrastructure.
  • Customer Service Automation – using AI to automate aspects of customer service, such as answering frequently asked questions or processing orders.
  • Modeling and Simulation – using AI to create models and simulations of real-world scenarios.
  • Physical Security – using AI to maintain the physical security of a facility or area.

“SRMAs indicated that the most common critical infrastructure AI use cases involved predictive AI, though recent advances in widely accessible generative AI capabilities may shift that dynamic in future assessments,” the document detailed. “SRMAs also reported relatively lower levels of current AI adoption for use cases that generate outputs with greater uncertainty or leverage more complex logic, such as forecasting, optimization, modeling, and simulation. This trend is consistent with the overall finding that SRMAs envision the adoption of AI in more complex infrastructure operations as a potential future endeavor for their respective sectors. Finally, most assessments indicated an increasing trend in the degree of AI adoption.” 

AI risk management for critical infrastructure is a continuous process performed throughout the AI lifecycle. Critical infrastructure owners and operators should consider the three AI risk categories when implementing these guidelines. 

AI risks are also contextual. Critical infrastructure owners and operators should account for their own sector-specific and context-specific use of AI when assessing AI risks and selecting appropriate mitigations. Some mitigations may address multiple risks, while others will be narrowly focused. While the guidelines apply broadly to all 16 critical infrastructure sectors, individual sectors have already developed, and may continue to refine, AI risk guidelines tailored to specific settings and contexts, including for use in annual sector-specific AI risk assessments.

Critical infrastructure owners and operators may focus on different aspects of the AI lifecycle depending on their sector or role. In some cases, they will be involved in the design, development, or procurement of AI systems. In others, they may not be the original designers or developers of AI systems but may still bear responsibility for deploying, operating, managing, maintaining, or retiring those systems.

To address these risks, DHS outlines a four-part mitigation strategy, building upon the NIST AI RMF, that critical infrastructure owners and operators can consider when approaching contextual and unique AI risk situations:

  • Govern: Establish an organizational culture of AI risk management – Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.
  • Map: Understand individual AI use context and risk profile – Establish and understand the foundational context from which AI risks can be evaluated and mitigated.
  • Measure: Develop systems to assess, analyze, and track AI risks – Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.
  • Manage: Prioritize and act upon AI risks to safety and security – Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.

These resources build upon the Department’s broader efforts to protect the nation’s critical infrastructure and help stakeholders leverage AI, which include the recent establishment of the Artificial Intelligence Safety and Security Board. The new Board, announced last week, assembles technology and critical infrastructure executives, civil rights leaders, academics, state and local government leaders, and policymakers to advance the responsible development and deployment of AI.

Last week, the European Commission also initiated calls for proposals under Horizon Europe’s 2023-2024 Digital, Industry, and Space work program, focusing on research and innovation in AI and quantum technologies. The new series of calls totals over €112 million.
