New NSA guidance identifies need to update AI systems to address changing risks, bolster security

The U.S. National Security Agency (NSA) released a Cybersecurity Information Sheet (CSI) on Monday offering guidance on enhancing the security of AI systems. The document introduces a set of best practices to help organizations strengthen their security measures, with the aim of aiding National Security System (NSS) owners and Defense Industrial Base (DIB) companies as they prepare to deploy and operate AI systems created by external entities.

Titled ‘Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems,’ the document is the first release from the NSA’s Artificial Intelligence Security Center (AISC), produced in partnership with the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK).

Targeted at organizations deploying and operating AI systems designed and developed by another entity, these best practices may not apply to all environments, so the mitigations should be adapted to specific use cases and threat profiles. 

Additionally, the best practices align with the cross-sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures.

NSA established the AISC last September as part of its Cybersecurity Collaboration Center (CCC). The AISC was formed to detect and counter AI vulnerabilities; drive partnerships with experts from U.S. industry, national labs, academia, the intelligence community (IC), the Department of Defense (DoD), and select foreign partners; develop and promote AI security best practices; and ensure NSA’s ability to stay ahead of adversaries’ tactics and techniques.

The AISC plans to work with global partners to develop a series of guidance documents on AI security topics as the field evolves, covering areas such as data security, content authenticity, model security, identity management, model testing and red teaming, incident response, and recovery.

The goals of the AISC and the report are to improve the confidentiality, integrity, and availability of AI systems; assure that known cybersecurity vulnerabilities across these systems are appropriately mitigated; and provide methodologies and controls to protect, detect, and respond to malicious activity against such systems and related data and services.

“AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis,” Dave Luber, NSA Cybersecurity Director, said in a media statement.

The NSA guide identified that organizations typically deploy AI systems within existing IT infrastructure. “Before deployment, they should ensure that the IT environment applies sound security principles, such as robust governance, a well-designed architecture, and secure configurations. The security best practices and requirements for IT environments apply to AI systems, too.” 

It added that the best practices are particularly important to apply to the AI systems and the IT environments the organization deploys them in. 

Furthermore, “if an organization outside of IT is deploying or operating the AI system, work with the IT service department to identify the deployment environment and confirm it meets the organization’s IT standards,” according to the NSA guidance. “Understand the organization’s risk level and ensure that the AI system and its use is within the organization’s risk tolerance overall and within the risk tolerance for the specific IT environment hosting the AI system. Assess and document applicable threats, potential impacts, and risk acceptance.”

It also called for identifying the roles and responsibilities of each stakeholder along with how they are accountable for fulfilling them; identifying these stakeholders is especially important should the organization manage its IT environment separately from its AI system. The NSA also pushed for identifying the IT environment’s security boundaries and how the AI system fits within them and requiring the primary developer of the AI system to provide a threat model for their system. 

The agency also recommends considering deployment environment security requirements when developing contracts for AI system products or services. It also suggests promoting a collaborative culture among all parties involved, particularly the data science, infrastructure, and cybersecurity teams, so that teams can voice risks or concerns and the organization can address them appropriately.

The NSA also calls on organizations to establish security protections for the boundaries between the IT environment and the AI system; identify and address blind spots in boundary protections and other security-relevant areas of the AI system that the threat model identifies; and identify and protect proprietary data sources the organization will use in AI model training or fine-tuning. It also prescribes examining the list of data sources, when available, for models trained by others, and applying secure-by-design principles and Zero Trust (ZT) frameworks to the architecture to manage risks to and from the AI system.

When it comes to adopting a ZT mindset, which assumes a breach is inevitable or has already occurred, the NSA guidance recommends implementing detection and response capabilities that enable quick identification and containment of compromises. It also suggests using well-tested, high-performing cybersecurity solutions to identify attempts to gain unauthorized access and to enhance the speed and accuracy of incident assessments, as well as integrating an incident detection system to help prioritize incidents.

The NSA recommends collecting logs that cover inputs, outputs, intermediate states, and errors, and automating alerts and triggers. It also suggests monitoring the model’s architecture and configuration settings for unauthorized changes or unexpected modifications that might compromise the model’s performance or security, and monitoring for attempts to access or elicit data from the AI model or to aggregate inference responses.
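As a rough sketch of what such logging might look like in practice, the following Python wrapper records inputs, outputs, errors, and latency for each inference call and raises an alert after repeated failures. The model object, its predict() call, the alert hook, and the error threshold are hypothetical placeholders rather than anything specified in the guidance.

# Minimal sketch of inference logging with an automated alert trigger.
# The model API and alert destination are illustrative assumptions.
import json
import logging
import time
import uuid

logging.basicConfig(filename="inference.log", level=logging.INFO)
log = logging.getLogger("ai-inference")

ERROR_ALERT_THRESHOLD = 5   # consecutive errors before alerting (assumed policy)
_consecutive_errors = 0

def alert(message: str) -> None:
    # Placeholder: wire this into the organization's real alerting pipeline.
    log.critical("ALERT: %s", message)

def logged_predict(model, inputs):
    """Run inference while recording inputs, outputs, errors, and latency."""
    global _consecutive_errors
    request_id = str(uuid.uuid4())
    start = time.time()
    try:
        outputs = model.predict(inputs)          # hypothetical model API
        _consecutive_errors = 0
        log.info(json.dumps({
            "id": request_id, "inputs": repr(inputs),
            "outputs": repr(outputs), "latency_s": time.time() - start,
        }))
        return outputs
    except Exception as exc:
        _consecutive_errors += 1
        log.error(json.dumps({"id": request_id, "error": str(exc)}))
        if _consecutive_errors >= ERROR_ALERT_THRESHOLD:
            alert(f"{_consecutive_errors} consecutive inference failures")
        raise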

The document also prescribed engaging external security experts to conduct audits and penetration testing on ready-to-deploy AI systems, to help identify vulnerabilities and weaknesses that may have been overlooked internally.

The NSA also recommends monitoring the system’s behavior, inputs, and outputs with robust monitoring and logging mechanisms to detect abnormal behavior or potential security incidents, and watching for data drift or high-frequency, repetitive inputs. It also suggests establishing alert systems to notify administrators of potential oracle-style adversarial compromise attempts, security breaches, or anomalies, as timely detection and response to cyber incidents are critical in safeguarding AI systems.
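One way to watch for high-frequency or repetitive querying, a common signal of oracle-style probing of a model, is sketched below in Python. The window sizes and thresholds are illustrative assumptions, not values from the guidance.

# Sketch of a detector for high-frequency or repetitive queries.
import hashlib
import time
from collections import defaultdict, deque

RATE_WINDOW_S = 60          # sliding window length (assumed)
MAX_REQUESTS = 100          # max requests per client per window (assumed)
MAX_DUPLICATES = 20         # max identical inputs per client (assumed)

_requests = defaultdict(deque)                   # client_id -> timestamps
_dupes = defaultdict(lambda: defaultdict(int))   # client_id -> hash -> count

def check_request(client_id: str, raw_input: bytes) -> list[str]:
    """Return a list of anomaly flags for this request (empty if clean)."""
    now = time.time()
    flags = []

    # Rate check over a sliding time window.
    window = _requests[client_id]
    window.append(now)
    while window and now - window[0] > RATE_WINDOW_S:
        window.popleft()
    if len(window) > MAX_REQUESTS:
        flags.append("high-frequency querying")

    # Duplicate-input check; a production system would also age these out.
    digest = hashlib.sha256(raw_input).hexdigest()
    _dupes[client_id][digest] += 1
    if _dupes[client_id][digest] > MAX_DUPLICATES:
        flags.append("repetitive identical inputs")

    return flags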

The document also notes that when updating the model to a new or different version, organizations must run a full evaluation to confirm that accuracy, performance, and security tests fall within acceptable limits before redeploying.
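Such a gate can be expressed simply. The Python sketch below accepts a candidate model version only when every metric falls within its acceptable limit; the metrics and thresholds are chosen purely for illustration.

# Sketch of a pre-redeployment gate: a new model version is accepted only
# if accuracy, latency, and security tests are all within limits.
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float         # fraction correct on a held-out test suite
    p95_latency_ms: float   # 95th-percentile inference latency
    security_findings: int  # failed adversarial/security test cases

MIN_ACCURACY = 0.95         # assumed acceptance thresholds
MAX_P95_LATENCY_MS = 200.0
MAX_SECURITY_FINDINGS = 0

def approve_for_deployment(result: EvalResult) -> bool:
    """Return True only if every metric is within its acceptable limit."""
    return (result.accuracy >= MIN_ACCURACY
            and result.p95_latency_ms <= MAX_P95_LATENCY_MS
            and result.security_findings <= MAX_SECURITY_FINDINGS)

# Example: a single security finding blocks the release.
candidate = EvalResult(accuracy=0.97, p95_latency_ms=180.0, security_findings=1)
assert not approve_for_deployment(candidate)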

The guidance also prescribes using an immutable backup storage system, depending on the requirements of the system, to ensure that every object, especially log data, cannot be altered. It also suggests performing autonomous and irretrievable deletion of components, such as training and validation models or cryptographic keys, leaving no retained copies or remnants after any process in which data and models are exposed or accessible.
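As a rough illustration of tamper-evident log storage, the following Python sketch chains each record to the hash of its predecessor so that any alteration becomes detectable. This only approximates immutability in software; real deployments would typically rely on write-once (WORM) or object-lock storage features instead.

# Sketch of an append-only, hash-chained log: altering any record
# breaks the chain and is caught by verify().
import hashlib
import json

class HashChainedLog:
    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, entry: dict) -> str:
        record = {"entry": entry, "prev": self._last_hash}
        blob = json.dumps(record, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(blob).hexdigest()
        record["hash"] = self._last_hash
        self._records.append(record)
        return self._last_hash

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered."""
        prev = "0" * 64
        for record in self._records:
            blob = json.dumps({"entry": record["entry"], "prev": prev},
                              sort_keys=True).encode()
            if hashlib.sha256(blob).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True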

In conclusion, the authoring agencies advise organizations deploying AI systems to implement robust security measures capable of both preventing theft of sensitive data and mitigating misuse of AI systems. “AI systems are software systems. As such, deploying organizations should prefer systems that are secure by design, where the designer and developer of the AI system takes an active interest in the positive security outcomes for the system once in operation,” it added. 

Although comprehensive implementation of security measures across all relevant attack vectors is necessary to avoid significant security gaps, and best practices will change as the AI field and its techniques evolve, some particularly important measures involve conducting ongoing compromise assessments on all devices where privileged access is used or critical services are performed; hardening and updating the IT deployment environment; and reviewing the source of AI models and supply chain security.

The NSA document also suggests validating the AI system before deployment; enforcing strict access controls and API security for the AI system, employing the concepts of least privilege and defense-in-depth; and using robust logging, monitoring, and user and entity behavior analytics (UEBA) to identify insider threats and other malicious activities. It also suggests limiting and protecting access to the model weights, as they are the essence of the AI system, while also maintaining awareness of current and emerging threats, especially in the rapidly evolving AI field, and ensuring the organization’s AI systems are hardened to avoid security gaps and vulnerabilities.
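A deny-by-default, least-privilege check in front of an inference API might be sketched as follows. The role names and actions are illustrative assumptions, not drawn from the guidance; note that access to model weights is restricted to a single dedicated role.

# Sketch of least-privilege access control for an AI inference API.
ROLE_PERMISSIONS = {
    "app-client":  {"infer"},
    "ml-engineer": {"infer", "evaluate"},
    "model-admin": {"infer", "evaluate", "read-weights", "update-model"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

def handle_request(role: str, action: str):
    if not authorize(role, action):
        # Denials should also feed logging/UEBA for insider-threat detection.
        raise PermissionError(f"role {role!r} may not perform {action!r}")
    ...  # dispatch to the actual handler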

“In the end, securing an AI system involves an ongoing process of identifying risks, implementing appropriate mitigations, and monitoring for issues. By taking the steps outlined in this report to secure the deployment and operation of AI systems, an organization can significantly reduce the risks involved,” according to the NSA guidance. “These steps help protect the organization’s intellectual property, models, and data from theft or misuse. Implementing good security practices from the start will set the organization on the right path for deploying AI systems successfully.”

Last week, the NSA offered guidance to enhance data security and protect data both at rest and in transit, with recommendations focused on limiting data access to authorized individuals. That CSI emphasizes the importance of the data pillar and its capabilities in reducing risk through encryption, tagging and labeling, data loss prevention strategies, and the use of data rights management tools, in line with a comprehensive Zero Trust framework.
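For instance, encrypting data at rest can be as simple as the following Python sketch using the widely available third-party cryptography package. Key handling is deliberately simplified here; in practice the key belongs in a key management system or HSM, never alongside the data it protects.

# Minimal sketch of encrypting data at rest with the `cryptography`
# package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production, fetch from a key manager
cipher = Fernet(key)

plaintext = b"proprietary training data"
ciphertext = cipher.encrypt(plaintext)     # store this, not the plaintext
assert cipher.decrypt(ciphertext) == plaintext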
