CISA, NCSC roll out guidelines for secure AI system development

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC) jointly released the ‘Guidelines for Secure AI System Development’ on Sunday. The publication, co-signed by 23 global cybersecurity organizations, marks a significant milestone in tackling the intersection of artificial intelligence (AI), cybersecurity, and critical infrastructure. 

The guidelines provide essential recommendations for AI system development and emphasize the importance of adhering to Secure-by-Design principles that CISA has long championed. It is imperative for stakeholders, including data scientists, developers, managers, decision-makers, and risk owners, to thoroughly review these guidelines. Doing so will empower them to make well-informed decisions regarding the design, deployment, and operation of their machine learning AI systems.

The guidelines complement the U.S. Voluntary Commitments on Ensuring Safe, Secure, and Trustworthy AI released in September. Broken down into four key areas of the AI system development life cycle, they cover secure design, secure development, secure deployment, and secure operation and maintenance. They also suggest considerations and mitigations that will help reduce the overall risk to an organization's AI system development process.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” Alejandro N. Mayorkas, secretary of Homeland Security, said in a media statement. “The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core.” 

Mayorkas outlined that by integrating ‘secure by design’ principles, these guidelines represent a historic agreement that developers must invest in protecting customers at each step of a system’s design and development. “Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology,” he added.

“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” according to Jen Easterly, CISA director. “As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices.”

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” Lindy Cameron, NCSC CEO, said. “These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.” 

“I believe the UK is an international standard bearer on the safe use of AI,” Michelle Donelan, U.K. Secretary of State for Science, Innovation and Technology, said. “The NCSC’s publication of these new guidelines will put cyber security at the heart of AI development at every stage so protecting against risk is considered throughout.”

The guidelines follow a ‘secure by default’ approach. They are aligned closely to practices defined in the NCSC’s Secure Development and Deployment guidance, NIST’s Secure Software Development Framework, and secure by design principles published by CISA, the NCSC, and international cyber agencies. They prioritize taking ownership of customer security outcomes, embracing radical transparency and accountability, and building organizational structure and leadership so that security by design is a top business priority.

In line with secure-by-design principles, providers of AI components should take responsibility for the security outcomes of users further down the supply chain. These providers should implement security controls and mitigations where possible within their models, pipelines, and/or systems, and where configurable settings are offered, make the most secure option the default. 

Where risks cannot be mitigated, the provider should be responsible for informing users further down the supply chain of the risks that they and (if applicable) their own users are accepting. They must also advise them on how to use the component securely. “Where system compromise could lead to tangible or widespread physical or reputational damage, significant loss of business operations, leakage of sensitive or confidential information and/or legal implications, AI cyber security risks should be treated as critical,” the guidelines note.

The guidelines apply to all types of AI systems, not just frontier models. They also provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems.

The secure design section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modeling, as well as specific topics and trade-offs to consider in system and model design. The secure development section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.

The secure deployment section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat, or loss; developing incident management processes; and responsible release. The secure operation and maintenance section contains guidelines that apply once a system has been deployed, including logging and monitoring, update management, and information sharing.

These guidelines are the latest addition to the U.S. government’s body of work supporting safe and secure AI technology development and deployment. 

Last month, U.S. President Joe Biden issued an Executive Order that directed the Department of Homeland Security (DHS) to promote the adoption of AI safety standards globally, protect U.S. networks and critical infrastructure, reduce the risks that AI can be used to create weapons of mass destruction, combat AI-related intellectual property theft, and help the United States attract and retain skilled talent, among other missions. 

Earlier this month, CISA released its Roadmap for Artificial Intelligence, a whole-of-agency plan aligned with the national strategy to address efforts to promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure AI systems are protected from cyber-based threats, and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day. 
