House Homeland Security Subcommittee examines role of DHS in implementing AI executive order

The U.S. House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection held a hearing on Tuesday to scrutinize the Department of Homeland Security’s (DHS) obligations under the Biden administration’s recent Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The hearing focused on the role of DHS’s Cybersecurity and Infrastructure Security Agency (CISA) in implementing the EO.

The hearing, chaired by Chairman Andrew Garbarino, a Republican from New York, heard testimony from four witnesses: Ian Swanson, chief executive officer and founder of Protect AI; Debbie Taylor Moore, senior partner and vice president for global cybersecurity at IBM Consulting; Timothy O’Neill, vice president, chief information security officer and product security at Hitachi Vantara; and Alex Stamos, chief trust officer at SentinelOne.

“I’m proud that this Subcommittee has completed thorough oversight over CISA’s many missions this year from its federal cybersecurity mission to protecting critical infrastructure from threats,” Garbarino said in his opening statement. “Now, as we head into 2024, it’s important that we take a closer look at emerging threats and technologies that CISA must continue to evolve with, including AI.”

Garbarino noted that AI is a hot topic today among Members of Congress and Americans alike, though he called it “a broad umbrella term, encompassing many different technology use cases from predictive maintenance alerts in operational technology to large language models like ChatGPT, making building a common understanding of the issues difficult.”

As general curiosity about and strategic application of AI across various sectors continue to develop, Garbarino said, it is vitally important that government and industry work together to build security into the very foundation of the technology, regardless of the specific use case.

The Administration’s Executive Order, or EO, is the first step in building that foundation. DHS and CISA are tasked in the EO with ensuring the security of the technology itself and developing cybersecurity use cases for AI, according to Garbarino. “But the effectiveness of this EO will come down to its implementation. DHS and CISA must work with the recipients of the products they develop, like federal agencies and critical infrastructure owners and operators, to ensure the end results meet their needs. This Subcommittee intends to pursue productive oversight over these EO tasks.”

“The timelines laid out in the EO are ambitious, and it is positive to see CISA’s timely release of their Roadmap for AI and internationally-supported Guidelines for Secure AI System Development,” Garbarino pointed out. “At its core, AI is software and CISA should look to build AI considerations into its existing efforts rather than creating entirely new ones unique to AI. Identifying all future use cases of AI is nearly impossible, and CISA should ensure that its initiatives are iterative, flexible, and continuous, even after the deadlines in the EO pass, to ensure the guidance it provides stands the test of time.”

Today, “we have four expert witnesses who will help shed light on the potential risks related to the use of AI in critical infrastructure, including how AI may enable malicious cyber actors’ offensive attacks, but also how AI may enable defensive cyber tools for threat detection, prevention, and vulnerability assessments,” he added.

“As we all learn more about improving the security and secure usage of AI from each of these experts today, I’d like to encourage the witnesses to share questions that they might not have the answer to just yet,” Garbarino said. “With rapidly evolving technology like AI, we should accept that there may be more questions than answers at this stage. The Subcommittee would appreciate any perspectives you might have that could shape our oversight of DHS and CISA as they reach their EO deadlines next year.”

Swanson urged the committee and other federal agencies to acknowledge the pervasive presence of AI in existing U.S. business and government technology environments. “It is imperative to not only recognize but also safeguard and responsibly manage AI ecosystems. This includes the need for robust mechanisms to identify, secure, and address critical security vulnerabilities within US businesses and the United States Federal Government’s AI infrastructures,” he wrote in his testimony.

Swanson recommended three starting actions to the committee and other US government organizations, including CISA, when setting policy for secure AI/ML. These include creating a Machine Learning Bill of Materials (MLBOM) standard in partnership with NIST and other USG entities, investing in protecting the AI/ML open-source software ecosystem, and continuing to enlist feedback and participation from technology startups.

To address the security risk of an AI system, IBM Consulting’s Moore said in her statement that one must “break down” AI to learn its potential weaknesses. “In addressing security, to protect a system, whether software or hardware, we often tear it down. We figure out how it works but also what other functions we can make the system do that it wasn’t intended to. Then, we address appropriately, from industrial/military grade strength defense mechanisms to specialty programs built to prevent or limit the impact of the unwanted or destructive actions.”

She added that industry and critical infrastructure providers collectively “have the tools to do this, and in many cases are already doing this. We also have the governance and compliance know-how to enforce.”

Further, the critical infrastructure ecosystem is also aware of the increased risk vectors that could be applied to critical infrastructure due to AI, Moore added. “Critical infrastructure providers are not only taking internal steps, or working with companies like IBM, to address this, but also working with the technology industry, government, and others to set and advance best practices and tools.”

In her recommendations, Moore encouraged CISA to accelerate existing efforts and broaden awareness rather than reinvent the wheel; urged DHS to stand up a collaborative and strategic AI Safety and Security Advisory Board, as directed by the EO on AI; and called on DHS to implement the directives from the EO promptly.

Hitachi Vantara’s O’Neill said it is also important that CISA recognize the potential benefits AI could offer critical infrastructure systems, helping them identify possible attacks or defend against cyber or physical attacks, rather than focusing only on the ways AI could make them vulnerable to failure.

“There is great potential for CISA to work across agencies to support or augment their AI work and provide insight into cybersecurity guidance and/or threat identification,” O’Neill wrote in his testimony. “CISA is also discouraged against creating separate frameworks, processes, or testbeds and instead should work collaboratively across the federal government to utilize the resources other agencies already have or are currently creating. Manufacturers, especially those who are making products for critical infrastructure industries, have been engaged with their respective agencies and are assisting in the development of AI systems.”

He added that while some manufacturers may not have engaged with CISA as they implement technology solutions in their operations, “as CISA coordinates across agencies to implement the EO, it can broaden its reach to educate all on the crucial role cybersecurity plays in core IT and AI processes.”
