US federal agencies urge critical infrastructure firms to take note of synthetic media threats, techniques, trends

Three U.S. security agencies have jointly published a cybersecurity information sheet (CSI) that provides an overview of synthetic media threats, techniques, and trends. It notes that threats such as deepfakes have increased exponentially, presenting a growing challenge for users of modern technology and communications, including National Security Systems (NSS), the Department of Defense (DoD), the Defense Industrial Base (DIB), and national critical infrastructure owners and operators.

Titled ‘Contextualizing Deepfake Threats to Organizations,’ the CSI, published by the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Federal Bureau of Investigation (FBI) Cyber Division, observes that, as with many technologies, synthetic media techniques can be used for both positive and malicious purposes. “While there are limited indications of significant use of synthetic media techniques by malicious state-sponsored actors, the increasing availability and efficiency of synthetic media techniques available to less capable malicious cyber actors indicate these types of techniques will likely increase in frequency and sophistication,” it added. 

Industry has been aware of such threats for some time now. In late May, Forescout Technologies detailed how AI-assisted attacks are beginning to target OT (operational technology) and unmanaged devices. The shift comes as hackers exploit publicly available proof-of-concept exploits (PoCs), increasing the versatility, and potentially the damage, of existing malicious code, though doing so still takes time and effort from threat actors. These developments demonstrate how generative AI can be used to improve productivity, while also being deployed for nefarious purposes.

Deepfakes are AI-generated, highly realistic synthetic media that can be abused to threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information. 

The document outlines that deepfakes are a particularly concerning type of synthetic media that utilizes artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media. “The most substantial threats from the abuse of synthetic media include techniques that threaten an organization’s brand, impersonate leaders and financial officers, and use fraudulent communications to enable access to an organization’s networks, communications, and sensitive information.”

The CSI urges organizations to take a variety of steps to identify, defend against, and respond to deepfake threats. “They should consider implementing a number of technologies to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high-priority officers and their communications. Organizations can also take steps to minimize the impact of malicious deepfake techniques, including information sharing, planning for and rehearsing responses to exploitation attempts, and personnel training.” 

It added that, in particular, “phishing using deepfakes will be an even harder challenge than it is today, and organizations should proactively prepare to identify and counter it.” 

The document warns that apart from the obvious implications for misinformation and propaganda during times of conflict, national security challenges associated with deepfakes manifest in threats to the U.S. government, NSS, the DIB, critical infrastructure organizations, and others. 

It also cautioned that organizations and their employees may be vulnerable to deepfake tradecraft and techniques, which may include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to avoid technical defenses, faked videos used to spread disinformation, and other techniques. “Many organizations are attractive targets for advanced actors and criminals interested in executive impersonation, financial fraud, and illegitimate access to internal communications and operations,” it added.

The most significant synthetic media threats to the DoD, NSS, the DIB, and critical infrastructure organizations, based on potential impact and risk, include, but are not limited to, executive impersonation for brand manipulation, impersonation for financial gain, and impersonation to gain access. 

The document identified that major trends in media generation include the increased use and improvement of multimodal models, such as the merging of LLMs (large language models) and diffusion models; an improved ability to lift a 2D image into 3D, enabling realistic video generation from a single image; faster, tunable methods for modifying video in real time; and models that require less input data to customize results, such as synthetic audio that captures the characteristics of an individual from a few seconds of reference data. All of these trends point to better, faster, and cheaper ways to generate fake content.

“The major trends on detection and authentication are toward education, detection refining, and increased pressure from the community to employ authentication techniques for media,” the agencies disclosed. “Eventually, these trends may lead to policies that will require certain changes. For now, efforts like the public/private detection and authentication initiatives referenced in this report and ethical considerations prior to releasing models will help organizations take proactive steps toward more transparent content provenance.”

On recommendations for resisting deepfakes, the document noted that organizations can take various steps to prepare to identify, defend against, and respond to deepfake threats. 

The agencies suggested selecting and implementing technologies to detect deepfakes and demonstrate media provenance, enabling real-time verification capabilities and procedures. Organizations should implement identity verification capable of operating during real-time communications. Given rapid improvements in generative AI and real-time rendering, such verification will now require testing for liveness. Additionally, mandatory multi-factor authentication (MFA), using a unique or one-time generated password or PIN, known personal details, or biometrics, can ensure that those entering sensitive communication channels or activities are able to prove their identity. 
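
As an illustration of the one-time PIN element of MFA described above, the following minimal sketch implements RFC 6238 time-based one-time passwords (TOTP) using only Python’s standard library. The shared secret, six-digit length, and 30-second window are illustrative defaults chosen for the example, not parameters specified in the CSI.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return an RFC 6238 time-based one-time password.

    The shared secret would be provisioned out of band (e.g. via an
    authenticator app) before a sensitive channel is joined.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # RFC 6238 time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Compare codes in constant time to avoid a timing side channel."""
    return hmac.compare_digest(totp(secret_b32), submitted)

if __name__ == "__main__":
    SECRET = "JBSWY3DPEHPK3PXP"  # demo secret only; never hard-code in production
    print("Current code:", totp(SECRET))
```

In practice, a participant joining a sensitive channel would read back or enter the current code, which the verifier checks against the same provisioned secret.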

“To protect media that contains the individual from being used or repurposed for disinformation, one should consider beginning to use active authentication techniques such as watermarks and/or CAI standards,” the document said. “This is a good preventative measure to protect media and make it more difficult for an adversary to claim that a fake media asset portraying the individual in these controlled situations is real. Prepare for and take advantage of opportunities to minimize the impact of deepfakes.”
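The CAI standards the document mentions come from the Content Authenticity Initiative, whose related C2PA specification attaches certificate-backed provenance manifests to media. As a much-simplified sketch of the underlying idea, the snippet below binds a file’s hash and issuer metadata into a detached, signed record; the HMAC key, the `issuer` field, and the record layout are assumptions made for brevity, and a real deployment would use C2PA manifests with asymmetric signatures rather than a shared secret.

```python
import hashlib
import hmac
import json
import pathlib
import time

# Illustrative shared key: real provenance schemes (e.g. C2PA) use
# certificate-backed asymmetric signatures, not a symmetric secret.
SIGNING_KEY = b"replace-with-a-managed-secret"

def sign_media(media_path: str, issuer: str) -> dict:
    """Bind issuer metadata to a media file's hash in a detached record."""
    digest = hashlib.sha256(pathlib.Path(media_path).read_bytes()).hexdigest()
    claims = {
        "file": media_path,
        "sha256": digest,
        "issuer": issuer,               # hypothetical field for the example
        "issued_at": int(time.time()),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify_media(media_path: str, record: dict) -> bool:
    """Check the signature and that the file still matches the recorded hash."""
    claims = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    current = hashlib.sha256(pathlib.Path(media_path).read_bytes()).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and current == claims["sha256"])
```

A record like this makes it harder for an adversary to pass off a fabricated asset as authentic, since the fake will carry no valid signature over its hash.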

The document also suggests planning and rehearsing responses by executing several tabletop exercises to practice and analyze the execution of the plan. It also suggests reporting the details of malicious deepfakes to appropriate U.S. Government partners, including the NSA Cybersecurity Collaboration Center for Department of Defense and DIB organizations, and the FBI, to spread awareness of trending malicious techniques and campaigns.

Additionally, every organization should incorporate an overview of deepfake techniques into their training program. “This should include an overview of potential uses of deepfakes designed to cause reputational damage, executive targeting and BEC attempts for financial gain, and manipulated media used to undermine hiring or operational meetings for malicious purposes. Employees should be familiar with standard procedures for responding to suspected manipulated media and understand the mechanisms for reporting this activity within their organization,” the document added.

The document recommends leveraging cross-industry partnerships and understanding what private companies are doing to preserve the provenance of online content. It also called for organizations to actively pursue partnerships with media, social media, career networking, and similar companies to learn more about how these companies are preserving the provenance of online content. This is especially important considering how they may be working to identify and mitigate the harms of synthetic content, which may be used as a means to exploit organizations and their employees. 

Eduardo Azanza, CEO at Veridas, wrote in an emailed statement to Industrial Cyber that the recommendations set out by CISA indicate a promising step forward in enhancing organizations’ resilience against the negative use of AI. “We’ve seen already that it can be extremely difficult to spot deepfakes, and humans are starting to fall for deepfake scams. The need to address the situation is more urgent than ever.” 

He added that “deepfakes can affect every aspect of our society – from the integrity of elections and trust in politicians to financial fraud and illegitimate access. With this vulnerability, organizations need to harness technology as their main weapon in fighting adversaries utilizing deepfakes.”

“There are currently several companies developing cutting-edge deepfake detection tools. Tools such as biometrics leverage AI-trained algorithms to evaluate and ascertain the authenticity and liveness of voices and faces present for access and authentication purposes,” according to Azanza. “This approach can significantly enhance organizations’ verification methods and, overall, safeguard assets from theft. However, there is a performance gap in deepfake detection algorithms. For organizations looking to implement such solutions, it’s important they have been properly assessed and certified by third-party evaluators.”
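
One way to test for the liveness Azanza describes is a challenge-response check: the verifier issues an unpredictable phrase that the caller must repeat live, which a pre-rendered deepfake cannot anticipate. The sketch below outlines only that protocol; the word list, the timeout, and the `transcript` and `spoof_score` inputs (which would come from speech-to-text and a presentation-attack-detection model) are hypothetical placeholders, not components named by Azanza or the CSI.

```python
import secrets
import time

# Word list and timeout are illustrative choices, not values from the CSI.
WORDS = ["amber", "falcon", "granite", "harbor",
         "juniper", "meadow", "quartz", "sierra"]
CHALLENGE_TTL = 15  # seconds within which the spoken reply must arrive

def issue_challenge() -> dict:
    """Generate an unpredictable phrase the caller must repeat on camera or mic."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(4))
    return {"phrase": phrase, "issued_at": time.monotonic()}

def check_response(challenge: dict, transcript: str, spoof_score: float) -> bool:
    """Accept only a timely, matching reply that a detector scored as live.

    `transcript` would come from speech-to-text on the caller's reply and
    `spoof_score` from a presentation-attack-detection model; both are
    hypothetical inputs standing in for real components.
    """
    fresh = (time.monotonic() - challenge["issued_at"]) <= CHALLENGE_TTL
    matches = transcript.strip().lower() == challenge["phrase"]
    return fresh and matches and spoof_score < 0.5
```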

Earlier this month, two senators urged the White House to provide updates on its efforts to mitigate the potential threats posed by AI to the nation’s cyber infrastructure. As advanced AI models rapidly evolve, the proliferation of generative AI raises significant concerns regarding software security and resilience. They raised inquiries about how those responsible for safeguarding critical infrastructure employ AI for system protection and also sought clarification on whether the widespread availability of open-source large language models has contributed to an increase in cybercrime activities.
