White House issues policy on AI risk mitigation and benefits in line with President Biden’s Executive Order

The U.S. administration recently announced that the White House Office of Management and Budget (OMB) is issuing its first government-wide policy to mitigate risks associated with artificial intelligence (AI) and leverage its benefits, in alignment with President Joe Biden’s AI Executive Order. The Order mandated comprehensive measures to enhance AI safety and security, safeguard Americans’ privacy, promote equity and civil rights, advocate for consumers and workers, foster innovation and competition, bolster American leadership globally, and more. 

Additionally, federal agencies have confirmed the completion of all 150-day actions outlined in the Executive Order, following their successful completion of all 90-day actions. The multi-faceted direction to federal departments and agencies builds upon the Biden-Harris administration’s record of ensuring that America leads the way in responsible AI innovation. 

In memorandum M-24-10, addressed to the heads of executive departments and agencies, OMB Director Shalanda D. Young focused on ‘Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.’

“While agencies must give due attention to all aspects of AI, this memorandum is more narrowly scoped to address a subset of AI risks, as well as governance and innovation issues that are directly tied to agencies’ use of AI,” Young pointed out. “The risks addressed in this memorandum result from any reliance on AI outputs to inform, influence, decide, or execute agency decisions or actions, which could undermine the efficacy, safety, equitableness, fairness, transparency, accountability, appropriateness, or lawfulness of such decisions or actions.”

By Dec. 1 of this year, federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans’ rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. The safeguards apply to a wide range of AI applications, from health and education to employment and housing.

Agencies must continue to comply with applicable OMB policies in other domains relevant to AI, and to coordinate compliance across the agency with all appropriate officials. All agency-responsible officials retain their existing authorities and responsibilities established in other laws and policies.

By adopting these safeguards, agencies can ensure that:

  • When at the airport, travelers will continue to have the ability to opt out of TSA facial recognition without delay or losing their place in line.
  • When AI is used in the federal healthcare system to support critical diagnostics decisions, a human being oversees the process to verify the tools’ results and avoid disparities in healthcare access.
  • When AI is used to detect fraud in government services there is human oversight of impactful decisions and affected individuals have the opportunity to seek remedy for AI harms.

If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.

To protect the federal workforce as the government adopts AI, OMB’s policy encourages agencies to consult federal employee unions and adopt the Department of Labor’s forthcoming principles on mitigating AI’s potential harm to employees. The Department of Labor is also leading by example, consulting with federal employees and labor unions both in the development of those principles and in its own governance and use of AI.

The guidance also advises federal agencies on managing risks specific to their procurement of AI. Federal procurement of AI presents unique challenges, and a strong AI marketplace requires safeguards for fair competition, data protection, and transparency. 

OMB’s policy will also remove unnecessary barriers to Federal agencies’ responsible AI innovation. AI technology presents tremendous opportunities to help agencies address society’s most pressing challenges. Examples include addressing the climate crisis and responding to natural disasters, advancing public health, and protecting public safety. 

Advances in generative AI are expanding these opportunities, and OMB’s guidance encourages agencies to responsibly experiment with generative AI, with adequate safeguards in place. Many agencies have already started this work, including by using AI chatbots to improve customer experiences and other AI pilots.

Another critical focus area for the U.S. administration is building and deploying AI responsibly to serve the public, which starts with people. OMB’s guidance directs agencies to expand and upskill their AI talent, and agencies are aggressively strengthening their workforces to advance AI risk management, innovation, and governance.

The memorandum states that, as part of the National AI Talent Surge created by Executive Order 14110, the U.S. administration has committed to hiring 100 AI professionals by the summer of 2024 to promote the trustworthy and safe use of AI, and will run a career fair for AI roles across the federal government on April 18. To facilitate these efforts, the Office of Personnel Management has issued guidance on pay and leave flexibilities for AI roles, to improve retention and emphasize the importance of AI talent across the federal government.

Furthermore, the Fiscal Year 2025 President’s Budget includes an additional $5 million to expand the General Services Administration’s government-wide AI training program, which last year had over 4,800 participants from across 78 federal agencies. 

In addition to this guidance, the administration announced several other measures to promote the responsible use of AI in government:

  • OMB will issue a request for information (RFI) on Responsible Procurement of AI in Government, to inform future OMB action to govern AI use under federal contracts.
  • Agencies will expand 2024 Federal AI Use Case Inventory reporting, to broadly expand public transparency in how the federal government is using AI.
  • The administration has committed to hiring 100 AI professionals by summer 2024, as part of the National AI Talent Surge, to promote the trustworthy and safe use of AI.

With these actions, the administration is demonstrating that the government is leading by example as a global model for the safe, secure, and trustworthy use of AI. The policy builds on the administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and will drive federal accountability and oversight of AI, increase transparency for the public, advance responsible AI innovation for the public good, and create a clear baseline for managing risks.

Last month, non-profit organization MITRE launched its AI Assurance and Discovery Lab to discover and mitigate critical risks in AI-enabled systems that need to operate in increasingly complex, uncertain, and high-stakes environments. The lab, which is based at MITRE’s McLean, Virginia, headquarters, features configurable space for risk discovery in simulated environments, AI red teaming, large language model evaluation, human-in-the-loop experimentation, and assurance plan development.
