The 2024 National Security Memorandum on AI: A Timeline and Index of Responsibilities

On October 24, 2024, the Biden administration published the “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence”—or for short, the National Security Memorandum on AI. To help audiences better understand and track the directives in this Memorandum, the Responsible AI Network (RAIN) has produced a timeline and entity-by-entity index of responsibilities.

What is the National Security Memorandum on AI?

The National Security Memorandum (NSM) outlines the U.S. government’s plan to lead in AI innovation for national security while safeguarding human rights, privacy, and democratic values. The actions to achieve these objectives are laid out in Sections 3–6 of the Memorandum:

  • Section 3, Promoting and Securing AI Capabilities, outlines efforts to advance domestic AI development, protect the U.S. AI ecosystem from foreign threats, and ensure AI safety, security, and trustworthiness.
  • Section 4, Harnessing AI for National Security, details how AI will be integrated into national security with updated policies, partnerships, and governance to ensure responsible and effective usage.
  • Section 5, International AI Governance, emphasizes the U.S. role in shaping global AI norms that support safety, security, and democratic values while preventing misuse of AI.
  • Section 6, Coordination and Reporting, establishes mechanisms for interagency coordination, reporting, and talent management to support swift, effective implementation of AI policies.

The National Security Memorandum on AI is accompanied by a “Framework to Advance AI Governance and Risk Management in National Security” as well as a classified annex, which addresses additional sensitive national security issues, such as countering adversarial use of AI. Explainers of the Memorandum have been published by organizations such as CSIS and CSET.

The Memorandum builds on the 2023 Executive Order on AI, fulfilling that order’s requirement to provide direction on appropriately harnessing artificial intelligence (AI) models and AI-enabled technologies within the United States government. In contrast to the Executive Order, which also imposed requirements on non-governmental actors (for example, reporting requirements for AI developers), the National Security Memorandum specifically directs government agencies’ use of, and procedures around, AI.

Why Produce an Index and Timeline?

Although the NSM is only slightly over half as long as the Biden Executive Order on AI, the information it contains can still be hard to process. The NSM includes 30 directives with explicit deadlines (in addition to dozens of other directives without explicit deadlines), but the document understandably does not present these deadlines chronologically. The NSM provides direction to over 30 named departments and agencies on a wide variety of topics, some of which are not explicitly listed in the recipients’ list at the top of the Memorandum. Additionally, although readers can use Ctrl+F to search the Memorandum for most of the named agencies, some agencies are referenced only indirectly through group titles (e.g., the Coordination Group, the working group), and the language for open-ended taskings (e.g., “other relevant agencies”) is not standardized. To help interested audiences sort through this information, we have produced a timeline and an entity-by-entity index of the Memorandum’s provisions.

Links to the Index and Timeline

  • The index is viewable here.
  • The timeline can also be accessed separately here.
  • A copy of the underlying spreadsheet/data for these products can be accessed here.

Project Authorship

This project is the work of students and alumni in the Georgetown Responsible AI Network (RAIN). The project has been led by Harrison Durland, the Programs Director and a co-founder of RAIN, with support from Raghav Akula, Carolina Oxenstierna, Eva Siegmann, Robert Chong, Rishi Dinesh, Wisdom Obinna, and Jonah Schiestle.


Views expressed are the authors’ own and do not represent the views of GSSR, Georgetown University, or any other entity. Image Credit: Wikimedia Commons, edited with Canva Images.