
Balancing AI Policy and Innovation


As federal agencies experiment with and implement artificial intelligence (AI) solutions, they have been working to deliver on the requirements outlined in recent Executive Orders, which call for those solutions to meet standards for data security, governance, and proper risk management. These requirements were crafted to balance needed governance with the demand for innovation in how government uses technology to deliver services.

An April 2025 AI governance memo required agencies to deliver compliance plans to show how they are addressing issues of AI risk management, technical capacity, and workforce readiness. The submitted reports detail several key barriers to AI use and include details on how agencies are working to overcome them.

  • Data access and quality – Citing poor data quality, fragmented IT infrastructure, and a lack of AI-ready data, agencies are consolidating systems, breaking down data silos, and creating shared data environments.
  • Workforce readiness – Agencies are investing in training programs and aligning AI goals more closely with mission objectives. Rather than treating AI as a new technology to be learned for its own sake, they are teaching it as a tool that enables the mission.
  • IT infrastructure challenges – Access to computing tools was cited as an impediment to wide AI rollout. Several reports mentioned using the General Services Administration’s (GSA) AI evaluation suite, USAi, to help meet compliance goals, and noted plans to take advantage of GSA’s recent OneGov deals with leading AI companies. USAi enables agencies to test major AI models, while OneGov provides access to purchase government-ready solutions from leading AI vendors.

Agencies are prioritizing guardrails and policies as they roll out AI solutions, using the definition of “high-impact AI” to determine their risk-management strategies. High-impact AI, as defined in a memo from the Office of Management and Budget earlier this year, is any model that could “have significant impacts when deployed,” including for “decisions or actions that have a legal, material, binding or significant effect on rights or safety.” For these applications, agencies are taking a measured approach, intentionally slowing the adoption of AI in high-risk areas while building policies and guardrails in lower-risk ones.

To stay on top of AI policies and implementation, check out these resources from GovWhitePapers and GovEvents:

  • Artificial Intelligence Implementation Plan (white paper) – The Marine Corps’ Artificial Intelligence Implementation Plan lays out a roadmap to bring AI into every level of operations, from tactical decision-making to enterprise support. It emphasizes building a strong digital foundation through data governance, infrastructure, and workforce training, while piloting transformation teams to accelerate adoption. The plan also addresses governance, responsible use, and partnerships with industry and academia to ensure innovation aligns with mission needs.
  • Adding Artificial Intelligence to the Team (white paper) – This piece explores how the U.S. Army is experimenting with large language models to support intelligence collection and planning in fast-moving combat environments. Practical use cases, lessons learned, and limits of the technology are outlined, offering insight into where AI adds value—and where it doesn’t.
  • Data to Decisions: How Agentic AI Is Transforming Government (white paper) – Agentic AI is a major evolution in artificial intelligence, moving beyond simple input-output models to systems that can interpret instructions, plan tasks, break them into steps, execute workflows, and adapt autonomously. For government, it offers big advantages: defense and intelligence agencies could automate time-consuming work like briefings and data integration, while civilian agencies could streamline compliance, records management, and citizen services, with additional impact in public health, infrastructure, and emergency response.
  • Harmonizing AI Guidance (white paper) – Organizations trying to use AI face a confusing maze of guidance spread across dozens of reports, frameworks, and standards. To reduce this burden, CSET analyzed more than 7,700 recommendations from 52 AI, cybersecurity, privacy, and risk documents and consolidated them into a unified framework of 258 practices. The report explains how this harmonization process works and shows where existing AI guidance overlaps, falls short, or needs refinement.
  • AI Summit (January 9, 2026; Tysons, VA) – Join top IT officials to explore AI’s latest advancements, challenges, and future in the evolving tech landscape and discover how AI-powered solutions are revolutionizing efficiency and shaping the future of public- and private-sector operations.
  • 2026 Artificial Intelligence Summit (March 19, 2026; Washington, DC) – This event will feature top voices from federal agencies, Department of Defense components and the GovCon industry to discuss strategies, future plans and exciting use cases for how AI, machine learning, and automation are transforming our world today.

Search GovWhitePapers and GovEvents to find even more insights into AI use in government.
