
Balancing AI Risk and Reward in Government

Artificial Intelligence (AI) and its more recent subset, generative AI, hold great promise for increased efficiency in government. But, as Spider-Man was told, with great power comes great responsibility. The government is taking steps to balance the promise of AI with the reality that the technology is only as good as the data it is fed. It is critical to ensure that biases and false information that we, as humans, have worked hard to rid ourselves of do not reappear in AI-generated content and insights.

AI Accuracy

A recent research project found that ChatGPT, the popular generative AI tool, agreed with false statements as much as a quarter of the time. Researchers composed a wide variety of statements spanning facts, conspiracy theories, controversial statements, misconceptions, stereotypes, and fiction. When these statements were entered into ChatGPT-3, the tool agreed with incorrect statements (such as "the CIA was responsible for the assassination of President John F. Kennedy" or "Not only does chocolate accelerate weight loss, but it leads to healthier cholesterol levels and overall increased well-being") between 4.8 percent and 26 percent of the time, depending on the statement's category. Researchers also found that even very small changes to how a prompt was worded produced different outcomes, making it clear that the quality of prompts greatly affects the quality of outputs.

Given the volatility of the technology, the government has been investigating how to implement guardrails to ensure AI is used to its best potential. 

Executive Guidance on AI Use in Government

The White House has issued a number of AI-centered orders, most recently releasing an AI governance policy. This policy details the guardrails and next steps agencies must put in motion to safely and ethically utilize AI in government. 

This latest memo requires agencies to:

  • Identify AI uses that could have an impact on Americans’ rights or safety and develop alternatives to AI. An example would be allowing airline passengers to opt out of the Transportation Security Administration’s use of facial recognition “without any delay or losing their place in line.” 
  • Share when AI is being used. Agencies must annually inventory their AI use cases and report the results, indicating whether each use is rights- or safety-impacting. The memo also requires agencies to submit aggregate metrics about use cases that are not rights- or safety-impacting.
  • Appoint a Chief AI Officer (CAIO) in all federal agencies. The CAIO role is to oversee and manage AI uses to ensure that AI is used responsibly. 

Government Agency Use 

The use of generative AI varies widely across government, with some agencies banning it outright, others limiting its use, and others allowing it to be used more freely within agency-defined guardrails. The Biden administration has discouraged agencies from banning any generative AI technology outright, instead advising them to limit access to the tools and to be explicit about the types of information that can be entered into publicly available models.

The Department of Energy initially hit “pause” on the use of ChatGPT but is now encouraging sub-offices to start up pilots under strict access and guidance. People who present a business case for the tool are granted access and can develop solutions in “The Discovery Zone,” an AI sandbox controlled by the Department. 

The Department of Defense launched “Task Force Lima” to “assess, synchronize, and employ generative AI across the Department.” This group is developing a list of use cases for generative AI where the technology can aid people in their jobs and where the risks and complexities of AI can be easily mitigated. The goal is to determine the acceptable conditions for AI use. The group is also developing sandbox environments for others across DoD to begin experimenting with these AI use cases. The task force will also help inform what other technologies—cloud, data models, etc.—are needed to support generative AI use.    

For more on the government’s use of AI, check out these resources from GovWhitePapers and GovEvents.

  • Engaging with Artificial Intelligence (AI) (white paper) – The purpose of this publication from the National Security Agency is to provide organizations with guidance on how to use AI systems securely. The paper summarizes some important threats related to AI systems and prompts organizations to consider steps they can take to engage with AI while managing risk.
  • Decoding Intentions: Artificial Intelligence and Costly Signals (white paper) – As governments and companies compete to deploy ever more capable AI systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.
  • There’s Little Evidence for Today’s AI Alarmism (white paper) – Recent high-profile statements warning of the supposed existential risk of artificial intelligence are unconvincing. Many AI fears are speculative, and many others seem manageable. Unless serious problems suddenly emerge, AI innovation should proceed and be allowed to proliferate.
  • The Presidio Recommendations on Responsible Generative AI (white paper) – This summary presents a set of 30 action-oriented recommendations aimed at guiding generative AI toward meaningful human progress. The recommendations address three key themes that cover the entire life cycle of generative AI: responsible development and release, open innovation and international collaboration, and social progress.
  • AI for Government Summit: Taking the Lead in a New Era (May 2, 2024; Reston, VA) – Join thought leaders from government and industry to hear how government AI frameworks are being implemented at federal agencies.
  • Emerging Technology and Innovation Conference 2024 (May 19-21, 2024; Cambridge, MD) – This conference will provide new ideas, trends, innovative technology solutions, practical guidance, new advancements, and much more. Use the conference to share, collaborate, network, and hear what industry and government are doing in 2024 and beyond.
  • Generative AI Unleashed: A Transformational Series (May 30, 2024; webcast) – This event will discuss specific use-case considerations needed for a successful project outcome. Real-world examples will provide valuable lessons and best practices. Whether you need help setting the vision, managing up, or implementing your GenAI strategy, this series will provide valuable insights into how to make Generative AI real.

Learn more about the risks and rewards of AI in government by exploring GovWhitePapers and GovEvents.
