
Enabling AI Access and Security


AI is a key enabler for some of today’s biggest national challenges – public health, racial disparities, cybersecurity, and more. Developing AI tools requires a lot of data and even more computing resources. The 2021 National Defense Authorization Act created the National AI Research Resource Task Force, which “has been directed by Congress to develop an implementation roadmap for a shared research infrastructure that would provide artificial intelligence researchers and students across scientific disciplines with access to computational resources, high-quality data, educational tools, and user support.” The goal is to democratize access so that innovative programs can be developed when and where they are needed.

Meeting the unique needs of health data

The Department of Health and Human Services (HHS) is looking to democratize access within its department, taking cues from the Department of Defense by setting up an AI office that can feed innovation into all of its agencies. In doing so, HHS can address some of the unique challenges of the health sector, including the fact that some data is protected by the Health Insurance Portability and Accountability Act (HIPAA). Determining which datasets are protected or contain personally identifiable information is a critical step in streamlining access to them.
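The screening step described above can be sketched in code. The following is a minimal, hypothetical illustration of flagging records that may contain personally identifiable information; the patterns and category names are illustrative assumptions, and a real HIPAA review involves far more than pattern matching:

```python
import re

# Illustrative PII patterns only -- not an actual HHS screening ruleset.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "email": re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),
}

def flag_pii(record: str) -> set:
    """Return the set of PII categories detected in a text record."""
    return {name for name, pattern in PII_PATTERNS.items()
            if pattern.search(record)}
```

A dataset whose records come back with an empty set from a screen like this might be routed toward the streamlined access path, while flagged datasets would go through the protected-data review process.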

Securing AI

Once organizations have the computing power and data they need, there is still the matter of security. Traditional security measures do not always address the unique methodologies and needs of AI. Security policies do not always account for the large datasets AI applications require, and while Zero Trust solves many access issues, it is not the best approach for securing the training of AI. In securing AI, it is critical to know where the data came from, how it has been accessed, and by whom. The Air Force is already taking steps to apply new security practices to AI by implementing the practice of AI Safety, ensuring that deployed AI programs not only work as expected, but are safe from attack in terms of design, underlying data stream, and computer architecture.
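The provenance requirement above – knowing where data came from, how it was accessed, and by whom – can be sketched as a simple audit record. This is a hypothetical illustration; the class and field names are assumptions, not any agency’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One access to a dataset: who did what, and when."""
    user: str
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class DatasetProvenance:
    """Tracks where a dataset came from and its access history."""
    name: str
    source: str                      # where the data came from
    events: list = field(default_factory=list)

    def record_access(self, user: str, action: str) -> None:
        self.events.append(AccessEvent(user, action))

    def audit_trail(self) -> list:
        """Return (user, action) pairs: how data was accessed and by whom."""
        return [(e.user, e.action) for e in self.events]
```

An audit trail of this kind is what lets security teams verify that training data has not been tampered with between source and model.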

AI makes a difference

Once access and security issues are addressed, the power of AI to let humans work with data in new ways is limitless. For example, Yolo County, CA is using AI to redact police narratives, stripping out any racially identifiable information before charging decisions are made. This includes redacting names (which are replaced with labels like suspect 1, witness 1, or victim 1) as well as physical descriptions such as hair color, eye color, and skin tone. Geographic location can also be an indicator of race, so it is redacted when possible.
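A redaction step like the one described above might look roughly like the following. This is a simplified, hypothetical sketch – Yolo County’s actual tooling is not public, and the name mappings, descriptor patterns, and location patterns here are illustrative assumptions:

```python
import re

# Hypothetical name-to-role mapping; a real system would use
# named-entity recognition rather than a fixed lookup table.
ROLE_NAMES = {
    "John Doe": "Suspect 1",
    "Jane Roe": "Witness 1",
}

# Physical descriptors that can signal race are stripped entirely.
DESCRIPTOR_PATTERN = re.compile(
    r"\b(black|white|brown|blonde?|blue|green)\s+(hair|eyes|skin)\b",
    re.IGNORECASE,
)

# Street-level locations can also be a proxy for race.
LOCATION_PATTERN = re.compile(r"\bon \w+ (Street|Avenue)\b")

def redact(narrative: str) -> str:
    """Replace known names with role labels; strip physical
    descriptors and street-level locations."""
    for name, role in ROLE_NAMES.items():
        narrative = narrative.replace(name, role)
    narrative = DESCRIPTOR_PATTERN.sub("[REDACTED]", narrative)
    narrative = LOCATION_PATTERN.sub("[REDACTED LOCATION]", narrative)
    return narrative
```

The redacted narrative is what the deputy DA sees first, so the charging opinion is formed before any racially identifiable details are visible.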

In addition to rendering an opinion on whether the case will be charged, the deputy DA answers questions about the quality of the redaction, which helps continue training the algorithm. After the decision is entered, the deputy DA reviews the unredacted report along with additional background before making a final decision. While results of this program have not yet been released, a spokesperson did comment that these race-blind decisions are not being reversed. This solution not only addresses bias in the charging process, it also signals to the community that the county takes procedural justice seriously.

GovWhitePapers has a multitude of resources that detail the security, policy, technical specifications, and applications of AI for government. 

  • Human Centered AI – Many AI systems in use today were originally designed for low-stakes situations, or situations with a low cost of failure. What about the design and development of AI systems when the stakes are far higher? In high-stakes scenarios, humans and AI systems must work together in human-machine teams with trust, ethical integrity, and mutual understanding of a shared goal.
  • From Ethics to Operations: Current Federal AI Policy – There are currently dozens of separate AI ethics, policy, and technical working groups scattered among various Federal departments and agencies, spanning the defense, civil, and legislative spheres. While a few overall governance structures for AI policy have begun, resulting policies may be incomplete, inconsistent, or incompatible with each other. Read over a general framework and an assessment of the current state of Federal government AI policy.
  • The Role of AI Technology in Pandemic Response and Preparedness: Recommended Investments and Initiatives – This white paper outlines a series of investments and initiatives that the United States must undertake to realize the full potential of AI to secure our nation against pandemics.
  • Key Considerations for the Responsible Development and Fielding of Artificial Intelligence – The key considerations provided here are a paradigm for the responsible development and fielding of AI systems. This includes developing processes and programs aimed at adopting the paradigm’s recommended practices, monitoring their implementation, and continually refining them as best practices evolve.

You can browse additional government AI assets through our search engine here:

Browse AI Content



