Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Artificial intelligence (AI) systems continue to expand globally at an accelerating pace. They are being developed in and widely deployed into the economies of numerous countries, leading to the emergence of AI-based services that people use in many spheres of their lives, both real and virtual. Based on their capabilities, AI systems fall into two broad classes: Predictive AI and Generative AI.

As these systems permeate the digital economy and become inextricably essential parts of daily life, the need for their secure, robust, and resilient operation grows. These operational attributes are critical elements of Trustworthy AI in the National Institute of Standards and Technology (NIST) AI Risk Management Framework and in the NIST taxonomy of AI Trustworthiness.

  • Author(s):
  • Apostol Vassilev
  • Alina Oprea
  • Alie Fordyce
  • Hyrum Anderson
Format: White Paper
Publisher: National Institute of Standards and Technology (NIST)
Published: January 4, 2024
License: Public Domain
