
AI Safety and Automation Bias: The Downside of Human-in-the-Loop

Automation bias—our tendency to overly trust AI systems—can have dangerous consequences, from misjudged autopilot systems to mishandled safety alerts. This bias emerges from a mix of human, technical, and organizational factors. Addressing it requires better design, training, and policies to ensure that humans and AI work safely and effectively together. The goal is not just smarter AI but smarter collaboration.

  • Author(s):
  • Lauren Kahn
  • Emelia S. Probasco
  • Ronnie Kinoshita
Format: White Paper
Publisher: Center for Security and Emerging Technology
Published: November 1, 2024
License: Creative Commons
Copyright: © 2024 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.
