Many artificial intelligence (AI) systems in use today were originally designed for low-stakes situations, or situations with a low cost of failure. If your streaming service recommends a movie that isn’t to your taste, or if your device misinterprets a request to play the Beach Boys and the Beastie Boys start blaring instead, you may be temporarily frustrated or laugh off the experience—but you can quickly move on.
What about the design and development of AI systems when the stakes are far higher? For example, how do we design and develop AI systems that have the potential to dictate whether someone receives a mortgage for a home purchase, or where to send first responders during a rapidly spreading wildfire? In high-stakes scenarios, humans and AI systems must work together in human-machine teams built on trust, ethical integrity, and a mutual understanding of a shared goal.