The last decade of progress in machine learning research has given rise to systems that are surprisingly capable but also notoriously unreliable. The chatbot ChatGPT, developed by OpenAI, provides a good illustration of this tension. Users interacting with the system after its release in November 2022 quickly found that while it could adeptly find bugs in programming code and author Seinfeld scenes, it could also be confounded by simple tasks.
An intuitive way to address this problem is to build machine learning systems that “know what they don’t know”—that is, systems that can recognize situations where they are more likely to make mistakes and adjust their behavior accordingly.
Publisher: Center for Security and Emerging Technology
Published: June 1, 2024
License: Public Domain