As generative AI tools grow more sophisticated, distinguishing between AI- and human-written content is becoming increasingly difficult. In this pilot study, NIST evaluated how well large language models can generate human-like summaries—and how well detection systems can tell the difference. The study revealed that while some AI models can still fool even advanced detectors, many detection tools are evolving just as fast, with measurable improvements over three rounds of testing. These findings will shape future research and policy around content authenticity, misinformation, and responsible AI use.
Publisher: National Institute of Standards and Technology (NIST)
Published: June 1, 2025
License: Public Domain