Generative AI is transforming software development, but its ability to produce secure code remains in question. Veracode’s study shows that while LLMs are strong at generating syntactically correct code, nearly half of outputs still contain vulnerabilities like SQL injection, cross-site scripting, log injection, and weak cryptography. Surprisingly, newer and larger models don’t show significant improvement in security, and performance varies widely across languages and CWE categories. These findings highlight the growing need to integrate security-by-design practices into AI-assisted coding before these tools become deeply embedded in development workflows.
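To make one of the vulnerability classes named above concrete, here is a minimal Python sketch of SQL injection and its standard fix. The table, data, and function names are hypothetical illustrations, not taken from the Veracode study:

```python
import sqlite3

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable pattern often produced by code generators:
    # user input is concatenated directly into the SQL string,
    # so input like "' OR '1'='1" rewrites the query itself.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the input is bound as data by the
    # driver and can never be parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeded
print(find_user_safe(payload))    # returns no rows: the payload is a literal string
```

The two functions differ by a single line, which is part of why such flaws slip through: syntactically, both versions are perfectly valid code.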
Publisher: Veracode
Published: July 1, 2025
License: Copyrighted
Copyright: © 2025 Veracode. All rights reserved.