
2025 GenAI Code Security Report

Assessing the Security of Using LLMs for Coding

Generative AI is transforming software development, but its ability to produce secure code remains in question. Veracode’s study shows that while LLMs are strong at generating syntactically correct code, nearly half of their outputs still contain vulnerabilities such as SQL injection, cross-site scripting, log injection, and weak cryptography. Surprisingly, newer and larger models show no significant improvement in security, and performance varies widely across programming languages and CWE categories. These findings highlight the growing need to integrate security-by-design practices into AI-assisted coding before these tools become deeply embedded in development workflows.
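
To make the first of those flaw classes concrete, the following is a minimal illustrative sketch (not code from the report): the SQL injection pattern (CWE-89) that commonly appears in LLM-generated code, alongside the parameterized-query fix. The sqlite3 setup, table schema, and function names here are hypothetical, chosen only for the demonstration.

import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # VULNERABLE (CWE-89): user input is interpolated into the SQL string,
    # so an input like "x' OR '1'='1" rewrites the query's logic.
    cursor = conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    )
    return cursor.fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: a parameterized query passes user input as data, never as SQL.
    cursor = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cursor.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
    injected = "x' OR '1'='1"
    # The injected input dumps every row from the vulnerable function...
    print(find_user_vulnerable(conn, injected))  # [(1, 'alice'), (2, 'bob')]
    # ...but is treated as a literal string by the parameterized one.
    print(find_user_safe(conn, injected))        # []

The fix works because the database driver binds the parameter value after the SQL statement is parsed, so attacker-controlled text can never change the query structure.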

Author(s):
  • Veracode
Format:
  • White Paper
Publisher: Veracode
Published: July 1, 2025
License: Copyrighted
Copyright: © 2025 Veracode. All rights reserved.
