
OWASP Top 10 for Large Language Model Applications

The OWASP Top 10 for Large Language Model (LLM) Applications is a community-driven list identifying the most critical security risks specific to LLMs and generative AI systems. The risks it covers are:

- Prompt injection: attackers manipulate input prompts to cause unauthorized actions
- Insecure output handling: trusting model output downstream, which can lead to exploits such as code execution
- Training data poisoning: corrupted training data alters model behavior
- Model denial of service: attacks targeting model availability
- Supply chain vulnerabilities: compromised components, data, or models affecting system integrity
- Sensitive information disclosure
- Insecure plugin design
- Excessive agency: granting LLMs unchecked autonomy
- Overreliance on LLM outputs
- Model theft

The list aims to educate developers and organizations on these threats, providing mitigation strategies to improve the security posture of LLM applications.
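Prompt injection and insecure output handling often combine in practice: an injected prompt steers the model into emitting a malicious command, and a downstream component executes it blindly. As a minimal sketch of the mitigation idea (all names, the allowlist policy, and the sample outputs below are invented for illustration; no real LLM API is called), model output can be treated as untrusted input and validated before use:

```python
# Hypothetical sketch: validating an LLM-suggested shell command against
# an allowlist instead of executing it blindly. The allowlist and the
# sample model outputs are assumptions for this example.

import shlex

ALLOWED_COMMANDS = {"ls", "date", "whoami"}  # assumed policy, not from OWASP

def run_llm_suggested_command(llm_output: str) -> str:
    """Parse untrusted model output and reject anything off the allowlist.

    Returns the validated command name; a real system would then execute
    it via subprocess with the parsed argv (never via a shell string).
    """
    parts = shlex.split(llm_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Rejected untrusted command: {llm_output!r}")
    return parts[0]

# A prompt-injected model might return something malicious:
try:
    run_llm_suggested_command("rm -rf / ; echo done")
except ValueError:
    pass  # blocked by validation

# A benign suggestion passes:
assert run_llm_suggested_command("date") == "date"
```

The design choice here is the general one the list recommends: the model's output crosses a trust boundary, so it gets the same validation any external input would.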


Publication's URL

URL: https://owasp.org/www-project-top-10-for-large-language-model-applications/

Publication's scorecard

Issuer: OWASP
Country: USA
Scope: Cyber
Typology: Standard
Publication's date: November 18, 2024
Category: Data Protection & AI
Sector: Cross-Sector




