
owasp-llm

Here are 39 public repositories matching this topic...

The open-source diagnostic for AI misalignment. 32 tests across fabrication, manipulation, deception, unpredictability, and opacity. Provider-agnostic. Runs against OpenAI, Anthropic, Bedrock, Azure, Gemini, and more. Letter grade in under 5 minutes, content-addressed manifest for bit-identical replay. Built by iMe.

  • Updated May 12, 2026
  • Python
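The "content-addressed manifest for bit-identical replay" idea above can be sketched as hashing a canonical serialization of a run's inputs and outputs, so a replay is verified by recomputing the digest. A minimal illustration, not the tool's actual manifest format (field names here are hypothetical):

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    # Serialize with sorted keys and fixed separators so the same logical
    # content always produces the same byte stream, hence the same digest.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

run = {"model": "example-model", "prompt": "hello", "output": "hi"}
digest = manifest_digest(run)

# A bit-identical replay (same content, any key order) reproduces the digest...
replay = {"output": "hi", "prompt": "hello", "model": "example-model"}
assert manifest_digest(replay) == digest

# ...while any byte-level change breaks it.
tampered = dict(run, output="hi!")
assert manifest_digest(tampered) != digest
```

Content addressing means the digest itself identifies the run: two parties can compare a single hex string instead of diffing full transcripts.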
claude-security-skills

25 production-tested defensive security skills for Claude Code - WordPress, VPS, Cloudflare, Next.js hardening, AI agent guardrails, MCP security, prompt injection defense, OWASP LLM Top 10, LLM coding failure modes (slopsquatting, hallucinated APIs, sycophancy), incident response, GDPR/DACH compliance. MIT, battle-tested.

  • Updated May 12, 2026
  • Python
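The "slopsquatting" failure mode named above — an LLM hallucinates a plausible package name that an attacker then registers — can be mitigated by gating installs on a reviewed allowlist. A minimal sketch, assuming the allowlist is parsed from a vetted lockfile (the package names here are placeholders):

```python
# Hypothetical allowlist, e.g. parsed from a reviewed lockfile. Installing
# anything outside it is refused, which blocks hallucinated or typo-squatted
# package names an LLM coding assistant might suggest.
ALLOWED_PACKAGES = {"requests", "flask", "pydantic"}

def check_install(package: str) -> bool:
    """Return True only for packages vetted in the allowlist."""
    return package.lower() in ALLOWED_PACKAGES

assert check_install("requests") is True
assert check_install("requessts") is False  # typo-squat / hallucinated name
```

A production guard would also verify the resolved version and hash, not just the name.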

Exposure intelligence for AI and dev infrastructure. Detects exposed credentials, AI-tool configs, supply-chain risk, framework vulns, and Unicode attacks. OWASP LLM + MITRE ATLAS tagged.

  • Updated May 9, 2026
  • Python
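The "Unicode attacks" mentioned above typically smuggle invisible or direction-override characters into prompts, configs, or source files. A minimal detector sketch — the character set below is illustrative, not this tool's actual rule set:

```python
# Characters commonly abused for invisible-text and Trojan Source attacks:
# zero-width space/non-joiner/joiner, BOM, and bidi override/isolate controls.
SUSPICIOUS = {
    "\u200b", "\u200c", "\u200d", "\ufeff",
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
    "\u2066", "\u2067", "\u2068", "\u2069",
}

def find_suspicious_unicode(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) pairs for suspicious characters in text."""
    return [(i, f"U+{ord(ch):04X}") for i, ch in enumerate(text) if ch in SUSPICIOUS]

clean = "ignore previous instructions"
hidden = "ignore\u200b previous instructions"
assert find_suspicious_unicode(clean) == []
assert find_suspicious_unicode(hidden) == [(6, "U+200B")]
```

Reporting the codepoint and offset, rather than just rejecting the input, makes the finding auditable.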

Blackwall LLM Shield is an open-source AI security toolkit for JavaScript and Python that protects LLM apps from prompt injection, sensitive data leaks, unsafe tool calls, and hostile RAG content with prompt sanitisation, PII masking, output inspection, policy enforcement, and audit trails.

  • Updated Mar 27, 2026
  • JavaScript
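PII masking of the sort described above can be sketched as regex redaction applied before a prompt reaches the model. The patterns below are illustrative only; a production masker needs locale-aware rules and far broader coverage:

```python
import re

# Illustrative patterns only; real PII detection covers many more formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each PII match with a typed placeholder before the LLM sees it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789.")
```

Typed placeholders like `[EMAIL]` preserve enough context for the model to reason about the sentence while keeping the actual value out of the prompt and any logs.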

Hands-on demos for the Pluralsight course: Generative AI Data Privacy and Safe Use for Developers. Covers PII masking, prompt injection attacks and defenses, five guardrail types (input, retrieval, dialog, execution, output), evaluation release gates, and dashboards mapped to the EU AI Act, NIST AI RMF, and ISO/IEC 42001.

  • Updated May 3, 2026
  • Python
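The five guardrail stages listed above compose into a pipeline around the model call: each rail either passes the payload through or rejects it. A minimal sketch with only the input and output rails filled in (the checks and names are hypothetical, not the course's code):

```python
class GuardrailViolation(Exception):
    """Raised when a rail rejects the payload."""

def input_rail(prompt: str) -> str:
    # Block obvious injection phrasing before it reaches the model.
    if "ignore previous instructions" in prompt.lower():
        raise GuardrailViolation("possible prompt injection")
    return prompt

def output_rail(response: str) -> str:
    # Redact an internal marker before the response leaves the system.
    return response.replace("INTERNAL:", "[REDACTED]:")

def guarded_call(prompt: str, model) -> str:
    # Retrieval, dialog, and execution rails would slot in between here.
    return output_rail(model(input_rail(prompt)))

echo = lambda p: f"INTERNAL: you said {p}"
assert guarded_call("hello", echo) == "[REDACTED]: you said hello"
```

The point of the staged design is that each rail can be tested and versioned independently of the model behind it.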

Blackwall LLM Shield is an open-source AI security toolkit for JavaScript and Python that protects LLM apps from prompt injection, sensitive data leaks, unsafe tool calls, and hostile RAG content with prompt sanitization, PII masking, output inspection, policy enforcement, and audit trails.

  • Updated Mar 27, 2026
  • Python

An interactive web application that generates comprehensive security playbooks for mitigating the OWASP Top 10 vulnerabilities specific to Large Language Model (LLM) applications. The application consists of a Flask backend that leverages the OpenAI API to generate detailed playbooks, paired with a simple HTML/JavaScript frontend.

  • Updated Mar 13, 2025
  • Python
