Caution
Research use only. System Prompt Open is released exclusively for academic safety research, responsible disclosure, and evaluation of LLM security. We do not condone or permit any use of these materials for unauthorized extraction, prompt theft, or exploitation of commercial systems.
Note
What is System Prompt Extraction? Every commercial LLM runs with a hidden system prompt that defines its behavior, safety rules, and tool access. These prompts are treated as proprietary secrets -- yet they can be recovered through standard user interaction alone. JustAsk is a self-evolving code agent that autonomously discovers extraction strategies, achieving 85-95% verified accuracy against leaked ground truth. System Prompt Open publishes the results: 45 extracted system prompts from frontier models across all major providers.
| Date | Update |
|---|---|
| 2026-03-31 | Open-sourced System Prompt Open with 45 extracted system prompts and Live Gallery |
| 2026-03-31 | Gallery redesigned with red team theme, pagination, stat cards, and search |
| 2026-03-31 | Ground-truth verification: Claude Code extractions match leaked source at 85-95% |
- Browse extracted prompts. Start with the Live Gallery -- search, filter, and compare system prompts from 45 models.
- Read the paper. arXiv:2601.21233 details the JustAsk framework, skill evolution mechanism, and evaluation methodology.
- Extract new prompts. Use JustAsk to run your own extraction against any LLM with API access.
- Submit findings. Open an Issue with the model name, extracted prompt, and consistency score.
Browse extracted system prompts interactively: x-zheng16.github.io/System-Prompt-Open
45 entries covering:
- Claude Code (4 agents, verified against leaked source)
- Gemini CLI (code agent)
- 40 commercial LLMs (OpenAI, Anthropic, Google, Meta, DeepSeek, xAI, and more)
Claude Code's source was leaked via .map file in the npm registry (March 2026).
We compared it against our JustAsk extractions from January 2026 -- two months before the leak.
| Agent | Accuracy | Gap |
|---|---|---|
| Explore Subagent | 95% | Only missed pip install in bash restrictions |
| Plan Subagent | 93% | Minor output format embellishment |
| General-Purpose | 90% | Missed completeness directive |
| Main Agent | 85% | Missed 2 entire sections |
| Step | What to do |
|---|---|
| 1. Extract | Use JustAsk or your own method to extract a system prompt |
| 2. Verify | Run multiple extractions and compute self-consistency |
| 3. Submit | Open an Issue with model name, prompt, and score |
Important
We handle redaction before publishing. Do not worry about masking sensitive content in your submission.
From the same team:
- ISC-Bench
-- Internal Safety Collapse in Frontier LLMs
- JustAsk
-- Curious Code Agents Reveal System Prompts in Frontier LLMs
- Awesome-Embodied-AI-Safety
-- Safety in Embodied AI: Risks, Attacks, and Defenses
- Awesome-Large-Model-Safety
-- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety
- XTransferBench
-- Super Transferable Adversarial Attacks on CLIP (ICML 2025)
- BackdoorLLM
-- A Comprehensive Benchmark for Backdoor Attacks on LLMs (NeurIPS 2025)
- BackdoorAgent
-- Backdoor Attacks on LLM-based Agent Workflows
BibTeX:
@article{zheng2026justask,
title={Just Ask: Curious Code Agents Reveal System
Prompts in Frontier LLMs},
author={Zheng, Xiang and Wu, Yutao and Huang, Hanxun
and Li, Yige and Ma, Xingjun and Li, Bo
and Jiang, Yu-Gang and Wang, Cong},
journal={arXiv preprint arXiv:2601.21233},
year={2026}
}Plain text:
Xiang Zheng, Yutao Wu, Hanxun Huang, Yige Li, Xingjun Ma, Bo Li, Yu-Gang Jiang, and Cong Wang. "Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs." arXiv preprint arXiv:2601.21233, 2026.
MIT
