A system-prompt-protected "vault" guards a secret while an attacker agent crafts jailbreak prompts; each round reduces the vault's integrity. Bet on whether the vault breaks or holds by the round limit.
Built on a real prompt-injection research dataset. Endpoint: POST /api/env/jailbreak-v1/step.