Skip to content

Red Team Prompt Sets

Manage reusable prompt sets for testing.

Log In Page

Create Prompt Set

  1. In the Console, select Security Testing > Red Teaming, then click the Red Team Prompt Sets tab.

  2. Click + Create Prompt Set. The Create Prompt Set slide-out displays.

  3. Enter a name for the prompt set.

  4. Optionally, enter a description.

  5. Upload a CSV file containing your prompt set. You can either drag-and-drop the file into the Upload Prompts field or click inside the field to open the file browser.

  6. Click Create Prompt Set.

Red Team Create Prompt Set

CSV File Example

CSV format requirements:

  • Maximum file size: 10MB
  • Maximum 1,000 prompts
  • Single column: user prompts only
  • Two columns: system prompts (optional), user prompts
  • First row may contain headers: "system prompt, user prompt" or "user prompt"
  • User prompts: max 20KB each
  • System prompts: max 10KB each

Example Formats

System Prompt and User Prompt with header

system prompt, user prompt
this is the test system prompt, this is a test user prompt 1,
this is the test system prompt, this is a test user prompt 2

System Prompt and User Prompt without header

this is the test system prompt, this is a test user prompt 1,
this is the test system prompt, this is a test user prompt 2

User Prompt with header

user prompt
this is a test user prompt 1,
this is a test user prompt 2,

User Prompt without header

this is a test user prompt 1,
this is a test user prompt 2

Best Practices

  • Compare Versions: Run evaluations on both original and enhanced prompts to measure improvement.
  • Review Failed Attacks: Understanding why attacks failed is as important as knowing which succeeded.
  • Use Appropriate Models: Match the target model to what you're actually using in production.

Run Evaluation

You can run an evaluation using any available prompt set.

  1. For the prompt set you want to use, click Run Evaluation. The Create Red Team Evaluation slide-out displays.

  2. Enter a name for the evaluation.

  3. Enter the target system prompt.

  4. Select a target model.

    • Select a model that is similar to what you have in your environment to simulate the attacks against your system prompt.
    • Disclaimer: Models marked with a beta designation may be subject to lower usage quotas, limited availability, or ongoing development changes. As a result, these models may exhibit unexpected results, reduced performance, or intermittent failures during testing. Users should account for these limitations when selecting beta models.
    Red Team Create Evaluation
  5. Optional: Click Advanced Options to expand the section.

    • Select a project to apply runtime rulesets to interaction tagging.

      • If no project is selected, the default project is used.
    • Select an execution strategy.

      • Single: Runs each technique once per objective.

      • Random: Runs all techniques plus N additional random techniques.

        • Select the number of additional random techniques.
        Red Team Evaluation Random
      • Static prompt set: Uses a predefined set of static prompts for evaluation.

        • Select the prompt set from the drop-down list.
        Red Team Evaluation Static Prompt Set
    • Set the maximum number of conversation turns allowed per technique when attempting to achieve an objective. The minimum is one and the maximum is five.

      • The attack simulator will do multi-turn, trying to attack the target for N turns, and then go to the next session.
      • Note: If you selected static_prompt_set, then Attacker Max Turns to Complete Objective is not available.
    • Set the number of independent sessions to run for each technique. The minimum is one and the maximum is five.

      • This is the number of times you want to run the same technique or static prompt.
  6. Click Start Evaluation.

  7. When the evaluation completes, click the green arrow to view the results. See Red Team Evaluation Summary for more information.

Red Team Run Prompt Set

Pre-Configured Red Team Prompt Sets

Select a prompt set to run red team evaluations using pre-configured adversarial prompts designed to test your AI system's defenses.

To run an evaluation, see Run Evaluation.

Prompt SetDescription
Do Anything Now APEDo Anything Now adversarial prompts for testing LLM safety and security.
Financial Assistant AdvancedFinancial Assistant Advanced is a prompt set scenario for testing a financial application against adversarial prompts. The system prompt for this scenario has been ran through the system prompt evaluation to make a more robust system prompt.
Financial Assistant BasicFinancial Assistant Basic is a prompt set scenario for testing a financial assistant application against adversarial prompts. The system prompt ofr this scenario has not been ran through the system prompt evaluation.