RedTeamGuides

Large Language Model Prompts

Reza
Jan 26, 2025

LLM01:2023 - Prompt Injections

In this example, the injected prompt hides a malicious command inside an otherwise ordinary translation request. An LLM that is not properly protected against prompt injection may follow the command and delete files from the system, leading to data loss or other unauthorized actions.

Original prompt: user_prompt = "Tran…
