Large Language Model Prompts
LLM01:2023 - Prompt Injections
In this example, the injected prompt contains a malicious command disguised as part of a translation request. If the LLM, and the application acting on its output, are not properly protected against prompt injections, the command may be carried out and files deleted from the system, leading to data loss or other unauthorized actions.
Original prompt: user_prompt = "Tran…


