References Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (Jan 2024) Can AI really be protected from text-based attacks? (Feb 2023) Prompt injection attacks against GPT-3 (Sep 2022) Adversarial Prompting in LLMs Using GPT Eliezer against ChatGPT Jailbreaking Jailbreak Chat