# How SW and HW Vulnerabilities Can Complement LLM-Specific Algorithmic Attacks (UT Austin, Intel et al.)

Date: 2026-03-21
Category: Artificial Intelligence

Guardrails can stop a jailbreak prompt.

---