“Attackers conceal instructions via ultra-small fonts, background-matched text, ASCII smuggling using Unicode Tags, macros that inject payloads at parsing time, and even file metadata (e.g., DOCX custom properties, PDF/XMP, EXIF),” Granoša explained. “These vectors evade human review yet are fully parsed and executed by LLMs, enabling indirect prompt injection.”

Countermeasures

Justin Endres, head of data security at cybersecurity vendor Seclore, argued that security leaders can’t rely on legacy tools alone to defend against malicious prompts that turn “everyday files into Trojan horses for AI systems.”

“[Security leaders] need layered defenses that sanitize content before it ever reaches an AI parser, enforce strict guardrails around model inputs, and keep humans in the loop for critical workflows,” Endres advised. “Otherwise, attackers will be the ones writing the prompts that shape your AI’s behavior.”