New AI attack shows how images hide secret commands, letting hackers siphon private data directly from unsuspecting chatbot users

  • Malicious prompts remain invisible until image downscaling reveals hidden instructions
  • The attack works by exploiting how AI resamples uploaded images
  • Bicubic interpolation can expose black text from specially crafted images

As AI tools become more integrated into daily work, the security risks that come with them are evolving as well.

Researchers at Trail of Bits have demonstrated a method in which malicious prompts are hidden inside images and only revealed when those images are downscaled before being processed by large language models.
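The general principle can be illustrated with a minimal sketch. The code below is not Trail of Bits' actual tool; it is a hypothetical reconstruction of the underlying image-scaling trick, assuming a resizer that applies a bicubic kernel without an anti-aliasing prefilter (as some real resizers do). Because such a resizer samples only a 4x4 neighbourhood of source pixels per output pixel, an attacker can darken just the few pixels nearest each sample point: the full-resolution image still looks almost uniformly bright, but the downscaled copy shows the hidden pattern.

```python
import numpy as np

def cubic_kernel(t, a=-0.5):
    # Catmull-Rom-style cubic interpolation kernel (a = -0.5),
    # the kernel commonly used by bicubic resamplers.
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def naive_bicubic_downscale(img, factor):
    """Downscale WITHOUT an anti-aliasing prefilter: each output pixel
    samples only a 4x4 source neighbourhood, like a vulnerable resizer."""
    h, w = img.shape
    out = np.zeros((h // factor, w // factor))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            sy = (i + 0.5) * factor - 0.5   # source coordinate sampled
            sx = (j + 0.5) * factor - 0.5
            y0, x0 = int(np.floor(sy)), int(np.floor(sx))
            acc = 0.0
            for dy in range(-1, 3):
                wy = cubic_kernel(sy - (y0 + dy))
                for dx in range(-1, 3):
                    wx = cubic_kernel(sx - (x0 + dx))
                    acc += wy * wx * img[y0 + dy, x0 + dx]
            out[i, j] = min(max(acc, 0.0), 255.0)
    return out

# Hidden "message": an 8x8 binary pattern (1 = dark after downscaling);
# a real attack would encode legible instruction text instead.
message = np.zeros((8, 8), dtype=int)
message[2, 1:7] = 1  # an arbitrary stripe standing in for injected text

factor = 8
img = np.full((64, 64), 200.0)  # bright, innocuous-looking canvas
for i in range(8):
    for j in range(8):
        if message[i, j]:
            # Darken only the 2x2 pixels nearest each output sample point:
            # under 1% of the image overall, nearly invisible at full size.
            cy, cx = i * factor + 3, j * factor + 3
            img[cy:cy + 2, cx:cx + 2] = 0.0

small = naive_bicubic_downscale(img, factor)
print(f"full-res mean brightness: {img.mean():.1f}")       # still bright
print(f"hidden pixel after downscale: {small[2, 1]:.1f}")  # near black
```

Because the tampered pixels sit exactly where the naive resampler takes its samples, they dominate the output values; a resizer that prefilters the image before sampling would average them away, which is why the attack is implementation-specific.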


