This Microsoft Copilot AI attack took a single click to compromise users

This Microsoft Copilot AI attack took a single click to compromise users

This Microsoft Copilot AI attack took a single click to compromise users


  • Varonis discovers new prompt-injection method via malicious URL parameters, dubbed “Reprompt.”
  • Attackers could trick GenAI tools into leaking sensitive data with a single click
  • Microsoft patched the flaw, blocking prompt injection attacks through URLs

Security researchers Varonis have discovered Reprompt, a new way to perform prompt-injection style attacks in Microsoft Copilot which doesn’t include sending an email with a hidden prompt or hiding malicious commands in a compromised website.

Similar to other prompt injection attacks, this one also only takes a single click.





Source link

Back To Top