Israeli AI access control company Knostic published research this week uncovering a new cyberattack method against AI search engines, one that takes advantage of an unexpected attribute: impulsiveness. The researchers demonstrate how AI chatbots such as ChatGPT and Microsoft's Copilot can be made to reveal sensitive data by bypassing their security mechanisms.
The method, called Flowbreaking, exploits an architectural gap in large language model (LLM) systems: in certain situations the system has already 'spat out' data before the security layer has had time to check it. The system then erases the data, like a person who regrets what they have just said. Although the data is erased within a fraction of a second, a user who captures an image of the screen can document it.
Knostic cofounder and CEO Gadi Evron, who previously founded Cymmetria, said, "LLM systems are built from multiple components, and it is possible to attack the interface between the different components." The researchers demonstrated two vulnerabilities that exploit the new method. The first, called 'the second computer', causes the LLM to send an answer to the user before it has undergone a security check; the second, called "Stop and Flow", takes advantage of the stop button to receive an answer before it has undergone filtering.
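The race condition at the heart of such attacks can be illustrated with a minimal sketch. This is a hypothetical simulation, not Knostic's actual findings or any vendor's real pipeline: tokens are streamed to the client immediately while a (deliberately slower) guardrail check runs in a parallel thread, so by the time the check flags the output and "retracts" it, the client has already seen everything.

```python
import threading
import time

def stream_response(tokens, moderation_delay, on_token, on_retract):
    """Simulate streaming LLM output while a guardrail runs concurrently.

    Hypothetical sketch: `moderation_delay` models a security check that
    finishes only after some tokens have already reached the client.
    """
    delivered = []

    def moderate():
        # The guardrail completes after `moderation_delay` seconds.
        time.sleep(moderation_delay)
        if any("SECRET" in t for t in delivered):
            # Too late: the client has already rendered these tokens.
            on_retract(list(delivered))

    checker = threading.Thread(target=moderate)
    checker.start()
    for t in tokens:
        delivered.append(t)
        on_token(t)        # token reaches the client immediately
        time.sleep(0.01)   # streaming cadence
    checker.join()
    return delivered
```

A client that records the stream (analogous to screenshotting the chat window) keeps the sensitive token even though the retraction callback fires; the fix, conceptually, is to gate delivery on the check completing rather than running them side by side.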
Published by Globes, Israel business news - en.globes.co.il - on November 26, 2024.
© Copyright of Globes Publisher Itonut (1983) Ltd., 2024.