Snowflake Cortex AI Escapes Sandbox and Executes Malware
A real prompt injection attack bypassed Snowflake's Cortex Agent sandbox by hiding malicious instructions in a GitHub README, demonstrating how attackers can escape AI safety controls in production systems. The attack used shell process substitution to execute malware that the sandbox's classifier incorrectly marked as safe, a wake-up call for engineers building agentic applications.
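To see why process substitution is an effective evasion trick, consider a minimal sketch (this is an illustrative toy, not Snowflake's actual classifier or the real payload): a naive filter that looks for the familiar `| bash` pattern never sees the process-substitution form, yet the command still executes attacker-controlled input.

```shell
#!/usr/bin/env bash
# Hypothetical command string an agent might be asked to run.
# Process substitution (<(...)) feeds a script to bash without a pipe.
cmd='bash <(echo "echo executed-via-process-substitution")'

# A naive denylist that only flags piping into bash misses this form:
if [[ "$cmd" == *"| bash"* ]]; then
  echo "blocked"
else
  echo "allowed"
fi
# Prints "allowed": no "| bash" substring appears in the command.

# The command nonetheless runs the embedded script:
eval "$cmd"
```

Because `<(...)` materializes the attacker's script as a file descriptor (e.g. `/dev/fd/63`) at execution time, substring- or pattern-based safety checks on the command text can pass while arbitrary code still runs.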