Researchers have demonstrated a novel AI-powered ransomware proof-of-concept that autonomously orchestrates the full attack lifecycle, highlighting a significant emerging threat.

- The technique leverages large language models (LLMs) to dynamically synthesize polymorphic malicious code, adapting to the target environment to carry out reconnaissance, payload generation, and personalized extortion without human intervention.
- The proof-of-concept evaded detection by major antivirus vendors: because the code is regenerated on each run, there is no stable signature and no consistent telemetry to track (see the sketch after this list).
- The findings underscore how readily LLMs can be co-opted for cybercriminal operations, posing unique detection challenges and raising concerns about the effectiveness of current AI safety features.
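
The detection-evasion claim follows from how signature-based scanning works: a static signature keys on stable bytes, and code regenerated per infection has none. Below is a minimal, defender-side Python sketch of that problem; the payload strings are inert placeholders standing in for two functionally equivalent code variants, not real malware.

```python
import hashlib

def sha256_signature(payload: bytes) -> str:
    """Return the SHA-256 digest a signature-based scanner might match on."""
    return hashlib.sha256(payload).hexdigest()

# Two hypothetical payload variants with identical behavior but different
# bytes (e.g., reordered statements, renamed identifiers, added padding).
variant_a = b"encrypt_targets(); deliver_ransom_note();"
variant_b = b"deliver_ransom_note(); encrypt_targets();  # pad"

print(sha256_signature(variant_a))  # digest for variant_a
print(sha256_signature(variant_b))  # an entirely different digest

# A signature built from variant_a (its hash or a byte pattern) will not
# match variant_b. Malware that has an LLM regenerate its code on every
# infection yields a fresh variant each time, so no stable hash or byte
# signature ever exists for scanners to key on.
assert sha256_signature(variant_a) != sha256_signature(variant_b)
```

This is also why the researchers point to varying telemetry as a tracking problem: behavior-level detection must replace byte-level matching when every sample is unique.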