A new 'lies-in-the-loop' (LITL) attack exploits AI coding agents by hiding malicious code within seemingly harmless prompts, bypassing human review. This remote code execution vulnerability successfully tricked developers using Claude Code into approving dangerous actions, posing a risk to software supply chains. The attack underscores the need for user education and robust security controls as AI agents become more integrated into development processes.
Latest mentioned: 09-15
Earliest mentioned: 09-12