AI in Cybercrime: State Hackers Refine Attacks, OpenAI Says
A new threat report from OpenAI reveals that state-sponsored hackers and cybercriminals are using AI to enhance existing attack methods rather than to invent new ones. Government-linked groups have been observed using large language models to conduct reconnaissance, craft phishing emails, and streamline malware development workflows. The report also details how organized scam centers use AI to generate convincing fraudulent content and to manage their illicit day-to-day operations. Researchers highlighted the dual-use challenge: threat actors bypass safety measures by requesting seemingly harmless pieces of code that are later assembled for malicious purposes. Despite this misuse, OpenAI notes that the public uses its models to identify and avoid scams up to three times more often than bad actors use them to create scams.