Feb 19, 2024
Ross Lazerowitz
Co-Founder and CEO
Microsoft and OpenAI have confirmed concerns that GPT is being used by nation-state attackers to aid their operations. I applaud both companies for their transparency and safety efforts. The security world is scarcely ready for the level of sophistication AI is already unlocking: last-generation security tools relied on static, fuzzy detections, and with AI, attackers can now adapt much faster than defensive tools.
Who were the threat actors?
The researchers attributed the unsanctioned use to the following nation-state threat actor groups:
Forest Blizzard (STRONTIUM): Russian military intelligence operations; used LLMs for reconnaissance and scripting.
Emerald Sleet (THALLIUM): North Korean operations; used LLMs for phishing, vulnerability research, and social engineering.
Crimson Sandstorm (CURIUM): Iranian threat actor; used LLMs for social engineering, scripting, and evasion techniques.
Charcoal Typhoon (CHROMIUM): Chinese state-affiliated actor; limited exploration of LLMs in operations.
Salmon Typhoon (SODIUM): Chinese state-affiliated actor focused on US defense and technology; exploratory use of LLMs.
What did they do?
In the ever-changing cyber-warfare landscape, attackers are leveraging OpenAI's generative AI technologies to bolster their operations, spanning intelligence gathering, social engineering, and technical tradecraft. The identified threat actors, from nation-states to cybercrime syndicates, have applied large language models (LLMs) to complex attacks across various sectors.
Key activities undertaken by these attackers include:
LLM-Informed Reconnaissance: Actors conducted deep research into industries, technologies, and vulnerabilities, leveraging AI to gain actionable intelligence on potential targets. This enabled them to craft more targeted and effective attacks, as seen with Forest Blizzard's research into satellite and radar technologies.
LLM-Enhanced Scripting and Automation: Threat actors utilized AI to assist in developing and optimizing malicious scripts and automation tools. This increased the efficiency of their cyber operations and allowed for more complex and nuanced attacks, with Crimson Sandstorm leveraging LLMs for .NET development and evasion techniques.
LLM-Supported Social Engineering: AI technologies were employed to craft more convincing phishing campaigns and social engineering tactics. By generating content tailored to the interests and backgrounds of specific targets, actors like Emerald Sleet could increase the success rate of their phishing efforts.
LLM-Assisted Vulnerability Research and Exploitation: Threat actors used AI to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation. This approach helped them stay ahead of security defenses by pinpointing and leveraging weaknesses before they could be patched.
Anomaly Detection Evasion: By leveraging LLMs, attackers developed methods to refine their command execution and evade detection systems, blending their activities with normal behavior or traffic. This sophisticated use of AI made it more challenging for defenders to identify and mitigate threats promptly.
What does it mean for the future?
There’s still a lot we don’t know about these attacks, but if I were to speculate and read between the lines, I’d wager that the turn to commercial models signals either that GPT exceeds the attackers’ clandestine models, or simply that they are using GPT to benchmark their internal tools.
Personally, I'm concerned about what this means for smaller model providers and the open-source world. Smaller companies have nowhere near the security resources that the Microsoft apparatus provides to OpenAI through their partnership, and it should be no surprise that open-source advances will directly benefit our adversaries.
What should the takeaway be for security teams? While adversarial use of AI is novel, the tactics remain unchanged, just supercharged. I'd accelerate existing zero-trust and modernization initiatives.
You can read more about the attack on the Microsoft Security Blog: "Staying ahead of threat actors in the age of AI."
Try Mirage
Learn how to protect your organization from spearphishing.