In recent years, the criminal ransomware market has run on a reproducible model: pre-packaged kits, payment infrastructure, and affiliates who run the campaigns (Ransomware-as-a-Service, RaaS).
The emergence of local AI tools changes the attack surface. PromptLock, a demonstration prototype developed in an academic environment, has shown that a locally hosted LLM can act as the “decision engine” of a ransomware operation, generating dynamic payloads and orchestrating the attack cycle without relying on a traditional C2.
This requires a review of defensive TTPs: no longer signature-based detection alone, but behavioral telemetry and governance of the AI models present in the infrastructure.
PromptLock operating mechanics
PromptLock is structured as an orchestrator (implemented in Go) that sends text prompts to a local LLM (e.g., gpt-oss-20b exposed via Ollama). The LLM generates Lua code on the fly, which is executed on the compromised system. The main phases are: file-system reconnaissance, scoring files by value (target selection), conditional exfiltration, and selective encryption. Each phase follows the pattern prompt – code – execution, in a loop repeated until the attack completes.
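The prompt – code – execution loop described above can be sketched schematically in Go. This is a defensive illustration of the control flow only: `generate` and `executeLua` are hypothetical stubs (no model is queried and no script is run), standing in for the call to a local model endpoint and for the Lua interpreter respectively.

```go
package main

import "fmt"

// generate stands in for a call to a local model endpoint
// (e.g. via Ollama); stubbed with canned output so nothing is queried.
func generate(prompt string) string {
	return "-- Lua generated for: " + prompt
}

// executeLua stands in for handing a generated script to a Lua
// interpreter; here it only records what would have run.
func executeLua(script string, log *[]string) {
	*log = append(*log, script)
}

func main() {
	// High-level flow supplied by the operator; the model fills in the code.
	phases := []string{
		"enumerate the file system and list candidate files",
		"score files by likely value and select targets",
		"exfiltrate selected files if a network path exists",
		"selectively encrypt the chosen files",
	}
	var executed []string
	for _, phase := range phases {
		script := generate(phase)     // prompt -> code
		executeLua(script, &executed) // code -> execution
	}
	fmt.Println(len(executed), "scripts executed") // 4 scripts executed
}
```

The point the sketch makes is structural: the orchestrator carries no payload of its own, only the loop and the phase descriptions.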
Key technical features:
- Non-deterministic output: the LLM, stochastic by nature, produces a functional variant of the payload on each run (different variable names, different instruction ordering), complicating the generation of static IOCs.
- Offline-first: the engine can operate entirely locally, so it does not necessarily appear in network logs the way a typical C2 channel would.
- Decision delegation: the attacker provides the high-level flow; the LLM makes operational decisions (e.g., which files to exfiltrate first), reducing direct human control.
Architecture and operational flow
Orchestrator (Go)
        |
        v
Prompt (natural language) -> Local LLM
        |
        v
Lua code generation (scan / exfil / encrypt)
        |
        v
Script execution -> observable telemetry (EDR)
        |
        v
New prompt / next iteration
From a defensive standpoint, the useful signals remain behavioral: repeated execution of interpreters/scripts, I/O spikes, mass creation of encrypted archives, and suspicious process launches.
Differences from traditional ransomware
The operational discrepancies with the classic RaaS model are significant:
- No static locker: there is no identifiable monolithic executable; the payload is generated dynamically.
- Reduced network telemetry: the absence of persistent C2 contacts makes network IOC techniques less effective.
- Intrinsic polymorphism: the LLM introduces functional variability, hindering hash- or string-based matching.
- Greater automation: the need for manual skills to create exploits or custom payloads is reduced.
These characteristics require a defensive repositioning towards behavior-driven solutions, runtime integrity checks, and execution environment control.
Limitations of the PoC and possible vectors for evolution
The prototype is incomplete: it lacks robust modules for persistence, privilege escalation, and lateral movement. Some technical choices (simple encryption algorithms, gaps in the data-destruction mechanisms) point to a demonstrative purpose. The architecture, however, is modular: by integrating exploit chains, credential theft, and AI orchestration, it could reach capabilities comparable to high-end ransomware. The operating cost is also low: local open-weight models eliminate recurring API costs, lowering the entry threshold for actors with limited technical capabilities.
Underground reaction and risk of weaponization
The reaction on criminal forums was swift: well-known figures in the RaaS landscape commented favorably on the technical concept. This signals pragmatic interest: an LLM orchestrator could be offered to affiliates as a “smart kit” that produces payloads on demand, evolving the RaaS model toward a hypothetical “LLM-as-a-threat” offering. Weaponization would still require integrations (persistence, lateral movement, data exfiltration), but the trajectory is technically feasible.
Implications for defense and practical countermeasures
To mitigate this emerging class of threats, the following is necessary:
- Control of local model execution (whitelisting, container isolation, execution policies).
- MDR with behavioral telemetry and managed response: identify patterns such as massive execution of interpreters/scripts, I/O spikes, or processes that produce numerous encrypted archives, and activate operational countermeasures.
- Network restrictions for unsigned processes and egress/ingress control for new executables.
- Governance policies for LLM in the enterprise: who can deploy models, in which environments, and with which prompt logging.
- Immutable backups and recovery testing to reduce the economic leverage of extortion.
Conclusions
PromptLock is a proof-of-concept that crystallizes a trend: AI can become the decision-making layer of an attack, making it more adaptive and less tied to static signatures. The RaaS model will not disappear immediately, but it could rapidly evolve toward hybrid solutions where the LLM orchestrator reduces operational complexity for affiliates.
Defense and detection must anticipate: control planes for local models, advanced telemetry, and governance over execution flows will become critical levers in responding to the new generation of ransomware.
Analysis by Vasily Kononov – Threat Intelligence Lead, CYBEROO