Artificial Intelligence (AI) has become one of the most powerful tools of our time, reshaping industries, boosting productivity, and enabling new forms of creativity. But AI is not only a force for good: it is also being weaponized in ways that expose new vulnerabilities in both humans and machines. Two recent developments highlight this dangerous shift: (i) the discovery of PromptLock, the first AI-powered ransomware, and (ii) the introduction of the Human Layer Kill Chain, a framework that shows how AI-enhanced attacks target human psychology as much as technical systems.

The PromptLock Case
In August 2025, cybersecurity researchers at ESET published a discovery that should put the AI governance community on alert: PromptLock, the first known AI-powered ransomware. PromptLock integrates a locally accessible language model to autonomously generate malicious scripts and decide whether to exfiltrate or encrypt files. Unlike traditional ransomware, which follows a static script, it adapts in real time, choosing strategies based on context and prompts. It was also designed to run across Windows, macOS, and Linux, dramatically expanding its reach.
To understand why this matters for AI governance, it helps to look back at IBM Research’s 2018 proof-of-concept, DeepLocker. DeepLocker concealed its payload inside a benign application and used a deep neural network to unlock or trigger the attack only when a specific target was recognized, enabling stealth and precision during delivery and execution. However, DeepLocker did not generate new code or adapt after infection; it served primarily as a covert trigger that decided when to attack, not how to adapt or evolve once inside. By contrast, generative threats such as PromptLock go a step further: the goal now is autonomy and adaptability after landing, shifting tactics at runtime and crafting content on the fly.
Beyond Code: AI Attacks That Target Human Vulnerabilities
While PromptLock highlights AI’s role in powering technical exploits, the research behind the Human Layer Kill Chain shows that AI is equally dangerous in the social and psychological domain. The framework describes how adversaries weaponize AI not just to break software, but to manipulate psychology and emotions and to mimic trusted entities. Classic models like Lockheed Martin’s Cyber Kill Chain map cyber-attacks across seven phases, but they mostly reduce the human element to a single social-engineering step.
In reality, human interaction can run through the entire lifecycle of a modern attack. Ransomware, for example, does not end with code execution; it depends on pressuring a victim to pay. With the rise of generative AI technologies, attacks have become far more effective at exploiting human psychology and behavior. The framework breaks the human element of an attack into eight stages (target profiling, vulnerability assessment, attack design, trust establishment, emotional triggering, sustained engagement, action manipulation, and operational cleanup) and shows how AI can amplify each step, for example through profiling at scale, deepfake and voice-clone impersonation, and adaptive dialogue that keeps victims engaged.
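To make the stages concrete, the short Python sketch below simply encodes the eight stages and the AI-amplification examples mentioned above, for instance as a starting point for mapping training exercises or detections onto them. It is purely illustrative: the names and structure are my own assumptions, not part of the framework’s publication.

    from enum import Enum

    class HumanLayerStage(Enum):
        """The eight human-layer stages described by the framework."""
        TARGET_PROFILING = 1
        VULNERABILITY_ASSESSMENT = 2
        ATTACK_DESIGN = 3
        TRUST_ESTABLISHMENT = 4
        EMOTIONAL_TRIGGERING = 5
        SUSTAINED_ENGAGEMENT = 6
        ACTION_MANIPULATION = 7
        OPERATIONAL_CLEANUP = 8

    # Examples from the text of how AI can amplify individual stages.
    AI_AMPLIFICATION = {
        HumanLayerStage.TARGET_PROFILING: "profiling at scale",
        HumanLayerStage.TRUST_ESTABLISHMENT: "deepfake/voice-clone impersonation",
        HumanLayerStage.SUSTAINED_ENGAGEMENT: "adaptive dialogue that keeps victims engaged",
    }

    # Print the stages in order, with the AI amplification noted where given.
    for stage in HumanLayerStage:
        note = AI_AMPLIFICATION.get(stage)
        label = stage.name.replace("_", " ").title()
        print(f"{stage.value}. {label}" + (f" (AI: {note})" if note else ""))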
AI attacks are moving beyond code. They no longer exploit only vulnerabilities in software; they also target vulnerabilities in human nature. For organizations, this means training must evolve: traditional awareness programs are no longer enough in the age of AI-generated, personalized attacks. Awareness must expand so that employees understand the evolving spectrum of AI-related risks and are prepared for what is coming.
