In a warning issued on 10 November 2025, current and former military officers told Defense News that many widely-deployed artificial intelligence (AI) chatbots carry a hidden vulnerability that could be exploited by adversaries to sow chaos, steal data or manipulate trusted users. The risk centers on “prompt injection” attacks—where hidden or malicious instructions are embedded in content that a chatbot processes, causing unintended behavior.
Background
Large-language models (LLMs) underpin modern AI chatbots and assistants: they analyze large volumes of user text, context and system instructions to generate responses. Because many such systems cannot reliably distinguish between legitimate user instructions and malicious ones, the vulnerability of prompt injection is now receiving attention in defense circles. As one analyst described, “the AI is not smart enough to understand that it has an injection inside, so it carries out something it’s not supposed to do.”
The problem is particularly acute as militaries and defense contractors increasingly adopt AI assistants, automated workflows and decision-support tools that integrate LLMs. A breach or manipulation in such systems could thus have wide-ranging consequences.
Details: What the Experts are Saying
According to reports, adversaries — including state-backed actors from countries such as China and Russia — are already using advanced tools to exploit LLM-driven chatbots like ChatGPT, Gemini and Copilot. The exploitation can range from creating malware and fake personas to issuing hidden instructions to a chatbot to extract data or influence decisions.
For instance, security researcher Liav Caspi — a former member of the Israel Defence Forces cyber-warfare unit and co-founder of the firm Legit Security — explained that prompt injection can effectively turn an insider:
“It’s like having a spy in your ranks.”
One marked example: a prompt injection attack was shown against Microsoft Copilot which may have allowed the chatbot to be tricked into stealing sensitive data such as emails. Another researcher demonstrated an attack on ChatGPT’s “Atlas” browser-based model, where a hidden instruction caused the bot to respond “Trust No AI” when asked to analyze a seemingly innocuous document on horses.
In response, tech firms are stating that prompt injection is a known and evolving threat. For example, Microsoft stated that its security team “continuously tries hacking Copilot to find any prompt injection vulnerabilities” and monitors abnormal behavior in its generative-AI systems. Meanwhile, OpenAI’s chief information security officer noted that “prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make [ChatGPT] agent fall for these attacks.”
Expert & Policy Perspective
To limit the impact of prompt injection, experts suggest organizations using AI assistants must adopt risk-management strategies rather than relying purely on model-level guardrails. Caspi advised that organizations limit an AI assistant’s access to sensitive data and compartmentalize the system so that even if attacked it cannot reach the broader enterprise data footprint.
In the U.S. military context, the U.S. Army awarded contracts worth at least US$11 million for its “Ask Sage” tool, which is designed to allow users to query publicly-approved data via Azure OpenAI, Gemini and other models — while isolating Army data from external user prompts and uncontrolled sources.
However, the security challenge remains complex: prompt injection is conceptually different from traditional cyber threats and lacks a one-size-fits-all fix. As one research summary explained, the root weakness is that the model cannot reliably distinguish system instructions, user commands and malicious payloads embedded in content.
Closing: Impact & What’s Next
The warning issued by military and cyber experts over prompt injection in AI chatbots signals that generative-AI systems are now a significant part of the defence-tech threat-landscape. If exploited, these vulnerabilities could enable adversaries to manipulate decision-support tools, exfiltrate data, influence public opinion or act inside trusted networks without detection.
As defense organizations integrate chatbots and AI assistants into operations, the emphasis now shifts to rigorous threat-modelling, continuous red-teaming of AI systems, compartmentalization of data access and robust monitoring of AI-driven workflows. Without such measures, the risk remains that what appears a benign assistant could serve as an “insider” agent for adversarial exploitation.
Get real time update about this post category directly on your device, subscribe now.
11 comments
[…] Experts warn that any re-opening of nuclear testing — even in non-explosive form — could undermine longstanding testing taboos, giving breathing space for states to embark on new testing programs and weakening verification and monitoring regimes. The IDSA commentary emphasizes: […]
[…] some experts warn of potential fallout: Western countries may demand tighter export monitoring, while Russia could […]
[…] triggered a ceasefire between Russian and Ukrainian forces to allow IAEA-supervised repairs. But critics warn the crisis could be politically manufactured to tighten Russian control over the […]
[…] Iran’s ballistic missile program remains one of the most expansive in the Middle East. Over the past year, Tehran has conducted launches in response to regional security incidents, support operations for proxy factions, and domestic demonstrations of capability. Many of these activities drew international criticism and sanctions-related warnings. […]
[…] Security researchers have documented scenarios where chatbots, once “jailbroken” or manipulated, produced harmful outputs such as hacking instructions or disinformation. […]
[…] expanded the military’s role in deterring crossings between ports of entry, a move some legal experts argue may violate federal restrictions against military law enforcement on domestic […]
[…] Dassault Aviation would remain the prime contractor, with French firms retaining control over critical technologies. Indian industry participation would likely involve assembly support, spares production, and long […]
[…] Space Force required substantial budget growth as a newly established service. Growing awareness of space as a critical warfighting domain has reshaped the political landscape surrounding space defense […]
[…] the same time, experts warn that Iran retains asymmetric response options. These include harassment of commercial shipping, […]
[…] to Colin Whelan, president of Advanced Technology at Raytheon, the development advances critical security technologies for commercial shipping in high-threat regions. The system’s scalable architecture and […]
[…] and training. It plays a key role in shaping how NATO adapts to emerging threats, including cyber warfare, space operations, and high end conventional […]