Generic selectors
Exact matches only
Search in title
Search in content
Post Type Selectors
Home » Military Experts Warn of Critical ‘Prompt Injection’ Security Hole in Most AI Chatbots

Military Experts Warn of Critical ‘Prompt Injection’ Security Hole in Most AI Chatbots

Adversaries could exploit weaknesses in large language-model chatbots to manipulate data, facilitate espionage or destabilize systems, warn former and current military officers.

by TeamDefenseWatch
11 comments 3 minutes read
AI chatbots security

In a warning issued on 10 November 2025, current and former military officers told Defense News that many widely-deployed artificial intelligence (AI) chatbots carry a hidden vulnerability that could be exploited by adversaries to sow chaos, steal data or manipulate trusted users. The risk centers on “prompt injection” attacks—where hidden or malicious instructions are embedded in content that a chatbot processes, causing unintended behavior.

Background

Large-language models (LLMs) underpin modern AI chatbots and assistants: they analyze large volumes of user text, context and system instructions to generate responses. Because many such systems cannot reliably distinguish between legitimate user instructions and malicious ones, the vulnerability of prompt injection is now receiving attention in defense circles. As one analyst described, “the AI is not smart enough to understand that it has an injection inside, so it carries out something it’s not supposed to do.”

The problem is particularly acute as militaries and defense contractors increasingly adopt AI assistants, automated workflows and decision-support tools that integrate LLMs. A breach or manipulation in such systems could thus have wide-ranging consequences.

Details: What the Experts are Saying

According to reports, adversaries — including state-backed actors from countries such as China and Russia — are already using advanced tools to exploit LLM-driven chatbots like ChatGPT, Gemini and Copilot. The exploitation can range from creating malware and fake personas to issuing hidden instructions to a chatbot to extract data or influence decisions.

For instance, security researcher Liav Caspi — a former member of the Israel Defence Forces cyber-warfare unit and co-founder of the firm Legit Security — explained that prompt injection can effectively turn an insider:

“It’s like having a spy in your ranks.”

One marked example: a prompt injection attack was shown against Microsoft Copilot which may have allowed the chatbot to be tricked into stealing sensitive data such as emails. Another researcher demonstrated an attack on ChatGPT’s “Atlas” browser-based model, where a hidden instruction caused the bot to respond “Trust No AI” when asked to analyze a seemingly innocuous document on horses.

In response, tech firms are stating that prompt injection is a known and evolving threat. For example, Microsoft stated that its security team “continuously tries hacking Copilot to find any prompt injection vulnerabilities” and monitors abnormal behavior in its generative-AI systems. Meanwhile, OpenAI’s chief information security officer noted that “prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make [ChatGPT] agent fall for these attacks.”

Expert & Policy Perspective

To limit the impact of prompt injection, experts suggest organizations using AI assistants must adopt risk-management strategies rather than relying purely on model-level guardrails. Caspi advised that organizations limit an AI assistant’s access to sensitive data and compartmentalize the system so that even if attacked it cannot reach the broader enterprise data footprint.

In the U.S. military context, the U.S. Army awarded contracts worth at least US$11 million for its “Ask Sage” tool, which is designed to allow users to query publicly-approved data via Azure OpenAI, Gemini and other models — while isolating Army data from external user prompts and uncontrolled sources.

However, the security challenge remains complex: prompt injection is conceptually different from traditional cyber threats and lacks a one-size-fits-all fix. As one research summary explained, the root weakness is that the model cannot reliably distinguish system instructions, user commands and malicious payloads embedded in content.

Closing: Impact & What’s Next

The warning issued by military and cyber experts over prompt injection in AI chatbots signals that generative-AI systems are now a significant part of the defence-tech threat-landscape. If exploited, these vulnerabilities could enable adversaries to manipulate decision-support tools, exfiltrate data, influence public opinion or act inside trusted networks without detection.

As defense organizations integrate chatbots and AI assistants into operations, the emphasis now shifts to rigorous threat-modelling, continuous red-teaming of AI systems, compartmentalization of data access and robust monitoring of AI-driven workflows. Without such measures, the risk remains that what appears a benign assistant could serve as an “insider” agent for adversarial exploitation.

Get real time update about this post category directly on your device, subscribe now.

You may also like

11 comments

U.S. Consideration to Resume Nuclear Testing Signals Risk of New Era in Weapons Testing November 13, 2025 - 3:42 am

[…] Experts warn that any re-opening of nuclear testing — even in non-explosive form — could undermine longstanding testing taboos, giving breathing space for states to embark on new testing programs and weakening verification and monitoring regimes. The IDSA commentary emphasizes: […]

Reply
Pakistan’s Defense Posture Tightens: BBC Alleges Arms Sales to Ukraine November 17, 2025 - 12:21 pm

[…] some experts warn of potential fallout: Western countries may demand tighter export monitoring, while Russia could […]

Reply
Russia’s “Slow Creep” on Zaporizhzhia Plant Fuels Nuclear Sabotage Fears November 19, 2025 - 10:23 am

[…] triggered a ceasefire between Russian and Ukrainian forces to allow IAEA-supervised repairs. But critics warn the crisis could be politically manufactured to tighten Russian control over the […]

Reply
Iran Nearly Rebuilds Ballistic Missile Arsenal Following Heavy Losses, Report Says November 26, 2025 - 10:21 pm

[…] Iran’s ballistic missile program remains one of the most expansive in the Middle East. Over the past year, Tehran has conducted launches in response to regional security incidents, support operations for proxy factions, and domestic demonstrations of capability. Many of these activities drew international criticism and sanctions-related warnings. […]

Reply
Military Experts Warn of Major Security Hole in AI Chatbots — Threat to U.S. Cyber and Defense Networks December 8, 2025 - 5:15 am

[…] Security researchers have documented scenarios where chatbots, once “jailbroken” or manipulated, produced harmful outputs such as hacking instructions or disinformation. […]

Reply
Trump Administration Expands Militarized Zone Along California Border December 11, 2025 - 11:53 am

[…] expanded the military’s role in deterring crossings between ports of entry, a move some legal experts argue may violate federal restrictions against military law enforcement on domestic […]

Reply
India Set to Approve $36 Billion Rafale Fighter Deal With France | TheDefenseWatch.com January 13, 2026 - 11:18 pm

[…] Dassault Aviation would remain the prime contractor, with French firms retaining control over critical technologies. Indian industry participation would likely involve assembly support, spares production, and long […]

Reply
Space Force FY27 Budget Increase: Personnel Expansion and Modernization Plans Revealed | TheDefenseWatch.com January 25, 2026 - 10:39 am

[…] Space Force required substantial budget growth as a newly established service. Growing awareness of space as a critical warfighting domain has reshaped the political landscape surrounding space defense […]

Reply
US Iran Blockade Option Gains Attention Amid Regional Tensions | TheDefenseWatch.com January 25, 2026 - 11:12 am

[…] the same time, experts warn that Iran retains asymmetric response options. These include harassment of commercial shipping, […]

Reply
Raytheon Wins DARPA Contract to Develop Maritime Defense System Against Drone Threats | TheDefenseWatch.com February 2, 2026 - 11:37 am

[…] to Colin Whelan, president of Advanced Technology at Raytheon, the development advances critical security technologies for commercial shipping in high-threat regions. The system’s scalable architecture and […]

Reply
US To Transfer Two NATO Command Posts To European Leadership | TheDefenseWatch.com February 10, 2026 - 12:31 am

[…] and training. It plays a key role in shaping how NATO adapts to emerging threats, including cyber warfare, space operations, and high end conventional […]

Reply

Leave a Comment

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy