What Is the “Improv AI” Concept?
The term “Improv AI” — as used in recent public chatter — captures Lockheed Martin’s evolving push to embed adaptive, real‑time artificial intelligence (AI) into battle‑management and command‑and‑control (C2) systems. At its core is the idea of AI that can “improvise on the fly” — reacting to unexpected developments mid‑fight, filling in gaps human operators may miss, and dynamically managing sensors, shooters, and data flows across domains.
That push dovetails with the broader U.S. Department of Defense (DoD) strategic initiative Combined Joint All-Domain Command and Control (CJADC2), which seeks to network sensors and shooters across air, land, sea, space, and cyber domains into an interconnected mesh — enabling rapid detection, decision, and response at machine speed.
In this context, “Improv AI” is less a single product than a growing portfolio of capabilities: airborne autonomy, AI‑driven data fusion and translation, open‑architecture C2/databuses, and live human‑machine teaming.
What Lockheed Has Demonstrated So Far
Live AI‑piloted Jets in Air‑to‑Air and Crewed‑Uncrewed Missions
- In June 2024, Lockheed Skunk Works, partnering with the University of Iowa Operator Performance Laboratory (OPL), flew full-scale L‑29 Delfin jets in air‑to‑air intercept scenarios under AI control. The AI handled heading, speed, and altitude commands against virtual adversaries, running through eight scenarios that included offensive, defensive, off-aspect, and missile-support/missile-defeat tests. The AI exhibited “intentional and decisive” behavior.
- Later in 2024, in a crewed‑uncrewed teaming (manned/unmanned) demonstration, a human “battle manager” flying in an L‑39 Albatros used a touchscreen pilot‑vehicle interface (PVI) to direct two AI‑controlled L‑29 jets. The pair successfully executed a simulated offensive counter‑air mission, “defeating” two mock enemy aircraft.
- According to Lockheed, these tests mark the start of a broader autonomy roadmap: future flights may include more complex formations, more unmanned systems, and tighter integration with existing or legacy aircraft platforms.

These demonstrations show that autonomous flight — even in kinetic air‑to‑air scenarios — is transitioning from simulation to real‑world flight tests. The “human‑in‑the‑loop” model remains central: AI flies, but humans supervise, assign targets, and provide final decision authority.
CJADC2 Interoperability Factory: AI‑Enabled System‑of‑Systems Integration
In March 2025, Lockheed announced its self‑funded CJADC2 Interoperability Factory — a modular, open‑architecture software stack designed to bridge disparate military platforms and data standards (e.g., sensors, shooters, satellites, legacy systems) into a cohesive, cross-domain network.
- Translates between a multitude of “machine languages” used by different weapon systems — even if they use incompatible or proprietary messaging standards.
- Uses model‑based systems engineering and AI/ML-based “smart translators” to automate and speed up onboarding of new systems — reducing manual integration burden and shortening fielding timelines.
- Supports cross‑domain data exchange — connecting platforms such as land‑based missile launchers, naval systems, aircraft, satellites — enabling “any sensor to any shooter” data flow.
According to Lockheed, initial internal demonstrations have already successfully linked the Open Mission Systems-Universal Command and Control Interface (OMS-UCI) with legacy Link 16–style communications (TADIL-J), as a proof of concept. Future iterations aim to incorporate additional standards (e.g., IBS, MADL) and support even broader platform sets.
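To make the “smart translator” idea concrete, here is a minimal, entirely hypothetical sketch of a translation layer that normalizes a legacy-format track report into a common internal schema. The field names, message shapes, and mappings below are invented for illustration; real formats such as TADIL-J and OMS-UCI are far richer and access-controlled, and Lockheed’s actual implementation is not public.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified stand-ins for two incompatible message
# formats: a legacy Link 16-style track report and a normalized internal
# representation. All field names here are invented for illustration.

@dataclass
class LegacyTrackReport:          # loosely modeled on a J-series track report
    track_number: int
    lat_deg: float
    lon_deg: float
    identity: str                 # "HOSTILE", "FRIEND", "UNKNOWN"

@dataclass
class CommonTrack:                # normalized cross-domain representation
    source: str
    track_id: str
    position: tuple               # (lat, lon) in decimal degrees
    classification: str

# Translation table between the legacy identity codes and the common schema.
IDENTITY_MAP = {"HOSTILE": "hostile", "FRIEND": "friendly", "UNKNOWN": "unknown"}

def translate_legacy(msg: LegacyTrackReport) -> CommonTrack:
    """Translate a legacy-format track into the common schema."""
    return CommonTrack(
        source="link16-gateway",
        track_id=f"L16-{msg.track_number:05d}",
        position=(msg.lat_deg, msg.lon_deg),
        classification=IDENTITY_MAP.get(msg.identity, "unknown"),
    )

report = LegacyTrackReport(track_number=42, lat_deg=36.1, lon_deg=-115.2,
                           identity="HOSTILE")
track = translate_legacy(report)
print(track.track_id, track.classification)  # L16-00042 hostile
```

The point of the sketch is the architectural pattern: each legacy standard gets its own adapter into one shared schema, so adding a new platform means writing one translator rather than N pairwise bridges — which is where the AI/ML-assisted onboarding Lockheed describes would pay off.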
Why This “On‑the‑Fly AI Battle Management” Matters
Shrinking the Kill Chain, Accelerating Decision Speed
In modern, high-end conflict — especially against peer adversaries — reaction time, speed of decision-making, and agility in commanding diverse assets become critical. “Improv AI” helps accelerate what used to be manual, time‑intensive decision and communication loops: sensor data ingestion → classification → target identification → shooter assignment → engagement decision.
By automating aspects of this cycle, AI-enabled battle management can:
- Fuse high-volume multi-domain sensor inputs (air, space, land, sea, cyber) in near-real time.
- Allocate shooters (aircraft, missiles, drones) dynamically, even under communications stress or partial connectivity.
- Respond to unexpected events — such as decoys, jamming, electronic warfare — faster than human-only systems might allow.

In effect, that could compress kill‑chains from minutes to seconds, giving a decisive tempo advantage — something especially relevant under the umbrella of Joint All-Domain Command and Control (JADC2) doctrine.
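The decision loop described above can be sketched as a toy pipeline — ingest, classify, assign a shooter, then gate the engagement on explicit human approval. Everything here (the thresholds, the greedy allocation, the data fields) is invented for illustration; real battle-management logic is vastly more complex and classified.

```python
# Toy sketch of the sensor-to-shooter loop: ingest -> classify ->
# assign shooter -> human engagement authorization. All values invented.

def classify(detection: dict) -> dict:
    # Stand-in for an AI/ML classifier: flag fast, inbound contacts.
    detection["hostile"] = detection["speed_kts"] > 500 and detection["inbound"]
    return detection

def assign_shooter(target: dict, shooters: list) -> dict:
    # Greedy allocation: nearest shooter that still has weapons remaining.
    ready = [s for s in shooters if s["weapons"] > 0]
    return min(ready,
               key=lambda s: abs(s["range_nm"] - target["range_nm"]),
               default=None)

def run_kill_chain(detections, shooters, human_approves):
    engagements = []
    for d in map(classify, detections):
        if not d["hostile"]:
            continue
        shooter = assign_shooter(d, shooters)
        # Human-on-the-loop: the engagement proceeds only with approval.
        if shooter and human_approves(d, shooter):
            shooter["weapons"] -= 1
            engagements.append((d["id"], shooter["id"]))
    return engagements

detections = [
    {"id": "T1", "speed_kts": 620, "inbound": True,  "range_nm": 80},
    {"id": "T2", "speed_kts": 250, "inbound": False, "range_nm": 40},
]
shooters = [
    {"id": "S1", "weapons": 2, "range_nm": 90},
    {"id": "S2", "weapons": 0, "range_nm": 70},
]
print(run_kill_chain(detections, shooters, human_approves=lambda t, s: True))
# [('T1', 'S1')]
```

Note where the human sits: automation compresses classification and allocation to machine speed, but the `human_approves` gate is what keeps engagement authority with an operator — the model the Lockheed demonstrations emphasize.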
Interoperability: Legacy + Future Assets, Allies + Partners
Not all platforms in use today — or even coming online in the next decade — were built with common data standards or C2 interoperability in mind. The Interoperability Factory addresses precisely that problem: enabling legacy systems, modern fighters, ground weapons, satellites, and future unmanned platforms to “speak” to each other.
This helps not just U.S. services, but allied and partner militaries. Under CJADC2’s “Combined” framework, coalition forces — with differing equipment, standards, and legacy systems — could theoretically share sensor and targeting data in real time. That interoperability could prove decisive in high-end multi-domain operations where multinational coordination is key.
Limitations, Challenges and What Remains Unclear
While promising, the path to fully operational “improv‑AI battle management” faces several constraints and risks:
- Human Oversight Required: Even in the most advanced demos, humans remain “on the loop.” The AI may generate commands, maneuvers, or target assignments — but final decisions and engagement authorizations still rest with human operators.
- Integration and Interoperability Complexity: Building translators between dozens of heterogeneous systems — sensors, weapons, communications standards — is not trivial. The Interoperability Factory aims to solve that, but wide-scale fielding across U.S. and allied platforms will require rigorous testing, verification, and standardization.
- Adversary Countermeasures: In contested environments, adversaries may deploy jamming, deception, cyber‑attacks, and other counter‑AI or counter‑C2 measures. The resilience of AI‑enabled systems under such stress remains to be tested — especially at scale and under real-world warfighting conditions.
- Ethical, Legal and Command‑Control Considerations: The shift of decision‑making speed and partial autonomy raises questions about responsibility, escalation control, and risk of unintended engagements. While programs so far emphasize “humans in the loop,” real‑world pressures may challenge that model.

Implications & What Comes Next
The progress made by Lockheed Martin signals that “adaptive AI battle management” is moving from speculative future concept to tangible capability — but it remains a layered, gradual evolution. Key implications and next steps:
- From Tests to Integration: Expect more complex flight tests with larger numbers of AI‑controlled platforms — drones, UAVs, mixed manned/unmanned formations — increasing realism and expanding the operational envelope.
- Accelerated Adoption Under CJADC2: The Interoperability Factory could speed up fielding of integrated, AI-enabled C2 networks across services and allies, making JADC2 less vision and more operational doctrine.
- Doctrinal Shift — Decision Velocity Over Platform Excellence: As multi-domain operations accelerate, success may depend less on individual platform performance and more on network speed, data fusion, and decision agility. This could reshape procurement and force-structure priorities.
- Operational & Legal Frameworks Needed: As automation increases, DoD and allied militaries will need doctrine, rules of engagement, and command frameworks to govern how, when, and under what circumstances AI-driven systems can act — especially in contested or coalition environments.
FAQs
What does “human‑in‑the‑loop” mean in this context?
It means that while AI may pilot, navigate, or manage aspects of a mission (flight control, data fusion, targeting recommendations), a human operator retains ultimate control and decision authority — particularly for lethal engagements or mission‑critical decisions.
Can legacy systems and new platforms really communicate with each other?
That’s exactly the aim of the CJADC2 Interoperability Factory: to provide a software-based “translator” layer that enables old and new systems — even if they use different communications or data protocols — to exchange information seamlessly.
Does this mean AI will replace human pilots and commanders?
Not at this stage. The demonstrations underscore AI as a force multiplier and decision support tool — not as a wholesale replacement for human judgment or command oversight.
What are the main risks?
Risks include adversary electronic/cyber countermeasures, system vulnerabilities, integration failures, mis‑identification or targeting errors, and ethical/legal challenges around autonomous or semi-autonomous engagements.
When could these capabilities be fielded?
While flight tests and internal demonstrations are underway now (2024–2025), broad deployment — especially in coalition, multi-domain contexts — likely depends on further testing, standardization, and doctrinal adoption. The 2020s to early 2030s seem a realistic window, depending on policy, funding, and adoption pace.