Multi-Domain AI and the New Era of Command and Control

Armed forces worldwide are turning to multi-domain AI to process overwhelming battlefield data. The Department of War’s Chief Digital and Artificial Intelligence Officer (CDAO) explains how AI is moving from lab experiments to operational decision-making, and how to implement it responsibly.

There’s now more data on the battlefield than humans can process in time, and that simple reality is forcing militaries to rethink everything about command and control. Multi-domain artificial intelligence (the use of AI across land, air, sea, cyber, and space) has become central to solving this challenge.

Here’s what you need to know: At AIPCon 9, Cameron Stanley, Chief Digital and Artificial Intelligence Officer of the Department of War, laid out how his organization is pushing enterprise-wide adoption of data, analytics, and AI. The goal? Turn information overload into genuine decision advantage.

From Experimental Add-On to Core Decision Tool

For years, defense organizations treated AI as something you experimented with in labs. Stanley’s office is flipping that script entirely. Rather than treating AI as an add-on, they’re weaving it into the core of how decisions get made, from strategic planning to real-time operations.

The vision is clear: data and algorithms shouldn’t sit in isolated labs. They need to move at the speed of the warfighter, in the environments where decisions carry the highest stakes. If your AI can’t handle the chaos of real operations, with its latency, connectivity issues, and incomplete data, it’s not mission-ready.

Breaking the Prototype Trap

The defense world has been rich in proofs of concept but poor in scalable deployment. Experiments demonstrate impressive AI capabilities in controlled environments, only to stall when confronted with messy data, legacy systems, and complex approval processes.

Stanley argues this pattern is no longer acceptable if militaries want to maintain an edge over adversaries who are also racing to exploit AI. His mandate as CDAO is to break this prototype trap.

That means designing a pipeline that carries technology all the way from early experimentation to reliable, repeatable use in live missions. In practice, this involves working closely with operators to identify high-impact use cases, funding iterative development, and building the institutional pathways that allow successful prototypes to become standard tools in the field.

Data as a Strategic Weapon System

At the heart of this transformation is a simple idea: data is now a weapon system in its own right. Stanley’s office treats enterprise data with the same seriousness as a physical platform, because without the right data foundation, even the most advanced AI is effectively blind.

This data-centric approach has several elements:

1. Building enterprise-wide data platforms – Integrating inputs from sensors, logistics, intelligence, and command systems, rather than maintaining isolated islands of information.

2. Establishing common data standards – So feeds from different domains and services can be fused, searched, and analyzed by AI tools.

3. Creating secure but flexible access controls – Allowing information to be shared rapidly with those who need it while protecting sensitive sources and methods.
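The “common data standards” element above can be pictured as a thin normalization layer: adapters translate each source’s native format into one shared record type that downstream tools consume. The sketch below is purely illustrative; the field names and feed formats are invented, not actual Department of War schemas.

```python
from dataclasses import dataclass

@dataclass
class TrackRecord:
    """A hypothetical common record that every feed normalizes into."""
    source: str       # originating system
    domain: str       # land, air, sea, cyber, or space
    timestamp: float  # epoch seconds
    lat: float
    lon: float
    label: str        # what the source believes it is seeing

def from_radar(raw: dict) -> TrackRecord:
    # Radar feed keys positions as 'y'/'x' and time as 'ts' (illustrative).
    return TrackRecord(source="radar", domain="air", timestamp=raw["ts"],
                       lat=raw["y"], lon=raw["x"],
                       label=raw.get("class", "unknown"))

def from_satellite(raw: dict) -> TrackRecord:
    # Imagery feed nests positions under 'geo' (illustrative).
    return TrackRecord(source="satellite", domain="space", timestamp=raw["time"],
                       lat=raw["geo"]["lat"], lon=raw["geo"]["lon"],
                       label=raw.get("detection", "unknown"))

# Once normalized, one pipeline can fuse, search, and analyze both feeds.
tracks = [
    from_radar({"ts": 1700000000.0, "y": 54.1, "x": 21.9, "class": "fast-mover"}),
    from_satellite({"time": 1700000030.0, "geo": {"lat": 54.2, "lon": 22.0}}),
]
```

The point of the pattern is that adding a new sensor means writing one adapter, not reworking every consumer.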

By treating data architecture as a core mission enabler, the CDAO is laying the groundwork for AI systems that can reason across the full spectrum of military activity, from high-level campaign planning to split-second tactical decisions on the edge.

How Multi-Domain AI Actually Works in Combat

Modern operations rarely unfold in a single domain. A typical scenario might involve space-based sensors, cyber operations, air assets, naval platforms, and ground forces all contributing to and drawing from the same operational picture. The commander’s challenge is to coordinate these elements faster than an adversary can react.

Multi-domain AI helps address this challenge in several ways:

Fusion of heterogeneous feeds – AI tools can ingest radar tracks, satellite imagery, signals intelligence, logistics status, and human reports, then synthesize them into a coherent picture rather than leaving analysts to stitch it all together manually.

Prioritization and triage – Instead of presenting all data as equal, AI systems can highlight emerging threats, anomalies, and opportunities that matter most for the current mission objectives.

Course-of-action support – Algorithms can simulate potential options, estimate risks, and suggest resource allocations, giving commanders a decision-support “co-pilot” that extends human judgment rather than replacing it.

Consider a crisis in which an adversary is probing both networks and physical borders. AI-enabled command and control might simultaneously flag unusual cyber activity, detect changes in electronic emissions from enemy platforms, and correlate these with abnormal movements observed by drones or satellites. The result is not only faster detection but also a richer context for deciding how to respond and where to apply scarce assets.
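The crisis scenario above boils down to correlating events from different domains that occur close together in time. A minimal sketch, assuming simple epoch timestamps and a fixed correlation window (the events and thresholds are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Event:
    domain: str       # e.g. "cyber", "air", "space"
    timestamp: float  # epoch seconds
    description: str

def correlate(events, window_seconds=300):
    """Group events that fall within the same time window; a cluster
    spanning multiple domains is the kind worth escalating."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    clusters, current = [], []
    for event in ordered:
        if current and event.timestamp - current[0].timestamp > window_seconds:
            clusters.append(current)
            current = []
        current.append(event)
    if current:
        clusters.append(current)
    # Keep only clusters that touch more than one domain.
    return [c for c in clusters if len({e.domain for e in c}) > 1]

events = [
    Event("cyber", 1000.0, "unusual network probe"),
    Event("air", 1120.0, "abnormal emissions from enemy platform"),
    Event("space", 1200.0, "satellite detects vehicle movement"),
    Event("cyber", 9000.0, "routine scan"),  # too distant in time to correlate
]
alerts = correlate(events)
```

Real fusion systems would add spatial correlation, uncertainty, and source reliability, but the core idea is the same: cross-domain coincidence is more informative than any single feed.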

Moving at the Speed of the Warfighter

Speed is a recurring theme in Stanley’s message. Traditional acquisition and IT processes can take years to deliver new capabilities, yet adversaries and technology trends can shift in months or even weeks. The CDAO’s strategy is therefore grounded in agility and iteration.

Several practices support this faster tempo:

1. Shortened development cycles – Small increments of functionality are rapidly delivered to units, tested in real conditions, and refined based on user feedback.

2. Modular, open architectures – Components such as models, interfaces, and data connectors can be swapped or upgraded without rebuilding entire systems.

3. Embedding technologists – Data engineers and AI specialists work directly with operational units so tools are shaped by real needs, not assumptions made in distant offices.
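The modular-architecture practice above can be sketched as coding the pipeline against an interface rather than a specific model, so an upgraded component drops in without touching the rest of the system. The model classes and scoring heuristics here are invented for illustration:

```python
from typing import Protocol

class ThreatModel(Protocol):
    """Interface the pipeline codes against, so the model behind it
    can be upgraded without rebuilding the system."""
    def score(self, track: dict) -> float: ...

class RuleBasedModel:
    def score(self, track: dict) -> float:
        # Placeholder heuristic: faster tracks look more threatening.
        return min(track.get("speed", 0.0) / 1000.0, 1.0)

class UpgradedModel:
    def score(self, track: dict) -> float:
        # A newer model drops in behind the same interface.
        return 0.9 if track.get("speed", 0.0) > 500 else 0.1

def triage(model: ThreatModel, tracks: list[dict]) -> list[dict]:
    # The triage pipeline never changes when the model is swapped.
    return sorted(tracks, key=model.score, reverse=True)

tracks = [{"id": "a", "speed": 200.0}, {"id": "b", "speed": 800.0}]
by_rules = triage(RuleBasedModel(), tracks)
by_upgrade = triage(UpgradedModel(), tracks)
```

This is the same dependency-inversion idea commercial software relies on; the military twist is that each swapped component must also clear classification, reliability, and safety review.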

This approach mirrors the best of commercial software development but adapts it to military constraints, including classification, reliability, and safety. The goal is for warfighters to feel that AI tools evolve with them, rather than being frozen at the moment of initial fielding.

Building Trust and Practicing Responsible AI

Even the most powerful AI system is useless if operators do not trust it, or worse, if they trust it blindly. Stanley emphasizes that responsible AI is not a separate side project; it is a necessary condition for adoption at scale.

Responsible use involves several intertwined concerns:

Transparency – Users need to understand, at a practical level, why a system is surfacing certain alerts or recommendations. Full technical explainability is not always possible, but intelligible behavior is essential.

Human judgment – AI is positioned as an assistant, not an autonomous decision-maker. Commanders remain accountable for choices, using AI as an additional lens, not a final arbiter.

Testing and validation – Systems must be rigorously evaluated across realistic scenarios to ensure that performance holds up under stress, edge cases are understood, and failure modes are documented.

Training is a critical part of this trust-building process. Warfighters are taught both how to leverage AI outputs and how to question them: when to lean on automated suggestions and when to fall back on experience, intuition, and additional data.
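One way to make “assistant, not autonomous decision-maker” concrete is to have the system emit recommendations that carry a plain-language rationale and to gate any execution behind an explicit human decision. This is a hypothetical sketch of that pattern, not a real command-and-control API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: list[str]  # plain-language reasons the operator can question

def execute(rec: Recommendation, operator_approval: bool) -> str:
    # The AI never acts on its own; the human decision gates execution.
    if not operator_approval:
        return f"HELD: {rec.action} (awaiting human decision)"
    return f"EXECUTED: {rec.action} (human-approved)"

rec = Recommendation(
    action="retask surveillance drone to sector 4",
    confidence=0.72,
    rationale=[
        "correlated cyber and sensor anomalies",
        "coverage gap in sector 4 for last 20 minutes",
    ],
)
held = execute(rec, operator_approval=False)
done = execute(rec, operator_approval=True)
```

Surfacing the rationale alongside the confidence is what gives the operator something to interrogate, which is the practical form of the transparency discussed above.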

Culture, Talent, and Organizational Change

Technology alone does not deliver transformation. The shift to multi-domain AI-enabled command and control requires cultural and organizational change across the Department of War. Stanley’s role as CDAO sits at the intersection of technology and leadership, tasked with aligning stakeholders who may have different priorities, timelines, and risk tolerances.

This change agenda includes:

1. Developing and retaining technical talent – Data engineers, AI researchers, and product managers who understand both the mission and the technology landscape.

2. Creating incentives for adoption – Encouraging units to adopt new tools, share data, and participate in experimentation rather than clinging to familiar but outdated processes.

3. Building partnerships – Working with industry and academia to tap into cutting-edge capabilities while ensuring they are tailored to military realities.

In this sense, multi-domain AI is as much about people and processes as it is about code and infrastructure. The aim is to build an institution that can continuously absorb new technologies and turn them into enduring advantage.

Looking Ahead: AI as a Permanent Advantage

As AIPCon 9 makes clear, AI is no longer a futuristic concept on the margins of defense planning. It is central to how leading militaries intend to fight and deter conflict in the coming decades.

Stanley’s vision for the Department of War is one in which data, analytics, and AI are not special initiatives but standard features of every major decision and operation. If successful, this effort will result in command and control systems that are faster, more adaptive, and more resilient than those of potential adversaries.

Multi-domain AI will not remove uncertainty or risk from warfare, but it can help leaders navigate that uncertainty with greater clarity and speed. By moving cutting-edge technology from the lab to the warfighter at pace, the CDAO is working to ensure that AI becomes a durable source of decision advantage, rather than a one-time experiment.

What This Means for AI Development Beyond Defense

While the military context is unique, the lessons from multi-domain AI apply far beyond defense. Any organization dealing with complex, multi-source data streams can learn from this approach.

Think about emergency responders coordinating police, fire, and medical services during a disaster. Or a global corporation managing supply chains across continents. Or a smart city integrating traffic, energy, and public safety systems.

The principles are the same: break down data silos, build systems that can handle real-world chaos, focus on practical impact, and, most importantly, build trust through transparency and human-centered design.

Multi-domain AI isn’t just changing how militaries fight; it’s showing us what’s possible when we stop treating AI as a lab experiment and start treating it as an essential tool for navigating complexity. The battlefield is just the most urgent proving ground.