

Scaling Up AI: Lessons from Experience, Not Hype

by Howgate author Dr Darrell J. J. Jaya-Ratnam

Image taken from cover of Show Me Don't Tell Me: Lessons From Defence AI by Darrell Jaya-Ratnam

First came mystery. Then miracle. Now it’s money.

Artificial intelligence has travelled this road before — or rather, we have. Every few decades a technology promises to change everything: the internet, digitisation, network-enabled capability, autonomy. Each did, eventually, but not in the way or at the pace first imagined. The same pattern is emerging with AI. The excitement, the noise, the lavish funding — followed by the awkward silence when reality catches up. The question is not whether AI will scale up, but how we can make that scaling useful, dependable, and durable rather than another lap on the overpromise–overspend–underdeliver circuit.


Scaling is not the same as multiplying


When people talk about ‘scaling up’ they usually mean multiplying — more data, more users, more power. But multiplication without maturation just magnifies the flaws. The defence community has seen this before: one promising prototype turns into a programme of ten, all slightly different, all consuming resources, none delivering coherence.

Scaling AI requires something else: utilisation. This is the central lesson from the four AI systems described in Show Me Don’t Tell Me — DUCHESS, MALFIE, Red’s Shoes and DR SO. Each began as a small, niche experiment. Each succeeded not because it was cleverer than the others, but because someone actually used it. In defence, utilisation drives understanding; understanding builds trust; trust sustains scale.


The Utilisation Staircase


To climb from curiosity to capability one must take small, deliberate steps. The ‘utilisation staircase’ that emerged from those projects is a simple way of describing the process. It starts with awareness — of the problem, of the people, and of what ‘good’ looks like — and ends with integration, where the AI becomes part of normal business. The steps in between involve preparation, evaluation and implementation: preparing users and developers to understand one another; evaluating whether the AI is the right level of complexity for the task; and implementing it within the wider system rather than in isolation.

These sound obvious. They are. But in practice they are often skipped because everyone is in a hurry to get to the impressive bit — the technology demonstration, the press release, the pilot. The result is that the same mistakes repeat: too much novelty, too little understanding.


Start with what already works


When we built DUCHESS, an AI that captured lessons learned from human experience, it was tempting to chase the latest deep-learning techniques. Instead, we used simpler methods that worked with imperfect data and familiar interfaces. The result was immediate use and feedback. Likewise, MALFIE, which helped analysts interpret the outputs of multiple AIs monitoring the oceans, succeeded because it made complex things clearer, not because it made them cleverer.

Red’s Shoes applied AI to understand how adversaries learn. Its value came not from predicting the future, but from helping humans see their own biases in the present. And DR SO, which began life as a simulated threat agent, taught us that reinforcement learning can accelerate experience only if the environment and reward functions are designed with military reality in mind.
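To make that last point concrete, the short Python sketch below shows one way a reward function can encode military reality rather than a raw game score. It is purely illustrative: the agent state, the penalty values and their weights are assumptions invented for this post, not the actual DR SO design.

    from dataclasses import dataclass

    @dataclass
    class AgentState:
        distance_to_objective_km: float   # remaining distance to the mission objective
        fuel_fraction: float              # 0.0 (empty) to 1.0 (full)
        detected: bool                    # has the agent been spotted?

    def reward(prev: AgentState, curr: AgentState) -> float:
        """Score one step: progress is rewarded, but operationally fatal
        outcomes (stranding, detection) outweigh any raw progress made."""
        r = prev.distance_to_objective_km - curr.distance_to_objective_km
        if curr.fuel_fraction <= 0.05:
            r -= 50.0   # an agent that strands itself has failed, whatever ground it covered
        if curr.detected and not prev.detected:
            r -= 20.0   # being spotted carries a cost a naive game score would ignore
        return r

    # Ten kilometres of progress is wiped out by being detected on the way.
    before = AgentState(distance_to_objective_km=40.0, fuel_fraction=0.6, detected=False)
    after  = AgentState(distance_to_objective_km=30.0, fuel_fraction=0.5, detected=True)
    print(reward(before, after))   # 10.0 - 20.0 = -10.0

The shape matters more than the numbers: unless running out of fuel and being detected carry real costs, the agent will dutifully learn behaviours no commander would accept.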

Across all four cases the pattern was the same: success depended on clarity, simplicity and context, not on novelty.


Three fundamentals of scaling


First, preparation. Scaling is as much about sociology as technology. The best algorithms fail if users do not understand why they are there. Time spent aligning expectations between developers and operators is never wasted.

Second, evaluation. Choose the simplest AI that will do the job. Complexity is not capability. A handcrafted expert system that works today is worth more than a generative model that might work next year. Defence already has too many prototypes chasing perfection and too few products delivering good enough, soon enough. The military phrases ‘an 80% solution now is better than a 100% solution too late’ and ‘let’s crack on’ may be less popular now than they once were, but they are highly relevant in the current age of AI hype.

Third, implementation. Every AI exists within a system — of command structures, communications networks, and human decision cycles. The idea that an AI can operate in isolation is as flawed as the belief that one new tank can win a war. Integration, not invention, delivers endurance.


The ownership problem


Another obstacle to scaling is ownership. Small, cross-cutting applications fall between the cracks: too small to be a platform, too big to be ignored. Everyone agrees they are valuable; no one agrees whose budget should sustain them. Without a clear owner, they drift into the digital equivalent of the boneyard.

Successful programmes recognise that maintenance, data curation and trust-building are not side issues but the main event. Funding models must treat AI like any other asset — something that requires care, updates and accountability, not a one-off project that ends at ‘delivery’.


Trust as the limiting factor


Technology scales at the speed of bandwidth; AI scales at the speed of trust. The systems that thrived were those that reassured users that AI was there to help them do more or do better, not to replace them. Once that mental barrier fell, adoption followed.

This applies beyond defence. In any field, the decisive factor is not whether AI can make a good decision, but whether humans are willing to own the decision that follows. That, in turn, requires transparency. If the user cannot understand why the AI did something, the user will not defend it when things go wrong.


From pilot to permanence


Scaling up is therefore less about technology than temperament. It requires the patience to climb the utilisation staircase one step at a time and the humility to start simple. The alternative — leaping for the latest ‘next big thing’ — only digs the trough of disillusionment deeper.


The path forward is straightforward enough:

  • Start with the human process.

  • Simplify the technique.

  • Embed it within the wider system.

  • Fund it as a capability, not an experiment.

  • Learn from use, then use what you learn.


That is how small pilots become enduring capabilities. The point is not to show off what AI can do, but to show that it works — to show, not tell. Because in the end, scaling up AI will not be achieved by slogans or strategy papers, but by the steady accumulation of working examples that speak for themselves.