MSS AI Study Group Meeting on 16 December 2025
Practical Frameworks for Industrial AI
by Dr Jack Saunders
In Practical Frameworks for Industrial AI, Dr Jack Saunders, Technical Director at Trident Intelligent Solutions, presents a pragmatic, outcome-focused approach to delivering AI in complex and regulated environments.
With a PhD in Machine Learning and extensive applied experience, Dr Saunders argues that successful AI projects are defined not by sophisticated models, but by their ability to deliver measurable business value. He emphasises disciplined problem framing, execution, and organisational alignment as the real determinants of success.
Talk Overview
The talk is structured around Trident’s AI delivery framework, built on four core questions:
Why · What · How · Who
The session focuses primarily on What (problem translation) and How (execution), identified as the most critical phases for ensuring AI delivers real-world impact.
The “Why”: Business Motivation and Strategic Context
The Why phase establishes clear business motivation. Rather than starting with vague aspirations such as “using AI,” Dr Saunders stresses the importance of identifying tangible business objectives through relevant KPIs.
Teams must pinpoint operational friction — bottlenecks, manual processes, or data gaps — and translate these into clear, measurable goals.
Case studies illustrate this reframing in practice:
A UK water utility moved from asking whether AI could locate air valves to defining a public-health objective: proactively identifying and classifying 95% of high-risk assets to reduce contamination risk.
An education trust reframed a general efficiency goal into a specific target of halving policy query response times, reducing administrative burden while improving compliance.
The “What”: Translating Business Problems into Technical Tasks
The What phase focuses on converting business problems into technical formulations. Dr Saunders warns against solution-first thinking — such as defaulting to large language models — and urges teams to begin with the decision the system must support.
This requires close engagement with stakeholders to understand decisions, desired outcomes, constraints, and acceptable risk.
A practical checklist is used to assess readiness across four areas:
- The decision the system supports
- The outcome and success KPIs
- Operational constraints
- Risk boundaries defining “good enough”
If these cannot be clearly articulated, the project is not ready for technical design.
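The four-area checklist can be sketched as a simple readiness gate. This is an illustrative Python sketch, not code from the talk; the `ProblemBrief` fields and example values are assumptions that mirror the checklist areas above.

```python
from dataclasses import dataclass, fields

@dataclass
class ProblemBrief:
    """Hypothetical readiness brief mirroring the four checklist areas."""
    decision: str = ""         # the decision the system supports
    outcome_kpis: str = ""     # the outcome and success KPIs
    constraints: str = ""      # operational constraints
    risk_boundaries: str = ""  # risk boundaries defining "good enough"

def missing_areas(brief: ProblemBrief) -> list[str]:
    """Return the checklist areas still blank; an empty list means
    the project is ready for technical design."""
    return [f.name for f in fields(brief) if not getattr(brief, f.name).strip()]

# Two areas articulated, two still blank: not yet ready for technical design.
brief = ProblemBrief(
    decision="Prioritise which assets to inspect first",
    outcome_kpis="Identify and classify 95% of high-risk assets",
)
print(missing_areas(brief))  # → ['constraints', 'risk_boundaries']
```

The point of the sketch is that readiness is a property of the problem statement, not the model: until every area returns non-empty, technical design has nothing firm to anchor to.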
Once the problem is understood, it can be mapped to a technical task by defining outputs, inputs, evaluation metrics, and selecting the simplest appropriate approach.
Success must be measured at three levels: offline model performance, online system behaviour, and business impact. This discipline prevents teams from delivering technically sound solutions that fail to create real business value.
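The three-level discipline can be made concrete as a scorecard that is only passed when every level meets its target. This is a minimal sketch; the metric names, targets, and actuals below are invented for illustration and do not come from the talk's case studies.

```python
# Hypothetical three-level scorecard: one metric per level, each with a
# target and an observed value. All numbers are illustrative.
scorecard = {
    "offline":  {"metric": "classifier F1 on held-out assets",        "target": 0.90, "actual": 0.93},
    "online":   {"metric": "share of alerts acted on by operators",   "target": 0.60, "actual": 0.41},
    "business": {"metric": "reduction in contamination incidents",    "target": 0.25, "actual": 0.10},
}

def levels_met(card: dict) -> dict[str, bool]:
    """Compare each level's actual value against its target."""
    return {level: row["actual"] >= row["target"] for level, row in card.items()}

print(levels_met(scorecard))
# → {'offline': True, 'online': False, 'business': False}
```

Here the model passes offline evaluation yet fails at the online and business levels: precisely the gap between technical success and real value that measuring at all three levels is meant to expose.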
The “How”: Execution Through Evidence
The How phase focuses on execution through iterative, risk-led experimentation.
Teams should identify the riskiest assumption, test it with small but realistic experiments, and decide whether to scale, pivot, or stop based on evidence. Solutions must also prove useful to real users in context — systems that are not trusted or acted upon deliver no value.
Only once usefulness is demonstrated should teams engineer for reliability, governance, monitoring, and scale.
Key Takeaway
The central message is clear:
AI only matters if it changes outcomes that organisations care about — and that change must be measurable, intentional, and aligned with real operational needs.