AI agents are moving from hype to reality in business, and the biggest driver of adoption is not technology – it is executive vision and sponsorship. That simple fact shapes how companies like Strategic Blue are approaching AI: not as a shiny tool, but as a catalyst to redesign how people, processes, and software work together.
In this article, Strategic Blue CEO and founder James Mitchell and Chief Product Officer Andy Watson share how they moved from skepticism and experimentation to building a "human–AI joint org structure" inside their company, and what other leaders can learn from that journey.
James first took agentic AI seriously after attending an AWS executive summit, where CIOs and CTOs from major enterprises described in detail what they were actually building with AI, not just what vendors were promising. The conclusion he drew was blunt: if your organization is not already experimenting with AI agents, a competitor or startup in your industry almost certainly is – and they will come for your margins.
Personally, James would "put AI back in Pandora's box" if that were realistically possible, but he does not believe that is an option. Given that AI is here to stay, his view is: if you will have to adopt it eventually, adopt it while it still confers a commercial advantage.
Andy openly describes himself as one of the skeptics. Like many product leaders, he has seen waves of hype that failed to deliver, so his default attitude was cautious: the technology will change things, but probably not in the way the marketing decks predict.
What changed his mind was twofold:
For Andy, AI now feels "fundamentally important" both for his own career and for how he designs product experiences at Strategic Blue.
Both leaders emphasize that the right first move is not a huge, high‑stakes AI program, but modest, low‑risk experiments that build capabilities and confidence.
Their approach:
This staged adoption is as much about culture as technology. Staff need time and psychological safety to play, make mistakes, and see tangible value before AI becomes part of everyone's daily workflow.
One of James's most important mental models is to treat agents less like tools and more like (non‑human) team members. Tools are supervised step‑by‑step by a human; agents are given context, autonomy, and access to other tools and data.
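To make that distinction concrete, here is a minimal sketch in code form – a tool invoked step by step under human supervision versus an agent given a goal, context, and a set of tools it can use on its own before a human reviews the outcome. The `call_llm` helper and the loop structure are illustrative assumptions for this sketch, not Strategic Blue's implementation.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever model API you use."""
    raise NotImplementedError

# A *tool*: the human drives each step and reviews each output directly.
def summarise_report(report_text: str) -> str:
    return call_llm(f"Summarise this report in five bullet points:\n{report_text}")

# An *agent*: given context, a goal, and tools it may choose to use,
# it works autonomously for several steps before a human sees the result.
def run_agent(goal: str, context: str,
              tools: dict[str, Callable[[str], str]], max_steps: int = 5) -> str:
    transcript = f"Context:\n{context}\nGoal: {goal}"
    for _ in range(max_steps):
        decision = call_llm(
            f"{transcript}\n\nChoose one tool from {list(tools)} as 'name: input', "
            "or reply 'DONE: <answer>' if the goal is met."
        )
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        tool_name, _, tool_input = decision.partition(":")
        result = tools.get(tool_name.strip(), lambda _: "unknown tool")(tool_input.strip())
        transcript += f"\n{decision}\n-> {result}"
    return transcript  # ran out of steps: hand the transcript back to a human
```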
Strategic Blue's evolution illustrates this:
They began with one coding assistant to help prototype features. It delivered value, but also showed classic issues: hallucinations, breaking existing functionality, and inconsistent output.
As they added more instructions, rules, and training into that single agent, conflicts appeared and behavior degraded. It was "overloaded" with expectations.
They then broke the work into multiple specialized agents. These agents are given clear "blueprints" and deliberately narrow contexts, and are sometimes even asked to critique each other's work, just like human specialists in a team.
This multi‑agent pattern does two things: it reduces the impact of any one agent's flaws, and it makes it easier to wrap governance, guardrails, and testing around the whole system.
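As a rough illustration of that pattern, the sketch below wires together a few narrowly scoped agents, each with its own "blueprint", and has one critique another's output before the result is accepted. The specific roles (coder, reviewer, documentation writer) and the `call_llm` helper are assumptions for the example, not a description of Strategic Blue's actual agents.

```python
from dataclasses import dataclass

def call_llm(system: str, user: str) -> str:
    """Hypothetical stand-in for whichever model API you use."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    blueprint: str  # narrow, role-specific instructions ("blueprint")

    def run(self, task: str) -> str:
        return call_llm(system=self.blueprint, user=task)

coder = Agent("coder", "You write small, well-tested Python changes. Never touch unrelated code.")
reviewer = Agent("reviewer", "You review proposed changes for regressions and hallucinated APIs. List concrete issues only.")
doc_writer = Agent("doc_writer", "You write concise docs and changelog entries for approved changes.")

def build_feature(spec: str) -> dict[str, str]:
    draft = coder.run(spec)
    critique = reviewer.run(f"Spec:\n{spec}\n\nProposed change:\n{draft}")
    revised = coder.run(f"Spec:\n{spec}\n\nDraft:\n{draft}\n\nReviewer feedback:\n{critique}\n\nRevise accordingly.")
    docs = doc_writer.run(f"Spec:\n{spec}\n\nFinal change:\n{revised}")
    return {"change": revised, "review": critique, "docs": docs}
```

Because each agent sees only its own narrow brief, a failure in one tends to be caught, or at least contained, by the others – which is exactly where governance and testing hooks can be attached.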
One of the most intriguing experiments at Strategic Blue is "Cleo" – an AI agent modeled as James's experienced CEO alter ego. Cleo has been trained on his views about the market, customers, mission, and vision.
Cleo has even helped refine Strategic Blue's mission and vision. Because the initial "line in the sand" comes from an agent rather than a person, everyone feels free to critique and improve it, which leads to more open discussion and better outcomes.
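In practice, a persona agent of this kind can be little more than a carefully maintained system prompt built from the CEO's documented views. The sketch below shows one plausible shape for that; the field names, prompt structure, and `call_llm` helper are assumptions for illustration, not the real Cleo configuration.

```python
def call_llm(system: str, user: str) -> str:
    """Hypothetical stand-in for whichever model API you use."""
    raise NotImplementedError

# Persona built from the CEO's documented views; every field here is an
# assumption for this sketch, not the actual Cleo setup.
CLEO_PERSONA = """You are Cleo, an experienced-CEO alter ego.
Market view: {market_view}
Customer priorities: {customer_priorities}
Mission and vision: {mission}
Produce a clear first draft (a "line in the sand") that the human team
is expected to challenge and improve."""

def draft_position(topic: str, market_view: str,
                   customer_priorities: str, mission: str) -> str:
    system = CLEO_PERSONA.format(market_view=market_view,
                                 customer_priorities=customer_priorities,
                                 mission=mission)
    return call_llm(system=system, user=f"Draft our position on: {topic}")
```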
Many conversations about AI focus on offloading drudgery. Strategic Blue is doing that – letting agents handle repetitive, low‑value tasks – but they see a deeper impact on culture and collaboration.
Some of the emerging benefits:
AI thus becomes both a technical capability and a cultural enabler for curiosity, questioning, and continuous improvement.
James and Andy are clear that subject‑matter expertise and judgment remain human responsibilities, especially for experienced staff who know "what good looks like". However, they see a huge opportunity for early‑career employees to accelerate their learning curves using AI.
For example, a junior marketer can manage an agent configured as a seasoned Chief Marketing Officer, asking it to explain concepts, critique ideas, and expose industry best practices. That gives the human a faster path to fluency, freeing them to focus on creativity and "what's next" rather than just learning the current playbook.
Given Strategic Blue's core business in cloud financial operations, it is natural that customers raise concerns about AI cost models and vendor lock‑in.
Two key themes stand out: the economics of running AI‑powered services at scale, and the risk of becoming locked in to a single model or vendor.
From their vantage point, leaders should treat AI financial operations as a first‑class concern alongside model choice and architecture.
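Treating AI financial operations as a first-class concern can start with very simple arithmetic: estimating per-request and monthly spend from token counts and per-million-token prices before committing to a model. The prices, model names, and traffic volumes in the sketch below are placeholders, not quoted rates for any vendor.

```python
# Illustrative per-million-token prices in USD; placeholders, not real rates.
PRICE_PER_M_TOKENS = {
    "model_a": {"input": 3.00, "output": 15.00},
    "model_b": {"input": 0.25, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single call, given token counts and per-million-token prices."""
    p = PRICE_PER_M_TOKENS[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def monthly_cost(model: str, requests_per_day: int,
                 in_tok: int, out_tok: int, days: int = 30) -> float:
    """Naive monthly projection: same traffic every day, no caching or batching."""
    return requests_per_day * days * request_cost(model, in_tok, out_tok)

if __name__ == "__main__":
    for model in PRICE_PER_M_TOKENS:
        # 2,000 requests/day, ~1,500 input and ~400 output tokens per request
        print(model, round(monthly_cost(model, 2_000, 1_500, 400), 2))
```

Even a model this crude makes the trade-off visible: at the assumed volumes, the two hypothetical price points differ by more than an order of magnitude in monthly spend.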
Strategic Blue's team has seen firsthand that generic LLMs can answer some cloud‑cost questions convincingly and correctly while hallucinating others in subtle, dangerous ways. That experience reinforces several principles:
The goal is not to trust agents blindly, but to deliberately engineer systems where multiple flawed components, arranged thoughtfully, yield more reliable outcomes.
James offers a simple but powerful pattern for improving reliability: use multiple agents with overlapping checks.
For example, one agent drafts an analysis and a second, independently prompted agent checks it against the underlying data. Because each catches many of the mistakes the other makes, the residual risk of an error slipping through is far lower than either agent's individual error rate.
Add another validation layer and you can drive that residual risk significantly lower again, at the cost of more compute and energy.
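The arithmetic behind that claim is straightforward if the layers fail roughly independently: each validation layer lets only a fraction of the remaining errors through, so residual risk shrinks multiplicatively. The error rates in the sketch below are illustrative assumptions, not measured figures from Strategic Blue.

```python
def residual_risk(base_error_rate: float, catch_rates: list[float]) -> float:
    """Chance an error survives: each layer lets (1 - catch_rate) of errors through."""
    risk = base_error_rate
    for catch in catch_rates:
        risk *= (1.0 - catch)
    return risk

if __name__ == "__main__":
    base = 0.10                                  # generator agent wrong 10% of the time
    print(residual_risk(base, [0.9]))            # one validator   -> ~1% residual risk
    print(residual_risk(base, [0.9, 0.9]))       # a second layer  -> ~0.1%, at extra compute cost
```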
This architecture‑first mindset – accepting flaws, then designing around them – is central to how Strategic Blue thinks about "orchestration" rather than chasing a mythical perfect model.
As agents became more capable, Strategic Blue realized they were effectively duplicating their ISO‑backed business management system for AI: policies, procedures, and work instructions for humans on one side and separate rules and context for agents on the other.
The direction they are now heading is towards a single, unified business management system that serves humans and AI agents alike – one shared set of policies, procedures, and work instructions that both people and agents can follow.
In that world, almost every employee becomes, by design, a manager of agents as well as a practitioner in their own field.
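One plausible way to express that unified system is a single work-instruction artifact that renders both as human-readable documentation and as context for an agent. The structure, field names, and example content below are a hypothetical sketch under that assumption, not Strategic Blue's ISO documentation.

```python
from dataclasses import dataclass, field

@dataclass
class WorkInstruction:
    title: str
    purpose: str
    steps: list[str]
    guardrails: list[str] = field(default_factory=list)

    def as_human_doc(self) -> str:
        """Render the instruction the way a person would read it."""
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(self.steps, 1))
        return f"{self.title}\n\n{self.purpose}\n\n{numbered}"

    def as_agent_context(self) -> str:
        """Render the same instruction as context for an agent's system prompt."""
        rules = "\n".join(f"- {g}" for g in self.guardrails)
        return (f"Work instruction: {self.title}. {self.purpose} "
                f"Steps: {'; '.join(self.steps)}. Hard guardrails:\n{rules}")

# Hypothetical example instruction; the content is illustrative only.
invoice_check = WorkInstruction(
    title="Monthly cloud invoice validation",
    purpose="Confirm billed usage matches committed rates before sign-off.",
    steps=["Pull the invoice", "Reconcile against the rate card", "Flag variances over 2%"],
    guardrails=["Never approve an invoice; humans sign off", "Escalate anomalies immediately"],
)

print(invoice_check.as_human_doc())      # what a person reads
print(invoice_check.as_agent_context())  # what an agent is given
```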
James expects almost every job description at Strategic Blue to be rewritten to assume that each person manages agents and/or people, supported by AI sidekicks that can handle a lot of reporting and coordination.
Implications include:
The business outcomes shareholders care about – growth, profitability, customer satisfaction – remain the same, but expectations of what a stable team can deliver rise as AI multiplies their capacity.
One of the more sobering points James makes is that traditional software moats are eroding. With well‑documented APIs and powerful code‑generating agents, a motivated competitor can clone core functionality far faster than before.
In that environment, defensibility will shift towards:
Strategic Blue's own board is pushing hard for durable innovation capacity rather than incrementalism, a pressure James believes most leadership teams will face in some form.
One of Andy's biggest challenges is balancing time spent "working on the AI system" versus "working on the core business". Building robust agent orchestration, supervision, and testing is a significant effort in its own right and can feel disconnected from solving customers' immediate problems.
However, as the orchestration layer matures, the payoff becomes visible:
That inflection point – when the investment in orchestration begins to compound visibly – is also when adoption accelerates across the organization.
Both James and Andy are cautious about making detailed three‑to‑five‑year predictions. The pace of change is too high, and existential threats are emerging from many directions.
Instead, they advocate:
In James's words, leaders "have no idea" exactly what their business will look like in five years – and that is precisely why flexibility and learning matter so much.
Across the conversation, a few recurring pieces of advice stand out for executives and product leaders:
James also suggests a practical way to expose yourself to what others are doing: attend cloud vendor events (such as those run by AWS), not for the hype, but for the conversations with peers who are already in the trenches with AI.
The insights from Strategic Blue's journey illustrate that successful AI adoption goes beyond technology – it requires executive vision, thoughtful orchestration, cultural enablement, and a commitment to learning. By treating agents as team members rather than tools, designing multi‑agent systems with overlapping checks, and building a unified human–AI organizational structure, Strategic Blue is pioneering an approach that other leaders can learn from and adapt to their own contexts.