Podcast

Orchestrating the Agentic AI Symphony - Strategic Blue

James Mitchell & Andy Watson
Date: December 5th, 2025
Watch time: 52 mins
Tags: AI Agents, Enterprise AI, Organizational Change, AI Orchestration, Leadership, Business Strategy, Human-AI Collaboration, AI Culture

AI agents are moving from hype to reality in business, and the biggest driver of adoption is not technology – it is executive vision and sponsorship. That simple fact shapes how companies like Strategic Blue are approaching AI: not as a shiny tool, but as a catalyst to redesign how people, processes, and software work together.​

In this article, Strategic Blue CEO and founder James Mitchell and Chief Product Officer Andy Watson share how they moved from skepticism and experimentation to building a "human–AI joint org structure" inside their company, and what other leaders can learn from that journey.​

From Kool‑Aid To Commercial Advantage

James first took agentic AI seriously after attending an AWS executive summit, where CIOs and CTOs from major enterprises described in detail what they were actually building with AI, not just what vendors were promising. The conclusion he drew was blunt: if your organization is not already experimenting with AI agents, a competitor or startup in your industry almost certainly is – and they will come for your margins.​

Personally, James would "put AI back in Pandora's box" if that were realistically possible, but he does not believe that is an option. Given that AI is here to stay, his view is: if you will have to adopt it eventually, adopt it while it still confers a commercial advantage.​

Healthy Skepticism, Not Blind Faith

Andy openly describes himself as one of the skeptics. Like many product leaders, he has seen waves of hype that failed to deliver, so his default attitude was cautious: the technology will change things, but probably not in the way the marketing decks predict.​

What changed his mind was twofold:

  • The rate of capability growth: Things that were impossible months ago suddenly became straightforward.​
  • The inevitability of impact: Even without a perfect roadmap, it was clear AI would fundamentally change how products are built, how customers expect to interact with them, and the pace at which companies must deliver.​

For Andy, AI now feels "fundamentally important" both for his own career and for how he designs product experiences at Strategic Blue.​

Start Small, Learn Fast, Bring Everyone Along

Both leaders emphasize that the right first move is not a huge, high‑stakes AI program, but modest, low‑risk experiments that build capabilities and confidence.​

Their approach:

  • Begin with simple, adjacent use cases rather than the core product, where risk to reputation and customers is highest.​
  • Accept that early attempts will hit dead ends, but treat those dead ends as valuable learning rather than failure.​
  • Let a small group experiment first, then gradually widen participation so each "cohort" can learn from the previous one.​

This staged adoption is as much about culture as technology. Staff need time and psychological safety to play, make mistakes, and see tangible value before AI becomes part of everyone's daily workflow.​

Managing Agents Like People, Not Tools

One of James's most important mental models is to treat agents less like tools and more like (non‑human) team members. Tools are supervised step‑by‑step by a human; agents are given context, autonomy, and access to other tools and data.​

Strategic Blue's evolution illustrates this:

Single coding assistant

They began with one coding assistant to help prototype features. It delivered value, but also showed classic issues: hallucinations, breaking existing functionality, and inconsistent output.​

Context overload

As they added more instructions, rules, and training into that single agent, conflicts appeared and behavior degraded. It was "overloaded" with expectations.​

A team of specialist agents

They then broke the work into multiple specialized agents, including:​

  • Requirements‑gathering agents
  • Implementation / execution agents
  • Testing and sanity‑checking agents
  • Policy and standards enforcement agents
  • Visual consistency / branding agents

These agents are given clear "blueprints", narrow contexts, and sometimes even asked to critique each other's work, just like human specialists in a team.​

This multi‑agent pattern does two things: it reduces the impact of any one agent's flaws, and it makes it easier to wrap governance, guardrails, and testing around the whole system.​

The Power Of An AI Alter Ego

One of the most intriguing experiments at Strategic Blue is "Cleo" – an AI agent modeled as James's experienced CEO alter ego. Cleo has been trained on his views about the market, customers, mission, and vision, and acts as:​

  • A sounding board for James to clarify his own thinking and have it played back concisely.​
  • A safe, low‑ego proxy that others in the company can "challenge" without worrying about offending the real CEO.​
  • A contextual resource that other agents can query to ensure alignment with company direction.​

Cleo has even helped refine Strategic Blue's mission and vision. Because the initial "line in the sand" comes from an agent rather than a person, everyone feels free to critique and improve it, which leads to more open discussion and better outcomes.​
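One hedged way to picture how an alter-ego agent like Cleo might be wired up is the common system-prompt pattern: the leader's stated views are assembled into a persona prompt that any chat framework can use as the system message. The context values below are placeholders, not James's actual views.

```python
# Hypothetical sketch of a persona agent configuration (placeholder
# values, not Strategic Blue's real mission or market views).

CEO_CONTEXT = {
    "mission": "<the company's mission statement>",
    "market_view": "<the CEO's view of the market>",
    "customer_view": "<the CEO's view of customer priorities>",
}

def build_alter_ego_prompt(context: dict) -> str:
    # Assemble a system prompt that other people (or other agents)
    # can query and challenge without worrying about ego.
    lines = [
        "You are Cleo, an AI alter ego of the CEO.",
        "Answer as the CEO would, based only on the views below.",
        "Welcome challenge and critique; you carry no ego.",
    ]
    for key, value in context.items():
        lines.append(f"- {key}: {value}")
    return "\n".join(lines)

print(build_alter_ego_prompt(CEO_CONTEXT))
```

The same prompt can then serve double duty: a sounding board for humans and a context source that other agents query for alignment.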

AI As A Cultural Facilitator, Not Just A Productivity Tool

Many conversations about AI focus on offloading drudgery. Strategic Blue is doing that – letting agents handle repetitive, low‑value tasks – but they see a deeper impact on culture and collaboration.​

Some of the emerging benefits:

  • Psychological safety for experimentation: People can try ideas with agents in private, learn from mistakes, and only share the refined results, which reduces the fear of looking incompetent in front of peers.​
  • More inclusive strategic conversations: Agents like Cleo, loaded with company context, allow a wider group of employees to engage meaningfully in topics that used to be reserved for the executive team.​
  • Reinforcing a learning culture: Strategic Blue has always seen itself as a place to learn, and AI becomes the next frontier for that learning – something they explicitly frame as vital for everyone's long‑term relevance, whether they stay at the company or not.​

AI thus becomes both a technical capability and a cultural enabler for curiosity, questioning, and continuous improvement.​

Young Talent: Up‑Levelling Through AI Mentors

James and Andy are clear that subject‑matter expertise and judgment remain human responsibilities, especially for experienced staff who know "what good looks like". However, they see a huge opportunity for early‑career employees to accelerate their learning curves using AI.​

For example, a junior marketer can manage an agent configured as a seasoned Chief Marketing Officer, asking it to explain concepts, critique ideas, and expose industry best practices. That gives the human a faster path to fluency, freeing them to focus on creativity and "what's next" rather than just learning the current playbook.​

Economics, Vendor Lock‑In, And The Coming Cost Shift

Given Strategic Blue's core business in cloud financial operations, it is natural that customers raise concerns about AI cost models and vendor lock‑in.​

Two key themes stand out:

  • Experimentation is being subsidized – for now: Many AI agent access models today are priced as flat per‑user subscriptions, effectively "all you can eat". James expects this to shift toward usage‑based pricing as richer, more intensive use cases become common, which will expose organizations that have not learned to use AI efficiently.​
  • Cloud‑native usage can spike costs quickly: When companies connect directly to services like Amazon Bedrock on a per‑call basis, casual experimentation appears cheap – until an application triggers a huge volume of calls, at which point costs can balloon. The time to understand unit economics and controls is during the learning phase, not after agents are embedded in critical workflows.​

From their vantage point, leaders should treat AI financial operations as a first‑class concern alongside model choice and architecture.​
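The per-call cost dynamic is easy to make concrete with back-of-the-envelope arithmetic. The prices and volumes below are illustrative placeholders, not actual Amazon Bedrock rates; the point is the multiplier, not the numbers.

```python
# Back-of-the-envelope unit economics for per-call model pricing.
# All figures are illustrative placeholders, NOT real Bedrock prices.

def monthly_model_cost(calls_per_day: float,
                       tokens_per_call: float,
                       usd_per_1k_tokens: float,
                       days: int = 30) -> float:
    # cost = calls * tokens * (price per 1k tokens) over the month
    return calls_per_day * tokens_per_call / 1000 * usd_per_1k_tokens * days

# A casual prototype at 200 calls/day looks cheap...
print(monthly_model_cost(200, 2_000, 0.01))

# ...until an agent loop drives 50,000 calls/day: a 250x cost jump.
print(monthly_model_cost(50_000, 2_000, 0.01))
```

Running the unit-economics sums during the learning phase, as the article suggests, is what makes the later spike a forecast rather than a surprise.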

Guardrails, Subject‑Matter Experts, And AI Hallucinations

Strategic Blue's team has seen firsthand that generic LLMs can answer some cloud‑cost questions convincingly and correctly while hallucinating others in subtle, dangerous ways. That experience reinforces several principles:​

  • Do not accept AI outputs in domains where you lack the expertise to judge correctness.​
  • Keep subject‑matter experts in the loop to review, constrain, and continuously refine what agents do.​
  • Design systems where agents check each other's work, especially in high‑risk areas like finance, security, or compliance.​

The goal is not to trust agents blindly, but to deliberately engineer systems where multiple flawed components, arranged thoughtfully, yield more reliable outcomes.​

A Practical Reliability Pattern: Layering Agents

James offers a simple but powerful pattern for improving reliability: use multiple agents with overlapping checks.​

For example:

  • Suppose a coding agent introduces errors 10% of the time.
  • A testing agent, also imperfect, misses issues 10% of the time.
  • If you run both, and design them to look for each other's failures, you reduce the probability of an undetected error to around 1%.​

Add another validation layer and you can drive that residual risk significantly lower again, at the cost of more compute and energy.​
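The arithmetic behind that example, assuming the layers fail independently, is just multiplied miss rates:

```python
# Layered-checks arithmetic: an undetected error requires the producer
# to err AND every checker to miss it. Assumes independent failures.

def undetected_error_rate(p_error: float, miss_rates: list[float]) -> float:
    rate = p_error
    for p_miss in miss_rates:
        rate *= p_miss
    return round(rate, 6)

# Coding agent errs 10% of the time; testing agent misses 10% of issues.
print(undetected_error_rate(0.10, [0.10]))        # ~1% slips through

# Add a third validation layer that also misses 10% of issues.
print(undetected_error_rate(0.10, [0.10, 0.10]))  # ~0.1% slips through
```

The independence assumption is the catch: layers that share blind spots (for example, the same base model and the same prompt style) will catch less than the multiplication suggests, which is one argument for genuinely diverse agents.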

This architecture‑first mindset – accepting flaws, then designing around them – is central to how Strategic Blue thinks about "orchestration" rather than chasing a mythical perfect model.​

Towards A Human–AI Joint Org Structure

As agents became more capable, Strategic Blue realized they were effectively duplicating their ISO‑backed business management system for AI: policies, procedures, and work instructions for humans on one side and separate rules and context for agents on the other.​

The direction they are now heading:

  • A unified organizational structure where roles for humans and AI agents sit side by side.​
  • A single, coherent set of policies, procedures, OKRs, and multi‑year milestones that are readable by both people and AI systems.​
  • Governance that ensures both humans and agents "sing from the same hymn sheet" in terms of mission alignment and behavior.​

In that world, almost every employee becomes, by design, a manager of agents as well as a practitioner in their own field.​
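One hedged way to picture a policy set readable by both people and AI systems is a role definition in a machine-readable format. The schema and identifiers below are hypothetical, not Strategic Blue's actual business management system:

```yaml
# Hypothetical role definition, readable by humans and parseable by agents.
role: cost-reporting-analyst
holder:
  type: human            # the same schema could declare type: agent
  manages:
    - type: agent
      role: report-drafting-agent
policies:
  - id: POL-012          # one policy set governs humans and agents alike
    summary: Customer data never leaves approved systems.
okrs:
  - objective: Cut report turnaround time
    key_result: Draft monthly reports within 1 business day
```

A single source of truth like this avoids the duplication the team ran into, where human procedures and agent rules drifted apart in parallel documents.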

Rethinking Performance And Work Itself

James expects almost every job description at Strategic Blue to be rewritten to assume that each person manages agents and/or people, supported by AI sidekicks that can handle a lot of reporting and coordination.​

Implications include:

  • KPIs and incentives shifting from "how much did you personally produce?" to "how effectively do you orchestrate human and AI resources to deliver outcomes?"​
  • Automated reporting where agents prepare balanced scorecards, OKR updates, and other management artifacts on behalf of their human counterparts, freeing people to spend more time on judgment and problem‑solving.​
  • Work–life balance gains, as agents happily operate 24/7 within guardrails, letting humans focus on higher‑value, more fulfilling work.​

The business outcomes shareholders care about – growth, profitability, customer satisfaction – remain the same, but expectations of what a stable team can deliver rise as AI multiplies their capacity.​

Defensive Moats Are Shifting

One of the more sobering points James makes is that traditional software moats are eroding. With well‑documented APIs and powerful code‑generating agents, a motivated competitor can clone core functionality far faster than before.​

In that environment, defensibility will shift towards:

  • Proprietary and high‑quality data
  • Long‑term contracts and trust relationships
  • Reliability of service and delivery
  • The ability to innovate faster than others, not just maintain the current product set​

Strategic Blue's own board is pushing hard for durable innovation capacity rather than incrementalism, a pressure James believes most leadership teams will face in some form.​

Balancing Orchestration And Delivery

One of Andy's biggest challenges is balancing time spent "working on the AI system" versus "working on the core business". Building robust agent orchestration, supervision, and testing is a significant effort in its own right and can feel disconnected from solving customers' immediate problems.​

However, as the orchestration layer matures, the payoff becomes visible:

  • Faster, more reliable execution on customer needs
  • Quicker prototyping cycles and richer experimentation
  • Clearer linkage between customer problems and internal capabilities​

That inflection point – when the investment in orchestration begins to compound visibly – is also when adoption accelerates across the organization.​

Scenario Planning, Not Prediction

Both James and Andy are cautious about making detailed three‑to‑five‑year predictions. The pace of change is too high, and existential threats are emerging from many directions.​

Instead, they advocate:

  • Scenario planning for both upside and downside.
  • Investing heavily in flexibility – cloud infrastructure, agent platforms, and general‑purpose capabilities that can be reconfigured as the business evolves.​
  • Hedging bets by focusing on skills and systems that will be useful under many possible futures, even if the exact outcomes are unpredictable.​

In James's words, leaders "have no idea" exactly what their business will look like in five years – and that is precisely why flexibility and learning matter so much.​

Practical Advice For Leaders Getting Started

Across the conversation, a few recurring pieces of advice stand out for executives and product leaders:

  • Just start, but start small: Pick simple, low‑risk problems and begin experimenting. The hardest part is taking the first step; the path will change as you learn.​
  • Think orchestration before moonshots: Design how agents, people, policies, and testing will work together before going after your most critical use cases.​
  • Invest in learning culture and psychological safety: Encourage experimentation, normalize mistakes, and use agents to create safer forums for critique and idea‑sharing.​
  • Keep subject‑matter experts in the loop: They are essential for defining "good", catching dangerous hallucinations, and anchoring AI systems in business reality.​
  • Watch the economics early: Use today's subsidized experimentation window to understand cost drivers and unit economics before usage‑based pricing and heavier workloads kick in.​

James also suggests a practical way to expose yourself to what others are doing: attend cloud vendor events (such as those run by AWS), not for the hype, but for the conversations with peers who are already in the trenches with AI.

Conclusion

The insights from Strategic Blue's journey illustrate that successful AI adoption goes beyond technology – it requires executive vision, thoughtful orchestration, cultural enablement, and a commitment to learning. By treating agents as team members rather than tools, designing multi-agent systems with overlapping checks, and building a unified human–AI organizational structure, Strategic Blue is pioneering an approach that other leaders can learn from and adapt to their own contexts.

© 2025 Humbot. All rights reserved.