Building a Product and Enterprise GTM in the World of AI Agents (Lessons from Iris.ai on the HumBot Podcast)

Victor Botev & Steven Kramer
Date: December 11th, 2025
Watch time: 55 mins
Tags: AI Agents, Enterprise AI, Product Strategy, Go-to-Market, Data Readiness, AI Trust, Knowledge Management, RAG

The AI agent wave has moved fast. Last year, most enterprise conversations were dominated by "Let's run a pilot." This year, the tone has changed: "Can we actually deploy this… and trust it?"

In a HumBot Podcast episode featuring Victor (CTO & founder) and Steven (CRO) from Iris.ai, the discussion gets refreshingly concrete. Instead of repeating the usual agent hype, they go deep on what actually makes agentic AI succeed in enterprise environments: data readiness, context quality, evaluation, and the trust layer that turns demos into durable adoption.

Below is a full write-up of the most important ideas—especially useful if you're building an agentic product, selling into enterprise, or trying to scale from a single use case to an "AI factory" across teams.

1) The "Agentic AI" Era Isn't New—The Timing Is

Iris.ai started in 2015—long before today's generative AI boom. Victor explains that their original obsession is basically the same problem everyone is trying to solve now:

The world produces a massive amount of knowledge, but only a small fraction gets used in decision-making.

In research and academia, the pain is obvious: papers, experiments, and findings pile up faster than people can read them. But the enterprise version is even more expensive. If teams can't access the right knowledge at the right moment, decisions become ad-hoc, innovation slows, and critical work relies on partial context.

The key difference today isn't that the problem changed. It's that the enabling technology finally matured enough for real adoption. In Victor's words, the generative AI "revolution" opened doors to solve the same underlying challenge in a more scalable way.

2) How Iris.ai Positions Itself Amid the Hype

Steven joins and makes a positioning point that's worth copying if you're in a crowded AI space:

They don't try to replace the data platform. They sit on top of it.

Think of it as a data-to-LLM foundation layer:

  • Connect data across systems
  • Unify structured + unstructured sources
  • Extract meaning from complex formats (PDFs, slides, tables, charts, graphs)
  • Normalize it into something LLMs can actually consume reliably
  • Provide a trust/evaluation layer so enterprises can adopt with confidence

This positioning is important because most enterprise customers already have data warehouses, databases, lakes, governance tools, ETL pipelines, etc. They don't want another "replace everything" vendor. They want something that makes their existing data usable for agents—fast.
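
To make that concrete, here is a minimal sketch in Python of what such a foundation layer's output might look like. All names are illustrative assumptions, not Iris.ai's actual schema: the idea is simply one normalized record type that structured rows and unstructured passages both map into before anything reaches an LLM.

```python
from dataclasses import dataclass, field

@dataclass
class UnifiedRecord:
    source_system: str            # e.g. "warehouse", "sharepoint", "crm"
    source_ref: str               # table/row id, document id + page, etc.
    modality: str                 # "table_row", "text_passage", "chart", ...
    content: str                  # canonical text rendering of the item
    metadata: dict = field(default_factory=dict)  # owner, date, ACLs, ...

def normalize_sql_row(table: str, row: dict) -> UnifiedRecord:
    """Render a structured row as compact, self-describing text."""
    content = "; ".join(f"{k} = {v}" for k, v in row.items())
    return UnifiedRecord("warehouse", table, "table_row", content)

def normalize_pdf_passage(doc_id: str, page: int, text: str) -> UnifiedRecord:
    """Wrap an unstructured passage in the same record type."""
    return UnifiedRecord("dms", f"{doc_id}#p{page}", "text_passage", text.strip())
```

Once everything downstream consumes one record shape, retrieval, prompting, and evaluation don't have to special-case every source system.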

3) The Pilot Failure Problem Isn't Mainly "LLMs Aren't Good"

Steven references a stat often cited in enterprise AI discussions: many POCs fail. The exact percentage varies by report, but the pattern is real: enterprises get stuck in pilot loops.

Their take is blunt:

  • The technology is often good enough to demo
  • The real blocker is what you feed into it
  • Without reliable data and context, outcomes don't scale

Video Highlight: Why Most AI Pilots Fail

This is where Iris.ai's "unification" focus becomes a GTM weapon. If you can shorten the time-to-value by making the underlying data usable, you don't just win a POC—you earn the right to talk about production.

4) "Context" Is Not Just Data — It's How Data Is Presented to the Model

Victor shares one of the most practical insights in the whole conversation: enterprises have heterogeneous data, and LLMs don't treat all context equally.

Example:

  • You retrieve one number from an SQL database
  • You retrieve five long chunks from PDFs
  • You dump everything into the prompt

What happens?

The model may overweight the long text and ignore the number—because the representation is disproportionate.

This sounds obvious, but it's exactly why so many "RAG wrappers" look good in controlled demos and fall apart in real environments. Iris.ai focuses on:

  • Normalization
  • Unified representations
  • Ensuring the model "sees" the important signal across diverse sources

If you want an agent to act with confidence, "context injection" needs to be engineered—not improvised.
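
As a rough illustration of what "engineered context injection" can mean in practice, the sketch below (simplified and assumed, not the Iris.ai implementation) gives every retrieved item the same labeled, size-bounded slot in the prompt, so a single SQL value isn't drowned out by a long PDF chunk:

```python
MAX_CHARS_PER_ITEM = 600  # arbitrary budget so no single source dominates

def render_context(items: list[dict]) -> str:
    """Give every retrieved item an equal, clearly labeled slot."""
    blocks = []
    for i, item in enumerate(items, 1):
        body = item["content"]
        if len(body) > MAX_CHARS_PER_ITEM:
            # Truncate (or, in a real system, summarize) long passages so
            # one verbose PDF chunk can't drown out a single SQL value.
            body = body[:MAX_CHARS_PER_ITEM] + " [truncated]"
        blocks.append(f"[Source {i} | {item['source']} | {item['modality']}]\n{body}")
    return "\n\n".join(blocks)

print(render_context([
    {"source": "finance_db", "modality": "table_row",
     "content": "q3_revenue_eur = 41200000"},
    {"source": "annual_report.pdf", "modality": "text_passage",
     "content": "Revenue growth in the third quarter was driven by..."},
]))
```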

5) The Real Differentiator: A Trust Layer Built on Evaluation

A major theme is that enterprise adoption is a function of trust.

Not brand trust. Not sales trust. Trust in reasoning + outputs. Trust in agentic actions.

Victor explains that the "trust layer" wasn't initially built as a product feature. It started as an internal necessity: if you're iterating on deep tech over years, you must know whether each change makes the system better or worse.

That mindset becomes a differentiator in the agentic era because most organizations still don't have strong answers to questions like:

  • Is the model answering based on retrieved context—or its parametric memory?
  • Did the prompt steer it into the right expertise mode?
  • Are we improving over time, or drifting backward?

They discuss building metrics like context grounding—measuring whether the output is actually derived from the provided enterprise sources rather than general model knowledge.

That's not just "nice to have." It's how you build confidence with regulated customers (finance, telco, manufacturing) where hallucination isn't "funny"—it's a risk event.
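
The episode doesn't spell out how a grounding metric is computed, but a crude lexical version is easy to sketch: score the fraction of answer sentences whose content words are covered by at least one retrieved chunk. Production systems would typically use an entailment or judge model instead; the heuristic below is only a stand-in.

```python
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_score(answer: str, context_chunks: list[str]) -> float:
    """Fraction of answer sentences whose content words are mostly covered
    by at least one retrieved chunk (a crude lexical proxy for grounding)."""
    if not context_chunks:
        return 0.0
    vocabs = [_tokens(c) for c in context_chunks]
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    if not sentences:
        return 0.0
    grounded = 0
    for sent in sentences:
        toks = _tokens(sent)
        if toks and max(len(toks & v) / len(toks) for v in vocabs) >= 0.7:
            grounded += 1  # the 0.7 threshold is an illustrative choice
    return grounded / len(sentences)
```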

6) Co-Creation: The Only Real Way to Scale in Enterprise

A big practical point: enterprise success isn't "we shipped you software."

It's product + expertise + process change, delivered together.

Steven breaks their co-creation into a few core stages:

1) Discovery & Assessment

Not just "what data do you have," but:

  • What business goals matter?
  • What outcomes define success per department?
  • Where is the highest ROI starting point?

He gives an example: a finance team spending 40% of their time on manual data crunching and insight compilation. The value is obvious, but only if you align it to business priorities and workflow realities.

2) Value Mapping

POCs die when they're too isolated.

Instead, build a foundation that can expand across teams—so one use case can become a reusable capability for other departments.

This is the "AI factory" idea, but grounded:

  • You centralize the data-to-LLM foundation
  • Teams build specific experiences on top
  • You don't rebuild the hard parts every time

3) Measurement & Metrics

Everyone can build a prototype agent.

Scaling requires repeatable quality measurement from both:

  • A business perspective (does it help?)
  • A technical perspective (is it consistent? grounded? improving?)

This is how you avoid the "demo trap."
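
One way to operationalize that, sketched below with assumed names (and reusing the grounding_score heuristic from earlier), is a release gate: run a fixed test set through the current and candidate pipelines and refuse to ship if average grounding drifts backward.

```python
def evaluate(pipeline, test_set) -> float:
    """Average grounding score of a pipeline over a fixed test set.
    `pipeline` is assumed to return (answer, retrieved_chunks)."""
    scores = [grounding_score(*pipeline(case["question"])) for case in test_set]
    return sum(scores) / len(scores)

def gate_release(baseline, candidate, test_set, tolerance: float = 0.02) -> bool:
    """Ship the candidate only if it doesn't meaningfully regress."""
    base, cand = evaluate(baseline, test_set), evaluate(candidate, test_set)
    print(f"baseline={base:.3f}  candidate={cand:.3f}")
    return cand >= base - tolerance
```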

7) The Hidden Work: Reimagining Processes Around Agents

Victor makes a point that's uncomfortable but true:

Most "AI agent adoption" today is still not pure. It's often "replace one step in a process."

But the bigger gains come when you redesign the process around AI, not around humans doing manual steps.

Video Highlight: The Timing of Transformation: When to Reimagine Workflows

Steven gives a simple example in data validation:

Old world:

  • Extract data
  • Apply schema rules
  • Humans troubleshoot repeatedly

Agentic world:

  • AI links relationships across data
  • Flags highest-likelihood issues
  • Humans focus on review and judgment, not grinding validation steps

That shift takes time. But it's where the compounding value comes from.
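
To make the contrast tangible, here is a toy sketch of the agentic shape Steven describes: rather than humans stepping through schema failures one by one, a scorer (stubbed here with trivial hand-written rules; a real system would learn relationships across the data) ranks records by issue likelihood, and humans review only the top of the queue.

```python
def issue_likelihood(record: dict) -> float:
    """Stand-in scorer. A real system would learn relationships across
    fields and datasets rather than apply hand-written rules."""
    score = 0.0
    if record.get("amount", 0) < 0:
        score += 0.6
    if not record.get("counterparty"):
        score += 0.4
    return min(score, 1.0)

def triage(records: list[dict], review_budget: int = 20) -> list[dict]:
    """Rank records by likely issues; humans judge only the top slice."""
    ranked = sorted(records, key=issue_likelihood, reverse=True)
    return ranked[:review_budget]
```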

8) Where Enterprise Resistance Comes From (and Why)

They call out the most common skeptical stakeholders:

Compliance & Governance

The first line of defense. They ask:

  • Where does data go?
  • What gets exposed?
  • How is it controlled and audited?

Data & Analytics Teams

Not always "anti-AI," but overloaded and realistic:

  • Data isn't clean
  • Infra isn't unified
  • Extraction is hard
  • Expectations are high

Video Highlight: Overcoming Stakeholder Skepticism

Steven's advice: you need internal champions across multiple functions. Otherwise, you can lose six months negotiating paperwork and alignment until the project dies.

Victor adds a sensitive point: domain experts can resist because of job insecurity, and in some cases can even sabotage experiments. His warning is practical:

You can't assume you'll replace experts quickly. Some enterprises remove experts too early, then later scramble to rehire.

A better model is:

  • Use AI to scale experts
  • Do more with the same people first
  • Only then re-evaluate roles responsibly

9) The Next Few Years of Knowledge Work

They end with two complementary predictions:

Victor's View

Knowledge work will shift to:

  • AI suggesting and presenting knowledge
  • Humans validating, critically thinking, and orchestrating outcomes
  • Humans focusing more on discovery and judgment than retrieval

Steven's View

We'll increasingly "pair" with multiple agents:

  • Different tools for different tasks
  • Humans going deeper ("double click / triple click") faster than before
  • Expertise becomes more extensible

Video Highlight: Future of Knowledge Work: Evolution, Extinction or a New Frontier

The future isn't "humans vs agents." It's humans with a toolkit of agents—and the winners will be the ones who can trust the toolkit.

Practical Takeaways for Builders and GTM Teams

If you're building an agentic product or selling into enterprise, here are the core lessons:

🎯 Stop treating "context" as a prompt problem. It's a data representation problem.

📊 Normalize heterogeneous enterprise data so the model can use it predictably.

🔄 Build an evaluation loop early—otherwise you can't scale reliably.

🔐 Trust is the real adoption barrier, and trust comes from measurable grounding and governance.

🤝 Co-create with customers: align goals, map reusable value, measure outcomes.

⚙️ Don't just insert agents into old workflows; redesign workflows around agentic capabilities.

🏆 Win internally with champions across compliance, data, and business stakeholders.

Closing Thought

Agentic AI in enterprise is entering its "adult phase." The demo era is fading, and the market is rewarding teams who can deliver certainty of outcomes.

The Iris.ai story is a good reminder that the real moat isn't flashy agents—it's the foundations underneath: data readiness, unified context, evaluation, governance, and a trust layer that turns experimentation into adoption.

If you're building in this space, that's the playbook.
