Q: There is a lot of conversation around agentic AI right now. What is changing for product teams?
The pace of software development has fundamentally changed. AI coding assistants have made it easier to build and ship features, so the constraint is no longer development capacity; it is understanding impact. Teams can release features quickly, but analysing how users respond, diagnosing what is working, and deciding what to do next often still rely on manual workflows, dashboards and periodic reviews. Agentic AI for analytics addresses this gap, helping teams continuously monitor product behaviour, surface meaningful changes, investigate root causes, and support decision-making in near real time.
Q: What does “autonomous analytics” mean in practice?
Analytics has traditionally required a sequence of manual steps: someone identifies a question, pulls the data, builds a dashboard, interprets the results, and then recommends action. This process can be slow and uneven, especially in large organisations.
Autonomous analytics shifts some of that diagnostic and monitoring work to AI systems. Instead of waiting for a weekly metric review, an AI system can detect meaningful changes within hours, investigate what is driving those changes across funnels or segments, and surface a summary of likely causes.
In some cases, these systems can recommend next steps or trigger workflows. The important distinction here is that they are operating in the background continuously rather than being activated only when someone logs in to run a query.
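The continuous background monitoring described above can be illustrated with a minimal sketch (an assumption about one common approach, not a description of any specific product): a trailing-window z-score flags metric values that deviate sharply from recent history, which is the kind of change an agent would then investigate.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=7, threshold=3.0):
    """Flag points that deviate sharply from a trailing window.

    series: ordered metric values (e.g. daily signups).
    Returns indices whose z-score against the trailing window
    exceeds the threshold.
    """
    anomalies = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma == 0:
            continue  # flat history: no meaningful deviation scale
        if abs((series[i] - mu) / sigma) > threshold:
            anomalies.append(i)
    return anomalies

# Stable signups, then a sudden drop an agent should surface:
daily_signups = [100, 102, 98, 101, 99, 103, 100, 100, 55]
print(detect_anomalies(daily_signups))  # → [8]
```

Real systems account for seasonality and trend, but the principle is the same: the check runs continuously, so the change surfaces within hours rather than at the next scheduled review.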
Q: How are autonomous analytics different from AI tools that sit on top of a data warehouse and generate answers?
This comes down to context. Tools that query a data warehouse can generate summaries, but they often lack an understanding of how behavioural metrics relate to each other within a product environment. Behavioural analytics is less about retrieving numbers and more about understanding journeys, experiments, segments, and how user actions connect over time.
When AI operates within a system designed specifically for behavioural analysis, it can reason across that context more effectively. That enables it to move beyond describing what changed toward analysing why it changed and what that might mean for product strategy. The quality of insight depends heavily on that underlying context.
Q: AI agents can be described as a form of ‘middle management’. Can you explain what this means in practice?
While there is a tendency to frame AI in terms of job replacement, in practice we are seeing more of a shift in coordination and oversight tasks. ‘Middle management’ refers to monitoring performance, identifying issues early, prioritising work and ensuring the right teams focus on the right problems. AI agents are particularly well suited to continuous monitoring and pattern detection at scale. In product organisations, this might mean tracking metrics 24/7, flagging anomalies, highlighting friction in user journeys, or surfacing underperforming experiments. Rather than replacing strategic roles, these systems are taking on diagnostic and coordination work that can otherwise consume significant time. The outcome is that human attention is freed up for higher-value decisions.
Q: As AI improves personalisation experiences at scale, is there a risk of it going too far?
AI systems make it technically simple to generate countless variations of messaging, design or product experiences. But increased variation does not automatically produce better outcomes. It can fragment the user experience and make it harder to understand what is driving impact.
The more sustainable approach is disciplined experimentation. AI can accelerate the design, launch and analysis of experiments, but there still needs to be a coherent framework for testing and decision-making. Human judgement remains important in defining guardrails, interpreting results, and protecting brand consistency. The goal is to learn faster from a controlled set of changes.
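The disciplined-experimentation framework above still rests on standard statistics. As an illustrative sketch (the function name and numbers are hypothetical, not from the interview), a two-proportion z-test is one common way to decide whether a variant's conversion lift is real or noise:

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a controlled experiment.

    conv_a/n_a: conversions and sample size for control,
    conv_b/n_b: the same for the variant.
    Returns (z, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value for the observed |z|.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10% vs 12% conversion on 2,000 users each.
z, p = ab_test_z(200, 2000, 240, 2000)
print(round(z, 2), round(p, 4))
```

AI can run this analysis continuously as data arrives, but the guardrails the answer mentions, such as significance thresholds and a cap on concurrent variants, remain human decisions.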
Q: What should CIOs and CFOs be thinking about as they evaluate AI investments, particularly in agentic systems?
AI agents are continuous operating systems rather than one-time deployments, which has implications for cost models, governance, and measurement. Leaders need clarity on the specific operational outcomes they expect AI to deliver. That includes whether the system reduces bottlenecks, shortens the cycle between product release and learning, and improves metrics such as conversion, retention, or efficiency in measurable ways.
Increasingly, AI deployment will need to be tied directly to defined business outcomes rather than open-ended experimentation. As usage-based pricing becomes more common, disciplined adoption becomes essential. Investment decisions should focus on where AI materially improves decision quality or speed in ways that justify ongoing spending.
Q: Do you see a growing divide between organisations that embed AI agents into their workflows and those that do not?
Organisations that use AI agents to continuously monitor performance, investigate anomalies, and support experimentation can shorten feedback loops significantly. Instead of identifying issues weeks later, they may surface them within hours. Quarterly tests become continuous experiments, and that acceleration compounds into faster iteration, which in turn drives stronger product-market fit and competitive positioning. The growing divide isn't about access to data; most companies already have immense amounts of it. The differentiator is how quickly they can translate that data into informed action.
Q: What skills matter most in the era of agentic AI?
The role of humans is shifting from executing analysis to shaping systems and making judgement calls based on AI-generated insights. This means organisations will need people who can define the right questions, design effective workflows, interpret complex outputs, and establish governance frameworks.
The companies that navigate this shift successfully will combine autonomous capabilities with strong oversight and strategic clarity.
