From Agents to an AI-Native Platform

How I led my team in evolving our $1.3B platform: augmenting our out-of-the-box AI capabilities and providing the trusted foundation for customers to customize and extend the platform with their own agentic solutions.

"In times that feel like a relentless tempest, my job as a leader hasn't been to find calmer seas but to help my team build better ships."

The Starting Dilemma

In 2023, having built a $1.3B ARR platform used by 90% of the Fortune 500, we faced a strategic fork in the road with GenAI.

Choice A: Reinvent Core Value
Immediate incremental revenue by adding AI to enhance existing capabilities.

Choice B: Platform Extensibility
Empower customers to customize and extend the platform with their own agentic solutions.

Our Approach
We needed to understand the realities of production AI first. We built out-of-the-box solutions ourselves, mastering the patterns and practices we would later open up for customers to customize and extend.

Early Product Bets that Defined the Paradigm:

  • Collaboration: Learned customers want AI as teammates, not replacements, and that they demand determinism for serious use.

  • Transparency: Established absolute transparency—the platform must explain every auto-generated snippet.

  • Trust: Realized "Trust is a feature." Built deep audit logs into our SRE agent. This foundational principle underpins our future platform.

Shifting Culture & Craft

Transitioning from a "know-it-all" to a "learn-it-all" culture required leadership trade-offs and new operational cadences.

Redefining Cadence & Success

Shifted from semesterly to quarterly to weekly planning via impact-effort matrices. Freed PMs from daily revenue obsession to focus strictly on learning, retention, and ROI.

Customer Centricity Mechanisms

Mandated AI-summarized friction logs and started every offsite and all-hands with a customer narrative.

Ruthless Deprecation

Deprecated commodity legacy services to free up engineering capacity, shifting "productive capital" away from maintaining the past to building frontier AI capabilities.

High Agency Squads

Broke down silos by empowering small, cross-functional squads. Like the agents they built, they had a clear mission and guardrails, but full agency on how to execute.

The Platform Expansion

Customers realized the ROI of our out-of-the-box agents and wanted to extend them. We evolved our architecture to allow deep customization, becoming the definitive "app server for AI."

Agent Loop & AI Gateway

Agentic process automation produces immediate ROI. Built the Agent Loop into Logic Apps to enable 120K+ enterprises to run agents at production scale as part of their processes.
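The core pattern behind an agent running inside a process can be sketched in a few lines: the model proposes tool calls, the runtime executes them deterministically, and the loop ends with a final answer or a hard step budget. This is a minimal illustration, not the Logic Apps implementation; `call_model` is a stand-in for a real LLM endpoint and `lookup_order` is a hypothetical tool.

```python
from typing import Callable

# Hypothetical tool registry; real tools would be connectors or APIs.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda order_id: f"Order {order_id}: shipped",
}

def call_model(history: list[dict]) -> dict:
    # Placeholder for an LLM call: request a tool once, then finish.
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "lookup_order", "input": "42"}
    return {"type": "final", "content": history[-1]["content"]}

def agent_loop(user_message: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        step = call_model(history)
        if step["type"] == "final":
            return step["content"]
        # Tool execution is deterministic and auditable.
        result = TOOLS[step["tool"]](step["input"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent exceeded step budget")  # guardrail, not silent failure

print(agent_loop("Where is order 42?"))
```

The step budget and explicit tool registry are the "guardrails with agency" idea in miniature: the model chooses what to do, but only within an enumerated, bounded action space.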

AI governance is the unlock for agentic adoption at scale. First to market with AI Gateway and Foundry Control Plane enabling Agent Identity, Visibility, and Control.
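Identity, visibility, and control at a gateway reduce to three mechanics: every call names an agent, every call is checked against an allowlist, and every call leaves an audit record whether it succeeds or not. A minimal sketch, with illustrative names only (not the AI Gateway's actual API):

```python
import datetime

AUDIT_LOG: list[dict] = []  # visibility: every call is recorded

# Control: each agent identity maps to its permitted actions.
ALLOWED_AGENTS = {"sre-agent": {"restart_service", "read_metrics"}}

def gateway_call(agent_id: str, action: str) -> bool:
    allowed = action in ALLOWED_AGENTS.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} may not perform {action}")
    return True

gateway_call("sre-agent", "read_metrics")  # permitted, and audited either way
print(len(AUDIT_LOG))
```

Note the ordering: the audit entry is written before the permission check raises, so denied calls are visible too, which is what makes the log useful for governance.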

Leadership Tradeoff

We risked our 6-year Gartner lead and $600M API Management business, pivoting its 80K-customer infrastructure to be first to market with the AI Gateway and Foundry Control Plane.

Knowledge

Agents require durable memory, semantic caching, and fast vector retrieval to function effectively in real-world scenarios.

Drawing from capabilities like Knowledge IQ in Microsoft Foundry, we focused on intelligently indexing and measuring retrieval quality so agents remain securely grounded in enterprise context.

Leadership Tradeoff

Deprecating commodity services, we boldly redirected our $400M ARR Azure Managed Redis business to focus entirely on agentic caching/vector needs, partnering with Redis Inc. to move faster.

MCP & Connector Catalog

To tame tool sprawl and reduce hallucinations, we transformed 1400+ connectors into a private catalog of MCP servers with dynamic composition and secure access controls.

Ecosystem Contribution: We didn't just use MCP; we engaged the community and drove the addition of Auth protocols to the standard.
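Dynamic composition with access controls boils down to a filter: from a large private catalog, expose to each agent only the tools its role permits and its current task needs, so the model's tool surface stays small. A minimal sketch with hypothetical connector names (not the actual catalog schema):

```python
# Hypothetical private catalog: each entry carries capability tags and
# the roles permitted to invoke it.
CATALOG = [
    {"name": "sap_post_invoice",  "tags": {"finance"},   "roles": {"finance"}},
    {"name": "jira_create_issue", "tags": {"devops"},    "roles": {"engineering"}},
    {"name": "search_docs",       "tags": {"knowledge"}, "roles": {"finance", "engineering"}},
]

def compose_toolset(role: str, needed_tags: set[str]) -> list[str]:
    # Secure access control (role) plus dynamic composition (task tags):
    # the agent only ever sees this filtered list, not the full catalog.
    return [t["name"] for t in CATALOG
            if role in t["roles"] and t["tags"] & needed_tags]

print(compose_toolset("finance", {"finance", "knowledge"}))
```

Shrinking the visible toolset does double duty: it enforces least privilege, and it reduces hallucinated or misrouted tool calls simply because there are fewer plausible-but-wrong options in the prompt.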

The Partner Ecosystem

Evolving the differentiated ecosystem program I created for Azure, we launched the Foundry-native partner services program.

We deeply integrated E2E vertical partners (Kore.ai), MLOps leaders (Weights & Biases, Arize, Mistral) and core infrastructure (Vast Data) directly into the platform fabric.

Catching Tech Currents

Coding Sandboxes & Flywheels

Compute sandboxes for agent execution created a counter-intuitively large flywheel, driving aggregate platform usage via self-optimizing RALPH loops.
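The self-optimizing loop pattern behind sandboxed agent execution can be sketched generically: run candidate code in an isolated process, and on failure feed the error back to a repair step, retrying up to a budget. This is an illustration of the loop shape only; `fix_code` stands in for an LLM repair call, and a real sandbox would add far stronger isolation than a bare subprocess.

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> tuple[bool, str]:
    # Execute candidate code in a separate interpreter process.
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=timeout)
    return proc.returncode == 0, proc.stderr

def fix_code(code: str, error: str) -> str:
    # Placeholder for an LLM repair call; here we just fix a known typo.
    return code.replace("pritn", "print")

def optimize_loop(code: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        ok, err = run_sandboxed(code)
        if ok:
            return code          # working candidate: exit the loop
        code = fix_code(code, err)  # feed the error back and retry
    raise RuntimeError("could not repair candidate code")

print(optimize_loop("pritn('hello')"))
```

Each iteration of the loop is itself platform usage (model calls, sandbox compute, storage), which is the flywheel: better agents run more loops, and more loops produce better agents.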

Self-Optimizing Architecture

Resolving Routing Tensions

We resolved the tension between probabilistic model choice and deterministic policy by embedding tools within Gateway constraints for enterprise safety.
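One way to picture that resolution: let the model express a probabilistic preference over models or tools, but pass that preference through a deterministic policy filter before anything executes. A minimal sketch with illustrative agent and model names (not actual Gateway configuration):

```python
# Deterministic policy: what each agent is permitted to use.
POLICY = {"support-agent": {"models": {"small-fast", "large-grounded"}}}

def ranked_preference(task: str) -> list[str]:
    # Placeholder for the model's probabilistic ranking of candidates.
    return ["frontier-preview", "large-grounded", "small-fast"]

def route(agent_id: str, task: str) -> str:
    allowed = POLICY[agent_id]["models"]
    for model in ranked_preference(task):  # probabilistic preference...
        if model in allowed:               # ...deterministically constrained
            return model
    raise PermissionError("no permitted model for this agent")

print(route("support-agent", "summarize this ticket"))
```

The model still gets a vote (its top allowed choice wins), but the policy has a veto, which is the property enterprises actually need for safety sign-off.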

Deterministic Safety Loops

Agentic UX Integration

Beyond chat: Agents returning render instructions via MCP to inject UI directly into conversation flows. Adopting OpenAI/MCP App standards.
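The "beyond chat" idea is that a tool result can carry a render instruction alongside its plain-text payload, letting the host inject a UI component into the conversation instead of a wall of text. This sketch shows the shape of such a result; the field names and the `flight-card` component are assumptions for illustration, not the MCP App wire format.

```python
import json

def get_flight_status(flight: str) -> dict:
    data = {"flight": flight, "status": "on time", "gate": "B12"}
    return {
        # Plain-text fallback for hosts that only render text.
        "content": json.dumps(data),
        # Render instruction: which host-registered component to show,
        # and the props to hydrate it with.
        "render": {"component": "flight-card", "props": data},
    }

result = get_flight_status("UA 212")
print(result["render"]["component"])
```

Keeping the text fallback alongside the render instruction is the composability trick: the same tool result degrades gracefully in a plain chat client and lights up as rich UI in a host that knows the component.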

Composable Render Paradigms

The New Developer Experience

Devs no longer start from scratch; they extend out-of-the-box agents with custom intent. We ship knowledge and skills that customers can seamlessly adapt to their workflows.

Intent-Based Development

The Strategic Thesis

"Making it easy for users to define and run evals is the most important capability an AI-native platform can provide."

Transitioning evals from an internal mandate to a first-class customer product allows users to establish their own guardrails and define intent explicitly.

Defining Intent

By authoring evals, customers explicitly define what they want their extended agents to do and establish custom guardrails.
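An eval suite as a customer-authored intent definition can be as simple as declarative input/criteria pairs that the platform runs against the extended agent. A minimal sketch, with a stubbed agent standing in for a real one; the suite schema here is illustrative, not a product format:

```python
# Customer-authored intent: what the agent must and must not say.
EVAL_SUITE = [
    {
        "input": "cancel my subscription",
        "must_contain": ["confirm"],          # guardrail: always confirm first
        "must_not_contain": ["refund issued"],  # guardrail: never promise refunds
    },
]

def run_evals(agent, suite) -> list[bool]:
    results = []
    for case in suite:
        out = agent(case["input"]).lower()
        passed = (all(s in out for s in case["must_contain"])
                  and not any(s in out for s in case["must_not_contain"]))
        results.append(passed)
    return results

stub_agent = lambda msg: "Please confirm you want to cancel."  # placeholder agent
print(run_evals(stub_agent, EVAL_SUITE))
```

Because the suite is data rather than code, it doubles as documentation of intent: reading the eval cases tells you exactly what the customer means the agent to do.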

LLM-Based Evaluation

We leverage LLMs to critique LLMs. Entire companies are built on making evals easy. We bake this into DevOps and production.
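LLM-as-judge reduces to: format the answer and a rubric into a grading prompt, ask a second model for a score, and threshold it. This sketch stubs the judge so it runs standalone; `judge_model`, the prompt template, and the 1-5 scale are assumptions for illustration, not our production harness.

```python
JUDGE_PROMPT = """Score the ANSWER against the RUBRIC from 1 to 5.
RUBRIC: {rubric}
ANSWER: {answer}
Reply with only the number."""

def judge_model(prompt: str) -> str:
    # Placeholder for a real LLM call: rewards answers that cite a source.
    return "4" if "source:" in prompt.lower() else "2"

def grade(answer: str, rubric: str, threshold: int = 3) -> bool:
    prompt = JUDGE_PROMPT.format(rubric=rubric, answer=answer)
    score = int(judge_model(prompt))
    return score >= threshold  # pass/fail gate for DevOps and production

print(grade("Revenue grew 12% (source: Q3 filing).", "Answer must cite a source"))
```

Wiring `grade` into CI and into sampled production traffic is what turns evals from a launch checklist into a continuous quality signal.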