Kyndryl is taking a direct shot at one of enterprise AI’s biggest roadblocks: trust.
The IT infrastructure giant (NYSE: KD) today introduced a new “policy as code” capability designed to govern how agentic AI workflows operate inside complex, highly regulated environments. The goal is straightforward but ambitious—translate corporate rules, regulatory requirements, and operational controls into machine-readable policies that automatically constrain how AI agents act.
In an era where AI agents are increasingly tasked with autonomous decision-making, that enforcement layer could determine whether enterprises scale AI—or stall.
The Governance Gap in Agentic AI
Agentic AI systems differ from traditional AI models in one critical way: they don’t just generate insights—they take action. They trigger workflows, interact with systems, execute transactions, and in some cases, make operational decisions with limited human intervention.
That autonomy is powerful. It’s also risky.
According to Kyndryl, 31% of enterprise customers cite regulatory or compliance concerns as a primary barrier to scaling recent technology investments. For financial services firms, public sector agencies, healthcare systems, and supply chain operators, the idea of semi-autonomous agents operating without strict guardrails is a nonstarter.
Kyndryl’s policy-as-code approach attempts to close that gap.
Rather than layering oversight on top of AI agents after deployment, the company embeds business logic and regulatory constraints directly into how those agents execute. Policies become code. Code becomes enforcement.
How It Works
At the core of the offering is a logical enforcement layer within the Kyndryl Agentic AI Framework. The system translates organizational policies into deterministic execution rules—essentially defining what AI agents can and cannot do before they act.
Key features include:
Deterministic execution: Agents are restricted to actions explicitly permitted by predefined policies, reducing operational and compliance risk.
Hallucination impact controls: Guardrails are designed to block unauthorized or unpredictable agent actions at each step of a workflow, limiting the operational consequences of AI hallucinations.
Audit-by-design transparency: Every action and decision is logged and explainable, supporting compliance audits and internal oversight.
Human supervision dashboards: Enterprises can observe and supervise agent activities through structured monitoring interfaces aligned to testable policies.
In practical terms, this means an AI agent processing financial transactions, approving supply chain changes, or handling public service workflows would be constrained within explicit regulatory and operational boundaries.
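Kyndryl has not published implementation details, but the general pattern of a deterministic, audit-by-design enforcement gate can be sketched in a few lines. Everything below is illustrative: the policy rules, action names, and the PolicyGate class are assumptions, not Kyndryl's actual framework. The key ideas from the feature list above are an explicit allow-list (deterministic execution), refusal of anything outside it (hallucination impact control), and a log entry for every decision (audit-by-design).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical allow-list policy: the agent may only take actions that are
# explicitly permitted, each capped at a maximum transaction amount.
POLICY = {
    "approve_invoice": {"max_amount": 10_000},
    "reorder_stock":   {"max_amount": 50_000},
}

@dataclass
class PolicyGate:
    """Deterministic pre-execution check with an audit trail."""
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str, amount: float) -> bool:
        rule = POLICY.get(action)
        # Default-deny: an action absent from the policy is never allowed,
        # even if a hallucinating agent invents and requests it.
        allowed = rule is not None and amount <= rule["max_amount"]
        # Audit-by-design: every decision is recorded, permitted or not.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "amount": amount,
            "allowed": allowed,
        })
        return allowed

gate = PolicyGate()
print(gate.authorize("approve_invoice", 2_500))  # within policy: permitted
print(gate.authorize("wire_transfer", 100))      # not in allow-list: blocked
```

The gate sits in front of execution, so the check happens before the agent acts rather than as after-the-fact oversight, which is the distinction the article draws.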
Built on Operational Scale
Kyndryl is positioning this capability as more than a theoretical governance model. The company manages nearly 190 million automations each month across mission-critical enterprise systems. Those operational foundations, it argues, provide real-world insight into reliability, workflow orchestration, and risk mitigation.
That experience could give Kyndryl an advantage over AI-native startups that focus primarily on model performance rather than production governance.
Ismail Amla, Senior Vice President at Kyndryl Consult, framed the offering as a structural upgrade to traditional AI controls. “By embedding and codifying business and regulatory requirements directly into AI agent operations, we can help customers execute AI workflows that are governed, transparent, explainable, and aligned with their organizational requirements,” he said.
Why This Matters Now
Enterprises are moving from AI experimentation to AI operationalization. Generative AI pilots are evolving into workflow automation initiatives. CIOs and CTOs are under pressure to demonstrate ROI—but without introducing unacceptable risk.
Regulators, meanwhile, are sharpening their focus on algorithmic accountability. Financial authorities, healthcare regulators, and government oversight bodies are increasingly demanding auditability and explainability in AI systems.
Policy-as-code governance could become a foundational requirement for scaling agentic AI in those environments. The concept mirrors broader infrastructure trends, such as “infrastructure as code” and “security as code,” where automation enforces predefined standards at scale.
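The parallel with infrastructure-as-code can be made concrete: rules are written as declarative data, and an engine compiles them into deterministic checks. The resource names and rule schema below are invented for illustration; they are not from any specific vendor's policy language.

```python
# A declarative policy document, analogous to an infrastructure-as-code
# manifest: rules are data, and an engine turns them into enforcement.
policy_doc = [
    {"resource": "customer_record", "action": "read",   "effect": "allow"},
    {"resource": "customer_record", "action": "delete", "effect": "deny"},
]

def compile_policy(doc):
    """Compile declarative rules into a deterministic lookup function."""
    table = {(r["resource"], r["action"]): r["effect"] for r in doc}
    # Default-deny: any (resource, action) pair not named in the policy
    # is refused, so the agent's reachable behavior is fully enumerable.
    return lambda resource, action: table.get((resource, action)) == "allow"

is_allowed = compile_policy(policy_doc)
print(is_allowed("customer_record", "read"))    # explicitly allowed
print(is_allowed("customer_record", "delete"))  # explicitly denied
print(is_allowed("payroll", "update"))          # unlisted: default-deny
```

Because the policy is plain data, it can be version-controlled, diffed, reviewed, and audited the same way Terraform manifests or firewall rules are today, which is precisely the operational habit the infrastructure-as-code analogy invokes.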
In other words, if AI agents are going to act like employees, they’ll need something resembling a rulebook—and an enforcement mechanism.
Competitive Landscape
Kyndryl’s move comes as hyperscalers and enterprise software vendors race to define AI governance frameworks. Microsoft, AWS, Google Cloud, and IBM are all introducing responsible AI toolkits, compliance dashboards, and guardrail services.
What differentiates Kyndryl’s approach is its emphasis on workflow-level enforcement in operational environments rather than model-level content moderation alone. Instead of focusing solely on preventing harmful outputs, policy-as-code governs the downstream actions agents take across systems.
That’s a subtle but significant distinction—particularly for enterprises where operational errors, not just text outputs, carry real financial and regulatory consequences.
The Bottom Line
Agentic AI promises faster decision-making, reduced costs, and automation at unprecedented scale. But without structured governance, it also introduces unpredictability in environments that can’t afford it.
Kyndryl’s policy-as-code capability doesn’t make AI agents smarter. It makes them accountable.
For enterprises operating in regulated, mission-critical domains, that may be the prerequisite for turning AI from pilot project into production backbone.
