The Q1 2026 CNCF Technology Landscape Radar, released March 24 at KubeCon + CloudNativeCon Europe in Amsterdam, surveyed 400+ professional developers through a partnership with SlashData. Three tools landed in the "Adopt" position for application delivery: Helm, Backstage, and kro.
Some context on what "Adopt" means here. This isn't a CNCF endorsement or a maturity rating from the foundation itself. It's aggregated developer sentiment: enough practitioners rated these tools highly on maturity, usefulness, and likelihood to recommend that they cleared the threshold. Helm hit 94% four-to-five-star ratings on reliability and stability. Backstage and kro scored strongly on usefulness. The methodology matters: these are self-reported responses from developers already working with cloud native tooling, not a random sample.
The more interesting signal is what sits alongside the application delivery findings. According to the same report, 28% of organizations now have dedicated platform engineering teams, 41% run multi-team collaboration models for their internal developer platforms, and 35% report using a hybrid platform approach to integrate AI workloads, combining their existing developer platforms with specialized AI tooling.
Three tools, three different maturity profiles
Putting Helm, Backstage, and kro in the same "Adopt" bucket obscures significant differences in where each tool sits architecturally and operationally.
Helm is the oldest of the three and the least surprising inclusion. It is the de facto packaging standard for Kubernetes applications. The 94% reliability-and-stability rating reflects reality: if you're deploying anything to Kubernetes, you're almost certainly consuming Helm charts, even if you're rendering them through ArgoCD or Flux rather than running helm install directly. The "Adopt" rating here is more confirmation than news.
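That "consuming charts without running helm install" pattern typically looks like a GitOps resource that references a chart declaratively. A minimal sketch using Flux's HelmRelease API (the chart and repository names here are illustrative placeholders, not from the report):

```yaml
# A Flux HelmRelease: the artifact is still a Helm chart, but Flux
# renders and reconciles it -- nobody runs `helm install` by hand.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m          # how often Flux re-checks the release
  chart:
    spec:
      chart: podinfo
      version: "6.x"     # semver range; Flux tracks new patch releases
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:
    replicaCount: 2      # overrides the chart's default values
```

The same chart works unchanged whether a developer installs it directly or a controller reconciles it, which is a large part of why Helm's "Adopt" position is uncontroversial.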
Backstage, Spotify's open-source developer portal now under CNCF incubation, occupies a different niche entirely. It is a frontend framework for building internal developer portals, not a deployment tool. Its strengths are its service catalog, documentation aggregation, and plugin-based extensibility. But Backstage has a well-known operational cost: maintaining a Backstage instance requires dedicated engineering effort. You need to build and maintain plugins, keep the catalog populated, and manage the upgrade path across a fast-moving codebase. The "Adopt" rating tells you developers find it useful. It does not tell you the total cost of ownership is low.
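"Keeping the catalog populated" means every service carries a descriptor file that Backstage ingests. A minimal catalog-info.yaml, with illustrative names (the entity format is Backstage's documented descriptor schema):

```yaml
# catalog-info.yaml -- lives in the service's repo; Backstage ingests
# it to create the catalog entry. Names here are examples.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
  description: Handles payment processing
  annotations:
    # Points TechDocs at docs stored alongside the code
    backstage.io/techdocs-ref: dir:.
spec:
  type: service
  lifecycle: production
  owner: team-payments   # must resolve to a Group entity in the catalog
```

Each of these files is cheap to write once; the ongoing cost is keeping hundreds of them accurate as services, owners, and lifecycles change, which is where the dedicated-team requirement comes from.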
kro (Kube Resource Orchestrator) is the newest and most architecturally interesting of the three. It's a kubernetes-sigs project with backing from Google Cloud, AWS, and Azure. kro lets you define a ResourceGraphDefinition that abstracts multiple Kubernetes resources behind a single custom API. You define a schema, describe the resources in YAML, connect them with CEL expressions, and kro generates a CRD and controller at runtime. It handles dependency ordering through a resource DAG and validates definitions before reconciliation.
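A condensed sketch of that flow, based on kro's documented ResourceGraphDefinition format (resource names and fields are illustrative): the schema defines the user-facing API, each entry under resources is a node in the DAG, and CEL expressions like ${schema.spec.name} wire them together.

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp              # kro generates a CRD with this kind
    spec:
      name: string
      replicas: integer | default=2
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: nginx   # placeholder image
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
        spec:
          # Referencing the deployment's fields creates a DAG edge,
          # so kro reconciles the Deployment before the Service.
          selector:
            app: ${deployment.spec.selector.matchLabels.app}
          ports:
            - port: 80
```

End users then create a WebApp object with just a name and replica count; kro's generated controller expands it into the full resource graph.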
The practical implication: kro gives platform teams a Kubernetes-native alternative to Crossplane compositions or hand-rolled operators for creating self-service abstractions. As CNCF Ambassador Abby Bangser wrote in December 2025, kro "can reduce reliance on tools such as Helm, Kustomize, or hand-written operators when creating consistent patterns," but she also noted that "challenges remain for end users adopting these frameworks."
The AI infrastructure angle
Here's where the report gets interesting for teams running AI workloads. The 35% hybrid platform figure tells you something important about how organizations are approaching AI infrastructure: they're not building separate stacks. They're extending what they already have.
Chris Aniszczyk, CTO at CNCF, stated: "What's especially notable about this research is how organizations are extending those same platforms to support AI workloads, showing how cloud native is the base layer of powering the next era of applications."
For practical purposes, this means your Helm charts are now packaging model serving deployments alongside your application services. Your Backstage catalog needs entries for ML pipelines and inference endpoints alongside microservices. And kro's ResourceGraphDefinitions could abstract the combination of a model server, GPU node pool configuration, and monitoring stack behind a single kind: MLService API.
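To make the kind: MLService idea concrete: this is what a consuming team might apply if a platform team had published such an abstraction with kro. Everything here is hypothetical, including every field name; no such API exists in the report or in kro itself.

```yaml
# Hypothetical instance of a platform-defined MLService API.
# The underlying ResourceGraphDefinition (not shown) would expand this
# into a model server Deployment, GPU scheduling config, and monitoring.
apiVersion: kro.run/v1alpha1   # instances use the group/version declared in the RGD schema
kind: MLService
metadata:
  name: sentiment-classifier
spec:
  model:
    image: vllm/vllm-openai:latest      # illustrative serving runtime
    storageUri: s3://models/sentiment-v3 # illustrative model location
  gpu:
    type: nvidia-l4
    count: 1
  monitoring: enabled
```

The appeal is that the data science team never touches node selectors, device plugin requests, or ServiceMonitor objects; the risk, as the table below suggests, is that the abstraction's single-cluster scope may not match how training and inference are actually split.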
But the tradeoffs scale differently for AI workloads:
| Concern | Traditional workloads | AI/ML workloads |
|---|---|---|
| Resource scheduling | CPU/memory, well-understood | GPU allocation, topology-aware scheduling, memory pressure from large models |
| Helm chart complexity | Moderate, stable patterns | High: operator CRDs, runtime configs, model storage volumes, accelerator plugins |
| Backstage catalog modeling | Service + API entities | Services + models + datasets + training runs + experiments, unclear entity boundaries |
| kro composition scope | Single-cluster resource grouping | Often needs multi-cluster (training vs. inference), which kro doesn't handle natively |
When NOT to adopt these for AI infrastructure
The "Adopt" rating might tempt teams to standardize prematurely. Here are three scenarios where you should resist.
Don't force Helm for AI operator lifecycle management. GPU operators, model serving runtimes (vLLM, Triton), and training frameworks often ship with their own installation patterns. Wrapping them in Helm charts adds a maintenance layer between you and upstream. If the operator's release cycle doesn't match your chart update cadence, you'll spend more time debugging chart drift than you save on standardization.
Don't adopt Backstage if you don't have the team to maintain it. The report says 28% of orgs have dedicated platform engineering teams. If you're in the other 72%, a Backstage instance will become a stale catalog within months. For AI teams specifically, the plugin ecosystem for ML workflow visibility, experiment tracking, model registry integration, and GPU utilization dashboards is still maturing. You might be building more than you're leveraging.
Don't use kro as a replacement for purpose-built ML platforms. kro excels at single-cluster resource composition. AI training workflows frequently span clusters, require specialized scheduling (gang scheduling, elastic training), and need data pipeline orchestration that sits outside Kubernetes primitives. kro's ResourceGraphDefinitions are the wrong abstraction layer for workflow DAGs. That's what Argo Workflows, Kubeflow Pipelines, or Flyte are for.
The real decision tree
The question isn't whether these tools are good. They clearly are, and the developer sentiment data backs that up. The question is whether your organization's platform maturity matches the operational requirements.
If you already run Helm and Kubernetes in production, the "Adopt" rating changes nothing for you. You're already there.
If you're evaluating Backstage, the decision hinges entirely on whether you have 1-2 engineers who can own it full-time. Part-time Backstage maintenance produces a worse developer experience than no portal at all.
If you're looking at kro, the interesting comparison is against Crossplane. Both solve resource composition. kro is lighter weight, Kubernetes-native, and backed by the three major cloud providers through kubernetes-sigs. Crossplane is a CNCF graduated project with a larger ecosystem of providers and compositions. Your choice depends on whether you need cross-cluster orchestration (Crossplane) or single-cluster API abstraction (kro).
Liam Bollmann-Dodd, principal market research consultant at SlashData, summarized the broader trend: "Technologies gaining traction are the ones that are reducing operational friction while enabling teams to standardize application delivery and management."
That's the right framing. These tools earned "Adopt" because they reduce friction. Whether they reduce friction for your AI infrastructure depends on constraints the radar can't measure: team size, existing tooling, workload characteristics, and how much operational complexity you're willing to absorb in exchange for standardization.
The full Q1 2026 CNCF Technology Landscape Radar report is available on the CNCF website.
Nate Hargrove covers platform engineering and cloud infrastructure for The Daily Vibe.