Show HN: Chart Preview – Preview environments for Helm charts on every PR

I’m a software engineer who accidentally became my team’s Kubernetes person — and eventually the bottleneck for every Helm chart PR.

I built Chart Preview so reviewers could see Helm chart changes running without waiting on me.

A few years ago, my team needed to implement HA for an existing product, which meant deploying on Kubernetes and OpenShift. I spent months learning Kubernetes, Helm, and the surrounding ecosystem. After that, Kubernetes largely became “my thing” on the team.

We later published public Helm charts for the product, and customers started submitting PRs. Those PRs would often sit for months — not because the changes were bad, but because testing them meant manually spinning up a Kubernetes cluster, deploying the chart with the proposed changes, running through test scenarios, and coordinating verification with product and QA. Since I was the only one who could reliably set up those environments, everything waited on me.

I kept thinking: what if the PR itself showed the changes working? What if reviewers could just click a link and see it deployed?

That idea became Chart Preview.

Chart Preview deploys your Helm chart to a real Kubernetes cluster when you open a PR, provides a unique preview URL for that PR, and cleans everything up automatically when the PR closes.

I started by solving a problem I was personally hitting, rather than surveying the whole market upfront. As I built more of it, I looked at existing preview tools and noticed that while there are solid solutions for previewing container-based applications, Helm-specific workflows introduce different challenges — chart dependencies, layered values files, and opinionated chart structures. That pushed me to focus Chart Preview on being Helm-native first, rather than adapting a container preview workflow to fit Helm.

Under the hood, it’s built in Go using the Helm v3 SDK. The architecture is an API server with workers pulling jobs from a PostgreSQL queue — no Kubernetes operator, just services talking directly to the Kubernetes API. Each preview runs in its own namespace with deny-all NetworkPolicies, ResourceQuotas, and LimitRanges. GitHub integration is done via a GitHub App for check runs and webhooks, with GitLab supported via the REST API.
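
For anyone curious about the queue part: there is no dedicated queue system, just jobs in Postgres that the workers poll. A Postgres-backed queue like this usually boils down to the SELECT ... FOR UPDATE SKIP LOCKED pattern; here is a rough sketch of the claim step (illustrative only, table and column names made up):

    package worker

    import (
        "context"
        "database/sql"
    )

    // claimJob grabs one queued preview job. FOR UPDATE SKIP LOCKED lets
    // several workers poll the same table without blocking each other or
    // picking up the same job twice; sql.ErrNoRows just means nothing to do.
    func claimJob(ctx context.Context, db *sql.DB) (id int64, payload string, err error) {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return 0, "", err
        }
        defer tx.Rollback()

        err = tx.QueryRowContext(ctx, `
            SELECT id, payload FROM preview_jobs
            WHERE status = 'queued'
            ORDER BY created_at
            LIMIT 1
            FOR UPDATE SKIP LOCKED`).Scan(&id, &payload)
        if err != nil {
            return 0, "", err
        }

        if _, err = tx.ExecContext(ctx,
            `UPDATE preview_jobs SET status = 'running' WHERE id = $1`, id); err != nil {
            return 0, "", err
        }
        return id, payload, tx.Commit()
    }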

There were a few interesting challenges along the way. Injecting preview hostnames into Ingress resources without corrupting manifests took several iterations. Helm uninstall doesn’t always clean everything up, so deleting the entire namespace turned out to be the safest fallback. Handling rapid pushes to the same PR required build numbering so the latest push always wins. And while the Helm SDK is powerful, it’s under-documented — I spent a lot of time reading Helm’s source code.
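
To give a flavour of the Ingress piece: conceptually it is a pass over the rendered manifests that rewrites host fields, along the lines of the simplified sketch below (in practice, multi-document files and the hosts under spec.tls are where it gets fiddly):

    package preview

    import sigsyaml "sigs.k8s.io/yaml"

    // injectHost rewrites the hosts on a single rendered Ingress document
    // and leaves every other kind of resource untouched. Simplified sketch;
    // real charts emit multi-document YAML and also set hosts in spec.tls.
    func injectHost(doc []byte, previewHost string) ([]byte, error) {
        var obj map[string]interface{}
        if err := sigsyaml.Unmarshal(doc, &obj); err != nil {
            return nil, err
        }
        if obj["kind"] != "Ingress" {
            return doc, nil
        }
        spec, _ := obj["spec"].(map[string]interface{})
        rules, _ := spec["rules"].([]interface{})
        for _, r := range rules {
            if rule, ok := r.(map[string]interface{}); ok {
                rule["host"] = previewHost
            }
        }
        // note: round-tripping through YAML reorders keys
        return sigsyaml.Marshal(obj)
    }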

I’ve been building and testing this for a few months using real charts like Grafana, podinfo, and WordPress to validate the workflow. It’s early, but it works, and now I’m trying to understand whether other teams have the same pain point I did.

You can try it by installing the GitHub App here: https://github.com/apps/chart-preview

I’d love feedback on a few things:

Does this solve a real problem for your team, or is shared staging “good enough”?

What’s missing that would make you actually use it?

Are there Helm charts this wouldn’t work for? (Cluster-scoped resources are intentionally blocked.)

Happy to answer questions about the implementation.

17 points | by chartpreview 11 hours ago

3 comments

  • JimBlackwood 4 hours ago
    I don’t fully understand the problem this is trying to solve. Or at least, if this solves your problem then it feels like you have bigger problems?

    If you have staging/production deployments in CI/CD and have your Kubernetes clusters managed in code, then adding feature deployments is not any different from what you have done already. Paying for a third-party app seems (to me) both a waste of money and a problem waiting to happen.

    How we do it: For a given Helm chart, we have three sets of values files: prod, staging, and preview. An Argo application exists for each prod, staging, and preview instance.

    When a new branch is created, a pipeline runs that renders a new preview chart (with some variables based on the branch/tag name), creates a new Argo application, and commits this to the Kubernetes repo. Argo picks it up, deploys it to the appropriate cluster, and that's it. Ingress hostnames get picked up and DNS records get created.

    When the branch gets deleted, a job runs to remove the Argo application, and done.

    It's the same for staging and production; I really wouldn't want a different deployment pipeline for preview environments, as that just increases complexity and the chances of things going wrong.

  • kodama-lens 5 hours ago
    Great way to apply your gathered Kubernetes knowledge! But I find the pricing tough, and I don't like giving third-party tools that level of access to my clusters. I know it's early stage, but I see several problems: right now it seems to be GitHub-only, and a lot of people are on self-hosted GitLab. Does it only support Helm, or also Kustomize and raw extra manifests? And what about GitOps?

    I've built similar solutions for clients, mostly CI-based, often with Flux/Argo CD support. The thing I found difficult was showing the diff of the rendered manifests while also applying the app. Since I'm not a fan of the rendered-manifests pattern, this often involved extra branches. Is this handled by the app?

    • chartpreview 3 hours ago
      Thanks for the thoughtful feedback — these are all fair concerns.

      The app doesn’t access your production clusters. Previews run in managed, isolated clusters, and each preview gets its own namespace with deny-all NetworkPolicies, quotas, and automatic teardown. That said, if the concern is about installing charts into any external K8s cluster at all, then I agree this won’t be a fit — and that’s a reasonable constraint.
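
      For concreteness, the deny-all baseline per namespace is the standard empty-pod-selector NetworkPolicy; in client-go terms it boils down to something like this (rough sketch):

          // Sketch: create the baseline deny-all policy for a preview namespace.
          // Assumes "context", networkingv1 "k8s.io/api/networking/v1",
          // metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
          // and "k8s.io/client-go/kubernetes" are imported.
          func denyAll(ctx context.Context, cs kubernetes.Interface, ns string) error {
              policy := &networkingv1.NetworkPolicy{
                  ObjectMeta: metav1.ObjectMeta{Name: "deny-all", Namespace: ns},
                  Spec: networkingv1.NetworkPolicySpec{
                      // empty selector = every pod in the namespace
                      PodSelector: metav1.LabelSelector{},
                      // both directions listed with no rules = all traffic denied
                      PolicyTypes: []networkingv1.PolicyType{
                          networkingv1.PolicyTypeIngress,
                          networkingv1.PolicyTypeEgress,
                      },
                  },
              }
              _, err := cs.NetworkingV1().NetworkPolicies(ns).Create(ctx, policy, metav1.CreateOptions{})
              return err
          }

      Allow rules for routing traffic to the preview then go on top of that baseline, otherwise the preview URL wouldn’t be reachable.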

      It’s GitHub-first today simply because that’s where I personally hit the problem. GitLab is supported via the REST API using a personal access token that you can scope as tightly as you want, so you can trigger previews from GitLab CI today.

      Native GitLab App integration (auto-triggering on MRs, status updates, etc.) is something I’ve thought about, but I wanted to validate the core workflow first.

      It is intentionally Helm-only for now. The specific pain I was trying to solve was reviewing Helm changes — values layering, dependencies, and template changes — by seeing them running in a real environment, rather than trying to generalise across all deployment models.

      I’m not trying to replace or compete with Flux or Argo CD. The idea is to validate Helm changes before they land in a GitOps repo or get promoted through environments — essentially answering the question “does this look OK and actually work when deployed, so is it safe to merge?”

      It doesn’t expose rendered manifest diffs today, but I agree that would be valuable — especially a readable “what changed after Helm rendering” view tied back to the PR. I’m still thinking through the cleanest way to do that without adding a lot of complexity to the workflow.

      Appreciate you taking the time to give your feedback. Thanks.

  • mrj 6 hours ago
    Congrats! I could see the value of this, for sure. I handle this problem by spinning up a preview environment in a namespace. Each branch gets its own, and a script takes care of setting up namespaces for a couple of shared staging resources (RabbitMQ and Temporal).

    It was a lot of work setting that up, though. Preview environments based on a Helm deploy make sense. I wish this had been available before I did all that.

    • chartpreview 3 hours ago
      Thanks for the feedback — you’re spot on about the setup this is trying to speed up. The namespace-per-branch approach works well (and that’s what this does), but the setup around ingress, DNS, secrets, and cleanup tends to be the real time sink. Glad it resonates.

      Thanks again.