Kubernetes resources can be used with Tensor9 by embedding them within Terraform or CloudFormation origin stacks. Unlike other infrastructure-as-code formats that Tensor9 supports, Kubernetes manifests cannot be used as standalone origin stacks - they must be embedded within a parent origin stack.
A Kubernetes origin stack consists of Kubernetes resources (Deployments, Services, ConfigMaps, etc.) defined within a Terraform or CloudFormation origin stack using the respective provider’s Kubernetes resources.

Key characteristic: Kubernetes resources are always embedded within another origin stack format. The parent origin stack (Terraform or CloudFormation) serves as the container, while the Kubernetes manifests define the workload orchestration.
Your Kubernetes resources should be part of your existing Terraform or CloudFormation configuration. Tensor9 is designed to work with the infrastructure-as-code you already have - you don’t need to rewrite your Kubernetes deployments just for Tensor9. The goal is to maintain a single stack that works for both your cloud deployment and private customer deployments.
Kubernetes manifests define how your application runs (pods, deployments, services), but they don’t provision the underlying infrastructure (clusters, networks, load balancers). By embedding Kubernetes within Terraform or CloudFormation, you get:
Complete infrastructure: The parent stack provisions the cluster (EKS, GKE, AKS) and supporting infrastructure
Unified deployment: One release process deploys both infrastructure and workloads
1
Define Kubernetes resources in Terraform/CloudFormation
In your Terraform or CloudFormation origin stack, use the Kubernetes provider to define your Kubernetes resources. For Terraform, this typically means using kubernetes_manifest or kubernetes_deployment resources.

Terraform example:
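A minimal sketch, assuming the parent stack already configures the Kubernetes provider against its cluster (the resource names and container image below are illustrative):

```hcl
# A Deployment defined with the Terraform Kubernetes provider.
# Tensor9 scans resources like this for container image references.
resource "kubernetes_deployment" "app" {
  metadata {
    name = "app"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "app"
      }
    }

    template {
      metadata {
        labels = {
          app = "app"
        }
      }

      spec {
        container {
          name = "app"
          # Fully qualified registry so Tensor9 can detect and copy the image
          image = "docker.io/acme/app:v1.0.0"
        }
      }
    }
  }
}
```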
2
Publish and compile the release
Publish your parent origin stack (Terraform or CloudFormation) to your control plane. When you create a release for an appliance, your control plane:
Finds Kubernetes resources: Scans the origin stack for kubernetes_manifest, kubernetes_deployment, and other Kubernetes provider resources
Extracts container images: Identifies all container image references in Kubernetes specs
Prepares image copying: Configures the deployment stack to copy images to the appliance’s container registry (specific to the cloud provider)
Rewrites image references: Updates container image fields to point to the locally-copied images
Compiles the deployment stack: Generates a ready-to-deploy stack with all Kubernetes resources intact
The result is a deployment stack that includes both your infrastructure and Kubernetes workloads. When deployed, the deployment stack will copy the container images into the customer’s appliance.
3
Deploy to the appliance
Deploy the compiled deployment stack using the parent stack’s tooling.

For Terraform deployment stacks:
```shell
cd acme-corp-production
tofu init
tofu apply
```
For CloudFormation deployment stacks:
Your control plane automatically creates the CloudFormation stack in its own AWS account. Monitor deployment using:
tensor9 report -customerName acme-corp
During deployment, the Kubernetes resources are applied to the customer’s cluster with all container images pointing to the locally-copied versions.
Tensor9 supports other Kubernetes provider resources (Services, ConfigMaps, Secrets, etc.). These resources pass through compilation unchanged, as they typically don’t reference external artifacts.
Helm charts can be deployed using the Terraform Helm provider. Helm is a package manager for Kubernetes that bundles multiple Kubernetes resources into a single deployable unit called a “chart.”
1
Add the Helm provider to your Terraform configuration
Include the Helm provider in your required_providers block:
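A sketch of what this might look like, together with a helm_release resource (the chart, repository URL, and versions are illustrative):

```hcl
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
  }
}

# Install a chart into the cluster at deploy time.
resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
  version    = "15.0.0"
}
```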
The helm_release resource is included in the deployment stack unchanged.
2
Charts deploy at runtime
When you run tofu apply on the deployment stack, Terraform installs the Helm chart into the cluster.
3
Container images in charts
Container images referenced within Helm charts are pulled from their original registries at deployment time.
Currently, Tensor9 does not automatically copy container images referenced within Helm charts to the appliance’s container registry. This means Helm charts must be able to pull images from their original registries (e.g., Docker Hub, GitHub Container Registry). If your appliance cannot reach external registries, consider using Kubernetes manifests directly instead of Helm charts, or pre-pull images and reference them from a local registry.
During compilation, you may see a warning like:

⚠️ Skipped: nginx:latest (no registry - assumed to be publicly available in the appliance)
Images without a registry are assumed to be publicly available from Docker Hub and are not copied. If your appliance cannot reach Docker Hub, make sure to include the registry prefix: docker.io/nginx:latest instead of nginx:latest.
Copied container images are stored in the appliance’s container registry. The storage location is determined by the appliance’s form factor and is handled automatically by Tensor9.
Support for Kubernetes resources in CloudFormation origin stacks is under development. Currently, Terraform is the recommended approach for embedding Kubernetes resources.
If you have a CloudFormation + Kubernetes use case, please reach out to [email protected] to discuss your requirements.
This creates the EKS cluster and node groups, and deploys your Kubernetes workloads. The deployment stack will copy container images into the customer’s appliance.
Use fully qualified image names
Include the full registry in your container image references (docker.io/nginx:latest instead of nginx:latest). This ensures Tensor9 can copy the images to customer environments.
Use versioned image tags
Avoid using :latest tags in production. Use specific version tags (v1.0.0, sha-abc123) to ensure consistent deployments across customer appliances.
Configure provider authentication carefully
When configuring the Kubernetes provider to connect to your cluster, use data sources (like aws_eks_cluster) rather than hardcoded values. This ensures the provider configuration adapts to each appliance’s cluster.
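For EKS, that pattern might look like the following sketch (resource names are illustrative):

```hcl
# Look up cluster connection details instead of hardcoding them,
# so the provider adapts to each appliance's cluster.
data "aws_eks_cluster" "this" {
  name = aws_eks_cluster.this.name
}

data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```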
Use environment variables for secrets
Store secrets in AWS Secrets Manager in your origin stack, then pass them to your Kubernetes pods as environment variables. Avoid using Kubernetes Secrets for sensitive data, as Tensor9 does not automatically map runtime SDK calls to fetch secrets across different cloud environments.
```hcl
# Define secret in AWS Secrets Manager
resource "aws_secretsmanager_secret" "db_password" {
  name = "${var.instance_id}/prod/db/password"
}

# Read the secret value so it can be injected as an environment variable
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = aws_secretsmanager_secret.db_password.id
}

# Reference in Kubernetes Deployment as environment variable
resource "kubernetes_deployment" "app" {
  spec {
    template {
      spec {
        container {
          env {
            name  = "DB_PASSWORD"
            value = data.aws_secretsmanager_secret_version.db_password.secret_string
          }
        }
      }
    }
  }
}
```
For non-sensitive configuration, Kubernetes ConfigMaps are appropriate.
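For example, a ConfigMap for non-sensitive settings might look like this (keys and values are illustrative):

```hcl
# Non-sensitive application configuration, safe to define directly.
resource "kubernetes_config_map" "app_config" {
  metadata {
    name = "app-config"
  }

  data = {
    LOG_LEVEL    = "info"
    FEATURE_FLAG = "enabled"
  }
}
```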
Mind resource limits and requests
Always specify resource requests and limits for containers. This ensures proper scheduling and prevents resource contention in customer clusters.
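A container block with requests and limits might look like this (the values are illustrative; tune them for your workload):

```hcl
container {
  name  = "app"
  image = "docker.io/acme/app:v1.0.0"

  resources {
    # Guaranteed minimum for scheduling
    requests = {
      cpu    = "250m"
      memory = "256Mi"
    }
    # Hard ceiling to prevent resource contention
    limits = {
      cpu    = "500m"
      memory = "512Mi"
    }
  }
}
```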
Kubernetes YAML files cannot be used as standalone origin stacks. They must be embedded within Terraform or CloudFormation. This is by design - Kubernetes defines workloads, not the underlying infrastructure.
Cluster must exist before Kubernetes resources
The parent origin stack (Terraform/CloudFormation) must provision or reference the Kubernetes cluster before defining Kubernetes resources. Use proper dependency management (depends_on in Terraform) to ensure correct ordering.
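In Terraform, that ordering can be made explicit (resource names are illustrative):

```hcl
resource "kubernetes_deployment" "app" {
  # Ensure the cluster and its nodes exist before this resource is applied.
  depends_on = [
    aws_eks_cluster.this,
    aws_eks_node_group.this,
  ]

  # ... deployment spec ...
}
```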
Kubernetes provider versions
Ensure your Kubernetes provider version is compatible with your target cluster version. Different Kubernetes versions may have different API schemas for resources.
Image pull secrets
If your container images require authentication to pull, you’ll need to configure image pull secrets in your Kubernetes manifests. Tensor9 copies the images but doesn’t automatically create pull secrets.
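One way to do this with the Terraform Kubernetes provider, as a sketch (the registry and credential variables are illustrative):

```hcl
# A docker-registry secret holding pull credentials.
# The provider base64-encodes values in `data` automatically.
resource "kubernetes_secret" "regcred" {
  metadata {
    name = "regcred"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "ghcr.io" = {
          auth = base64encode("${var.registry_user}:${var.registry_token}")
        }
      }
    })
  }
}
```

Reference the secret from the pod template spec with an image_pull_secrets block, e.g. `image_pull_secrets { name = kubernetes_secret.regcred.metadata[0].name }`.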
Container image cannot be pulled
Symptom: Deployment fails because the container image cannot be pulled from the original registry.
Cause: The image reference doesn’t include a registry, or Tensor9 couldn’t detect the image reference.
Solution:
Ensure your image reference includes the full registry: docker.io/nginx:latest
Check that the image is referenced in a supported field (spec.containers[].image)
Review compilation logs for warnings about skipped images
Kubernetes provider authentication fails
Symptom: Terraform apply fails with an “unable to connect to Kubernetes cluster” error.
Cause: The Kubernetes provider is not correctly configured to connect to the cluster.
Solution:
Verify the cluster exists before applying Kubernetes resources
Use data sources to dynamically configure the provider
Check that IAM roles/permissions allow cluster access
Deployment applies but pods don't start
Symptom: The Kubernetes deployment is created but pods fail to start with ImagePullBackOff.
Cause: Pods cannot pull the container image from the appliance registry.
Solution:
Verify that the deployment stack shows rewritten image references
Check that the node group has permissions to pull from ECR (for AWS)
Ensure the image was successfully copied during compilation