DigitalOcean is a fully supported deployment platform for Tensor9 appliances. Deploying to DigitalOcean customer environments provides access to managed Kubernetes (DOKS), managed databases, and object storage with a simplified infrastructure model ideal for mid-market customers.
DigitalOcean appliances are deployed on DigitalOcean Kubernetes (DOKS) with managed services orchestrated by your Tensor9 control plane.
1
Customer provisions API tokens
Your customer creates four API tokens in their DigitalOcean account, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These tokens define what the Tensor9 controller in the appliance can do within their environment. The customer configures token scopes and expiration times to control when and how long each permission phase is active.
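The four-phase model can be pictured as a set of tokens with independent lifetimes. This is a minimal, hypothetical sketch: the phase names come from the docs, but the class, its fields, and the expiry values are illustrative only, not Tensor9's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PhaseToken:
    phase: str           # "install", "steady-state", "deploy", or "operate"
    token: str           # DigitalOcean API token value (elided here)
    expires_at: datetime

    def is_active(self, now: datetime) -> bool:
        # A phase is usable only while its token has not expired
        return now < self.expires_at

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
tokens = [
    PhaseToken("install", "dop_v1_...", now + timedelta(hours=2)),
    PhaseToken("steady-state", "dop_v1_...", now + timedelta(days=365)),
    PhaseToken("deploy", "dop_v1_...", now - timedelta(hours=1)),  # already expired
    PhaseToken("operate", "dop_v1_...", now + timedelta(days=30)),
]

active = [t.phase for t in tokens if t.is_active(now)]
print(active)  # deploy is excluded because its token has expired
```

Short-lived Deploy tokens paired with a long-lived Steady-state token give the customer fine-grained control over when infrastructure changes are possible.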
2
You create a release for the customer appliance
You create a release targeting the customer’s appliance. Your control plane compiles your origin stack into a deployment stack tailored for DigitalOcean, converting any non-DigitalOcean resources to their DigitalOcean service equivalents. The deployment stack downloads to your local environment.
3
Customer grants deploy access
The customer approves the deployment by providing or activating the Deploy API token. This can be manual (sharing the token) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can use the Deploy token to create resources in the customer’s account.
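An automated approval policy might release the Deploy token only during a scheduled maintenance window. This is a hedged sketch of the kind of gate a customer could automate; the function name and window times are assumptions, not part of Tensor9.

```python
from datetime import datetime, time, timezone

# Hypothetical maintenance window (times are illustrative)
MAINTENANCE_START = time(2, 0)   # 02:00 UTC
MAINTENANCE_END = time(6, 0)     # 06:00 UTC

def deploy_token_available(now: datetime) -> bool:
    """Release the Deploy token only inside the maintenance window."""
    return MAINTENANCE_START <= now.time() < MAINTENANCE_END

print(deploy_token_available(datetime(2024, 6, 1, 3, 30, tzinfo=timezone.utc)))  # inside window
print(deploy_token_available(datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)))  # outside window
```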
4
You deploy the release
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller uses the Deploy API token to create all infrastructure resources in the customer’s DigitalOcean account, including any DigitalOcean resources defined in your origin stack.
5
Steady-state observability begins
After deployment, your control plane uses the Steady-state token to continuously collect observability data (logs, metrics) from the customer’s appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.
When you deploy an origin stack to DigitalOcean customer environments, Tensor9 automatically compiles resources from other cloud providers to their DigitalOcean equivalents.
When compiling a deployment stack for DigitalOcean:
DigitalOcean-native resources are preserved - If your origin stack already uses DigitalOcean resources (DOKS, Managed PostgreSQL, Spaces), they remain unchanged
AWS resources are compiled - AWS resources are converted to their DigitalOcean equivalents
Kubernetes resources are deployed - Most compute workloads run on DOKS (DigitalOcean Kubernetes)
Configuration is adjusted - Resource configurations are modified to match DigitalOcean conventions
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
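The compilation rules above can be pictured as a lookup table. This sketch is illustrative only: the actual mapping is internal to Tensor9. The Terraform resource type names are real, but the specific pairings here are assumptions based on the equivalences described above.

```python
# Assumed mapping from AWS Terraform types to DigitalOcean equivalents
AWS_TO_DO = {
    "aws_eks_cluster": "digitalocean_kubernetes_cluster",
    "aws_db_instance": "digitalocean_database_cluster",
    "aws_s3_bucket": "digitalocean_spaces_bucket",
    "aws_lb": "digitalocean_loadbalancer",
}

# Services called out above as unsupported on DigitalOcean
UNSUPPORTED = {"aws_instance", "aws_dynamodb_table", "aws_efs_file_system"}

def compile_resource(tf_type: str) -> str:
    if tf_type.startswith("digitalocean_"):
        return tf_type  # DigitalOcean-native resources are preserved
    if tf_type in UNSUPPORTED:
        raise ValueError(f"{tf_type} has no DigitalOcean equivalent")
    return AWS_TO_DO[tf_type]

print(compile_resource("aws_eks_cluster"))             # digitalocean_kubernetes_cluster
print(compile_resource("digitalocean_spaces_bucket"))  # preserved unchanged
```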
1
Customer grants deploy access
Customer approves a deployment by providing the Deploy API token to the Tensor9 controller. This can be done via the Tensor9 UI, CLI, or automated workflows.
2
You execute deployment locally
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3
Controller uses Deploy token and creates resources
For each resource Terraform attempts to create, the Tensor9 controller inside the appliance uses the Deploy API token to create the resource in the customer’s account. All infrastructure changes occur within the customer’s account using their Deploy token permissions.
4
Deploy access expires
Once the Deploy token expires or is revoked, it can no longer be used. Your control plane automatically reverts to using only the Steady-state token for observability.
When an appliance is deployed, Tensor9 creates a dedicated DOKS cluster containing the Tensor9 controller. The controller:
Communicates outbound to your Tensor9 control plane over HTTPS
Manages appliance resources using the customer’s API tokens
Forwards observability data to your observability sink
Does not accept inbound connections - all communication is outbound-only
The Tensor9 controller in your customer’s appliance makes only outbound connections, so no ingress ports need to be opened in the customer’s network perimeter:
```hcl
# Example: Controller DOKS cluster (managed by Tensor9)
resource "digitalocean_kubernetes_cluster" "tensor9_controller" {
  name    = "tensor9-controller-${var.instance_id}"
  region  = var.region
  version = "1.28.2-do.0"

  node_pool {
    name       = "controller-pool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }

  vpc_uuid = digitalocean_vpc.tensor9_controller.id
  tags     = ["tensor9", "controller", "instance-id:${var.instance_id}"]
}

# VPC for controller isolation
resource "digitalocean_vpc" "tensor9_controller" {
  name     = "tensor9-controller-${var.instance_id}"
  region   = var.region
  ip_range = "10.0.0.0/24"
}

# No inbound firewall rules required - controller only connects outbound
```
Your application resources run on their own DOKS cluster or use managed services, completely separate from the Tensor9 controller infrastructure. The application infrastructure is defined entirely by your origin stack.

Example: Application DOKS cluster with load balancer
```hcl
# Application DOKS cluster (compiled from your AWS origin stack)
resource "digitalocean_kubernetes_cluster" "application" {
  name    = "myapp-cluster-${var.instance_id}"
  region  = var.region
  version = "1.28.2-do.0"

  node_pool {
    name       = "app-pool"
    size       = "s-2vcpu-4gb"
    auto_scale = true
    min_nodes  = 2
    max_nodes  = 10
  }

  vpc_uuid = digitalocean_vpc.application.id
  tags     = ["myapp", "instance-id:${var.instance_id}"]
}

# VPC for application
resource "digitalocean_vpc" "application" {
  name     = "myapp-vpc-${var.instance_id}"
  region   = var.region
  ip_range = "10.1.0.0/16"
}

# Load Balancer
resource "digitalocean_loadbalancer" "application" {
  name   = "myapp-lb-${var.instance_id}"
  region = var.region

  forwarding_rule {
    entry_port      = 443
    entry_protocol  = "https"
    target_port     = 8080
    target_protocol = "http"
    certificate_id  = digitalocean_certificate.app.id
  }

  healthcheck {
    port     = 8080
    protocol = "http"
    path     = "/health"
  }

  droplet_tag = "myapp-backend-${var.instance_id}"
}
```
DigitalOcean uses string-based tags (not key-value pairs like AWS/GCP) for most resources. Tag all resources with instance-id to enable observability and resource discovery.

For most resources (Droplets, DOKS, Databases, Load Balancers, Volumes):
```hcl
resource "digitalocean_kubernetes_cluster" "app" {
  name   = "myapp-cluster-${var.instance_id}"
  region = var.region

  tags = [
    "instance-id:${var.instance_id}",
    "application:my-app",
    "managed-by:tensor9"
  ]
}

resource "digitalocean_database_cluster" "postgres" {
  name   = "myapp-db-${var.instance_id}"
  engine = "pg"
  region = var.region

  tags = [
    "instance-id:${var.instance_id}",
    "myapp"
  ]
}
```
DigitalOcean tags are simple strings (e.g., "instance-id:000000007e") for most resources, unlike AWS and Google Cloud, which use key-value pairs. Spaces buckets are an exception and support key-value tags. When filtering or querying resources, use the full string tag format.
The instance-id tag:
Allows filtering of observability data by appliance
Helps customers track costs per appliance
Facilitates resource discovery by Tensor9 controllers
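Because most DigitalOcean tags are plain strings, key-value semantics have to be recovered by convention. This is a minimal sketch of that idea; the `parse_tags` helper and the resource dicts are illustrative stand-ins, not Tensor9 or DigitalOcean API code.

```python
def parse_tags(tags: list[str]) -> dict[str, str]:
    """Split "key:value" string tags into a dict; bare tags map to ""."""
    parsed = {}
    for tag in tags:
        key, sep, value = tag.partition(":")
        parsed[key] = value if sep else ""
    return parsed

# Illustrative stand-ins for resources returned by the DigitalOcean API
resources = [
    {"name": "myapp-cluster", "tags": ["instance-id:000000007e", "myapp"]},
    {"name": "other-cluster", "tags": ["instance-id:00000000aa"]},
]

# Filter resources belonging to one appliance by its instance-id tag
mine = [r["name"] for r in resources
        if parse_tags(r["tags"]).get("instance-id") == "000000007e"]
print(mine)  # ['myapp-cluster']
```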
When you deploy an appliance, Tensor9 automatically provisions a private container registry in the customer’s DigitalOcean account.

Example: Origin stack with DOKS deployment

Your origin stack references container images from your vendor registry:
```yaml
# Kubernetes deployment in your origin stack
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api-${var.instance_id}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-api
  template:
    metadata:
      labels:
        app: myapp-api
        instance-id: ${var.instance_id}
    spec:
      containers:
        - name: api
          # Reference to your vendor registry
          image: registry.digitalocean.com/vendor-registry/myapp-api:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: INSTANCE_ID
              value: ${var.instance_id}
```
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
Detects the container image reference in your Kubernetes manifests
Provisions a private container registry in the appliance
Copies the container image from your vendor registry to the appliance’s private registry
Rewrites the deployment stack to reference the appliance-local registry
The compiled deployment stack will contain:
```yaml
spec:
  containers:
    - name: api
      # Rewritten to reference appliance's private registry
      image: registry.digitalocean.com/customer-registry-000000007e/myapp-api:1.0.0
```
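The rewrite step can be sketched as a string transformation on the image reference. This is a hypothetical illustration: the function name and the `customer-registry-<id>` naming scheme are assumptions inferred from the example, not Tensor9's actual implementation.

```python
def rewrite_image_ref(image: str, appliance_id: str) -> str:
    """Point an image reference at the appliance-local registry."""
    registry, _, rest = image.partition("/")   # registry host
    _, _, name_and_tag = rest.partition("/")   # drop the vendor registry name
    return f"{registry}/customer-registry-{appliance_id}/{name_and_tag}"

print(rewrite_image_ref(
    "registry.digitalocean.com/vendor-registry/myapp-api:1.0.0",
    "000000007e",
))  # registry.digitalocean.com/customer-registry-000000007e/myapp-api:1.0.0
```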
This ensures the container image is stored locally in the customer’s account and the application doesn’t depend on cross-account access to your vendor registry.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance’s private registry
Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
See Artifacts for comprehensive documentation on artifact management.
Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.
Your application reads secrets from environment variables:
```python
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
```
If your application dynamically fetches secrets using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
See Secrets for detailed secret management patterns.
Apply the instance-id tag to every resource. DigitalOcean uses string tags for most resources:
```hcl
# For most resources (DOKS, Databases, Load Balancers, Droplets, Volumes)
tags = ["instance-id:${var.instance_id}", "myapp"]

# For Spaces buckets (key-value tags)
tags = {
  instance-id = var.instance_id
  application = "my-app"
}
```
This enables:
Observability data filtering
Cost tracking
Resource discovery
Use EKS, ECS Fargate, or Lambda in your AWS origin stack
For compute workloads in your AWS origin stack, prefer managed container and serverless services over EC2 instances. When deployed to DigitalOcean customer environments, EKS and ECS Fargate workloads compile cleanly to DigitalOcean Kubernetes (DOKS), and Lambda functions compile to DigitalOcean Functions.
Use AWS Secrets Manager for sensitive data
Never hardcode secrets. Use AWS Secrets Manager or SSM Parameter Store with parameterized names in your AWS origin stack:
```hcl
# AWS Secrets Manager (recommended)
resource "aws_secretsmanager_secret" "db_creds" {
  name = "${var.instance_id}/prod/db/credentials"

  tags = {
    "instance-id" = var.instance_id
  }
}

# Or AWS Systems Manager Parameter Store
resource "aws_ssm_parameter" "api_key" {
  name  = "/${var.instance_id}/prod/api/key"
  type  = "SecureString"
  value = var.api_key

  tags = {
    "instance-id" = var.instance_id
  }
}
```
Pass secrets to your application as environment variables. Runtime SDK calls to fetch secrets are not automatically mapped by Tensor9.