Amazon Web Services (AWS) is Tensor9’s primary deployment platform and the reference implementation for all service equivalents. Deploying to AWS provides the most comprehensive feature set and serves as the baseline from which other form factors are derived.
When you deploy an application to AWS customer environments using Tensor9:
Customer appliances run entirely within the customer’s AWS account
Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
Cross-account IAM roles enable your control plane to manage customer appliances with customer-approved permissions
Service equivalents compile your origin stack into AWS-native resources (or preserve them if already AWS-based)
AWS appliances leverage AWS-native services for compute, storage, networking, and observability, providing enterprise-grade infrastructure that integrates seamlessly with your customers’ existing AWS environments.
AWS appliances are deployed using AWS-native services orchestrated by your Tensor9 control plane.
1. Customer provisions IAM roles
Your customer creates four IAM roles in their AWS account, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These roles define what the Tensor9 controller in the appliance can do within their environment. The customer configures trust policies that allow the Tensor9 controller to assume these roles with appropriate conditions (time windows, approval tags, etc.).
2. You create a release for the customer appliance
You create a release targeting the customer’s appliance:
Your control plane compiles your origin stack into a deployment stack tailored for AWS, converting any non-AWS resources to their AWS service equivalents. The deployment stack downloads to your local environment.
3. Customer grants deploy access
The customer approves the deployment by granting temporary deploy access. This can be manual (updating IAM policy conditions) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can assume the Deploy role in the customer’s account.
4. You deploy the release
You run the deployment locally against the downloaded deployment stack:
```bash
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller assumes the Deploy role and creates all infrastructure resources in the customer’s AWS account:
VPCs, subnets, security groups
EKS clusters, Lambda functions
RDS databases, S3 buckets, ElastiCache clusters
CloudWatch log groups, IAM roles, Route 53 records
Any other AWS resources defined in your origin stack
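As a concrete illustration, an origin stack declaring a couple of these resources might look like the following sketch (resource names and sizes are hypothetical):

```hcl
# Hypothetical origin-stack fragment: standard Terraform/OpenTofu
# resources that the controller creates in the customer's account.
resource "aws_s3_bucket" "data" {
  bucket = "myapp-data-${var.instance_id}"
}

resource "aws_db_instance" "primary" {
  identifier        = "myapp-db-${var.instance_id}"
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 20
}
```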
5. Steady-state observability begins
After deployment, your control plane uses the Steady-state role to continuously collect observability data (logs, metrics, traces) from the customer’s appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.
Each role is created in the customer’s AWS account with a trust policy that allows the Tensor9 controller in the appliance to assume it.

Example: Deploy role with conditional access
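The Deploy role’s trust policy could look like the following sketch. The controller role ARN, account ID, tag key, and timestamp are illustrative assumptions, not values Tensor9 prescribes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/tensor9-controller"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/DeployAccess": "enabled" },
        "DateLessThan": { "aws:CurrentTime": "2025-06-01T18:00:00Z" }
      }
    }
  ]
}
```

Here the role can only be assumed while its DeployAccess tag is set to “enabled” and the current time is within the approved window.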
1. Customer approves deploy access
The customer approves a deployment by setting the DeployAccess tag to “enabled” and defining a time window. This can be done manually or through automated approval workflows.
2. You execute deployment locally
You run the deployment locally against the downloaded deployment stack:
```bash
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3. Controller assumes Deploy role and creates resources
For each resource Terraform attempts to create, the Tensor9 controller inside the appliance assumes the Deploy role and creates the resource in the customer’s account. All infrastructure changes occur within the customer’s account using their Deploy role permissions.
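Conceptually, each per-resource operation is a short-lived STS AssumeRole call followed by the resource API call. A simplified sketch of the request parameters involved (role name, session naming, and duration are hypothetical, not Tensor9’s actual implementation):

```python
def build_assume_role_request(customer_account_id: str, resource_addr: str) -> dict:
    """Build the STS AssumeRole parameters a controller might use
    before creating a single Terraform-managed resource."""
    return {
        "RoleArn": f"arn:aws:iam::{customer_account_id}:role/tensor9-deploy",
        # Session names are limited to 64 chars; derive from the resource address
        "RoleSessionName": f"deploy-{resource_addr.replace('.', '-')[:48]}",
        "DurationSeconds": 900,  # short-lived credentials per operation
    }

req = build_assume_role_request("987654321098", "aws_s3_bucket.data")
print(req["RoleArn"])
# With boto3, these kwargs would be passed to:
#   boto3.client("sts").assume_role(**req)
```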
4. Deploy access expires
After the time window expires, the Deploy role can no longer be assumed. Your control plane automatically reverts to using only the Steady-state role for observability.
The Tensor9 controller in your customer’s appliance makes only outbound connections and does not require ingress ports to be opened in your customer’s network perimeter:
Your application resources run in their own VPC(s), completely separate from the Tensor9 controller VPC. The application VPC topology is defined entirely by your origin stack: whatever VPC resources you define in your origin stack will be deployed into the appliance.

Example: Application VPC with internet-facing load balancer

If your origin stack defines a VPC with public subnets, an internet gateway, and a load balancer, that exact topology will be created in the customer’s appliance:
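A sketch of such an origin-stack topology (CIDRs, names, and availability zones are illustrative):

```hcl
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "app" {
  vpc_id = aws_vpc.app.id
}

# Application load balancers require subnets in at least two AZs
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.app.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_b" {
  vpc_id                  = aws_vpc.app.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

resource "aws_lb" "web" {
  name               = "myapp-web"
  internal           = false
  load_balancer_type = "application"
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}
```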
This application VPC topology is deployed alongside the Tensor9 controller VPC, but they remain completely separate. The controller VPC manages the control plane connection, while the application VPC handles your application’s traffic and resources.
AWS appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.
When you deploy an appliance, Tensor9 automatically provisions a private ECR repository in the customer’s AWS account to store your container images.

Example: Origin stack with ECS service

Your origin stack references container images from your vendor ECR repository:
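A sketch of such an origin-stack reference, with most task definition details elided:

```hcl
resource "aws_ecs_task_definition" "api" {
  family = "myapp-api"

  container_definitions = jsonencode([
    {
      name = "api"
      # Image in your vendor ECR; rewritten at compile time
      image = "123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp-api:1.0.0"
      # ... rest of configuration
    }
  ])
}
```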
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
Detects the container image reference in your ECS task definition
Provisions a private ECR repository in the appliance (e.g., 987654321098.dkr.ecr.us-east-1.amazonaws.com/myapp-api-000000007e)
Copies the container image from your vendor ECR (123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp-api:1.0.0) to the appliance’s private ECR
Rewrites the deployment stack to reference the appliance-local ECR repository
The compiled deployment stack will contain:
```hcl
container_definitions = jsonencode([
  {
    name = "api"
    # Rewritten to reference appliance's private ECR
    image = "987654321098.dkr.ecr.us-east-1.amazonaws.com/myapp-api-000000007e:1.0.0"
    # ... rest of configuration
  }
])
```
This ensures the container image is stored locally in the customer’s account and the application doesn’t depend on cross-account access to your vendor ECR.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
Deploy (tofu apply): Tensor9 copies the container image from your vendor ECR to the appliance’s private ECR
Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private ECR
This ensures that artifacts are cleaned up when deployments are removed, preventing orphaned resources.
For Lambda functions, Tensor9 supports copying Lambda deployment packages (zip files) from S3. This follows the same copy pattern as container images:
```hcl
# Lambda function referencing S3 deployment package in your origin stack
resource "aws_lambda_function" "processor" {
  function_name = "myapp-processor-${var.instance_id}"
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "python3.11"

  # Reference to Lambda zip in your vendor S3 bucket
  s3_bucket = "my-vendor-lambda-artifacts"
  s3_key    = "functions/processor-v1.0.0.zip"

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
```
During deployment, Tensor9:
Provisions a private S3 bucket in the appliance for Lambda artifacts
Copies the Lambda zip file from your vendor S3 bucket to the appliance’s S3 bucket
Rewrites the Lambda function definition to reference the appliance-local S3 bucket
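After compilation, the rewritten fragment of the function definition might look like this (the appliance bucket name is a hypothetical illustration):

```hcl
  # Rewritten to reference the appliance-local S3 bucket
  s3_bucket = "myapp-lambda-artifacts-000000007e"
  s3_key    = "functions/processor-v1.0.0.zip"
```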
Like container images, destroying the deployment stack (tofu destroy) removes the copied Lambda deployment packages. See Artifacts for comprehensive documentation on artifact management, including immutability requirements and supported artifact types.
Your application reads secrets from environment variables:
```python
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
```
Pass secrets as environment variables rather than using runtime SDK calls. While boto3.client('secretsmanager').get_secret_value() works natively in AWS appliances, using environment variables ensures your application works consistently across all deployment targets (AWS, Google Cloud, DigitalOcean).
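For example, a small helper that fails fast when an expected secret has not been injected into the environment (the variable names are illustrative):

```python
import os

def get_required_secret(name: str) -> str:
    """Read a secret injected into the environment by the deployment stack."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name}")
    return value

# Simulate the injected environment for demonstration
os.environ["DB_PASSWORD"] = "example-password"
print(get_required_secret("DB_PASSWORD"))
```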
See Secrets for detailed secret management patterns.