Microsoft Azure is a fully supported deployment platform for Tensor9 appliances. Deploying to Azure customer environments provides access to Microsoft’s global cloud infrastructure with enterprise-grade security, compliance, and integration with customers’ existing Azure resources.
When you deploy an application to Azure customer environments using Tensor9:
Customer appliances run entirely within the customer’s Azure subscription
Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
Managed identities and RBAC enable your control plane to manage customer appliances with customer-approved permissions
Service equivalents compile your origin stack into Azure-native resources
Azure appliances leverage Azure services for compute, storage, networking, and observability, providing enterprise-grade infrastructure that integrates seamlessly with your customers’ existing Azure environments.
Azure appliances are deployed using Azure-native services orchestrated by your Tensor9 control plane.
1
Customer provisions managed identities
Your customer creates four managed identities in their Azure subscription, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These identities define what the Tensor9 controller can do within their environment.

The customer configures RBAC role assignments that allow your control plane to impersonate these managed identities with appropriate conditions (time windows, approval tags, etc.).
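As a sketch of what this step could look like, assuming the customer manages the identities with Terraform and the azurerm provider (the resource group reference and the `tensor9-` naming are illustrative assumptions, not Tensor9 requirements):

```hcl
# Illustrative sketch: one user-assigned managed identity per permission
# phase. The resource group and the "tensor9-" naming are assumptions.
resource "azurerm_user_assigned_identity" "phase" {
  for_each            = toset(["install", "steady-state", "deploy", "operate"])
  name                = "tensor9-${each.key}"
  location            = azurerm_resource_group.tensor9.location
  resource_group_name = azurerm_resource_group.tensor9.name
}
```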
2
You create a release for the customer appliance
You create a release targeting the customer’s appliance:
Your control plane compiles your origin stack into a deployment stack tailored for Azure, compiling any non-Azure resources to their Azure service equivalents. The deployment stack downloads to your local environment.
3
Customer grants deploy access
The customer approves the deployment by granting temporary deploy access. This can be manual (updating RBAC role assignments) or automated (scheduled maintenance windows).

Once approved, the Tensor9 controller in the appliance can use the Deploy managed identity in the customer’s subscription.
4
You deploy the release
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller uses the Deploy managed identity and creates all infrastructure resources in the customer’s Azure subscription:
Virtual networks, subnets, network security groups
AKS clusters, Container Instances, Azure Functions
Azure Database for PostgreSQL/MySQL, Azure Blob Storage, Azure Cache for Redis
Azure Monitor workspaces, Log Analytics, managed identities, Azure DNS zones
Any other Azure resources defined in your origin stack
5
Steady-state observability begins
After deployment, your control plane uses the Steady-state managed identity to continuously collect observability data (logs, metrics, traces) from the customer’s appliance without requiring additional approvals.

This data flows to your observability sink, giving you visibility into appliance health and performance.
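For illustration, the read-only access behind this phase might be granted with Azure’s built-in monitoring roles. This is a sketch only; the scope, identity reference, and role choices are assumptions, not Tensor9’s exact requirements:

```hcl
# Illustrative sketch: grant the Steady-state identity read-only access
# to observability data. The scope and role choices are assumptions.
resource "azurerm_role_assignment" "steady_state_monitoring" {
  scope                = azurerm_resource_group.appliance.id
  role_definition_name = "Monitoring Reader"
  principal_id         = azurerm_user_assigned_identity.steady_state.principal_id
}

resource "azurerm_role_assignment" "steady_state_logs" {
  scope                = azurerm_resource_group.appliance.id
  role_definition_name = "Log Analytics Reader"
  principal_id         = azurerm_user_assigned_identity.steady_state.principal_id
}
```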
When you deploy an origin stack to Azure customer environments, Tensor9 automatically compiles resources from other cloud providers to their Azure equivalents.
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
Each managed identity is created in the customer’s Azure subscription with RBAC role assignments that allow your control plane to use it.

Example: Deploy managed identity with conditional access
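A minimal sketch of what this example could look like in Terraform with the azurerm provider (the identity name, resource group, and `var.tensor9_control_plane_principal_id` are illustrative assumptions):

```hcl
# Hypothetical sketch: the Deploy managed identity and a role assignment
# that lets the vendor control plane use it. The names, resource group,
# and principal variable are assumptions.
resource "azurerm_user_assigned_identity" "deploy" {
  name                = "tensor9-deploy"
  location            = azurerm_resource_group.tensor9.location
  resource_group_name = azurerm_resource_group.tensor9.name
}

resource "azurerm_role_assignment" "deploy_access" {
  scope                = azurerm_user_assigned_identity.deploy.id
  role_definition_name = "Managed Identity Operator"
  principal_id         = var.tensor9_control_plane_principal_id
}
```

In a setup like this, the “conditional” part is typically enforced by creating and removing the role assignment around an approved maintenance window.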
1
Customer grants deploy access
Customer approves a deployment by granting the Managed Identity Operator role and setting up conditional access policies. This can be done manually or through automated approval workflows.
2
You execute deployment locally
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3
Controller uses Deploy identity and creates resources
For each resource Terraform attempts to create, the Tensor9 controller inside the appliance uses the Deploy managed identity and creates the resource in the customer’s subscription.

All infrastructure changes occur within the customer’s subscription using their Deploy managed identity permissions.
4
Deploy access expires
After the time window expires or the role assignment is removed, the Deploy identity can no longer be used. Your control plane automatically reverts to using only the Steady-state identity for observability.
The Tensor9 controller in your customer’s appliance makes only outbound connections, so no ingress ports need to be opened in your customer’s network perimeter.
Your application resources run in their own VNet(s), completely separate from the Tensor9 controller VNet. The application VNet topology is defined entirely by your origin stack: whatever VPC resources you define in your origin stack will be compiled to Azure VNet resources in the appliance.

Example: Application VNet with internet-facing load balancer

If your origin stack defines an AWS VPC with public subnets and a load balancer, that topology will compile to Azure VNet resources in the customer’s appliance:
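An origin-stack fragment of this shape might look like the following sketch (names, CIDRs, and the load balancer configuration are illustrative, not taken from the Tensor9 docs):

```hcl
# Hypothetical origin-stack fragment: a VPC with two public subnets and
# an internet-facing application load balancer. Names and CIDRs are
# illustrative.
resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.app.id
  cidr_block              = cidrsubnet(aws_vpc.app.cidr_block, 8, count.index)
  map_public_ip_on_launch = true
}

resource "aws_lb" "web" {
  name               = "myapp-web-${var.instance_id}"
  internal           = false
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
}
```

A topology like this would compile to an Azure VNet, subnets, and a public load balancer in the customer’s appliance.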
This application VNet topology is deployed alongside the Tensor9 controller VNet, but they remain completely separate. The controller VNet manages the control plane connection, while the application VNet handles your application’s traffic and resources.
Azure appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.
When you deploy an appliance, Tensor9 automatically provisions a private Azure Container Registry in the customer’s Azure subscription to store your container images.

Example: Origin stack with container service

Your AWS origin stack references container images from your vendor’s Amazon ECR:
```hcl
# ECS Fargate service in your origin stack
resource "aws_ecs_task_definition" "api" {
  family                   = "myapp-api-${var.instance_id}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name = "api"
      # Reference to your vendor ECR registry
      image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0"
      portMappings = [
        {
          containerPort = 8080
          protocol      = "tcp"
        }
      ]
      environment = [
        {
          name  = "INSTANCE_ID"
          value = var.instance_id
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
```
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
Detects the container image reference in your ECS task definition
Provisions a private Azure Container Registry in the appliance (e.g., myappacr000000007e.azurecr.io)
Copies the container image from your vendor ECR registry to the appliance’s private ACR
Rewrites the deployment stack to reference the appliance-local registry
The compiled deployment stack will contain an Azure Container Instances or AKS deployment with the rewritten image reference:
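As a sketch of what the compiled result could resemble, here is a hypothetical Azure Container Instances resource with the rewritten image reference. The resource group, names, and exact compiled shape are assumptions; only the registry name mirrors the example above:

```hcl
# Hypothetical sketch of the compiled result: a container group whose
# image has been rewritten to the appliance-local ACR. The resource
# group and names are assumptions; the exact output may differ.
resource "azurerm_container_group" "api" {
  name                = "myapp-api"
  resource_group_name = azurerm_resource_group.appliance.name
  location            = azurerm_resource_group.appliance.location
  os_type             = "Linux"

  container {
    name = "api"
    # Rewritten from 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0
    image  = "myappacr000000007e.azurecr.io/myapp/api:1.0.0"
    cpu    = "0.25"
    memory = "0.5"

    ports {
      port     = 8080
      protocol = "TCP"
    }

    environment_variables = {
      INSTANCE_ID = var.instance_id
    }
  }
}
```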
This ensures the container image is stored locally in the customer’s subscription and the application doesn’t depend on cross-subscription access to your vendor registry.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance’s private registry
Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
This ensures that artifacts are cleaned up when deployments are removed, preventing orphaned resources.
For Lambda functions in your AWS origin stack, Tensor9 automatically handles copying function source code to the customer’s Azure environment:
```hcl
# Lambda function in your AWS origin stack
resource "aws_lambda_function" "processor" {
  function_name = "myapp-processor-${var.instance_id}"
  handler       = "processor.process_event"
  runtime       = "python3.11"
  role          = aws_iam_role.processor.arn

  # Reference to function code in your vendor S3 bucket
  s3_bucket = "vendor-lambda-sources"
  s3_key    = "processor-v1.0.0.zip"

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
```
During deployment, Tensor9:
Provisions a private Azure Storage Account in the appliance for function sources
Copies the Lambda source archive from your vendor S3 bucket to the appliance’s Storage Account
Compiles the Lambda function to an Azure Function with the appliance-local source reference
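The compiled Azure Function might resemble the following sketch. The resource names, service plan, and storage account references are illustrative assumptions; the exact shape Tensor9 emits may differ:

```hcl
# Hypothetical sketch of the compiled result: an Azure Function whose
# source is served from the appliance-local Storage Account. All names
# and the service plan reference are assumptions.
resource "azurerm_linux_function_app" "processor" {
  name                       = "myapp-processor"
  resource_group_name        = azurerm_resource_group.appliance.name
  location                   = azurerm_resource_group.appliance.location
  service_plan_id            = azurerm_service_plan.appliance.id
  storage_account_name       = azurerm_storage_account.sources.name
  storage_account_access_key = azurerm_storage_account.sources.primary_access_key

  site_config {
    application_stack {
      python_version = "3.11"
    }
  }

  app_settings = {
    INSTANCE_ID = var.instance_id
  }
}
```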
Like container images, destroying the deployment stack (tofu destroy) removes the copied function source archives.

See Artifacts for comprehensive documentation on artifact management, including immutability requirements and supported artifact types.
Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.
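For illustration, a Secrets Manager secret can be injected into an ECS container as an environment variable via the task definition’s `secrets` list. This is a sketch; the secret name and container wiring are illustrative assumptions:

```hcl
# Hypothetical origin-stack fragment: a Secrets Manager secret passed to
# the container as the DB_PASSWORD environment variable. Names are
# illustrative.
resource "aws_secretsmanager_secret" "db_password" {
  name = "myapp-db-password-${var.instance_id}"
}

# Inside the container definition of your task definition:
#   secrets = [
#     {
#       name      = "DB_PASSWORD"
#       valueFrom = aws_secretsmanager_secret.db_password.arn
#     }
#   ]
```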
Your application reads secrets from environment variables:
```python
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
```
If your application dynamically fetches secrets using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
See Secrets for detailed secret management patterns.