Google Cloud is a fully supported deployment platform for Tensor9 appliances. Deploying to Google Cloud customer environments provides access to Google’s global infrastructure with enterprise-grade security, scalability, and integration with customers’ existing Google Cloud resources.
When you deploy an application to Google Cloud customer environments using Tensor9:
Customer appliances run entirely within the customer’s Google Cloud project
Your control plane orchestrates deployments from your dedicated Tensor9 AWS account
Service account impersonation enables your control plane to manage customer appliances with customer-approved permissions
Service equivalents compile your origin stack into Google Cloud-native resources
Google Cloud appliances leverage Google Cloud services for compute, storage, networking, and observability, providing enterprise-grade infrastructure that integrates seamlessly with your customers’ existing Google Cloud environments.
Google Cloud appliances are deployed using Google Cloud-native services orchestrated by your Tensor9 control plane.
1
Customer provisions service accounts
Your customer creates four service accounts in their Google Cloud project, each corresponding to a permission phase: Install, Steady-state, Deploy, and Operate. These service accounts define what your control plane can do within their environment. The customer configures IAM policies that allow your control plane’s service account to impersonate these service accounts under appropriate conditions (time windows, approval labels, etc.).
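A minimal sketch of what the customer-side Terraform for this step might look like, using one service account per permission phase. Variable and resource names here are illustrative, not the exact names Tensor9 prescribes:

```hcl
variable "instance_id" {
  type = string
}

variable "customer_project_id" {
  type = string
}

locals {
  # One service account per permission phase
  phases = ["install", "steadystate", "deploy", "operate"]
}

resource "google_service_account" "phase" {
  for_each     = toset(local.phases)
  account_id   = "tensor9-${each.key}-${var.instance_id}"
  display_name = "Tensor9 ${title(each.key)} Service Account"
  project      = var.customer_project_id
}
```

The impersonation bindings and per-phase role grants (shown later on this page for the Deploy and Steady-state accounts) would then be attached to each of these accounts.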
2
You create a release for the customer appliance
You create a release targeting the customer’s appliance:
Your control plane compiles your origin stack into a deployment stack tailored for Google Cloud, compiling any non-Google Cloud resources to their Google Cloud service equivalents. The deployment stack downloads to your local environment.
3
Customer grants deploy access
The customer approves the deployment by granting temporary deploy access. This can be manual (updating IAM policy conditions) or automated (scheduled maintenance windows). Once approved, the Tensor9 controller in the appliance can impersonate the Deploy service account in the customer’s project.
4
You deploy the release
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller inside the customer’s appliance. The controller impersonates the Deploy service account and creates all infrastructure resources in the customer’s Google Cloud project:
Cloud Logging log sinks, service accounts, Cloud DNS records
Any other Google Cloud resources defined in your origin stack
5
Steady-state observability begins
After deployment, your control plane uses the Steady-state service account to continuously collect observability data (logs, metrics, traces) from the customer’s appliance without requiring additional approvals. This data flows to your observability sink, giving you visibility into appliance health and performance.
When you deploy an origin stack to Google Cloud customer environments, Tensor9 automatically compiles resources from other cloud providers to their Google Cloud equivalents. This allows you to maintain a single origin stack and deploy it across different customer environments.
Some popular AWS services (EC2, DynamoDB, EFS) are not currently supported. See Unsupported AWS services for the full list and recommended alternatives.
Each service account is created in the customer’s Google Cloud project with IAM policies that allow your control plane to impersonate it.

Example: Deploy service account with conditional access
```hcl
# Deploy service account
resource "google_service_account" "deploy" {
  account_id   = "tensor9-deploy-${var.instance_id}"
  display_name = "Tensor9 Deploy Service Account"
  project      = var.customer_project_id
}

# IAM binding allowing vendor control plane to impersonate
resource "google_service_account_iam_binding" "deploy_impersonation" {
  service_account_id = google_service_account.deploy.name
  role               = "roles/iam.serviceAccountTokenCreator"
  members = [
    "serviceAccount:[email protected]"
  ]

  condition {
    title       = "Deploy access time window"
    description = "Allow impersonation during approved time window"
    expression  = <<-EOT
      request.time >= timestamp("2024-01-01T00:00:00Z") &&
      request.time <= timestamp("2024-12-31T23:59:59Z") &&
      resource.labels.deploy_access == "enabled"
    EOT
  }
}

# Grant Deploy service account permissions in customer project
resource "google_project_iam_member" "deploy_compute" {
  project = var.customer_project_id
  role    = "roles/compute.instanceAdmin.v1"
  member  = "serviceAccount:${google_service_account.deploy.email}"
}

resource "google_project_iam_member" "deploy_storage" {
  project = var.customer_project_id
  role    = "roles/storage.admin"
  member  = "serviceAccount:${google_service_account.deploy.email}"
}
```
Your control plane can only impersonate the Deploy service account when:
The deploy_access label is set to “enabled”
The current time is within the allowed window
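Conceptually, the controller’s deploy-access gate combines these two conditions. A minimal Python sketch of the same logic, for illustration only (the real evaluation happens inside Google Cloud IAM using the CEL condition shown above):

```python
from datetime import datetime, timezone

def can_impersonate_deploy(now, window_start, window_end, labels):
    """Mirror of the IAM condition: time window AND deploy_access label."""
    return (
        window_start <= now <= window_end
        and labels.get("deploy_access") == "enabled"
    )

window_start = datetime(2024, 1, 1, tzinfo=timezone.utc)
window_end = datetime(2024, 12, 31, 23, 59, 59, tzinfo=timezone.utc)

# Inside the window but label disabled: impersonation denied
print(can_impersonate_deploy(
    datetime(2024, 6, 1, tzinfo=timezone.utc),
    window_start, window_end, {"deploy_access": "disabled"}))  # False

# Inside the window with label enabled: impersonation allowed
print(can_impersonate_deploy(
    datetime(2024, 6, 1, tzinfo=timezone.utc),
    window_start, window_end, {"deploy_access": "enabled"}))  # True
```

Both conditions must hold at request time; flipping the label off or letting the window lapse immediately revokes access without touching the role grants.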
Customers control when and how long deploy access is granted.

Example: Steady-state service account (read-only observability)
```hcl
# Steady-state service account
resource "google_service_account" "steadystate" {
  account_id   = "tensor9-steadystate-${var.instance_id}"
  display_name = "Tensor9 Steady-State Service Account"
  project      = var.customer_project_id
}

# Allow vendor control plane to impersonate (no time restriction)
resource "google_service_account_iam_binding" "steadystate_impersonation" {
  service_account_id = google_service_account.steadystate.name
  role               = "roles/iam.serviceAccountTokenCreator"
  members = [
    "serviceAccount:[email protected]"
  ]
}

# Grant read-only permissions scoped to appliance resources
resource "google_project_iam_member" "steadystate_logging_viewer" {
  project = var.customer_project_id
  role    = "roles/logging.viewer"
  member  = "serviceAccount:${google_service_account.steadystate.email}"

  condition {
    title       = "Filter by instance ID"
    description = "Only access resources with matching instance-id label"
    expression  = "resource.labels.instance_id == '${var.instance_id}'"
  }
}

resource "google_project_iam_member" "steadystate_monitoring_viewer" {
  project = var.customer_project_id
  role    = "roles/monitoring.viewer"
  member  = "serviceAccount:${google_service_account.steadystate.email}"

  condition {
    title       = "Filter by instance ID"
    description = "Only access resources with matching instance-id label"
    expression  = "resource.labels.instance_id == '${var.instance_id}'"
  }
}
```
The Steady-state service account:
Can read observability data from resources labeled with the appliance’s instance-id
1
Customer grants deploy access
The customer approves a deployment by setting the deploy_access label to “enabled” and defining a time window. This can be done manually or through automated approval workflows.
2
You execute deployment locally
You run the deployment locally against the downloaded deployment stack:
```shell
cd acme-corp-production
tofu init
tofu apply
```
The deployment stack is configured to route resource creation through the Tensor9 controller in the appliance.
3
Controller impersonates Deploy service account and creates resources
For each resource Terraform attempts to create, the Tensor9 controller inside the appliance impersonates the Deploy service account and creates the resource in the customer’s project. All infrastructure changes occur within the customer’s project using their Deploy service account permissions.
4
Deploy access expires
After the time window expires, the Deploy service account can no longer be impersonated. Your control plane automatically reverts to using only the Steady-state service account for observability.
The Tensor9 controller in your customer’s appliance makes only outbound connections, so no ingress ports need to be opened in your customer’s network perimeter:
```hcl
# Example: Controller VPC configuration (managed by Tensor9)
resource "google_compute_network" "tensor9_controller" {
  name                    = "tensor9-controller-${var.instance_id}"
  auto_create_subnetworks = false
  project                 = var.customer_project_id
}

resource "google_compute_subnetwork" "controller" {
  name          = "tensor9-controller-subnet-${var.instance_id}"
  ip_cidr_range = "10.0.0.0/24"
  region        = var.region
  network       = google_compute_network.tensor9_controller.id
  project       = var.customer_project_id
}

# Cloud Router for NAT
resource "google_compute_router" "controller" {
  name    = "tensor9-controller-router-${var.instance_id}"
  region  = var.region
  network = google_compute_network.tensor9_controller.id
  project = var.customer_project_id
}

# Cloud NAT for outbound connectivity
resource "google_compute_router_nat" "controller" {
  name                               = "tensor9-controller-nat-${var.instance_id}"
  router                             = google_compute_router.controller.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
  project                            = var.customer_project_id
}

# Firewall: egress only, no ingress
resource "google_compute_firewall" "controller_egress" {
  name    = "tensor9-controller-egress-${var.instance_id}"
  network = google_compute_network.tensor9_controller.name
  project = var.customer_project_id

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  direction          = "EGRESS"
  destination_ranges = ["0.0.0.0/0"]
}

# No ingress firewall rules - controller never accepts inbound connections
```
This architecture ensures that the customer’s appliance cannot be compromised via inbound network attacks on the controller.
Your application resources run in their own VPC(s), completely separate from the Tensor9 controller VPC. The application VPC topology is defined entirely by your origin stack: whatever VPC resources you define in your origin stack are deployed into the appliance.

Example: Application VPC with internet-facing load balancer

If your origin stack defines an AWS VPC with public subnets and a load balancer, that topology compiles to Google Cloud VPC resources in the customer’s appliance.
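A hypothetical sketch of what that compiled topology could look like on the Google Cloud side. Resource names, CIDR ranges, and the load balancer wiring here are illustrative; the actual output is produced by the Tensor9 compiler:

```hcl
resource "google_compute_network" "app" {
  name                    = "myapp-${var.instance_id}"
  auto_create_subnetworks = false
  project                 = var.customer_project_id
}

resource "google_compute_subnetwork" "app_public" {
  name          = "myapp-public-${var.instance_id}"
  ip_cidr_range = "10.1.0.0/24"
  region        = var.region
  network       = google_compute_network.app.id
  project       = var.customer_project_id
}

# An internet-facing AWS load balancer maps to an external Google Cloud
# load balancer; a global address anchors its public frontend (backend
# service and forwarding rule elided).
resource "google_compute_global_address" "app_lb" {
  name    = "myapp-lb-ip-${var.instance_id}"
  project = var.customer_project_id
}
```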
This application VPC topology is deployed alongside the Tensor9 controller VPC, but they remain completely separate. The controller VPC manages the control plane connection, while the application VPC handles your application’s traffic and resources.
All API calls within the customer’s Google Cloud project are logged to Cloud Audit Logs, providing a complete audit trail of what your control plane does:
Service account impersonations
Resource creation, modification, deletion
Permission denials
Configuration changes
Customers have full visibility into your control plane’s actions through their Cloud Audit Logs.
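For instance, a customer can surface every impersonation your control plane performs with a Cloud Logging filter along these lines (a sketch; exact field values depend on how their audit logs are configured):

```
protoPayload.serviceName="iamcredentials.googleapis.com"
protoPayload.methodName="GenerateAccessToken"
```

Each matching entry records which Tensor9 service account was impersonated, by whom, and when.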
Google Cloud appliances automatically provision private artifact repositories to store container images and application files deployed by your deployment stacks.
When you deploy an appliance, Tensor9 automatically provisions a private Artifact Registry repository in the customer’s Google Cloud project to store your container images.

Example: Origin stack with container service

Your AWS origin stack references container images from your vendor ECR registry:
```hcl
# ECS Fargate service in your origin stack
resource "aws_ecs_task_definition" "api" {
  family                   = "myapp-api-${var.instance_id}"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name = "api"
      # Reference to your vendor ECR registry
      image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0"
      portMappings = [
        {
          containerPort = 8080
          protocol      = "tcp"
        }
      ]
      environment = [
        {
          name  = "INSTANCE_ID"
          value = var.instance_id
        }
      ]
    }
  ])

  tags = {
    "instance-id" = var.instance_id
  }
}
```
Container copy during deployment

When you deploy the deployment stack, Tensor9 automatically:
Detects the container image reference in your ECS task definition
Provisions a private Artifact Registry repository in the appliance (e.g., us-docker.pkg.dev/customer-project/myapp-000000007e/api)
Copies the container image from your vendor ECR registry to the appliance’s private Artifact Registry
Rewrites the deployment stack to reference the appliance-local registry
The compiled deployment stack will contain a Cloud Run service with the rewritten image reference:
```hcl
template {
  containers {
    # Rewritten to reference appliance's private Artifact Registry
    image = "us-docker.pkg.dev/customer-project/myapp-000000007e/api:1.0.0"
    # ... rest of configuration compiled from ECS task definition
  }
}
```
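The reference rewrite described above can be sketched as a simple string transformation. This is an illustrative Python model only; the real logic lives inside the Tensor9 compiler, and the repository naming scheme is an assumption based on the example above:

```python
import re

def rewrite_image_ref(ecr_ref, customer_project, instance_suffix):
    """Rewrite a vendor ECR image reference to an appliance-local
    Artifact Registry reference (illustrative sketch)."""
    # ECR refs look like: <account>.dkr.ecr.<region>.amazonaws.com/<path>:<tag>
    m = re.match(
        r"^\d+\.dkr\.ecr\.[\w-]+\.amazonaws\.com/(?P<path>[^:]+):(?P<tag>.+)$",
        ecr_ref,
    )
    if not m:
        raise ValueError(f"not an ECR image reference: {ecr_ref}")
    # myapp/api:1.0.0 -> repo "myapp", image name "api"
    repo, name = m.group("path").split("/", 1)
    return (
        f"us-docker.pkg.dev/{customer_project}/"
        f"{repo}-{instance_suffix}/{name}:{m.group('tag')}"
    )

print(rewrite_image_ref(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/api:1.0.0",
    "customer-project", "000000007e"))
# us-docker.pkg.dev/customer-project/myapp-000000007e/api:1.0.0
```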
This ensures the container image is stored locally in the customer’s project and the application doesn’t depend on cross-project access to your vendor registry.

Artifact lifecycle

Container artifacts are tied to the deployment stack lifecycle:
Deploy (tofu apply): Tensor9 copies the container image from your vendor registry to the appliance’s private registry
Destroy (tofu destroy): Deleting the deployment stack also deletes the copied container artifact from the appliance’s private registry
This ensures that artifacts are cleaned up when deployments are removed, preventing orphaned resources.
For Lambda functions in your AWS origin stack, Tensor9 automatically handles copying function source code to the customer’s Google Cloud environment:
```hcl
# Lambda function in your AWS origin stack
resource "aws_lambda_function" "processor" {
  function_name = "myapp-processor-${var.instance_id}"
  handler       = "processor.process_event"
  runtime       = "python3.11"
  role          = aws_iam_role.processor.arn

  # Reference to function code in your vendor S3 bucket
  s3_bucket = "vendor-lambda-sources"
  s3_key    = "processor-v1.0.0.zip"

  environment {
    variables = {
      INSTANCE_ID = var.instance_id
    }
  }

  tags = {
    "instance-id" = var.instance_id
  }
}
```
During deployment, Tensor9:
Provisions a private Cloud Storage bucket in the appliance for function sources
Copies the Lambda source archive from your vendor S3 bucket to the appliance’s Cloud Storage bucket
Compiles the Lambda function to a Cloud Function with the appliance-local source reference
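One small piece of this compilation is translating runtime identifiers between the two platforms. A hedged Python sketch of that idea; the table below covers only a few runtimes and the actual Tensor9 mapping is internal:

```python
# Hypothetical Lambda -> Cloud Functions runtime identifier mapping
RUNTIME_MAP = {
    "python3.11": "python311",
    "python3.12": "python312",
    "nodejs18.x": "nodejs18",
    "nodejs20.x": "nodejs20",
}

def to_cloud_functions_runtime(lambda_runtime):
    """Translate an AWS Lambda runtime ID to its Cloud Functions equivalent."""
    try:
        return RUNTIME_MAP[lambda_runtime]
    except KeyError:
        raise ValueError(f"no Cloud Functions equivalent for {lambda_runtime}")

print(to_cloud_functions_runtime("python3.11"))  # python311
```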
Like container images, destroying the deployment stack (tofu destroy) removes the copied function source archives. See Artifacts for comprehensive documentation on artifact management, including immutability requirements and supported artifact types.
Store secrets in AWS Secrets Manager or AWS Systems Manager Parameter Store in your AWS origin stack, then pass them to your application as environment variables.
Your application reads secrets from environment variables:
```python
import os

# Read secret from environment variable
db_password = os.environ['DB_PASSWORD']
```
If your application dynamically fetches secrets using AWS SDK calls (e.g., boto3.client('secretsmanager').get_secret_value()), those calls will NOT be automatically mapped by Tensor9. Always pass secrets as environment variables.
See Secrets for detailed secret management patterns.
Our team can help with deployment troubleshooting, service account configuration, service equivalents, and best practices for Google Cloud environments.