
Per-command approval is the safe default, but for routine operations (daily disk-usage checks, log tails, health probes) the back-and-forth gets old fast. A pre-approval lets a customer sign once: you can then run the same template against their appliance up to N times within a validity window, without per-command human review.

Pre-approval is offered to your customer inside the support-portal approval UI as an optional step after a regular command's output is released. Your customer clicks through Step 5 of the approval UI, picks a scope, and pastes a few bash snippets we render into their own terminal.

Your job is to ask your customer to enable a pre-approval and explain what it does. The lifecycle in one line: your customer signs once, you run up to N times within scope, and the grant ends when your customer revokes it or the window expires.

What a pre-approval is

A pre-approval is an Ed25519-signed manifest your customer creates on their own workstation that authorizes one of your templates to run against one specific appliance, capped by:
  • Max runs. Default 100. (Note: enforcement of this counter is scheduled but not yet shipping; see “Known limits” below.)
  • Validity window. Default 90 days. After expiry, runs fall back to per-command approval.
  • Approval level. CommandsOnly (auto-approve the command; customer still releases output per run) or FullyPreApprove (auto-approve both command and output release).
  • Variable constraints. Optional regex patterns that variable values you submit must match. A submission whose --vars would violate any constraint falls back to per-command approval.
Your customer’s Ed25519 private key lives in their own secret store: AWS SSM Parameter Store (SecureString) on AWS appliances, or a Kubernetes Secret on Kube appliances. The approval UI renders the bash snippets that put it there. Other backends (GCP, Azure, DigitalOcean, on-prem) are on the roadmap; the approval UI refuses to advance setup on those today. The matching public key is pinned into the appliance’s secret store by a one-time command your customer runs against their own cloud account. Your control plane never sees the private key, and you never see either.

Approval levels

Level                    Command step     Output release step
CommandsOnly (default)   Auto-approved    Customer still reviews and releases per run
FullyPreApprove          Auto-approved    Auto-approved
CommandsOnly is the right default for most templates: your customer loses the per-command review burden but still controls what bytes you see. FullyPreApprove is appropriate for purely operational templates where the output content is uninteresting (a health probe whose output is just ok) or where your customer has strong upstream trust in you.

Granting a pre-approval

Pre-approval is offered as Step 5 of the support-portal approval UI, after your customer releases the output of a regular per-command flow. The approval UI walks them through four sub-steps:
Step 1: Pick a scope

Your customer picks what to pre-approve: this specific template only, or a broader scope (e.g., any read-only command). They also pick the approval level (CommandsOnly or FullyPreApprove), validity window (default 90 days), and optional variable constraints.
Step 2: Set up signing keypair (one-time per appliance)

If the appliance has no pinned signing pubkey yet, the approval UI renders three bash snippets your customer pastes into their terminal:
  1. openssl genpkey -algorithm Ed25519 -out priv.pem to generate the keypair on their workstation.
  2. aws ssm put-parameter --type SecureString (or kubectl create secret) to store the private key in your customer’s own secret store.
  3. A second put-parameter / create secret to pin the public key where the appliance can read it.
The approval UI polls until the appliance reports the pubkey is visible, then advances. Subsequent pre-approvals on this appliance skip this step.
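For reference, the workstation-local part of those snippets looks like the following. Only the keygen step is reproduced here; the storage commands in sub-steps 2 and 3 use install-scoped secret paths that only the approval UI knows, so they are omitted.

```shell
# Sub-step 1 as rendered: generate the Ed25519 keypair on the
# customer's workstation (requires OpenSSL 1.1.1+).
openssl genpkey -algorithm Ed25519 -out priv.pem

# Extract the public half -- this is the key that the UI-rendered
# put-parameter / create-secret snippet pins to the appliance.
openssl pkey -in priv.pem -pubout -out pub.pem
```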
Step 3: Render the install + sign snippets

The approval UI shows the canonical manifest your customer is about to sign (so they can audit the exact bytes), plus a bash snippet that:
  1. Fetches the private key from your customer’s chosen storage.
  2. Runs openssl pkeyutl -sign over the canonical bytes.
  3. Prints a base64 signature.
The private key never enters the browser. The signing is local to your customer’s workstation.
Step 4: Paste the signature

Your customer pastes the base64 signature into the approval UI's text field. The approval UI submits the signed manifest to your control plane, which mirrors it to the appliance's vault on the appliance's next poll.
What you'll see: from the next ops command submission onward, runs that match the pre-approval's scope auto-approve. Your customer gets no per-run notification; runs that fall outside the scope still produce a regular /support/<token> link.

Variable constraints

Without constraints, a MOUNT_PREFIX variable can be set to any string at submit time. The approval UI’s Step 5a scope picker accepts per-variable regex patterns that pin acceptable values, e.g.:
MOUNT_PREFIX  =  ^/var/log/.*
LIMIT         =  ^([1-9][0-9]?|100)$
If you submit a tensor9 ops command create whose --vars would violate any constraint, the appliance rejects auto-approval for that submission and falls back to per-command manual approval. Your customer still sees the request; they just have to approve it by hand. Constraint patterns are stored in the manifest, signed alongside the rest of the scope, and re-verified on every run.
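The behavior is easy to model: treat each constraint as an anchored regex that the submitted value must match in full. Here grep -E stands in for the appliance-side matcher (an assumption; the real matcher isn't specified on this page):

```shell
# matches VALUE PATTERN -- prints the verdict the way the appliance acts.
matches() {
  if printf '%s' "$1" | grep -Eq "$2"; then
    echo "auto-approve"
  else
    echo "fall back to per-command approval"
  fi
}

matches '/var/log/nginx/access.log' '^/var/log/.*'   # -> auto-approve
matches '/etc/shadow'               '^/var/log/.*'   # -> fall back to per-command approval
matches '100' '^([1-9][0-9]?|100)$'                  # -> auto-approve
matches '0'   '^([1-9][0-9]?|100)$'                  # -> fall back to per-command approval
```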

Listing active pre-approvals

You can list the pre-approvals your customer has granted on a specific appliance:
tensor9 ops template approvals list \
  --appName my-app \
  --applianceId appl_xyz789
This command walks every template you've published on this app and reports any pre-approval scoped to the appliance, including:
  • Template id and current version
  • Max runs, used runs, runs remaining (subject to the “Known limits” caveat below)
  • Validity window (validUntil timestamp)
  • Approval level
  • Signer public-key fingerprint (so an audit can confirm the manifest was signed with the customer’s current key, not a leaked older one)
  • Runtime state (expired or active)
Read-only. Pass --output json to script against the output. This is a vendor-side surface; your customer reads the same information from their support-portal approval UI.
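For example, a script that surfaces already-expired grants from the JSON. The field names used here (templateId, validUntil) are guesses based on the bullet list above, not a documented schema; confirm against real output before depending on them.

```shell
# Stand-in for `tensor9 ops template approvals list ... --output json`.
approvals='[
  {"templateId":"tmpl_abc123","validUntil":"2020-01-01T00:00:00Z"},
  {"templateId":"tmpl_def456","validUntil":"2099-01-01T00:00:00Z"}
]'

# ISO-8601 timestamps compare correctly as plain strings, so a
# lexicographic comparison against "now" finds expired grants.
printf '%s' "$approvals" |
  jq -r '.[] | select(.validUntil < (now | todate)) | .templateId'
# -> tmpl_abc123  (the 2020 grant has expired)
```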

Verifying a pre-approval is actually active

The most common failure: your customer finishes the approval UI's pre-approval flow, but the pin step never actually completed (the cloud command was copied wrong, or your customer lacks the IAM permission to write to the secret store). Without a pinned public key on the appliance, every pre-approval verification falls back to manual, silently. You keep getting per-command notifications even though your customer thinks they enabled auto-approval.

The approval UI polls the appliance for pubkey visibility before letting your customer advance past the setup step, so this failure mode is mostly caught at setup time. But if your customer skipped the poll (closed the tab early, or the approval UI was bypassed by a buyer-side script), you can confirm from the vendor side by running audit-verify in JSON mode against a recent command:
tensor9 ops command audit verify \
  --appName my-app \
  --commandName <a recent command name> \
  --output json | jq '.checks[] | select(.name=="commandApproval") | .approvedBy'
If pre-approval is working, recent commands print "BuyerSignedPreapproval:<keyId>". If pre-approval is silently falling back to manual, the same field will be your customer’s email or whatever signer string they used at approval time. A pre-approval configured but consistently falling back almost always means the pinning step from grant didn’t take.
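If you want to script the check, matching on the documented prefix is enough. The key id below is a made-up example value:

```shell
# approvedBy as extracted by the jq filter above; example value only.
approvedBy="BuyerSignedPreapproval:key_ab12cd"

case "$approvedBy" in
  BuyerSignedPreapproval:*)
    echo "pre-approval active (signer key id: ${approvedBy#*:})" ;;
  *)
    echo "silent fallback to manual -- re-check the pubkey pin step" ;;
esac
```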

Revoking a pre-approval

There are two revocation paths (one targeted, one nuclear). Both happen directly against the customer’s own cloud secret store with their own credentials and do not involve your control plane, so revocation works even if your control plane is unreachable.

Surgical: revoke one pre-approval

When your customer opens an active or recently finished support link, the approval UI's terminal page renders a revocation snippet for the pre-approval they granted in that flow. The snippet is the matching aws ssm delete-parameter (or kubectl delete secret) for that specific pre-approval; your customer pastes it into their terminal and runs it with their own cloud credentials. tensor9 ops template revoke is the vendor-side surface that renders the same snippet (useful when you need to instruct a customer who isn't actively in a support session):
tensor9 ops template revoke \
  --appName my-app \
  --templateId tmpl_abc123 \
  --templateSemver 1.0.0 \
  --applianceId appl_xyz789
--templateSemver is required because pre-approvals are scoped to a specific template version; if the same template has evolved, each version carries its own pre-approval that has to be revoked independently. On the appliance's next polling cycle (within seconds), the appliance reads the revocation record and stops honoring that specific pre-approval. Other pre-approvals on the same appliance, including those for your other templates, are unaffected. Runs of the revoked template fall back to per-command approval.

The revocation record itself is unsigned by design: anyone with write access to the appliance's secret store can already rotate or delete keys, and writing a fake revocation record can only force a pre-approval onto the stricter per-command path. Adding signatures would add complexity without a security gain.
Don’t try to construct the revocation command by hand from this doc. The exact secret-store path is install-scoped and key-id-suffixed, and the wire format is non-trivial. Always use the approval UI’s rendered snippet (or tensor9 ops template revoke from the vendor side); the path it produces is the one the appliance actually consults.

Nuclear: revoke every pre-approval signed by this key

To revoke every pre-approval signed by a particular workstation's key (for example, on a workstation that may be compromised), the customer deletes the pinned public key from the appliance's secret store. The approval UI's signing-keypair setup step (Step 2) prints the exact path on first run; your customer can re-open any support link and walk through Step 2 again to see it (the snippet shape is the inverse of the pin command: delete-parameter / delete secret in place of put-parameter / create secret).
# AWS
aws ssm delete-parameter --name <path-from-approval-ui-setup>

# Kubernetes
kubectl delete secret <secret-name-from-approval-ui-setup> -n <namespace>
Once the pinned public key is gone, every signed pre-approval whose signer matches the deleted key immediately fails verification on the appliance and falls back to per-command approval. To start granting pre-approvals again, your customer opens a support link, the approval UI detects no pinned key, and walks them back through Step 2 setup with a fresh keypair.

Lost-key recovery

The private key lives in your customer’s own SSM Parameter Store or Kubernetes Secret, so workstation loss is not a key-loss event: any workstation with your customer’s cloud credentials can refetch the key. The actual lost-key scenarios are:
  • Your customer accidentally deletes the SecureString / Secret holding the private key (aws ssm delete-parameter on the wrong name, kubectl delete secret on the wrong target).
  • Your customer’s cloud account is compromised and the private key may have been exfiltrated; they want to rotate to a new key.
  • The workstation that generated the key is suspected of compromise and your customer wants to invalidate every pre-approval ever signed from it.
There is no escrow on our side; the private key never leaves your customer’s environment. Recovery is the same shape as nuclear revoke:
  1. From any workstation your customer trusts, run the nuclear revoke step above to delete the pinned public key from the appliance. Every pre-approval signed by the lost key immediately stops auto-approving.
  2. From the new workstation, open a fresh support link. The approval UI detects no pinned key and walks your customer through Step 2 setup with a new keypair, then through Step 5 to re-grant pre-approvals for each template they want to keep.

Supported appliance backends

Today the pin and revoke command rendering supports:
  • AWS SSM Parameter Store (SecureString), for appliances running in AWS accounts.
  • Kubernetes secrets, for appliances running on a Kubernetes cluster your customer controls.
Other backends (GCP Secret Manager, Azure Key Vault, HashiCorp Vault, on-prem / air-gapped, multi-region appliances) fall back to a “# Pinning command not yet templated for ApplianceEnv=$env” placeholder; your customer cannot use the rendered-command flow on those today. We are adding them as customer demand surfaces. If your deal hinges on one of these, let us know.

Validity-mid-flight semantics

The validity window is checked twice per command: once when the command is approved, and again when output is released (for FullyPreApprove only). This matters for two cases:
  • A FullyPreApprove command whose manifest expires between approve and release. The command executes (the validity check passed at command-approve time) but the output is held for manual release. You see a command stuck at Executed waiting for human release, even though everything was supposed to be auto-approved. Restoring auto-release means refreshing the pre-approval and releasing the held output by hand.
  • A signature check that fails between approve and release. Per appliance-side policy, signature failures at output release fall back to manual rather than rejecting outright. The reasoning: rejecting output on a command that already ran adds no safety and only blocks your customer from reviewing the output. Same symptom on your end: an Executed command waiting on human release.
If a FullyPreApprove workload starts producing held outputs after months of clean runs, the most likely cause is one of those two edges, not a bug in your code.
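The two edges reduce to the same double comparison: the window is checked once at command approval and again at output release. A sketch with epoch-second timestamps (the variable names are illustrative, not CLI fields):

```shell
validUntil=1700000000   # manifest expiry
approveAt=1699999000    # check #1: at command approval -- inside the window
releaseAt=1700000500    # check #2: at output release  -- past the window

if [ "$approveAt" -le "$validUntil" ]; then
  echo "command auto-approved and executed"
fi
if [ "$releaseAt" -gt "$validUntil" ]; then
  echo "output held for manual release"   # the stuck-at-Executed symptom
fi
```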

Trust properties

When asking your customer to enable a pre-approval, these are the properties worth naming explicitly:
  • Signing keys live entirely in your customer’s environment. The private key sits in your customer’s chosen secret store (SSM, Kubernetes Secret, or whatever they pick); the actual signing runs locally in their terminal via openssl pkeyutl. The browser and your control plane never touch the private key. A compromised control plane cannot fabricate a pre-approval.
  • Pinning is gated by your customer’s own cloud credentials. Only your customer can install or delete the pinned public key on an appliance.
  • Constraints are part of the signed scope. You cannot expand the set of acceptable variable values after the fact without the customer re-signing.
  • Revocation is independent of your control plane. Even if your control plane is down or compromised, a customer can revoke any pre-approval through their own cloud console.

Known limits

  • maxRuns is not yet enforced. The counter that should decrement on each auto-approved run is wired into the data model but the increment call site is not yet shipping. In practice this means a pre-approval with maxRuns: 100 will auto-approve unlimited times within its validity window. Treat maxRuns as documentation-of-intent until enforcement lands. Validity-window expiry, signature check, and revocation all work correctly.

Related pages

  • Running commands: how individual ops commands flow through the lifecycle, with and without pre-approval.
  • Authoring templates: how to write templates whose data_access, side_effects, and permission tier give a customer enough information to pre-approve them.
  • Security model: the keys, signatures, and storage-side audit guarantees that underpin pre-approval (including why revocation cannot rewrite history).