
Provisioning a Resource

There are two interfaces for provisioning cloud resources in Terranetes: Configurations and CloudResources.

The difference is largely one of simplicity versus control. While a Configuration is essentially a one-to-one mapping to the underlying Terraform module, the CloudResource interface exposes only a subset of the options, allowing platform teams to set defaults and inline their best practices and security or organizational policies. This arrangement has the added benefit of removing the cognitive load surrounding the myriad of options a Terraform module provides.

Example CloudResource

Assuming the CloudResource interface is being used:

1. Search the services currently available

Query the cluster to discover the resources available to self-serve.

$ kubectl get plans
database v0.0.1 3s

2. View latest revision of the service

$ kubectl get revision $(kubectl get plan database -o json | jq -r '[FILTER]') -o yaml

The above will show you the options available on the plan.

3. Create a CloudResource from a revision

$ tnctl create cloudresource database

Example Configuration resource

Alternatively, if you are using Configurations, below is an example:

apiVersion: terraform.appvia.io/v1alpha1
kind: Configuration
metadata:
  name: bucket
spec:
  # ssh example: git::ssh://
  module: [MODULE_SOURCE]
  providerRef:
    name: default
  writeConnectionSecretToRef:
    name: test
  # An optional reference to a secret containing credentials to retrieve
  # the git repository
  # auth:
  #   name: [SECRET_NAME]
  # Terraform variables used to populate the module
  variables:
    # -- The name of the bucket. If omitted, Terraform will assign a random, unique name
    bucket: example-test-1234
    # -- The canned ACL to apply
    acl: private
    # -- Map containing versioning configuration
    versioning:
      enabled: true
    # -- Whether Amazon S3 should block public ACLs for this bucket
    block_public_acls: true
    # -- Whether Amazon S3 should block public bucket policies for this bucket
    block_public_policy: true
    # -- Whether Amazon S3 should ignore public ACLs for this bucket
    ignore_public_acls: true
    # -- Whether Amazon S3 should restrict public bucket policies for this bucket
    restrict_public_buckets: true
    # -- Map containing server-side encryption configuration
    server_side_encryption_configuration:
      rule:
        apply_server_side_encryption_by_default:
          sse_algorithm: "aws:kms"
        bucket_key_enabled: true

The source syntax (spec.module) on releases <= v0.2.5 does not fully support the suggested GitHub shorthand. References to GitHub must use the https:// or git::ssh:// prefix, following the syntax of a generic Git repository.

Sections of the configuration resource

The configuration resource is composed of the following sections.

Module reference

The module reference defines the source of the terraform module to run.


The source reference uses the exact same format as terraform itself (the same library is used). For full details take a look at hashicorp/go-getter.

For quick reference:

  • Using SSH the format would look like this: git::ssh://git@[HOST]/[ORG]/[REPO].git?ref=[REF]
  • Using HTTPS the format would be: https://[HOST]/[ORG]/[REPO].git?ref=[REF]

You can also extract specific folders or files from the downloaded module by using the double slash: [URL]//dir/file.
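Putting the pieces together, a hypothetical module reference might look like the fragment below. The organization, repository name, subdirectory and tag are all assumptions for the sake of illustration:

```yaml
spec:
  # SSH scheme, selecting the modules/s3 subdirectory with the double
  # slash and pinning to a tag with ?ref= (all values illustrative)
  module: git::ssh://git@github.com/example-org/terraform-modules.git//modules/s3?ref=v1.0.0
```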

Provider reference

The provider reference is what links a configuration to the credentials used to speak to the cloud. Depending on the Kubernetes RBAC you currently possess, you can retrieve a list of the current providers via kubectl:

$ kubectl get providers -n [NAMESPACE]

Once you have the provider name, you use the reference in the configuration:

spec:
  providerRef:
    name: <NAME>

Terraform variables

The variables section (spec.variables) is a free-form map used to define all the variables the module can consume. These are converted to HCL and passed into the workflow via -var-file on the plan and apply stages.
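To make the conversion concrete, the fragment below shows an illustrative variables map (the variable names and values are assumptions, not part of any real module) together with the HCL the controller would effectively pass via -var-file:

```yaml
spec:
  variables:
    instance_class: db.t3.micro
    allocated_storage: 20
# The map above is serialized into a variables file, roughly
# equivalent to handing Terraform:
#
#   instance_class    = "db.t3.micro"
#   allocated_storage = 20
#
# via -var-file on the plan and apply stages.
```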

For variables that are sensitive, such as passwords, it is better to use the spec.valueFrom field. This is a collection of references to Kubernetes secrets that hold the values.


The spec.valueFrom field is available from version >= v0.1.6.

An example for an RDS module could be:

spec:
  valueFrom:
    - secret: db_password
      key: database_password
      optional: false

Connection secret reference

The connection secret spec.writeConnectionSecretToRef is the name of a secret within the namespace where you want any Terraform outputs to be written. These outputs are converted to environment variable format, i.e., upper-cased and ready to be consumed by workloads using env and envFrom.
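As a sketch of the consuming side, a workload can pull the whole connection secret in as environment variables with envFrom. The pod name and image here are assumptions; the secret name matches the writeConnectionSecretToRef example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest
      envFrom:
        # Secret written by the Configuration via spec.writeConnectionSecretToRef
        - secretRef:
            name: test
```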

By default, when a secret is defined, all the outputs produced are written in environment variable form. If you want to filter this and only select specific keys from the terraform output, you can include the spec.writeConnectionSecretToRef.keys field as shown below.

spec:
  writeConnectionSecretToRef:
    name: [NAME]
    keys:
      - name_of_key
      - name_of_key

Secrets Remapping

We use the resource outputs as the keys in the connection secret, so if a resource has a database_endpoint output, the secret will have a key named DATABASE_ENDPOINT. You might, however, want to rename one or more outputs for convenience, for example changing database_endpoint to mysql_host. You can remap the key as below:

apiVersion: terraform.appvia.io/v1alpha1
kind: Configuration # the same applies to a CloudResource
metadata:
  name: bucket
spec:
  providerRef:
    name: aws
  writeConnectionSecretToRef:
    name: test
    keys:
      - database_endpoint:mysql_host # is renamed to MYSQL_HOST
      - database_port                # is unchanged as DATABASE_PORT
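The remapping convention is easy to reason about: take the name after the colon if one is present, otherwise the output name itself, then upper-case it. The sketch below mimics that rule for illustration only; it is not the controller's actual code:

```shell
#!/bin/sh
# Mimic the keys remapping: "old:new" resolves to NEW, "old" to OLD.
remap() {
  target="${1#*:}"   # strip "old:" if a colon is present, else unchanged
  printf '%s\n' "$target" | tr '[:lower:]' '[:upper:]'
}

remap "database_endpoint:mysql_host"   # prints MYSQL_HOST
remap "database_port"                  # prints DATABASE_PORT
```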

Viewing the changes

As a Configuration transitions through its plan, apply and destroy phases, a job is created in the namespace and used to feed back the execution of the change. The jobs follow the naming format [RESOURCE]-[GENERATION]-[plan|apply|destroy]. You can easily view the execution of a change by inspecting the pod's logs (kubectl logs [POD]).
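For instance, assuming a Configuration named bucket at generation 2 (both values are illustrative), the naming format resolves as follows:

```shell
#!/bin/sh
# Compose the job name from the format [RESOURCE]-[GENERATION]-[STAGE].
RESOURCE=bucket   # metadata.name of the Configuration (assumed)
GENERATION=2      # metadata.generation at the time of the change (assumed)
STAGE=plan        # one of plan|apply|destroy

echo "${RESOURCE}-${GENERATION}-${STAGE}"   # prints bucket-2-plan
```

You could then inspect the pods the job created with, for example, kubectl logs -l job-name=bucket-2-plan.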

As an alternative to using kubectl commands, you can use the tnctl CLI:

$ tnctl logs -n NAMESPACE NAME

Approving a plan

By default, unless the spec.enableAutoApproval is set to true, all Configurations require a manual approval. You can do this by toggling an annotation on the Configuration itself.

To approve the Configuration or CloudResource bucket:

$ kubectl -n apps annotate configuration bucket "[ANNOTATION]"=true --overwrite

Or for a CloudResource:

$ kubectl -n apps annotate cloudresource bucket "[ANNOTATION]"=true --overwrite

Or if using the tnctl cli, you can type

$ tnctl approve cloudresource|configuration -n NAMESPACE NAME

Deleting the resource

You can delete the resource like any other Kubernetes resource (kubectl delete configuration [NAME]). One extra feature is the ability to orphan the cloud resources (i.e., delete the Kubernetes representation but DO NOT delete the cloud resources themselves).

For instance, you may need to migrate the configuration to another cluster. In that case:

  1. Annotate the Configuration with kubectl annotate configuration [NAME] "[ANNOTATION]"=true
  2. Delete the Configuration resource as normal. The resource will disappear, but the cloud resources will remain.