Quick start

This quick start guide assumes you have a working knowledge of the Catalyst Cloud command line interface (CLI) and some familiarity with Kubernetes.


Due to active development of this service, we recommend using the latest version of the CLI to interact with it. Please refer to the Upgrading the CLI section of the documentation for upgrade instructions. This documentation assumes python-magnumclient is 2.12.0 or above.
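For example, if your CLI tools were installed with pip, a typical upgrade would look like this:

$ pip install --upgrade python-openstackclient python-magnumclient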


Ensure user has the required privileges

In order to create a Kubernetes cluster, you need to ensure your user has been allocated the heat_stack_owner role.
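You can confirm the roles allocated to your user with the standard identity CLI; the user and project names below are placeholders:

$ openstack role assignment list --user <username> --project <project_name> --names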

Ensure quota is sufficient

A small quota is sufficient to deploy the production cluster template if your project is empty. However, if you already have some resources allocated, you may want to increase your quota to ensure there is sufficient capacity available to deploy Kubernetes.

By default, the production Kubernetes template allocates:

  • 6 compute instances

  • 18 vCPUs

  • 36 GB of RAM

  • 3 block storage volumes

  • 60 GB of block storage space

  • 3 security groups

  • 1 load balancer

As a project admin you can change your quota using the Quota Management panel in the dashboard, under the Management section.
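You can also review your current quota limits from the CLI:

$ openstack quota show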

Download and install kubectl

Kubectl is the command line interface to the Kubernetes API and the canonical way to interact with Kubernetes clusters.

The instructions below can be used to quickly install kubectl on Linux as a static binary:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

For other platforms or installation methods, please refer to the detailed instructions on how to install kubectl.
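To confirm the binary is installed correctly and on your path, check the client version:

$ kubectl version --client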

Choosing a cluster template

A cluster template is a blueprint to build a Kubernetes cluster (similar to machine images for the compute service). The cluster template specifies what version of Kubernetes will be installed and the features that will be enabled. Initially, Catalyst Cloud only supports the use of pre-defined templates.


From cluster template version v1.12.10 onwards, as a security best practice, new clusters are created as private clusters by default. This means that the cluster will not be reachable directly from the internet.

The following command will list all cluster templates available:

$ openstack coe cluster template list
| uuid                                 | name                              |
| b1d124db-b7cc-4085-8e56-859a0a7796e6 | kubernetes-v1.11.9-dev-20190402   |
| cf337c0a-86e6-45de-9985-17914e78f181 | kubernetes-v1.11.9-prod-20190402  |
| 967a2b86-8709-4c07-ae89-c0fe6d69d62d | kubernetes-v1.12.7-dev-20190403   |
| f8fc0c67-84af-4bb8-89fb-d29f4c926975 | kubernetes-v1.12.7-prod-20190403  |
| bfde711c-655c-4de9-b37e-847fc635b734 | kubernetes-v1.12.10-dev-20190912  |
| 38382877-957e-4667-9851-838eef892b64 | kubernetes-v1.12.10-prod-20190912 |
| d319cc8e-e27d-4ef9-be84-a6d431800215 | kubernetes-v1.13.10-dev-20190912  |
| e8257719-b209-40bf-9619-2895698d5a73 | kubernetes-v1.13.10-prod-20190912 |

Alternatively, a list of cluster templates available can be seen in the Cluster Templates panel in the dashboard, under the Container Infra section.

Template types

The naming convention used for the templates is broken down as follows:

  • kubernetes-v1.13.10 : this is the version of Kubernetes that the template will use to create the cluster.

  • -prod or -dev: the type of environment to be created (see below).

  • -20190912: the date on which the template was created.

The differences between the development and production templates are:

  • Production: creates a Kubernetes cluster that is intended for production workloads. It creates three or more master nodes and three or more worker nodes. The master nodes will have a loadbalancer deployed in front of them to provide high availability for the Kubernetes API. This template also deploys Prometheus and Grafana to provide cluster metrics.

  • Development: creates a minimal Kubernetes cluster with a single master and a single worker node. As the name suggests, it should not be used for production.

Deploying a Kubernetes cluster

Private vs Public cluster API access

Any cluster created using one of the predefined templates will, by default, be created as a private cluster. This means that the Kubernetes API will not be accessible from the internet and access will need to be via a bastion or jumphost server within the cloud project.

If you would prefer to create a publicly accessible cluster, simply add the --floating-ip-enabled option to the cluster creation command (this option is available in python-magnumclient 2.12.0 and above).

The actual usage would look like this.

$ openstack coe cluster create <Cluster name> \
  --cluster-template <Template ID> \
  --floating-ip-enabled

This quick start guide covers the steps to create a Kubernetes cluster from scratch. If you wish to create a cluster on an existing private network, refer to the relevant section in the private-cluster documentation.

Creating a cluster

To create a new production cluster, run the following command:

$ openstack coe cluster create k8s-cluster \
--cluster-template kubernetes-v1.13.10-prod-20190912 \
--keypair my-ssh-key \
--node-count 3 \
--master-count 3

Request to create cluster c191470e-7540-43fe-af32-ad5bf84940d7 accepted

To create a new development cluster, run the following command:

$ openstack coe cluster create k8s-cluster \
--cluster-template kubernetes-v1.13.10-dev-20190912 \
--keypair my-ssh-key \
--node-count 1 \
--master-count 1

Request to create cluster c191470e-7540-43fe-af32-ad5bf84940d7 accepted

Modifying cluster template behaviour

It is possible to override the behaviour of a template by adding or modifying the labels supplied by the template. To do this the entire list of existing labels in the template must be provided as a set of key=value pairs, overriding the required ones as necessary.

To get the list of existing labels on a template, simply view the template like so.

$ openstack coe cluster template show kubernetes-v1.15.6-dev-20191129 -f yaml
insecure_registry: '-'
labels:
  auto_healing_controller: magnum-auto-healer
  auto_healing_enabled: 'true'
  auto_scaling_enabled: 'false'
  cloud_provider_enabled: 'true'
  cloud_provider_tag: 1.14.0-catalyst
  container_infra_prefix: docker.io/catalystcloud/
  heat_container_agent_tag: stein-dev
  ingress_controller: octavia
  k8s_keystone_auth_tag: v1.15.0
  keystone_auth_enabled: 'true'
  kube_dashboard_enabled: 'true'
  kube_tag: v1.15.6
  magnum_auto_healer_tag: v1.15.0-catalyst.0
  master_lb_floating_ip_enabled: 'false'
  octavia_ingress_controller_tag: 1.14.0-catalyst
  prometheus_monitoring: 'true'

<-- truncated for brevity -->

Then convert this into a comma-separated key=value list like so.


Ensure there is no whitespace around commas (,) or equals signs (=) when creating the list.

auto_healing_controller=magnum-auto-healer,auto_healing_enabled=true,\
auto_scaling_enabled=false,cloud_provider_enabled=true,\
cloud_provider_tag=1.14.0-catalyst,\
container_infra_prefix=docker.io/catalystcloud/,\
heat_container_agent_tag=stein-dev,ingress_controller=octavia,\
k8s_keystone_auth_tag=v1.15.0,keystone_auth_enabled=true,\
kube_dashboard_enabled=true,kube_tag=v1.15.6,\
magnum_auto_healer_tag=v1.15.0-catalyst.0,\
master_lb_floating_ip_enabled=false,\
octavia_ingress_controller_tag=1.14.0-catalyst,\
prometheus_monitoring=true
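Alternatively, a small sketch like the following can generate the list for you. It assumes the labels field in the JSON output is returned as a mapping; if your client version returns it as a string, adjust accordingly:

$ openstack coe cluster template show kubernetes-v1.15.6-dev-20191129 \
  -c labels -f json | python -c \
  'import json, sys; d = json.load(sys.stdin)["labels"]; print(",".join("{}={}".format(k, v) for k, v in d.items()))'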

This list is then passed as the argument to the --labels parameter.

$ openstack coe cluster create k8s-cluster \
--cluster-template kubernetes-v1.13.10-prod-20190912 \
--labels auto_healing_controller=magnum-auto-healer,auto_healing_enabled=true,<-- truncated --> \
--keypair my-ssh-key \
--node-count 3 \
--master-count 3


If the complete list of labels is not provided, it is likely that the cluster will fail to deploy correctly and will end up in a FAILED or UNHEALTHY state.

Checking the status of the cluster

Depending on the template used, it will take 5 to 15 minutes for the cluster to be created.

You can use the following command to check the status of the cluster:

$ openstack coe cluster list
| uuid                                 | name        | keypair  | node_count | master_count | status             |
| c191470e-7540-43fe-af32-ad5bf84940d7 | k8s-cluster | testkey  |          1 |            1 | CREATE_IN_PROGRESS |

Alternatively, you can check the status of the cluster on the Clusters panel, in the Container Infra section of the dashboard.

Please wait until the status changes to CREATE_COMPLETE to proceed.
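The same status field can be polled directly from the CLI:

$ openstack coe cluster show k8s-cluster -c status -f value
CREATE_IN_PROGRESS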

Getting the cluster config

The kubectl command-line tool uses kubeconfig files to determine how to connect to the APIs of the Kubernetes cluster. The following command will download the necessary certificates and create a configuration file in your current directory. It will also export the KUBECONFIG variable on your behalf:

$ eval $(openstack coe cluster config k8s-cluster)

If you wish to save the configuration to a different location you can use the --dir <directory_name> parameter to select a different destination.


If you are running multiple clusters, or are deleting and re-creating a cluster, it is necessary to ensure that the current kubectl configuration is referencing the correct cluster configuration.
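You can confirm which cluster kubectl is currently pointing at by checking the active context:

$ kubectl config current-context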

Accessing a private cluster

Once the cluster state is CREATE_COMPLETE and you have successfully retrieved the cluster config, confirm that you are able to access the cluster.


The use of the bastion server is unnecessary if you created a public cluster that is directly accessible from the internet.

If you did not override the default behaviour you will have created a private cluster. In order to access this you will need to create a bastion host within your cloud project to allow you to reach the Kubernetes API.


When using a bastion server to access a private cluster you will need to delete the bastion before trying to delete the cluster.

Failure to do so will leave your cluster in a DELETE_FAILED state that will require assistance from the Catalyst Cloud team to resolve.

For the purpose of this example let’s assume we deployed a bastion host with the following characteristics:

  • name - bastion

  • flavor - c1.c1r1

  • image - ubuntu-18.04-x86_64

  • network - attached to the Kubernetes cluster network

  • security group - bastion-ssh-access

  • security group rules - ingress TCP/22 from 114.110.xx.xx (the public IP to allow traffic from)
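A bastion matching these characteristics could be created with a command along these lines; it assumes the bastion-ssh-access security group already exists, and the network name is a placeholder for your cluster's network:

$ openstack server create bastion \
  --flavor c1.c1r1 \
  --image ubuntu-18.04-x86_64 \
  --network <cluster_network> \
  --security-group bastion-ssh-access \
  --key-name my-ssh-key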

The following commands check our setup and gather the information we need to set up SSH forwarding in order to reach the API endpoint.

Find the instance’s external public IP address

$ openstack server show bastion -c addresses -f value

Confirm that we have a security group applied to our instance that allows inbound TCP connections on port 22 from our current public IP address. In this case our security group is called bastion-ssh-access and our public IP is 114.110.xx.xx.

$ openstack server show bastion -c security_groups -f value

$ openstack security group rule list bastion-ssh-access
| ID                                   | IP Protocol | Ethertype | IP Range         | Port Range | Remote Security Group |
| 42c1320c-98d5-4275-9c2d-b81b0eadac29 | tcp         | IPv4      | 114.110.xx.xx/32 | 22:22      | None                  |

Finally, we need the IP address of the Kubernetes API endpoint:

$ openstack coe cluster show k8s-cluster -c api_address -f value

We will make use of SSH's port forwarding ability to allow us to connect from our local machine's environment. To do this, run the following command in your shell, substituting the values gathered above:

ssh -f -L 6443:<api_address>:6443 ubuntu@<bastion_floating_ip> -N

  • -f forks the process into the background

  • -N does not execute any remote commands

  • -L specifies what connections are given to the localhost. In this example we use the port:host:hostport form to bind port 6443 on localhost to port 6443 on the API endpoint at <api_address>.

  • ubuntu@<bastion_floating_ip> provides the credentials for SSH to log into the bastion host.


Setting up the SSH forwarding is optional. You can instead deploy a cloud instance on the Kubernetes cluster network with appropriate remote access, SSH into it, and run all of your cluster interactions from there.

As a quick test we can run the following curl command to check that we get a response from the API server.

$ curl https://localhost:6443 --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

If the curl request returned a JSON response similar to that shown above, you can run the following command to confirm that Kubernetes is working as expected.

First, if you are running a private cluster and connecting over the SSH tunnel, you will need to edit the kubeconfig file you retrieved earlier and make the following change.

Find the server entry that points to the Kubernetes API, for example:

server: https://<api_address>:6443

Change it so that it points to the localhost address instead:

server: https://127.0.0.1:6443
Then run kubectl to confirm that the cluster responds correctly.


If you have not yet set up the Kubernetes command line tools see Setting up Kubernetes CLI for details.

$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6443
Heapster is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

You can now proceed with deploying your applications into the cluster using kubectl or whatever your preferred mechanism may be.

Accessing the Kubernetes dashboard

The Catalyst Kubernetes Service enables the Kubernetes web dashboard by default (this behaviour can be overridden if desired).

In order to access the Kubernetes dashboard, you will need to retrieve the admin token for the cluster using the following command:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}')
Name:         admin-token-f5728
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin

Type:  kubernetes.io/service-account-token

ca.crt:     1054 bytes
namespace:  11 bytes
token:      1234567890123456789012.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mNTcyOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNjNDQxNmQxLWNhODItMTFlOC04OTkzLWZhMTYzZTEwZWY3NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.ngUnhjCOnIQYOAMzyx9TbX7dM2l4ne_AMiJmUDT9fpLGaJexVuq7EHq6FVfdzllgaCINFC2AF0wlxIscqFRWgF1b1SPIdL05XStJZ9tMg4cyr6sm0XXpzgkMLsuAzsltt5GfOzMoK3o5_nqn4ijvXJiWLc4XkQ3_qEPHUtWPK9Jem7p-GDQLfF7IvxafJpBbbCR3upBQpFzn0huZlpgdo46NAuzTT6iKhccnB0IyTFVgvItHtFPFKTUAr4jeuCDNlIVfho99NBSNYM_IwI-jTMkDqIQ-cLEfB2rHD42R-wOEWztoKeuXVkGdPBGEiWNw91ZWuWKkfslYIFE5ntwHgA

In a separate terminal, run the kubectl proxy command to allow your browser to connect to the Kubernetes dashboard.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Once the proxy is ready, open the following URL in your browser: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

You will be presented with a login screen. Select Token as the authentication type and paste in the authentication token acquired in the previous step.


Once successfully authenticated you will be able to view the Kubernetes dashboard.


Deploying a hello world application

It is possible to have a loadbalancer created on your behalf by Kubernetes through the underlying Catalyst Cloud infrastructure services.

For this example we are going to deploy a container running a simple Flask app that responds with a basic 'Hello World' message that includes the host name and IP of the node responding to the request. This will sit behind a loadbalancer that is publicly available on the internet via a floating IP and serves requests to the application servers using the round robin algorithm.

The container image in question, catalystcloud/helloworld-flask:1.1, runs the following application. You do not need to copy this; it already exists in the cloud.

#!/usr/bin/env python
from flask import Flask
import socket

app = Flask(__name__)

host_name = socket.gethostname()
host_ip = socket.gethostbyname(host_name)

@app.route("/")
def hello_world():
    return "Hello World! From Server : {} @ {}".format(host_name, host_ip)

if __name__ == "__main__":
    # Listen on all interfaces on the port exposed by the container
    app.run(host="0.0.0.0", port=8080)

Creating the application deployment

First we need to create a manifest like the one below. If you're following along with this example, save this file as helloworld-deployment_1.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: catalystcloud/helloworld-flask:1.1
        ports:
        - name: helloworld-port
          containerPort: 8080

This provides the following parameters for a deployment:

  • number of replicas - 3

  • deployment image - catalystcloud/helloworld-flask:1.1

  • pod labels, to identify the app to the service - app: helloworld

  • containerPort to expose the application on - 8080

    • This port also uses a name, in this case helloworld-port, which allows us to refer to it by name rather than value in the service.

To deploy the application run the following command.

$ kubectl create -f helloworld-deployment_1.yaml
deployment.apps/helloworld-deployment created

Check the state of the pods to confirm that they have all been deployed correctly. Once all of them show a status of Running and 1/1 ready (this may take a few seconds), continue to the next section.

$ kubectl get pods
NAME                                     READY   STATUS              RESTARTS   AGE
helloworld-deployment-5bdfcbb467-648bd   1/1     Running             0          43s
helloworld-deployment-5bdfcbb467-wlb6n   0/1     ContainerCreating   0          43s
helloworld-deployment-5bdfcbb467-zrlvl   1/1     Running             0          43s
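Alternatively, you can block until the rollout completes:

$ kubectl rollout status deployment/helloworld-deployment
deployment "helloworld-deployment" successfully rolled out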

Creating the loadbalancer service

The deployment itself, however, does not provide a means for us to expose the application outside of the cluster. In order to do this we need to create a service to act as a go-between.

The manifest for our service definition will look like this. If you're following along, save it as helloworld-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: helloworld-service
spec:
  selector:
    app: helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: helloworld-port
  type: LoadBalancer

The parameters of interest here are:

  • the selector, which links the service to the pods using the label app: helloworld.

  • The port that it exposes externally - 80.

  • The targetPort on the pods to link back to, in this case the named port, helloworld-port that we created on the deployment.

To create the service run the following command.

$ kubectl create -f helloworld-service.yaml
service/helloworld-service created

The final step is to check on the state of the service and wait until the loadbalancer is active and the LoadBalancer Ingress field has received a publicly accessible floating IP address.

$ kubectl describe svc helloworld-service
Name:                     helloworld-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=helloworld
Type:                     LoadBalancer
LoadBalancer Ingress:     <floating_ip>
Port:                     <unset>  80/TCP
TargetPort:               helloworld-port/TCP
NodePort:                 <unset>  32548/TCP
Endpoints:                <pod_ip_1>:8080,<pod_ip_2>:8080,<pod_ip_3>:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                      Age                From                Message
  ----     ------                      ----               ----                -------
  Normal   EnsuringLoadBalancer        10m (x2 over 12m)  service-controller  Ensuring load balancer
  Normal   EnsuringLoadBalancer        60s                service-controller  Ensuring load balancer
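You can also watch the service until the floating IP appears, rather than repeatedly describing it:

$ kubectl get service helloworld-service -w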

Once your service is in this state you should be able to browse to the IP address assigned to the LoadBalancer Ingress field and see a simple text output similar to the following.

Hello World! From Server : helloworld-deployment-5bdfcbb467-c7rln @ <pod_ip>

If you refresh the browser you should see the response update to reflect different host responses as the loadbalancer round robins the requests across the pods.
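To see the round robin behaviour from the command line, you could repeat the request in a loop; substitute the floating IP assigned to your service:

$ for i in 1 2 3 4 5; do curl -s http://<floating_ip>/; echo; done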