Quick start

This quick start guide assumes you have a working knowledge of the Catalyst Cloud command line interface (CLI) and familiarity with Kubernetes.


Ensure user has the required privileges

To create a Kubernetes cluster, you need to ensure your user has been allocated the heat_stack_owner role.

Ensure quota is sufficient

A small quota is sufficient to deploy the production cluster template if your project is empty. However, if you already have some resources allocated, you may want to increase your quota to ensure there is sufficient capacity available to deploy Kubernetes.

By default, the production Kubernetes template allocates:

  • 6 compute instances
  • 18 vCPUs
  • 36 GB of RAM
  • 8 floating IPs
  • 3 security groups
  • 3 volumes
  • 60 GB of block storage space

As a project admin you can change your quota using the Quota Management panel in the dashboard, under the Management section.
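As a rough sanity check, you can compare the template's requirements (listed above) against what remains of your quota. The sketch below illustrates the arithmetic; the quota and usage figures are hypothetical placeholders, so substitute the values reported for your own project:

```python
# Default allocation of the production template, as listed above.
TEMPLATE_NEEDS = {"instances": 6, "cores": 18, "ram_gb": 36,
                  "floating_ips": 8, "security_groups": 3,
                  "volumes": 3, "volume_gb": 60}

def over_quota(quota, used, needs=TEMPLATE_NEEDS):
    """Return the resources whose quota the deployment would exceed."""
    return [r for r, n in needs.items() if used.get(r, 0) + n > quota.get(r, 0)]

# Hypothetical project quota and current usage -- substitute your own values.
quota = {"instances": 10, "cores": 20, "ram_gb": 50, "floating_ips": 10,
         "security_groups": 10, "volumes": 10, "volume_gb": 100}
used = {"instances": 2, "cores": 4, "ram_gb": 8}

print(over_quota(quota, used))  # ['cores'] -- 4 used + 18 needed > 20 allowed
```

With the example numbers above, only the vCPU quota falls short, so that is the one to raise before deploying.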

Download and install kubectl

Kubectl is the command line interface to the Kubernetes API and the canonical way to interact with Kubernetes clusters.

The instructions below can be used to quickly install kubectl on Linux as a static binary:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

For other platforms or installation methods, please refer to the detailed instructions on how to install kubectl.

Choosing a cluster template

A cluster template is a blue-print to build a Kubernetes cluster (similar to machine images for the compute service). The cluster template specifies what version of Kubernetes will be installed and the features that will be enabled.

The following command will list all cluster templates available:

$ openstack coe cluster template list
+--------------------------------------+----------------------------------+
| uuid                                 | name                             |
+--------------------------------------+----------------------------------+
| c5b5a636-0066-4291-8da9-5190915f5a76 | kubernetes-v1.11.6-prod-20190130 |
| 5cb74603-4ad3-4e3b-a1d4-4539c392dbf0 | kubernetes-v1.11.6-dev-20190130  |
| 5e17bc87-27b2-4c61-ba58-c064fd10245d | kubernetes-v1.11.9-dev-20190402  |
| bd116a49-4381-4cb6-adf8-cd442e1a713f | kubernetes-v1.11.9-prod-20190402 |
| be25ca0c-2bf6-4bef-a234-4e073b187d71 | kubernetes-v1.12.7-dev-20190403  |
| 81d0f765-62fe-4c99-b7f8-284ffddac861 | kubernetes-v1.12.7-prod-20190403 |
+--------------------------------------+----------------------------------+


Make sure that when you use one of these names in a command, you type it exactly as written. There is no warning message to tell you that a template name does not exist, so double-check before running any commands.

Alternatively, a list of cluster templates available can be seen in the Cluster Templates panel in the dashboard, under the Container Infra section.

Template types

The naming convention used for the templates is broken down as follows:

  • kubernetes-v1.11.2 : the version of Kubernetes that the template will use to create the cluster.
  • -dev or -prod : dev creates a minimal cluster for proof of concept or development work, whereas prod creates a more production-ready cluster (see below).
  • -20181008 : the final portion of the name is the date on which the template was created.
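The naming convention above is regular enough to split programmatically, which can be handy when scripting against the template list. A small sketch (the helper function is our own, not part of any CLI):

```python
import re

def parse_template_name(name):
    """Split a cluster template name such as
    'kubernetes-v1.11.6-prod-20190130' into its three parts."""
    m = re.match(
        r"kubernetes-v(?P<version>[\d.]+)-(?P<flavour>dev|prod)-(?P<date>\d{8})$",
        name)
    if not m:
        raise ValueError("unrecognised template name: {}".format(name))
    return m.groupdict()

print(parse_template_name("kubernetes-v1.11.6-prod-20190130"))
# {'version': '1.11.6', 'flavour': 'prod', 'date': '20190130'}
```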

The differences between the development and production templates are:

  • dev creates a small Kubernetes cluster with a single master and a single worker node. As the name suggests, it should not be used for production.
  • prod creates a Kubernetes cluster that is intended for production workloads. It expects a minimum of three master nodes and three worker nodes. The master nodes will have two loadbalancers deployed in front of them to provide HA for the API and etcd services. This template also deploys Prometheus and Grafana to provide cluster metrics.


Please note that despite having a template called “production”, the Kubernetes service on the Catalyst Cloud is still in alpha (Tech Preview) and should not be used for production workloads.

Deploying a Kubernetes cluster

Creating a cluster

To create a new development cluster run the following command:

$ openstack coe cluster create k8s-dev-cluster \
--cluster-template kubernetes-v1.11.2-dev-20181008 \
--keypair my-ssh-key \
--node-count 1 \
--master-count 1

Request to create cluster c191470e-7540-43fe-af32-ad5bf84940d7 accepted

To create a new production cluster, run the following command:

$ openstack coe cluster create k8s-prod-cluster \
--cluster-template kubernetes-v1.11.2-prod-20181008 \
--keypair my-ssh-key \
--node-count 3 \
--master-count 3

Request to create cluster c191470e-7540-43fe-af32-ad5bf84940d7 accepted

Checking the status of the cluster

Depending on the template used, it will take 5 to 15 minutes for the cluster to be created.

You can use the following command to check the status of the cluster:

$ openstack coe cluster list
+--------------------------------------+-------------+---------+------------+--------------+--------------------+
| uuid                                 | name        | keypair | node_count | master_count | status             |
+--------------------------------------+-------------+---------+------------+--------------+--------------------+
| c191470e-7540-43fe-af32-ad5bf84940d7 | k8s-cluster | testkey |          1 |            1 | CREATE_IN_PROGRESS |
+--------------------------------------+-------------+---------+------------+--------------+--------------------+

Alternatively, you can check the status of the cluster on the Clusters panel, in the Container Infra section of the dashboard.

Please wait until the status changes to CREATE_COMPLETE to proceed.
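Rather than re-running the list command by hand, the wait can be scripted. A minimal polling sketch, assuming a `get_status()` callable of your own that wraps however you query the cluster (for example, by shelling out to `openstack coe cluster show`):

```python
import time

def wait_for_cluster(get_status, interval=30, timeout=1800):
    """Poll get_status() until it reports CREATE_COMPLETE.

    Raises RuntimeError if creation fails or the timeout is reached."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "CREATE_COMPLETE":
            return
        if status == "CREATE_FAILED":
            raise RuntimeError("cluster creation failed")
        time.sleep(interval)  # wait before polling again
    raise RuntimeError("timed out waiting for cluster")
```

The 5-to-15-minute creation time mentioned above is why a generous timeout is used here.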

The kubectl command-line tool uses kubeconfig files to determine how to connect to the APIs of the Kubernetes cluster.

Getting the cluster config

The following command will download the necessary certificates and create a configuration file in your current directory. It will also export the KUBECONFIG variable on your behalf:

$ eval $(openstack coe cluster config k8s-cluster)

If you wish to save the configuration to a different location you can use the --dir <directory_name> parameter to select a different destination.


If you are running multiple clusters, or are deleting and re-creating a cluster, it is necessary to ensure that the current kubectl configuration is referencing the correct cluster configuration.

Testing the cluster

Once the cluster state is CREATE_COMPLETE and you have successfully retrieved the cluster config, you can proceed with deploying your applications into the cluster using kubectl or whatever your preferred mechanism may be.

As a quick check, you can run the following command to confirm that Kubernetes is working as expected:

$ kubectl cluster-info
Kubernetes master is running at
Heapster is running at
CoreDNS is running at

Accessing the Kubernetes dashboard

The Catalyst Kubernetes Service enables the Kubernetes web dashboard by default (this behaviour can be overridden if desired).

In order to access the Kubernetes dashboard, you will need to retrieve the admin token for the cluster using the following command:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}')
Name:         admin-token-f5728
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin

Type:  kubernetes.io/service-account-token

ca.crt:     1054 bytes
namespace:  11 bytes
token:      1234567890123456789012.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mNTcyOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNjNDQxNmQxLWNhODItMTFlOC04OTkzLWZhMTYzZTEwZWY3NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.ngUnhjCOnIQYOAMzyx9TbX7dM2l4ne_AMiJmUDT9fpLGaJexVuq7EHq6FVfdzllgaCINFC2AF0wlxIscqFRWgF1b1SPIdL05XStJZ9tMg4cyr6sm0XXpzgkMLsuAzsltt5GfOzMoK3o5_nqn4ijvXJiWLc4XkQ3_qEPHUtWPK9Jem7p-GDQLfF7IvxafJpBbbCR3upBQpFzn0huZlpgdo46NAuzTT6iKhccnB0IyTFVgvItHtFPFKTUAr4jeuCDNlIVfho99NBSNYM_IwI-jTMkDqIQ-cLEfB2rHD42R-wOEWztoKeuXVkGdPBGEiWNw91ZWuWKkfslYIFE5ntwHgA
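The token shown above is a JSON Web Token: base64url-encoded segments separated by dots, where the middle segment carries the service-account claims. A small sketch for inspecting a token's payload (no signature verification is done, so use it only for a quick look, never for authentication decisions):

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified) claims segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

For a service-account token like the one above, the decoded payload includes claims such as `iss` (`kubernetes/serviceaccount`) and `sub` identifying the service account.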

In a separate terminal, run the kubectl proxy command to allow your browser to connect to the Kubernetes dashboard.

$ kubectl proxy
Starting to serve on

Once the proxy is ready, open the following URL in your browser:


You will be presented with a login screen. Select Token as the authentication type and paste in the authentication token acquired in the previous step.


Once successfully authenticated you will be able to view the Kubernetes dashboard.


Deploying a hello world application

It is possible to have a loadbalancer created on your behalf by Kubernetes through the underlying Catalyst Cloud infrastructure services.

For this example we are going to deploy a container running a simple Flask app that responds with a basic 'Hello World' message, including the host name and IP of the node serving the request. The app will sit behind a loadbalancer that is publicly available on the internet via a floating IP and serves requests to the application servers using the round robin algorithm.

The container image in question, catalystcloud/helloworld-flask:version_1.1, runs the following application. You do not need to copy this; it already exists in the cloud.

#!/usr/bin/env python
from flask import Flask
import socket

app = Flask(__name__)

host_name = socket.gethostname()
host_ip = socket.gethostbyname(host_name)

@app.route("/")
def hello_world():
    return "Hello World! From Server : {} @ {}".format(host_name, host_ip)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Creating the application deployment

First we need to create a deployment manifest. If you're following along with this example, you should save this file as helloworld-deployment_1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: catalystcloud/helloworld-flask:version_1.1
        ports:
        - name: helloworld-port
          containerPort: 5000

This provides the following parameters for a deployment:

  • number of replicas - 3
  • deployment image - catalystcloud/helloworld-flask:version_1.1.
  • pod labels, to identify the app to the service - app: helloworld
  • containerPort to expose the application on - 5000
    • This port also uses a name, in this case helloworld-port, which allows us to refer to it by name rather than value in the service.

To deploy the application run the following command.

$ kubectl create -f helloworld-deployment_1.yaml
deployment.apps/helloworld-deployment created

Check the state of the pods to confirm that they have all been deployed correctly. Once all of them show as running and ready (this may take a few seconds), continue to the next section.

$ kubectl get pods
NAME                                     READY   STATUS              RESTARTS   AGE
helloworld-deployment-5bdfcbb467-648bd   1/1     Running             0          43s
helloworld-deployment-5bdfcbb467-wlb6n   0/1     ContainerCreating   0          43s
helloworld-deployment-5bdfcbb467-zrlvl   1/1     Running             0          43s

Creating the loadbalancer service

The deployment itself, however, does not provide a means for us to expose the application outside of the cluster. To do this we need to create a service to act as a go-between.

The manifest for our service definition will look like this.

kind: Service
apiVersion: v1
metadata:
  name: helloworld-service
spec:
  selector:
    app: helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: helloworld-port
  type: LoadBalancer

The parameters of interest here are:

  • the selector which links the service to the app using the label helloworld.
  • The port that it exposes externally - 80.
  • The targetPort on the pods to link back to, in this case the named port, helloworld-port that we created on the deployment.
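Named ports decouple the service from the numeric value: Kubernetes resolves `helloworld-port` against each pod's declared container ports, so the containerPort could change without touching the service. The sketch below is our own simplification of that lookup, not the real controller logic:

```python
def resolve_target_port(target_port, container_ports):
    """Resolve a service targetPort, which may be a name or a number,
    against a pod's declared container ports."""
    if isinstance(target_port, int):
        return target_port  # numeric targetPorts are used as-is
    for p in container_ports:
        if p.get("name") == target_port:
            return p["containerPort"]
    raise LookupError("no container port named {!r}".format(target_port))

# Ports as declared in the deployment manifest above.
ports = [{"name": "helloworld-port", "containerPort": 5000}]
print(resolve_target_port("helloworld-port", ports))  # 5000
```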

To create the service run the following command.

$ kubectl create -f helloworld-service.yaml
service/helloworld-service created

The final step is to check on the state of the service and wait until the loadbalancer is active and the LoadBalancer Ingress field has received a publicly accessible floating IP address.

$ kubectl describe svc helloworld-service
Name:                     helloworld-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=helloworld
Type:                     LoadBalancer
LoadBalancer Ingress:
Port:                     <unset>  80/TCP
TargetPort:               helloworld-port/TCP
NodePort:                 <unset>  32548/TCP
Endpoints:      ,,
Session Affinity:         None
External Traffic Policy:  Cluster
  Type     Reason                      Age                From                Message
  ----     ------                      ----               ----                -------
  Normal   EnsuringLoadBalancer        10m (x2 over 12m)  service-controller  Ensuring load balancer
  Normal   EnsuringLoadBalancer        60s                service-controller  Ensuring load balancer

Once your service is in this state you should be able to browse to the IP address assigned to the LoadBalancer Ingress field and see a simple text output similar to the following.

Hello World! From Server : helloworld-deployment-5bdfcbb467-c7rln @

If you refresh the browser you should see the response change to reflect different hosts as the loadbalancer round-robins the requests across the pods.
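Round robin simply hands each new request to the next backend in a fixed rotation, wrapping around when the list is exhausted. Conceptually (the pod names here are illustrative):

```python
import itertools

# Illustrative pod names standing in for the three replicas.
backends = ["pod-648bd", "pod-wlb6n", "pod-zrlvl"]
rotation = itertools.cycle(backends)

# Each incoming "request" is served by the next pod in the rotation.
print([next(rotation) for _ in range(5)])
# ['pod-648bd', 'pod-wlb6n', 'pod-zrlvl', 'pod-648bd', 'pod-wlb6n']
```

This is why successive refreshes cycle through the different host names in the response.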