What is a cluster?

A container cluster is the foundation of the Kubernetes Engine. It consists of at least one master server and one or more node servers, and is made up of the compute, networking and storage resources necessary to run any given workload. Communication between the servers is by way of a shared network. An entire system may comprise multiple clusters.

The master server is the control plane of the cluster: a collection of services responsible for the centralised scheduling, logic and management of all aspects of the cluster. While it is possible to run a cluster with a single master that hosts all of the required services, it is advisable, especially for production environments, to deploy the masters in a multi-master HA configuration.

Some of the key services running on the master are:

  • The API server, which provides a RESTful API frontend to the control plane and is the interface through which the cluster is managed.
  • The cluster store, which holds the configuration and state of the cluster. It is based on etcd, a distributed key-value store, and provides the single source of truth for the cluster; as such it is the only stateful component within the cluster.
  • The scheduler, which watches for newly created workloads that have not yet been assigned to a node and selects a suitable node for them to run on.

The machines designated as nodes (previously referred to as minions) are responsible for accepting and running the workloads assigned by the master, using appropriate local and external resources.

The cluster template

A cluster template is a collection of parameters to describe how a cluster can be constructed. Some parameters are relevant to the infrastructure of the cluster, while others are for the particular COE.

The cloud provider may supply pre-defined templates, and it may also be possible, in some situations, for users to create their own. Initially, Catalyst Cloud will only allow the use of the pre-defined templates.

Viewing templates

When running the openstack command line tools, ensure that you have sourced a valid openrc file first. For more information on this, see Source an OpenStack RC file.


In order to be able to create a Kubernetes cluster, users need to ensure that they have been allocated the heat_stack_owner role.

$ source keystonerc

Then list all of the available cluster templates.

$ openstack coe cluster template list
| uuid                                 | name |
| cf6f8cab-8d22-4f38-a88b-25f8a41e5b77 | k8s  |

To view the details of a particular template.

$ openstack coe cluster template show k8s
| Field                 | Value                                |
| insecure_registry     | -                                    |
| labels                | {u'kube_tag': u'v1.11.2-1'}          |
| updated_at            | 2018-10-05T01:06:15+00:00            |
| floating_ip_enabled   | True                                 |
| fixed_subnet          | -                                    |
| master_flavor_id      | c1.c2r2                              |
| uuid                  | cf6f8cab-8d22-4f38-a88b-25f8a41e5b77 |
| no_proxy              | -                                    |
| https_proxy           | -                                    |
| tls_disabled          | False                                |
| keypair_id            | -                                    |
| public                | True                                 |
| http_proxy            | -                                    |
| docker_volume_size    | -                                    |
| server_type           | vm                                   |
| external_network_id   | e0ba6b88-5360-492c-9c3d-119948356fd3 |
| cluster_distro        | fedora-atomic                        |
| image_id              | 83833f4f-5d09-44cd-9e23-b0786fc580fd |
| volume_driver         | cinder                               |
| registry_enabled      | False                                |
| docker_storage_driver | overlay2                             |
| apiserver_port        | -                                    |
| name                  | kubernetes-v1.11.2-development       |
| created_at            | 2018-10-05T00:25:19+00:00            |
| network_driver        | calico                               |
| fixed_network         | -                                    |
| coe                   | kubernetes                           |
| flavor_id             | c1.c2r2                              |
| master_lb_enabled     | False                                |
| dns_nameserver        |                                      |

There are some key parameters that are worth mentioning in the above template:

  • coe: kubernetes Specifies the container orchestration engine, such as kubernetes, swarm or mesos. Currently the only option available on the Catalyst Cloud is Kubernetes.
  • master_lb_enabled: true As multiple masters may exist in a cluster, a load balancer is created to provide the API endpoint for the cluster and to direct requests to the masters. Where the load balancer service is not available, this option can be set to ‘false’, creating a cluster without a load balancer; in this case, one of the masters will serve as the API endpoint. The default is true.
  • network_driver: calico This is the driver used to provide networking services to the containers. It is independent of the Neutron networking that the cluster uses. Calico is the network driver recommended by the Catalyst Cloud, as it provides secure network connectivity for containers and virtual machine workloads.
  • labels These are arbitrary labels (defined by the cluster drivers) in the form of key=value pairs, used to pass additional parameters to the cluster driver. Currently only prometheus_monitoring is supported; if set to true, the monitoring stack is set up and Node Exporter is automatically launched as a regular Kubernetes pod. The default is false.

Creating a cluster

To create a new cluster we run the openstack coe cluster create command, providing the name of the cluster that we wish to create along with any additional or overriding parameters that are necessary.

$ openstack coe cluster create k8s-cluster \
--cluster-template k8s \
--keypair testkey \
--node-count 1 \
--master-count 1

Request to create cluster c191470e-7540-43fe-af32-ad5bf84940d7 accepted

$ openstack coe cluster list
| uuid                                 | name        | keypair  | node_count | master_count | status             |
| c191470e-7540-43fe-af32-ad5bf84940d7 | k8s-cluster | testkey  |          1 |            1 | CREATE_IN_PROGRESS |

Once the cluster is active, access to the server nodes in the cluster is via ssh. The ssh user will be ‘fedora’ and authentication will use the ssh key provided when the cluster was created.

$ ssh fedora@<node_ip>


Once a cluster template is in use it cannot be updated or deleted until all of the clusters using it have been terminated.

Setting up Kubernetes CLI

Getting kubectl

To deploy and manage applications on Kubernetes, use the Kubernetes command-line tool, kubectl. With this tool you can inspect cluster resources; create, delete, and update components; and examine your new cluster and bring up example apps. It is, essentially, the Kubernetes Swiss army knife.

The details for getting the latest version of kubectl can be found in the official Kubernetes documentation.

To install on Linux via the command line as a simple binary, perform the following steps:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s \
  https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

The basic format of kubectl commands looks like this:

kubectl [command] [TYPE] [NAME] [flags]

where command, TYPE, NAME, and flags are:

  • command: the operation to perform
  • TYPE: the resource type to act on
  • NAME: the name of the resource in question
  • flags: optional flags to provide extra options to the command
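
As a sketch of how these pieces compose, the helper function below is made up purely for illustration: it assembles a kubectl invocation as a string rather than running kubectl, so none of the resource names here refer to a real cluster.

```shell
# Hypothetical helper, for illustration only: it assembles (but does not
# run) a kubectl command string from the [command] [TYPE] [NAME] [flags] parts.
build_kubectl_cmd() {
  verb="$1"; rtype="$2"; rname="$3"; shift 3
  cmd="kubectl $verb $rtype $rname"
  if [ "$#" -gt 0 ]; then
    cmd="$cmd $*"        # append any optional flags
  fi
  echo "$cmd"
}

build_kubectl_cmd get pods my-pod -o wide    # kubectl get pods my-pod -o wide
build_kubectl_cmd describe node my-node      # kubectl describe node my-node
```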

Cluster Access Using kubeconfig Files

The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server of a cluster. These files provide information about clusters, users, namespaces, and authentication mechanisms.
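
As a sketch, a minimal kubeconfig file has the following shape. All names, paths and addresses below are placeholders, not values from this tutorial:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster              # placeholder cluster name
  cluster:
    server: https://<master_ip>:6443
    certificate-authority: /path/to/ca.pem
users:
- name: example-user                 # placeholder credentials entry
  user:
    client-certificate: /path/to/cert.pem
    client-key: /path/to/key.pem
contexts:
- name: example-context              # a context binds a cluster to a user
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context
```

A context is simply a named pairing of a cluster with a user; current-context selects which pairing kubectl uses by default.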

Getting the cluster config

To configure the native client (for Kubernetes, kubectl) to access the cluster, source the output of the openstack coe cluster config command by wrapping it in eval:

$ eval $(openstack coe cluster config k8s-cluster)

This will download the necessary certificates and create a config file in the directory from which you ran the command. If you wish to save the configuration to a different location, use the --dir <directory_name> parameter.


If you are running multiple clusters, or are deleting and re-creating clusters, it is necessary to ensure that the current kubectl configuration is referencing the correct cluster. The following sections outline this in more detail.

Viewing the cluster

It is possible to view details of the cluster with the following command. This will return the address of the master and the services running there.

$ kubectl cluster-info
Kubernetes master is running at https://<master_ip>:6443
Heapster is running at https://<master_ip>:6443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://<master_ip>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

In order to view more in depth information about the cluster simply add the dump option to the above example. This generates output suitable for debugging and diagnosing cluster problems. By default, it redirects everything to stdout.

$ kubectl cluster-info dump

Accessing the Kubernetes Dashboard

By default Kubernetes provides a web-based dashboard that exposes the details of a given cluster. In order to access it, it is first necessary to retrieve the admin token for the cluster you wish to examine.

The following command will extract the correct value from the secrets in the kube-system namespace.

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}')
Name:         admin-token-f5728
Namespace:    kube-system
Labels:       <none>


ca.crt:     1054 bytes
namespace:  11 bytes
token:      1234567890123456789012.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1mNTcyOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNjNDQxNmQxLWNhODItMTFlOC04OTkzLWZhMTYzZTEwZWY3NiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.ngUnhjCOnIQYOAMzyx9TbX7dM2l4ne_AMiJmUDT9fpLGaJexVuq7EHq6FVfdzllgaCINFC2AF0wlxIscqFRWgF1b1SPIdL05XStJZ9tMg4cyr6sm0XXpzgkMLsuAzsltt5GfOzMoK3o5_nqn4ijvXJiWLc4XkQ3_qEPHUtWPK9Jem7p-GDQLfF7IvxafJpBbbCR3upBQpFzn0huZlpgdo46NAuzTT6iKhccnB0IyTFVgvItHtFPFKTUAr4jeuCDNlIVfho99NBSNYM_IwI-jTMkDqIQ-cLEfB2rHD42R-wOEWztoKeuXVkGdPBGEiWNw91ZWuWKkfslYIFE5ntwHgA
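
The grep/awk pipeline used above can be seen in isolation by running it against a canned copy of a `kubectl get secret` listing. The listing below is simulated and its secret names are invented for illustration:

```shell
# Simulated `kubectl -n kube-system get secret` output (names are made up).
listing='NAME                  TYPE                                  DATA
admin-token-f5728     kubernetes.io/service-account-token   3
default-token-abc12   kubernetes.io/service-account-token   3'

# Same pipeline as above: keep the admin-token line, print its first column.
secret_name=$(echo "$listing" | grep admin-token | awk '{print $1}')
echo "$secret_name"   # admin-token-f5728
```

That extracted name is what gets passed to `kubectl describe secret` in the full command.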

Next run the kubectl proxy command from the CLI.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Once the proxy is ready, browse to the following URL:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
You will be prompted with a login screen, select token as the type and paste in the authentication token acquired in the step above.


Once successfully authenticated you will be able to view the cluster console.


Now that the cluster is up and running and access has been confirmed, you should be able to run workloads in your Kubernetes cluster.

Managing cluster configurations

When working with multiple clusters or a cluster that has been torn down and recreated it is necessary to ensure that you have the correct cluster context loaded in order for kubectl to interact with the intended cluster.

In order to see the current configuration and context that kubectl is using, run the following.

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/testuser/tmp/ca.pem
  name: k8s-m1-n1
contexts:
- context:
    cluster: k8s-m1-n1
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /home/testuser/tmp/cert.pem
    client-key: /home/testuser/tmp/key.pem

$ kubectl config current-context
default

This shows us the details of the current configuration file that kubectl is referencing, and the specific cluster context within it, in this case default. There is also an environment variable, $KUBECONFIG, that stores the path or paths to the available configuration files.
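
$KUBECONFIG may hold a single path or several colon-separated paths, which kubectl merges into one logical view. The snippet below (bash syntax, example paths only) simply shows how such a value decomposes:

```shell
# KUBECONFIG holds one or more colon-separated kubeconfig paths.
KUBECONFIG="$HOME/tmp/config:$HOME/.kube/config"

IFS=':' read -ra cfg_paths <<< "$KUBECONFIG"   # split on ':' (bash)
for p in "${cfg_paths[@]}"; do
  echo "$p"
done
```

Each listed file contributes its clusters, users and contexts to the merged configuration that kubectl sees.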

The certificate paths in the output above show that, in this example, the cluster configuration was retrieved into a directory called tmp within the user's home directory.

If there is a second cluster that we also wish to work with, we need to retrieve its configuration and store it in a separate local directory.


At the current time it is not possible to store multiple cluster configurations within the same directory. There is a change coming in a future release that will make this possible using a converged configuration file.

If you run eval $(openstack coe cluster config <cluster-name>) within a directory that already contains the configuration for a cluster, it will fail. If overwriting the existing configuration is intentional, as in the case of reconfiguring access to a cluster that has been rebuilt, add the --force flag, like this.

$ eval $(openstack coe cluster config --force k8s-cluster)

If you want to download the configuration for another cluster, use the --dir flag to pass in the location where the configuration should be saved. Here we will save the new configuration into a directory called .kube/ under the user's home directory.

$ eval $(openstack coe cluster config --dir ~/.kube/ k8s-cluster-2)

If we now check the current context we will see that it also says default; this is because that is the naming convention used when the local config is created.

$ kubectl config current-context
default

If we view the actual config, however, we can see that it is indeed a different file to the one we viewed previously.

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/testuser/.kube/ca.pem
  name: k8s-cluster-2
contexts:
- context:
    cluster: k8s-cluster-2
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /home/testuser/.kube/cert.pem
    client-key: /home/testuser/.kube/key.pem

To make things more useful, we can rename the context and confirm the change in the following manner.

$ kubectl config rename-context default test
$ kubectl config current-context
test

The final step needed to give us access to both of our clusters is to update the $KUBECONFIG environment variable so that it knows about both and allows us to see them in a single view.

$ export KUBECONFIG=~/tmp/config:~/.kube/config
$ kubectl config get-contexts
CURRENT   NAME      CLUSTER        AUTHINFO   NAMESPACE
          default   k8s-cluster    admin
*         test      k8s-cluster-2  admin

Now we can simply switch between the various contexts available to us in the following manner.

$ kubectl config use-context default
Switched to context "default".