Kubernetes clusters can have one or more labels applied to them to control how the cluster is configured and what gets installed into it.
Labels can only be applied to a cluster when it is created.
Label | Type | Default Value | Accepted Values / Examples | Description
---|---|---|---|---
`master_lb_floating_ip_enabled` | Boolean | | `true`, `false` | Assign a floating IP to the Kubernetes API load balancer, to allow access to the Kubernetes API via the public Internet.
`api_master_lb_allowed_cidrs` | IPv4/IPv6 CIDR | | `192.0.2.0/24` | Specify a set of CIDR ranges that should be allowed to access the Kubernetes API. Multiple values can be defined (see Specifying multiple label values).
 | String | | Network name | Optional additional network to attach to cluster worker nodes. Useful for allowing access to external networks from the workers.
`csi_cinder_reclaim_policy` | Enumeration | | `Retain`, `Delete` | Policy for reclaiming dynamically created persistent volumes. For more information, see Persistent Volume Retention.
 | Enumeration | | | Filesystem type for persistent volumes.
 | Boolean | | `true`, `false` | Allows volumes to be expanded by editing the corresponding PersistentVolumeClaim resource.
`kube_dashboard_enabled` | Boolean | | `true`, `false` | Install the Kubernetes Dashboard into the cluster.
`boot_volume_size` | Integer | | Greater than 0 | The size (in GiB) of the boot volume created for control plane and worker nodes. Currently, this is the only disk attached to nodes.
`boot_volume_type` | Enumeration | | See Volume Tiers for a list of volume type names | The Block Storage volume type name to use for the boot volume.
`auto_scaling_enabled` | Boolean | | `true`, `false` | Enable worker node auto scaling in the cluster. When set to `true`, `min_node_count` and `max_node_count` must also be defined.
`min_node_count` | Integer | | Greater than 0 | Minimum number of worker nodes for auto scaling. Required if `auto_scaling_enabled` is set to `true`.
`max_node_count` | Integer | | Greater than `min_node_count` | Maximum number of worker nodes to scale out to when auto scaling is enabled. Required if `auto_scaling_enabled` is set to `true`.
`auto_healing_enabled` | Boolean | | `true`, `false` | Enable auto-healing on control plane and worker nodes. With auto-healing enabled, unhealthy nodes are automatically remediated. Note: control plane machines will only be remediated one at a time, and worker nodes will not be remediated if 40% are considered unhealthy, preventing some cascading failures.
 | Boolean | | `true`, `false` | With this option enabled, a deployment is installed into your cluster allowing the use of Role-Based Access Control with Catalyst Cloud's authentication system. For more information, see Role-Based Access Control. With this option disabled, the admin kubeconfig is still available, as is Kubernetes API Access Control.
Labels may be set on a cluster at creation time via the CLI, the dashboard, or Terraform.
When running `openstack coe cluster create`, set the `--labels` option to define custom labels. Each label should be provided in a comma-separated list of key-value pairs.
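The key-value format expected by `--labels` can be illustrated with a short Python sketch. The `render_labels` helper below is hypothetical (not part of any CLI tooling); it simply shows how a mapping of labels collapses into the comma-separated string the option expects:

```python
def render_labels(labels: dict[str, str]) -> str:
    """Join label key-value pairs into the comma-separated
    string accepted by the --labels option."""
    return ",".join(f"{key}={value}" for key, value in labels.items())


labels = {
    "csi_cinder_reclaim_policy": "Retain",
    "kube_dashboard_enabled": "true",
}

print(render_labels(labels))
# → csi_cinder_reclaim_policy=Retain,kube_dashboard_enabled=true
```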
Note
Make sure to also define the `--merge-labels` option when defining custom labels.
Here is an example of setting a few custom labels:
```shell
openstack coe cluster create my-cluster-name \
  --cluster-template kubernetes-v1.28.9-20240416 \
  ...
  --merge-labels \
  --labels csi_cinder_reclaim_policy=Retain,kube_dashboard_enabled=true,master_lb_floating_ip_enabled=false
```
Note
It is not possible to modify labels on a cluster in-place after it has been created.
Custom labels can be defined using the Labels -> Additional Labels field in the Advanced tab of the Create New Cluster window.
Note
It is not possible to modify labels on a cluster in-place after it has been created.
When defining the `openstack_containerinfra_cluster_v1` resource, use the `labels` attribute to define a label key-value mapping.
Note
Make sure to also set the `merge_labels` attribute to `true` when defining custom labels.
Here is an example of setting a few custom labels:
```hcl
resource "openstack_containerinfra_cluster_v1" "my-cluster-name" {
  name                = "my-cluster-name"
  cluster_template_id = "b9a45c5c-cd03-4958-82aa-b80bf93cb922"
  ...
  merge_labels = true
  labels = {
    csi_cinder_reclaim_policy     = "Retain"
    kube_dashboard_enabled        = "true"
    master_lb_floating_ip_enabled = "false"
  }
}
```
Warning
It is not possible to modify labels on a cluster in-place after it has been created.
If the labels are modified in Terraform after a cluster has been created, the cluster will be re-created, so be careful not to modify them unintentionally.
Some labels can have multiple values set for them.
Using the CLI, you can specify the same label key multiple times, each with its own unique value. For example, to define multiple CIDRs for `api_master_lb_allowed_cidrs`:
```shell
openstack coe cluster create my-cluster-name \
  --cluster-template kubernetes-v1.28.9-20240416 \
  ...
  --merge-labels \
  --labels master_lb_floating_ip_enabled=true,api_master_lb_allowed_cidrs=192.0.2.1/32,api_master_lb_allowed_cidrs=192.0.2.2/32
```
Note
Specifying multiple values for labels is currently not supported by the dashboard.
When specifying labels using the Labels -> Additional Labels field in the Advanced tab, if multiple key-value pairs with the same label are defined, only the first defined value will be used.
If you would like to specify multiple label values when creating a cluster, please create the cluster using the CLI or Terraform.
When defining the `openstack_containerinfra_cluster_v1` resource, define the label value as a single comma-separated string listing all values. For example, to define multiple CIDRs for `api_master_lb_allowed_cidrs`:
```hcl
resource "openstack_containerinfra_cluster_v1" "my-cluster-name" {
  name                = "my-cluster-name"
  cluster_template_id = "b9a45c5c-cd03-4958-82aa-b80bf93cb922"
  ...
  merge_labels = true
  labels = {
    master_lb_floating_ip_enabled = "true"
    api_master_lb_allowed_cidrs   = "192.0.2.1/32,192.0.2.2/32"
  }
}
```
The system daemons that Kubernetes requires to function consume some vCPU time and memory. When Kubernetes schedules pods, it will only place them on nodes that have available capacity.
A fully packed node must account for the resource consumption of the system daemons as well as the pod limits. For this reason, we reserve some resources (both vCPU and memory) for the system daemons.
When choosing node flavors and viewing node capacity in Kubernetes, you will notice a difference between the flavor's allocated capacity and the capacity reported as available.
The percentage of vCPU and memory reserved decreases as the node's resources increase. The tables below list example node sizes under the current kubeReserved algorithm to give approximate values for available capacity. These values are subject to change; consult the node details within your cluster for the actual capacity available to the Kubernetes scheduler.
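As a rough illustration, the schedulable capacity is simply the flavor's total resources minus the reservation. The sketch below (for illustration only; the actual reservation algorithm may change) checks this arithmetic using the c1.c4r4 figures from the tables that follow:

```python
def available_capacity(total: int, reserved: int) -> tuple[int, float]:
    """Return the capacity left for the Kubernetes scheduler and the
    percentage of the total that was reserved for system daemons."""
    available = total - reserved
    percent_reserved = reserved / total * 100
    return available, percent_reserved


# c1.c4r4 vCPU: 4 cores = 4000 millicores, 80 millicores reserved
vcpu_available, vcpu_pct = available_capacity(4000, 80)
print(vcpu_available, f"{vcpu_pct:.2f}%")  # 3920 millicores available, 2.00% reserved

# c1.c4r4 memory: 4096 MiB total, 1024 MiB reserved
mem_available, mem_pct = available_capacity(4096, 1024)
print(mem_available, f"{mem_pct:.2f}%")    # 3072 MiB available, 25.00% reserved
```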
Reserved vCPU capacity values for select example compute flavours:
Example Flavor name | vCPU in Flavor (cores) | Reserved vCPU (millicores) | Kubernetes Available vCPU (millicores) | Percentage reserved
---|---|---|---|---
c1.c1r2 | 1 | 60 | 940 | 6%
c1.c2r2 | 2 | 70 | 1930 | 3.5%
c1.c4r4 | 4 | 80 | 3920 | 2%
c1.c8r8 | 8 | 90 | 7910 | 1.13%
c1.c16r16 | 16 | 110 | 15890 | 0.69%
c1.c32r32 | 32 | 150 | 31850 | 0.47%
And the corresponding reserved memory values for the same example flavours:
Example Flavor name | Memory in Flavor (MiB) | Memory Reserved (MiB) | Kubernetes Available Memory (MiB) | Percentage reserved
---|---|---|---|---
c1.c1r2 | 2048 | 512 | 1536 | 25%
c1.c2r2 | 2048 | 512 | 1536 | 25%
c1.c4r4 | 4096 | 1024 | 3072 | 25%
c1.c8r8 | 8192 | 1844 | 6348 | 22.5%
c1.c16r16 | 16384 | 2664 | 13720 | 16.25%
c1.c32r32 | 32768 | 3648 | 29120 | 11.13%