User access

Introduction

Kubernetes clusters launched on the Catalyst Cloud are integrated with the OpenStack Keystone (Identity) service. Users with one of the roles listed below are able to interact with any Kubernetes clusters owned by their project using their existing cloud credentials.

The OpenStack Keystone Identity roles related to the Kubernetes service are:

  • k8s_admin: administrators of the cluster platform with full privileges to perform any operation.

  • k8s_developer: users able to deploy applications to the cluster platform, who are restricted from performing cluster level operations.

  • k8s_viewer: users able to view/obtain information about cluster resources.

For a detailed list of the permissions associated with these roles, please refer to the role permissions table below.

These roles can be added to an existing user through the Project users page by anyone who has the Project Admin or Project Moderator roles assigned to their account.
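These roles can also be checked or assigned from the command line with the OpenStack CLI. A minimal sketch, using placeholder user and project names:

# List the roles currently assigned to a user on a project
$ openstack role assignment list --user <username> --project <project-name> --names

# Assign the k8s_developer role to a user
$ openstack role add --user <username> --project <project-name> k8s_developer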

+----------------------+-----------------------------------------------------+
| Role                 | Permissions                                         |
+======================+=====================================================+
| k8s_admin            | Privileged users with maximum rights. Full admin    |
|                      | access is granted for Magnum cluster CRUD           |
|                      | operations and all Kubernetes namespaces.           |
+----------------------+-----------------------------------------------------+
| k8s_developer        | Privileged users with restricted rights. Kubernetes |
|                      | CRUD operation access is granted to any namespace   |
|                      | other than the admin (kube-system) namespace.       |
+----------------------+-----------------------------------------------------+
| k8s_viewer           | Non-privileged users able to perform READ actions   |
|                      | in both Magnum and Kubernetes. Has access to all    |
|                      | namespaces, excluding the admin namespace.          |
+----------------------+-----------------------------------------------------+

Warning

The privileged roles deserve special attention when deploying Kubernetes clusters. The RBAC permissions that grant the ability to launch a pod in the cluster are a powerful right, and use of a more restrictive Admission Controller may be appropriate to meet specific customer security needs.

Please note: this means any user with the k8s_developer role must be a trusted individual, as by default they are capable of escalating their own privileges.


More information

The following is a comprehensive list of the exact RBAC permissions that each role gives a user:

+----------------------+-----------------------------------------------------+
| Role                 | Permissions                                         |
+======================+=====================================================+
| k8s_admin            | resourcemanager.projects.*                          |
+----------------------+-----------------------------------------------------+
| k8s_developer        | container.apiServices.*                             |
|                      | container.bindings.*                                |
|                      | container.certificateSigningRequests.create         |
|                      | container.certificateSigningRequests.delete         |
|                      | container.certificateSigningRequests.get            |
|                      | container.certificateSigningRequests.list           |
|                      | container.certificateSigningRequests.update         |
|                      | container.certificateSigningRequests.watch          |
|                      | container.clusterRoleBindings.get                   |
|                      | container.clusterRoleBindings.list                  |
|                      | container.clusterRoleBindings.watch                 |
|                      | container.clusterRoles.get                          |
|                      | container.clusterRoles.list                         |
|                      | container.clusterRoles.watch                        |
|                      | container.componentStatuses.*                       |
|                      | container.configMaps.*                              |
|                      | container.controllerRevisions.get                   |
|                      | container.controllerRevisions.list                  |
|                      | container.controllerRevisions.watch                 |
|                      | container.cronJobs.*                                |
|                      | container.customResourceDefinitions.*               |
|                      | container.deployments.*                             |
|                      | container.endpoints.*                               |
|                      | container.events.*                                  |
|                      | container.horizontalPodAutoscalers.*                |
|                      | container.ingresses.*                               |
|                      | container.initializerConfigurations.*               |
|                      | container.jobs.*                                    |
|                      | container.limitRanges.*                             |
|                      | container.localSubjectAccessReviews.*               |
|                      | container.namespaces.*                              |
|                      | container.networkPolicies.*                         |
|                      | container.nodes.get                                 |
|                      | container.nodes.list                                |
|                      | container.nodes.watch                               |
|                      | container.persistentVolumeClaims.*                  |
|                      | container.persistentVolumes.*                       |
|                      | container.podDisruptionBudgets.*                    |
|                      | container.podPresets.*                              |
|                      | container.podSecurityPolicies.get                   |
|                      | container.podSecurityPolicies.list                  |
|                      | container.podSecurityPolicies.watch                 |
|                      | container.podTemplates.*                            |
|                      | container.pods.*                                    |
|                      | container.replicaSets.*                             |
|                      | container.replicationControllers.*                  |
|                      | container.resourceQuotas.*                          |
|                      | container.roleBindings.get                          |
|                      | container.roleBindings.list                         |
|                      | container.roleBindings.watch                        |
|                      | container.roles.get                                 |
|                      | container.roles.list                                |
|                      | container.roles.watch                               |
|                      | container.secrets.*                                 |
|                      | container.selfSubjectAccessReviews.*                |
|                      | container.serviceAccounts.*                         |
|                      | container.services.*                                |
|                      | container.statefulSets.*                            |
|                      | container.storageClasses.*                          |
|                      | container.subjectAccessReviews.*                    |
|                      | container.tokenReviews.*                            |
+----------------------+-----------------------------------------------------+
| k8s_viewer           | container.apiServices.get                           |
|                      | container.apiServices.list                          |
|                      | container.apiServices.watch                         |
|                      | container.binding.get                               |
|                      | container.binding.list                              |
|                      | container.binding.watch                             |
|                      | container.clusterRoleBindings.get                   |
|                      | container.clusterRoleBindings.list                  |
|                      | container.clusterRoleBindings.watch                 |
|                      | container.clusterRoles.get                          |
|                      | container.clusterRoles.list                         |
|                      | container.clusterRoles.watch                        |
|                      | container.componentStatuses.get                     |
|                      | container.componentStatuses.list                    |
|                      | container.componentStatuses.watch                   |
|                      | container.configMaps.get                            |
|                      | container.configMaps.list                           |
|                      | container.configMaps.watch                          |
|                      | container.controllerRevisions.get                   |
|                      | container.controllerRevisions.list                  |
|                      | container.controllerRevisions.watch                 |
|                      | container.cronJobs.get                              |
|                      | container.cronJobs.list                             |
|                      | container.cronJobs.watch                            |
|                      | container.customResourceDefinitions.get             |
|                      | container.customResourceDefinitions.list            |
|                      | container.customResourceDefinitions.watch           |
|                      | container.deployments.get                           |
|                      | container.deployments.list                          |
|                      | container.deployments.watch                         |
|                      | container.endpoints.get                             |
|                      | container.endpoints.list                            |
|                      | container.endpoints.watch                           |
|                      | container.events.get                                |
|                      | container.events.list                               |
|                      | container.events.watch                              |
|                      | container.horizontalPodAutoscalers.get              |
|                      | container.horizontalPodAutoscalers.list             |
|                      | container.horizontalPodAutoscalers.watch            |
|                      | container.ingresses.get                             |
|                      | container.ingresses.list                            |
|                      | container.ingresses.watch                           |
|                      | container.initializerConfigurations.get             |
|                      | container.initializerConfigurations.list            |
|                      | container.initializerConfigurations.watch           |
|                      | container.jobs.get                                  |
|                      | container.jobs.list                                 |
|                      | container.jobs.watch                                |
|                      | container.limitRanges.get                           |
|                      | container.limitRanges.list                          |
|                      | container.limitRanges.watch                         |
|                      | container.localSubjectAccessReviews.get             |
|                      | container.localSubjectAccessReviews.list            |
|                      | container.localSubjectAccessReviews.watch           |
|                      | container.namespaces.get                            |
|                      | container.namespaces.list                           |
|                      | container.namespaces.watch                          |
|                      | container.networkPolicies.get                       |
|                      | container.networkPolicies.list                      |
|                      | container.networkPolicies.watch                     |
|                      | container.nodes.get                                 |
|                      | container.nodes.list                                |
|                      | container.nodes.watch                               |
|                      | container.persistentVolumeClaims.get                |
|                      | container.persistentVolumeClaims.list               |
|                      | container.persistentVolumeClaims.watch              |
|                      | container.persistentVolumes.get                     |
|                      | container.persistentVolumes.list                    |
|                      | container.persistentVolumes.watch                   |
|                      | container.podDisruptionBudgets.get                  |
|                      | container.podDisruptionBudgets.list                 |
|                      | container.podDisruptionBudgets.watch                |
|                      | container.podPresets.get                            |
|                      | container.podPresets.list                           |
|                      | container.podPresets.watch                          |
|                      | container.podTemplates.get                          |
|                      | container.podTemplates.list                         |
|                      | container.podTemplates.watch                        |
|                      | container.podSecurityPolicies.get                   |
|                      | container.podSecurityPolicies.list                  |
|                      | container.podSecurityPolicies.watch                 |
|                      | container.pods.get                                  |
|                      | container.pods.list                                 |
|                      | container.pods.watch                                |
|                      | container.replicaSets.get                           |
|                      | container.replicaSets.list                          |
|                      | container.replicaSets.watch                         |
|                      | container.replicationControllers.get                |
|                      | container.replicationControllers.list               |
|                      | container.replicationControllers.watch              |
|                      | container.resourceQuotas.get                        |
|                      | container.resourceQuotas.list                       |
|                      | container.resourceQuotas.watch                      |
|                      | container.roleBindings.get                          |
|                      | container.roleBindings.list                         |
|                      | container.roleBindings.watch                        |
|                      | container.roles.get                                 |
|                      | container.roles.list                                |
|                      | container.roles.watch                               |
|                      | container.secrets.get                               |
|                      | container.secrets.list                              |
|                      | container.secrets.watch                             |
|                      | container.selfSubjectAccessReviews.get              |
|                      | container.selfSubjectAccessReviews.list             |
|                      | container.selfSubjectAccessReviews.watch            |
|                      | container.serviceAccounts.get                       |
|                      | container.serviceAccounts.list                      |
|                      | container.serviceAccounts.watch                     |
|                      | container.services.get                              |
|                      | container.services.list                             |
|                      | container.services.watch                            |
|                      | container.statefulSets.get                          |
|                      | container.statefulSets.list                         |
|                      | container.statefulSets.watch                        |
|                      | container.storageClasses.get                        |
|                      | container.storageClasses.list                       |
|                      | container.storageClasses.watch                      |
|                      | container.subjectAccessReviews.get                  |
|                      | container.subjectAccessReviews.list                 |
|                      | container.subjectAccessReviews.watch                |
+----------------------+-----------------------------------------------------+

Generating Kubernetes config file

As the owner of the cluster (user who created it), you can run the following command to obtain the generic Kubernetes configuration file:

$ openstack coe cluster config test-cluster --use-keystone

The output of this command will be a file named config in the current working directory. This configuration file instructs kubectl to use the Catalyst Cloud credentials for authentication. A copy of this file will need to be made available to any user that requires access to the cluster.

Note

If you run this command in a directory that already contains a file named config, it will fail. You will need to run it from a different location.
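For example, one way to avoid this clash (the directory name below is only an illustration) is to generate the file in a dedicated, empty directory:

$ mkdir -p ~/kubeconfigs/test-cluster
$ cd ~/kubeconfigs/test-cluster
$ openstack coe cluster config test-cluster --use-keystone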

Accessing the cluster

Once you have copied the config generated in the previous step, you need to create an environment variable to let kubectl know where to find its configuration file.

$ export KUBECONFIG='/home/user/config'

Next, you have to source an OpenStack RC file and export a variable containing an access token, as demonstrated below:

export OS_TOKEN=$(openstack token issue -f yaml -c id | awk '{print $2}')

Now, for the duration of the authentication token issued in the previous step, you should be able to use kubectl to interact with the cluster.

kubectl cluster-info

If the token expires, you can generate a new one by sourcing the MFA enabled OpenStack RC file again and re-exporting OS_TOKEN.
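For example, assuming your RC file is named my-project-openrc.sh (substitute the file for your own project):

$ source my-project-openrc.sh
$ export OS_TOKEN=$(openstack token issue -f yaml -c id | awk '{print $2}')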

Using namespaces for granular access control

It is possible, through the use of roles and namespaces, to achieve a much more granular level of access control.

Kubernetes namespaces are a way to create virtual clusters inside a single physical cluster. This allows for different projects, teams, or customers to share a Kubernetes cluster.

Namespaces provide the following:

  • A scope for names.

  • A mechanism to attach authorization and policy to a subsection of the cluster.

For a more in depth look at namespaces it is recommended that you read through the official kubernetes documentation.

An example namespace

In this example we will provide access to some cluster resources for a cloud user who has none of the Kubernetes specific access roles (discussed above) applied to their account. We will refer to this as our restricted user. Before we begin, here is an outline of the resources required and the resources and access we will create in this example.

You will need to have these resources created before we start:

  • A cluster, in our example we have named ours: dev-cluster

  • A restricted user, in our example we have named them: clouduser

We are going to be creating the following resource in the tutorial below:

  • namespace: testapp

The level of access we are going to be supplying for users in this namespace is:

  • The cluster resource to access: pod

  • Resource access level: get, list, watch

Authenticating a non-admin cluster user

The first thing we need to address is a means for our restricted user to authenticate with the cluster. To do this we will need to create a new configuration file that can be used by non-administrator users. This will apply to all users on our project, including our restricted user.

Creating a non-admin cluster config

As the cluster administrator we need to create a cluster config file that allows cloud project users to use the cloud’s own authentication service as a means to access the cluster.

We can do that with the following command:

$ openstack coe cluster config <CLUSTER_NAME> --use-keystone

For example:

$ openstack coe cluster config dev-cluster --use-keystone

This config file can now be made available to other cloud users that need access to this cluster. By default this file will provide the following levels of access:

  • For a restricted project user, that is a project user with no Kubernetes specific role assigned to their cloud account, the default is no cluster access.

  • For a project user with a Kubernetes specific role assigned to their cloud account, they will be assigned the level of access dictated by that role (see above)

Setting up the access policy

Note

Run the following commands as the cluster administrator.

First, we will create a new namespace for the application to run in.

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: testapp
EOF

Confirm that it was created correctly.

$ kubectl get ns
NAME      STATUS   AGE
testapp   Active   3h45m

Next we need to create a new role and a role binding in the cluster to provide the required access to our restricted user. The Role defines what access is being provided, while the RoleBinding defines who is given that access.

Some of the key things to note in the manifest below are:

  • In the Role config

    • apiGroups: [""]: the use of the empty string "" indicates that the rule applies to the core API group

  • In the RoleBinding config

    • The name in subjects: is case sensitive.

    • It is possible to add more than one subject to a role binding.

    • The name in roleRef: must match the name of the role you wish to bind to.

cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: testapp
  name: pod-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-pods
  namespace: testapp
subjects:
- kind: User
  name: clouduser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
EOF

Confirm that our Role and RoleBinding were created successfully in our new namespace.

$ kubectl get role,rolebinding -n testapp
NAME                                        AGE
role.rbac.authorization.k8s.io/pod-viewer   21s

NAME                                              AGE
rolebinding.rbac.authorization.k8s.io/view-pods   21s

Testing our restricted user's access

Note

Run the following commands as the restricted user.

Setting up our cloud authentication

To access the cluster we first need to authenticate against the cloud by sourcing an openrc file and issuing a token, just as we did earlier. Once the cloud authentication has been taken care of, we need to set up the cluster config file to authenticate with the cluster.
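Authenticating against the cloud follows the same pattern shown earlier. A minimal sketch, assuming the restricted user's RC file is named clouduser-openrc.sh:

$ source clouduser-openrc.sh
$ export OS_TOKEN=$(openstack token issue -f yaml -c id | awk '{print $2}')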

We then point kubectl at the cluster config file copied earlier by exporting the KUBECONFIG environment variable with the path to the file's location, like so.

$ export KUBECONFIG=/home/clouduser/config

Confirming cluster access

We are now in a position to test that we have access to view pods in the testapp namespace. As we have not deployed any workloads as part of this example, we will make use of kubectl's built-in command for inspecting authorisation. The command is constructed as follows:

$ kubectl auth can-i <action_to_check>

So in our case we want to check that we can get pod information from the testapp namespace, which would look like this.

$ kubectl auth can-i get pod --namespace testapp
yes

Now let's confirm that we cannot view services in this namespace.

$ kubectl auth can-i get service --namespace testapp
no

The final check is to confirm that our right to view pods does not apply in any other namespace. We will check the default namespace to confirm that this is true.

$ kubectl auth can-i get pod --namespace default
no
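As a side note, the cluster administrator can run the same checks without switching accounts by using kubectl's impersonation flag (assuming the administrator's credentials permit impersonation):

$ kubectl auth can-i get pod --namespace testapp --as clouduser
yes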

Cleaning up

Note

Run the following commands as the cluster administrator.

To remove the elements we created in this example run the following commands:

$ kubectl delete rolebinding view-pods --namespace testapp
rolebinding.rbac.authorization.k8s.io "view-pods" deleted

$ kubectl delete role pod-viewer --namespace testapp
role.rbac.authorization.k8s.io "pod-viewer" deleted

$ kubectl delete namespace testapp
namespace "testapp" deleted

Associating Kubernetes RBAC with OpenStack roles

By creating a relationship between Kubernetes RBAC and OpenStack Keystone roles, you can configure cluster access for users based on their OpenStack roles. Take the Project Member role as an example: by default, any user with this role has no access to the pods on your cluster. However, by creating an association with a Kubernetes RBAC group, you can allow access to your cluster for all users on your project who have the Project Member role. Below we discuss how to create this association and how to define your own rules to allow users access to your cluster.

Before we begin, there are a few resources that we are going to need to gather before we can make changes to our cluster and the pod policy for it. You will need to have:

  • The correct openrc file sourced

  • A way to ssh to your master node (a jumphost if you have a private cluster)

  • Kubectl installed (on your machine or your jumphost)

  • Your kube-admin config downloaded

Once you have all of these set up, we can start by taking a look at the default ConfigMap that connects our OpenStack roles to our Kubernetes RBAC groups. To find it, use the following command:

$ kubectl -n kube-system get configmaps
NAME                                                           DATA   AGE
calico-config                                                  4      3d
coredns                                                        1      3d
extension-apiserver-authentication                             6      3d
k8s-keystone-auth-policy                                       1      3d
keystone-sync-policy                                           1      3d <-- we are looking for this config map here.
kube-dns-autoscaler                                            1      3d
kubernetes-dashboard-settings                                  0      3d
magnum-auto-healer                                             0      3d
magnum-auto-healer-config                                      1      3d
magnum-grafana                                                 1      3d
magnum-grafana-config-dashboards                               1      3d
magnum-grafana-test                                            1      3d
magnum-prometheus-operator-apiserver                           1      3d
------ Truncated for brevity -----

The keystone-sync-policy ConfigMap is what we use to connect an OpenStack Keystone role with a group inside the Kubernetes cluster. We can take a look at what our policy says by default using the following command:

$ kubectl -n kube-system describe configmaps keystone-sync-policy
Name:         keystone-sync-policy
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
syncConfig:
----
role-mappings:
  - keystone-role: _member_
    groups: []

Events:  <none>

We can see that by default our policy currently has the _member_ role specified, but it does not have a group to sync with. We can create a group and associate it with the _member_ role by updating our ConfigMap. For this example we will call our group "pod-internal-group":

Note

You do not have to use _member_ as the keystone role that you sync with your internal group; you could use the k8s_viewer role or even the auth_only role. We are just using the _member_ role in this example because it is part of the default policy and most users will be familiar with it.

$ cat << EOF | kubectl apply -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: keystone-sync-policy
  namespace: kube-system
data:
  syncConfig: |
    role-mappings:
      - keystone-role: _member_
        groups: ['pod-internal-group']
EOF

# We can confirm this worked by checking our config map again:
$ kubectl -n kube-system describe configmaps keystone-sync-policy
Name:         keystone-sync-policy
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
syncConfig:
----
role-mappings:
  - keystone-role: _member_
    groups: ['pod-internal-group']

Events:  <none>

At this point we have now updated our sync policy to include a relationship between our Project Member role and our pod-internal-group.

Now, we will create our set of RBAC roles and rolebindings for the group. This will give users who exist in this group permission to perform the commands that we specify in our rolebinding. These permissions will then extend to users with the Project Member role because of our keystone-sync configmap. For our example, we will give our users the ability to list the pods in the kube-system namespace of our cluster.

$ kubectl -n kube-system create role pod-reader --verb=get,list --resource=pods
$ kubectl -n kube-system create rolebinding pod-reader --role=pod-reader --group=pod-internal-group
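For reference, a roughly equivalent declarative manifest (a sketch of what the two imperative commands above create) would look like this:

$ cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: kube-system
subjects:
- kind: Group
  name: pod-internal-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF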

Warning

This is only an example and you should be mindful of what access you allow to all project members on your project.

Now that everything has been set up, your keystone users who have the Project Member role should be able to get and list the pods of your cluster. You can confirm this with the commands below:

# After switching to our OpenStack user with the Project Member role
$ kubectl get pod
Error from server (Forbidden): pods is forbidden: User "daniel" cannot list resource "pods" in API group "" in the namespace "default"

$ kubectl -n kube-system get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-magnum-prometheus-operator-alertmanager-0   2/2     Running   0          3d
calico-kube-controllers-7457bb579b-qbdqx                 1/1     Running   0          3d
calico-node-8vxz8                                        1/1     Running   0          3d
kube-dns-autoscaler-7d66dbddbc-94vbd                     1/1     Running   0          3d
kubernetes-dashboard-5f4b4f9b5d-x5l9h                    1/1     Running   0          3d
magnum-auto-healer-f6jl9                                 1/1     Running   0          3d
---- List of pods truncated for brevity ----

$ kubectl -n kube-system get deployment
Error from server (Forbidden): deployments.extensions is forbidden: User "daniel" cannot list resource "deployments" in API group "extensions" in the namespace "kube-system"

You will notice that even though we gave our pod-internal-group members the ability to list pods, the command only works in the namespace we specified, and even there only for the verbs we granted earlier. This means you can define very strict rules for what commands each group has access to.
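As in the earlier example, the restricted user can verify the boundaries of this access with kubectl auth can-i:

$ kubectl auth can-i list pods --namespace kube-system
yes

$ kubectl auth can-i delete pods --namespace kube-system
no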