Using namespaces for granular access control

By combining roles and namespaces, it is possible to achieve a much more granular level of access control.

Kubernetes namespaces are a way to create virtual clusters inside a single physical cluster. This allows for different projects, teams, or customers to share a Kubernetes cluster.

Namespaces provide the following:

  • A scope for names.

  • A mechanism to attach authorization and policy to a subsection of the cluster.

For a more in-depth look at namespaces, it is recommended that you read through the official Kubernetes documentation.
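
For example, most kubectl commands accept a --namespace (or -n) flag to scope a request to a single namespace, and resource names only need to be unique within their namespace:

$ kubectl get pods --namespace kube-system
$ kubectl get pods -n default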

An example namespace

In this example we will provide access to some cluster resources for a cloud user that has none of the Kubernetes-specific access roles (discussed above) applied to their account. We will refer to this as our restricted user. Before we begin, here is a summary of the resources we will need, the resource we will create, and the access we are going to grant in this example:

You will need to have these resources created before we start:

  • A cluster; in our example ours is named dev-cluster

  • A restricted user; in our example they are named clouduser

We are going to be creating the following resource in the tutorial below:

  • Namespace: testapp

The level of access we are going to be supplying for users in this namespace is:

  • The cluster resource to access: pod

  • Resource access level: get, list, watch

Setting up the access policy

Note

Run the following commands as the cluster administrator.

First, we will create a new namespace for the application to run in.

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: testapp
EOF

Confirm that it was created correctly.

$ kubectl get ns
NAME      STATUS   AGE
testapp   Active   3h45m

Next we need to create a new Role and a RoleBinding in the cluster to provide the required access to our restricted user. The Role defines what access is being provided, while the RoleBinding defines who is to be given that access.

Some of the key things to note in the manifest below are:

  • In the Role config

    • apiGroups: [""]; the use of "" indicates that it applies to the core API group

  • In the RoleBinding config

    • The name in subjects: is case sensitive.

    • It is possible to add more than one subject to a role binding (see the sketch after this list).

    • The name in roleRef: must match the name of the role you wish to bind to.
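
For example, a sketch of a subjects list containing a second, hypothetical user alongside clouduser might look like this (it is shown for illustration only and is not applied as part of this example):

subjects:
- kind: User
  name: clouduser
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: anotheruser        # hypothetical second subject
  apiGroup: rbac.authorization.k8s.io

With those points in mind, create the Role and RoleBinding: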

cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: testapp
  name: pod-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-pods
  namespace: testapp
subjects:
- kind: User
  name: clouduser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
EOF

Confirm that our Role and RoleBinding were created successfully in our new namespace.

$ kubectl get role,rolebinding -n testapp
NAME                                        AGE
role.rbac.authorization.k8s.io/pod-viewer   21s

NAME                                              AGE
rolebinding.rbac.authorization.k8s.io/view-pods   21s

Testing our restricted user's access

Note

Run the following commands as the restricted user.

Setting up our cloud authentication

To access the cluster, we first need to authenticate against the cloud using an openRC file. Once the cloud authentication has been taken care of, we need to set up the cluster config file so that kubectl can authenticate with the cluster.
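
If your cluster was created through Magnum, one way to generate this config file is with the OpenStack CLI; this is a sketch assuming the dev-cluster name used above and the restricted user's home directory:

$ openstack coe cluster config dev-cluster --dir /home/clouduser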

Once the config file is in place, we point kubectl at it by exporting the KUBECONFIG environment variable with the path to the file's location, like so.

$ export KUBECONFIG=/home/clouduser/config

Confirming cluster access

We are now in a position to test that we have access to view pods in the namespace testapp. As we have not deployed any workloads as part of this example, we will make use of kubectl's built-in command for inspecting authorisation. The command is constructed as follows:

$ kubectl auth can-i <action_to_check>

So in our case we want to check that we can get pod information from the testapp namespace, which would look like this.

$ kubectl auth can-i get pod --namespace testapp
yes

Now let's confirm that we cannot view services in this namespace.

$ kubectl auth can-i get service --namespace testapp
no

The final check is to confirm that our right to view pods does not apply in any other namespace. We will check the default namespace to confirm that this is true.

$ kubectl auth can-i get pod --namespace default
no
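
As an optional extra check, kubectl can also list everything the current user is permitted to do in a given namespace:

$ kubectl auth can-i --list --namespace testapp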

Cleaning up

Note

Run the following commands as the cluster administrator.

To remove the elements we created in this example run the following commands:

$ kubectl delete rolebinding view-pods --namespace testapp
rolebinding.rbac.authorization.k8s.io "view-pods" deleted

$ kubectl delete role pod-viewer --namespace testapp
role.rbac.authorization.k8s.io "pod-viewer" deleted

$ kubectl delete namespace testapp
namespace "testapp" deleted

Associating Kubernetes RBAC with OpenStack roles

By creating a relationship between Kubernetes RBAC and OpenStack Keystone roles, you are able to configure access for users based on their OpenStack roles. Take the Project Member role as an example: by default, any user with this role will not have access to the pods on your cluster. However, by creating an association with a Kubernetes RBAC group, you can allow access to your cluster for all users on your project who have the Project Member role. Below we discuss how to create this association and how to define your own rules to allow users access to your cluster.

Before we can make changes to our cluster and its pod policy, there are a few things we need to gather. You will need to have:

  • The correct openrc file sourced

  • A way to ssh to your master node (a jumphost if you have a private cluster)

  • Kubectl installed (on your machine or your jumphost)

  • Your kube-admin config downloaded
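
If you are unsure which Keystone roles your users currently hold, one way to check is with the OpenStack CLI (the project name below is only a placeholder):

$ openstack role assignment list --user clouduser --project my-project --names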

Once you have all of these set up, we can start by taking a look at the default ConfigMap that connects our OpenStack roles to our Kubernetes RBAC groups. To find the default ConfigMap we use the following command:

$ kubectl -n kube-system get configmaps
NAME                                                           DATA   AGE
calico-config                                                  4      3d
coredns                                                        1      3d
extension-apiserver-authentication                             6      3d
k8s-keystone-auth-policy                                       1      3d
keystone-sync-policy                                           1      3d <-- we are looking for this config map here.
kube-dns-autoscaler                                            1      3d
kubernetes-dashboard-settings                                  0      3d
magnum-auto-healer                                             0      3d
magnum-auto-healer-config                                      1      3d
magnum-grafana                                                 1      3d
magnum-grafana-config-dashboards                               1      3d
magnum-grafana-test                                            1      3d
magnum-prometheus-operator-apiserver                           1      3d
------ Truncated for brevity -----

The keystone-sync-policy is what we use to connect an OpenStack Keystone role with a group inside the Kubernetes cluster. We can take a look at what our policy says by default using the following command:

$ kubectl -n kube-system describe configmaps keystone-sync-policy
Name:         keystone-sync-policy
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
syncConfig:
----
role-mappings:
  - keystone-role: _member_
    groups: []

Events:  <none>

We can see that by default our policy has the _member_ role specified, but it does not have a group to sync with. We can create a group and associate it with this role by updating our ConfigMap. For this example we will call our group "pod-internal-group":

Note

You do not have to use _member_ as the Keystone role that you sync with your internal group; you could use the k8s_viewer role or even the auth_only role. We are just using the _member_ role in this example because it is part of the default policy and most users will be familiar with it.
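
For instance, if you wanted to sync the k8s_viewer role instead, the role-mappings entry would simply name that role; this sketch is for illustration only and is not applied below:

role-mappings:
  - keystone-role: k8s_viewer
    groups: ['pod-internal-group']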

$ cat << EOF | kubectl apply -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: keystone-sync-policy
  namespace: kube-system
data:
  syncConfig: |
    role-mappings:
      - keystone-role: _member_
        groups: ['pod-internal-group']
EOF

# We can confirm this worked by checking our config map again:
$ kubectl -n kube-system describe configmaps keystone-sync-policy
Name:         keystone-sync-policy
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
syncConfig:
----
role-mappings:
  - keystone-role: _member_
    groups: ['pod-internal-group']

Events:  <none>

At this point we have now updated our sync policy to include a relationship between our Project Member role and our pod-internal-group.

Now, we will create a Role and a RoleBinding for the group. This will give users who exist in this group permission to perform the actions that we specify in the Role. These permissions will then extend to users with the Project Member role because of our keystone-sync ConfigMap. For our example, we will give our users the ability to get and list the pods in the kube-system namespace of our cluster.

$ kubectl -n kube-system create role pod-reader --verb=get,list --resource=pods
$ kubectl -n kube-system create rolebinding pod-reader --role=pod-reader --group=pod-internal-group
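
Before switching users, you can optionally sanity-check the binding from the administrator context using kubectl impersonation (the user name here is a placeholder; the group is the one we mapped in the sync policy):

$ kubectl auth can-i list pods -n kube-system --as=some-user --as-group=pod-internal-group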

Warning

This is only an example; be mindful of what access you grant to all members of your project.

Now that everything has been set up, your Keystone users who have the Project Member role should be able to get and list the pods in the kube-system namespace of your cluster. You can confirm this with the commands below:

# After swapping to our openstack user
$ kubectl get pod
Error from server (Forbidden): pods is forbidden: User "daniel" cannot list resource "pods" in API group "" in the namespace "default"

$ kubectl -n kube-system get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-magnum-prometheus-operator-alertmanager-0   2/2     Running   0          3d
calico-kube-controllers-7457bb579b-qbdqx                 1/1     Running   0          3d
calico-node-8vxz8                                        1/1     Running   0          3d
kube-dns-autoscaler-7d66dbddbc-94vbd                     1/1     Running   0          3d
kubernetes-dashboard-5f4b4f9b5d-x5l9h                    1/1     Running   0          3d
magnum-auto-healer-f6jl9                                 1/1     Running   0          3d
---- List of pods truncated for brevity ----

$ kubectl -n kube-system get deployment
Error from server (Forbidden): deployments.extensions is forbidden: User "daniel" cannot list resource "deployments" in API group "extensions" in the namespace "kube-system"

You will notice that even though we gave our pod-internal-group members the ability to list pods, the command only works in the namespace we specified; and even within that namespace, we only have access to the verbs and resources we granted earlier. This means you can define very strict rules for what each group has access to.
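
If you later want to remove the access we set up in this example, delete the Role and RoleBinding as the cluster administrator (you can also reapply the keystone-sync-policy ConfigMap with an empty groups list to remove the group mapping):

$ kubectl -n kube-system delete rolebinding pod-reader
$ kubectl -n kube-system delete role pod-reader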