Retrieving data from an orphaned PersistentVolume

Assume we previously created a cluster that had a Pod with a PersistentVolume mounted on it. If the cluster was deleted without deleting the PersistentVolume, or if the PersistentVolume's reclaim policy was set to Retain, the underlying Cinder volume still exists and can be reused in another Kubernetes cluster in the same cloud project and region.

First we need to query our project to identify the volume in question and retrieve its UUID.

$ openstack volume list
+--------------------------------------+-------------------------------------------------------------+-----------+------+-------------+
| ID                                   | Name                                                        | Status    | Size | Attached to |
+--------------------------------------+-------------------------------------------------------------+-----------+------+-------------+
| 6b1903ea-d1aa-452d-93cc-xxxxxxxxxxxx | kubernetes-dynamic-pvc-1e3b558f-3945-11e9-9776-xxxxxxxxxxxx | available |    1 |             |
+--------------------------------------+-------------------------------------------------------------+-----------+------+-------------+
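
If the listing contains several volumes and it is not obvious which one belonged to the old cluster, we can inspect a candidate volume directly; fields such as name, description, size, status and availability_zone help confirm it is the one we expect. The ID used below is the one returned by the listing above.

$ openstack volume show 6b1903ea-d1aa-452d-93cc-xxxxxxxxxxxx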

Once we have the ID of the volume in question, we can create a new PersistentVolume resource in our cluster and link it directly to that volume.

# pv-existing.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-pv
  labels:
    type: existing-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    fsType: ext4
    volumeID: 6b1903ea-d1aa-452d-93cc-xxxxxxxxxxxx  # the ID returned by 'openstack volume list'

$ kubectl create -f pv-existing.yaml
persistentvolume/existing-pv created
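
As a quick sanity check, we can confirm that the PV was registered with the Cinder volume ID we supplied and is not yet bound to any claim; the describe output should list the volume ID under its source details and report a status of Available.

$ kubectl get pv existing-pv
$ kubectl describe pv existing-pv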

Once the PV has been created, we need to create a corresponding PersistentVolumeClaim for it.

The key point to note here is that our claim needs to reference the specific PersistentVolume we created in the previous step. To do this we use a selector with the matchLabels field to match the label we set in the PersistentVolume declaration.

# pvc-existing-pv.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: existing-pv

$ kubectl create -f pvc-existing-pv.yaml
persistentvolumeclaim/existing-pv-claim created
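
Before creating a pod, it is also worth confirming that the claim has bound to our pre-existing PV rather than being satisfied by a freshly provisioned volume (which can happen if the cluster has a default StorageClass). The claim should report a STATUS of Bound, with existing-pv listed as its volume.

$ kubectl get pvc existing-pv-claim
$ kubectl get pv existing-pv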

Finally, we can create a new Pod that uses our PersistentVolumeClaim to mount the volume inside a container.

# pod-with-existing-pv.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-existing-pv-test
spec:
  volumes:
    - name: existing-pv-storage
      persistentVolumeClaim:
        claimName: existing-pv-claim
  containers:
    - name: test-existing-storage-container
      image: nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/data"
          name: existing-pv-storage

$ kubectl create -f pod-with-existing-pv.yaml
pod/pod-existing-pv-test created

If we describe the pod, we can see that our volume has been attached and will be mounted as /data within the container.

$ kubectl describe pod pod-existing-pv-test
Name:         pod-existing-pv-test
Namespace:    default
Node:         k8s-dev-pvc-test-3-y4gcy3oygsil-minion-0/10.0.0.5
Start Time:   Wed, 27 Feb 2019 10:26:25 +1300
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:
Containers:
  test-existing-storage-container:
    Container ID:
    Image:          nginx:latest
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from existing-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kjqdd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  existing-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  existing-pv-claim
    ReadOnly:   false
  default-token-kjqdd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kjqdd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               9s    default-scheduler        Successfully assigned default/pod-cinder to k8s-dev-pvc-test-3-y4gcy3oygsil-minion-0
  Normal  SuccessfulAttachVolume  3s    attachdetach-controller  AttachVolume.Attach succeeded for volume "existing-pv"
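
Once the container is running, the data on the volume can be retrieved directly from the pod, for example by listing the mount point or copying its contents back to the local machine. The local destination path below is only an example, and kubectl cp relies on tar being available inside the container image.

$ kubectl exec pod-existing-pv-test -- ls -l /data
$ kubectl cp pod-existing-pv-test:/data ./recovered-data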