Kubernetes Storage Operations
Get Information about Persistent Volumes
kubectl get pv <pv-name>
kubectl describe pv <pv-name>
Get Information about Persistent Volume Claims
kubectl get pvc <pvc-name>
kubectl describe pvc <pvc-name>
Get Information about Storage Classes
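The storage classes defined in the cluster and their details can be retrieved with:
kubectl get storageclass
kubectl describe storageclass <storageclass-name>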
NFS Volume Example
This is an example of how to set up and use an NFS volume.
TODO: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
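Until that example is written up, this is a minimal sketch of a pod that mounts an NFS export directly with an nfs volume. The server address (10.10.2.249) and the export path (/opt/nfs0) are assumptions, taken from the troubleshooting output below; the image and the pod name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
    - name: nfs-test
      image: busybox            # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-volume
          mountPath: /opt/nfs0
  volumes:
    - name: nfs-volume
      nfs:
        server: 10.10.2.249     # assumed NFS server address
        path: /opt/nfs0         # assumed export path
For the mount to succeed, the NFS client utilities must be installed on the worker nodes, as described in the troubleshooting section below.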
NFS Volume Troubleshooting
Missing /sbin/mount.nfs:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/6bb1cdc0-8689-4bf3-ac69-385a0fe35a7c/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.10.2.249:/opt/nfs0 /var/lib/kubelet/pods/6bb1cdc0-8689-4bf3-ac69-385a0fe35a7c/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit run-27331.scope.
mount: wrong fs type, bad option, bad superblock on 10.10.2.249:/opt/nfs0,
missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
Warning FailedMount 10s kubelet, worker-00 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
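This error indicates that the NFS client utilities, which provide the /sbin/mount.nfs helper, are not installed on the node that attempts the mount (worker-00 in this case). Installing them on that node, with the package name appropriate for the distribution, should resolve the failure:
# Debian/Ubuntu
apt-get install -y nfs-common
# RHEL/CentOS
yum install -y nfs-utils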
Local Volumes
Create a Local Volume, the corresponding Persistent Volume and a Persistent Volume Claim that Binds to It
This is the step-by-step procedure to create a persistent volume backed by a local volume and expose it to a pod with a matching persistent volume claim.
Expose a Local Directory
Create a local directory on the node that will be exposed as a persistent volume:
mkdir /mnt/disk1/local-volume-0
Define the persistent volume API resource instance:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker-00-local-volume-0
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: manual-local-storage
  local:
    path: /mnt/disk1/local-volume-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-00
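Assuming the manifest above is saved in a file (the file name below is arbitrary), the persistent volume is created with:
kubectl apply -f worker-00-local-volume-0.yaml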
Note that in the persistent volume manifest shown above, "worker-00" (used in the nodeAffinity section and embedded in the volume name) is the name of the node, as known to Kubernetes. It can be obtained with:
kubectl get nodes -o wide
If the cluster has more than one node, symmetrical persistent volumes can be created for other nodes as well.
Upon creation, the persistent volumes can be listed with:
kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS           REASON   AGE
worker-00-local-volume-0   10Gi       RWO            Delete           Available           manual-local-storage            94s
worker-01-local-volume-0   10Gi       RWO            Delete           Available           manual-local-storage            2s
Create a Matching Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-storage-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-local-storage
  resources:
    requests:
      storage: 10Gi
In case of StatefulSets, the StatefulSet generates the claims based on a volume claim template (volumeClaimTemplates), as in the sketch below.
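A minimal sketch of such a StatefulSet, reusing the names from this example; the image, the command and the headless service name are placeholders:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test
spec:
  serviceName: test              # assumes a matching headless Service named "test"
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: busybox         # placeholder image
          command: ["sleep", "3600"]
          volumeMounts:
            - name: local-storage
              mountPath: /something
  volumeClaimTemplates:
    - metadata:
        name: local-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: manual-local-storage
        resources:
          requests:
            storage: 10Gi
The generated claims are named after the template and the pod (local-storage-test-0 and so on) and must still match available persistent volumes, including the storage class.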
Aside from attributes like the requested storage amount and the access mode, the storage class name must match.
The claim is immediately bound, as it finds a matching persistent volume:
kubectl get pvc
NAME                  STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
local-storage-claim   Bound    worker-00-local-volume-0   10Gi       RWO            manual-local-storage   2m19s
kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                          STORAGECLASS           REASON   AGE
worker-00-local-volume-0   10Gi       RWO            Delete           Bound       default/local-storage-claim    manual-local-storage            6m37s
worker-01-local-volume-0   10Gi       RWO            Delete           Available                                  manual-local-storage            5m5s
The pod gets access to the volume content by declaring a persistentVolumeClaim volume:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      ...
      volumeMounts:
        - mountPath: "/something"
          name: local-storage
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: local-storage-claim
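Once the pod is running, the fact that the volume was mounted can be verified by listing the mount point from inside the container (assuming the image provides basic shell utilities):
kubectl exec test -- ls /something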
Storage Class
Note that if the persistent volume claim is created without specifying any storage class, as some higher level controllers do, and no default storage class is defined, the claim will not bind to the persistent volume, even if the rest of the attributes match: the storage classes need to match as well, as described above.
We can avoid this situation by declaring our own "manual-local-storage" default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual-local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
In this case, the claim automatically assumes the default storage class and binds to an available matching volume (with WaitForFirstConsumer, the binding happens when the first pod using the claim is scheduled).
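That the class was indeed registered as the default can be verified with:
kubectl get storageclass
The default class is marked with "(default)" next to its name.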