Amazon EFS CSI Operations

=<span id='Deploy_the_Driver'></span>Deploy the Amazon EFS CSI Driver=
Ensure you are in the right [[.kube_config#Contexts|context]], with sufficient permissions.

The command to deploy is provided below, but always check with the original documentation to make sure you use the latest version:
<syntaxhighlight lang='bash'>
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.0"
</syntaxhighlight>
Current releases: https://github.com/kubernetes-sigs/aws-efs-csi-driver/tags

This deploys [[Kubernetes_Storage_Concepts#CSIDriver|CSIDriver]] and [[Kubernetes DaemonSet|DaemonSet]] resources:
<syntaxhighlight lang='bash'>
kubectl get csidriver

NAME              CREATED AT
efs.csi.aws.com   2020-06-24T04:29:45Z

kubectl get -n kube-system daemonset

NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                     AGE
efs-csi-node   3         3         3       3            3           kubernetes.io/arch=amd64,kubernetes.io/os=linux   57d
</syntaxhighlight>
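To confirm which version of the driver actually ended up running, one way is to inspect the container images referenced by the DaemonSet. A minimal sketch, assuming the efs-csi-node DaemonSet name shown above:
<syntaxhighlight lang='bash'>
# List the images used by the EFS CSI node DaemonSet; the aws-efs-csi-driver
# image tag among them reflects the deployed driver version
kubectl -n kube-system get daemonset efs-csi-node \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
</syntaxhighlight>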
=Deploy the EFS Storage Class=
<syntaxhighlight lang='bash'>
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-csi
provisioner: efs.csi.aws.com
EOF
</syntaxhighlight>
<syntaxhighlight lang='bash'>
storageclass.storage.k8s.io/efs-csi created

kubectl get sc

NAME      PROVISIONER       AGE
[...]
efs-csi   efs.csi.aws.com   16s
</syntaxhighlight>
=Deploy the EFS Persistent Volume=
There is a one-to-one relationship between the Persistent Volume and the EFS file system, so the name of the EFS file system can be used when naming the Persistent Volume.
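The EFS file system ID that goes into the volumeHandle below can be looked up with the AWS CLI. A minimal sketch, assuming credentials and the region are already configured:
<syntaxhighlight lang='bash'>
# List the IDs of the EFS file systems visible in the current region;
# pick the one the Persistent Volume should be backed by
aws efs describe-file-systems --query 'FileSystems[*].FileSystemId' --output text
</syntaxhighlight>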
<syntaxhighlight lang='yaml'>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-01
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-csi
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-99999999
</syntaxhighlight>
<syntaxhighlight lang='bash'>
kubectl apply -f persistent-volume.yaml

kubectl get pv

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
[...]
efs-pv-01   1Gi        RWX            Retain           Available           efs-csi                 38s
</syntaxhighlight>
==Deploy the EFS Persistent Volume that uses an Access Point==
{{External|https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md}}
{{Internal|Amazon_Elastic_File_System_Concepts#Access_Point|EFS Access Point}}
<syntaxhighlight lang='yaml'>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ...
spec:
  # similar to a regular EFS PV
  csi:
    driver: efs.csi.aws.com
    # volumeHandle: <efs-id>::<access-point-id>
    volumeHandle: fs-99999999::fsap-99999999999999999
</syntaxhighlight>
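If an access point does not exist yet, one can be created with the AWS CLI. A sketch, where the file system ID, POSIX user and root directory are placeholder values to adapt:
<syntaxhighlight lang='bash'>
# Create an access point on the file system; the fsap-... ID returned in the
# response is the value that goes after the "::" in the volumeHandle above
aws efs create-access-point \
  --file-system-id fs-99999999 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}'
</syntaxhighlight>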
=Deploy the Persistent Volume Claim=
For a discussion on what combinations of storage class and persistent volume names work, see [[Kubernetes_Storage_Concepts#Persistent_Volume_Claims_and_Storage_Class|Persistent Volume Claims and Storage Class]]. Usually one can specify only the storage class, or the storage class and a persistent volume name. However, when we rely on getting a specific EFS file system, which is desirable in most cases, specifying the volume name is a good idea. For more syntax details, see [[Kubernetes_Persistent_Volume_Claim_Manifest#Example|Persistent Volume Claim manifest]].
<syntaxhighlight lang='yaml'>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pv-01
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-csi
  volumeName: efs-pv-01
  resources:
    requests:
      storage: 1Gi
</syntaxhighlight>
<syntaxhighlight lang='bash'>
kubectl apply -f persistent-volume-claim.yaml

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
[...]
efs-pv-01   1Gi        RWX            Retain           Available           efs-csi                 38s
</syntaxhighlight>
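Once the claim is created, it is worth verifying that it actually bound to the intended volume. A quick check, assuming the names and namespace used above:
<syntaxhighlight lang='bash'>
# Both should report a "Bound" status once the claim has matched the volume
kubectl get pvc -n test efs-pv-01
kubectl get pv efs-pv-01
</syntaxhighlight>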
=Mount in Pod=
<syntaxhighlight lang='yaml'>
apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
    - name: app1
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-pv-01
</syntaxhighlight>
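Since the claim lives in the test namespace, the pod has to be created there as well. Once it is running, a quick way to confirm the EFS volume is writable is to read back the file the container keeps appending to. A sketch, assuming the manifest above was saved as pod.yaml (a hypothetical file name):
<syntaxhighlight lang='bash'>
kubectl apply -n test -f pod.yaml

# The timestamps written by the container should accumulate in the file on the EFS volume
kubectl exec -n test app1 -- tail /data/out1.txt
</syntaxhighlight>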
=Troubleshooting=
==The EFS filesystem cannot be mounted on EKS worker nodes==
The symptoms include the pod that attempts to mount the volume getting stuck in "ContainerCreating". One of the possible causes is that the security groups associated with the worker nodes do not allow IP connectivity to/from the [[Amazon_Elastic_File_System_Concepts#Mount_Target|mount targets]]. If possible, try to manually mount the filesystem on a worker node, as a test.
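A manual test from a worker node could look like the following sketch; the file system ID and region in the DNS name are placeholders to adapt, and the mount options are the commonly documented NFSv4.1 settings for EFS:
<syntaxhighlight lang='bash'>
# Run on the worker node. If nc is available, first verify connectivity to the
# mount target on the NFS port (2049), then try a plain NFSv4.1 mount.
nc -zv fs-99999999.efs.us-east-1.amazonaws.com 2049

sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-99999999.efs.us-east-1.amazonaws.com:/ /mnt/efs-test
</syntaxhighlight>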