OpenShift Volume Concepts
Latest revision as of 17:21, 29 February 2024
Internal
The Volume Mechanism
Volumes are mounted filesystems available to pods and their containers. Volumes may be backed by a number of host-local or network-attached storage endpoints. The simplest volume type is EmptyDir, which is a temporary directory on a single machine. Administrators may also allow pods to request and attach persistent volumes.
Various contexts list the following objects as "volumes":
* downwardAPI
* emptyDir
* hostPath
* nfs
* persistentVolumeClaim
Volume Security
Volume Types
EmptyDir
An "emptyDir", also known as a "temporary pod volume", is created when the pod is assigned to a node, and exists as long as the pod is running on the node. It is initially empty. Containers in a pod can all read and write the same files in the "emptyDir" volume, though the volume can be mounted at the same or different paths in each container.
When the pod is removed from the node, the data is deleted. A container crash does not remove the pod from the node, so data in an emptyDir volume is safe across container crashes.
The "emptyDir" volumes are stored on whatever medium is backing the node (disk, network storage). The mapping on the local file system backing the node can be discovered by identifying the container and then executing a docker inspect:
"Mounts": [
{
"Source": "/var/lib/origin/openshift.local.volumes/pods/1806c74f-0ad4-11e8-85a1-525400360e56/volumes/kubernetes.io~empty-dir/emptydirvol1",
"Destination": "/something",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
}
...
When emptyDir is backed by a node directory, the location of this directory is /var/lib/origin/openshift.local.volumes/pods/<id>/volumes/kubernetes.io~empty- but it can be configured in the Kubernetes node configuration file. How?
"emptyDir" volume storage may be restricted by a quota based on the pod’s FSGroup, if the FSGroup parameter is enabled by the cluster administrator.
This is how an emptyDir is declared as part of a deployment configuration:
spec:
  template:
    spec:
      containers:
      - name: some-container
        volumeMounts:
        - name: blue
          mountPath: /blue
      ...
      volumes:
      - name: blue
        emptyDir:
          medium:
          sizeLimit:
medium - what type of storage medium should back this directory. The default (unspecified) is to use the node's default medium. The alternative is "Memory".
sizeLimit - the total amount of local storage required for this emptyDir volume. The size limit also applies to the memory medium; in that case the effective emptyDir size is the minimum of the sizeLimit specified here and the sum of the memory limits of all containers in the pod. The default is nil, which means the limit is undefined (requires Kubernetes 1.8).
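As a sketch, a memory-backed emptyDir with a size limit could be declared as follows; the container name, volume name, and mount path are illustrative:

```yaml
spec:
  template:
    spec:
      containers:
      - name: cache-container        # illustrative name
        volumeMounts:
        - name: cache-volume
          mountPath: /cache
      volumes:
      - name: cache-volume
        emptyDir:
          medium: Memory             # back the volume with memory instead of the node's default medium
          sizeLimit: 128Mi           # requires Kubernetes 1.8 or later
```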
EmptyDir Quota
By default, pods can allocate as much space on emptyDirs as the underlying filesystem has available. It is possible to set up a configuration that restricts space allocation to a quota enforced with the XFS quota mechanism, provided that the backing filesystem used to allocate space for emptyDirs is XFS. For more details see: Setting and Enforcing emptyDir Quotas.
EmptyDir Operations
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into the pod:
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        ...
        volumeMounts:
        - mountPath: /var/log
          name: varlog
      ...
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
local
Persistent Volume Claim
A persistent volume claim is a request for a storage resource with specific attributes, such as storage size. Persistent volume claims are created by the developers who manage the pods, and they are project-specific objects.
A persistent volume claim is matched to an available volume, and the binding process associates the pod with the volume. This allows a claim to be used as a volume in a pod: OpenShift finds the volume backing the claim and mounts it into the pod. Multiple PVCs within the same project can bind to the same PV. Once a PVC binds to a PV, that PV cannot be bound by a claim outside the first claim's project - the PV "belongs" to the project. If the underlying storage needs to be accessed by multiple projects, each project needs its own PV; multiple PVs can then point to the same physical storage.
The pod can be disassociated from the persistent volume by deleting the persistent volume claim. The persistent volume transitions from a "Bound" to "Released" state. To make the persistent volume "Available" again, edit it and remove the persistent volume claim reference, as shown here. Transitioning the persistent volume from "Released" to "Available" state does not clear the storage content - this will have to be done manually.
All persistent volume claims for the current project can be listed with:
oc get pvc
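A minimal persistent volume claim definition might look like the following sketch; the claim name and requested size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim              # illustrative name
spec:
  accessModes:
  - ReadWriteOnce             # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi            # requested storage size
```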
ConfigMap
A ConfigMap is a component that holds key/value pairs of configuration data, and that can be consumed by pods, or can be used to store configuration for OpenShift system components such as controllers. It is a mechanism to inject containers with configuration while keeping the containers agnostic of the OpenShift platform. Aside from fine-grained information like individual properties, ConfigMaps can also store coarse-grained information such as entire configuration files or JSON blobs. The ConfigMaps can populate environment variables, set command-line arguments in a container and populate configuration files in a volume.
A ConfigMap is similar to a secret, but designed to be more convenient when working with strings that do not contain sensitive information.
A ConfigMap can be created from directories, files, or literal values.
ConfigMaps must be created before they are consumed in pods. They cannot be shared between projects. If a ConfigMap is updated, the pod must be redeployed for the changes to become visible.
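As an illustration, a ConfigMap holding a fine-grained property and a pod specification fragment consuming it as an environment variable might look like this; all names and values are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config         # hypothetical name
data:
  log.level: debug             # a fine-grained configuration property
---
# pod specification fragment consuming the ConfigMap above
spec:
  containers:
  - name: some-container
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: log.level
```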
Downward API
The downward API allows containers to consume information about OpenShift objects. Fields within a pod are selected using the 'fieldRef' API type, which has two fields: 'fieldPath', the path of the field to select relative to the pod, and 'apiVersion'. The downward API exposes the following selectors:
- metadata.name: the pod name.
- metadata.namespace: the pod namespace.
- metadata.labels
- metadata.annotations
- status.podIP
Container resource values are selected with 'resourceFieldRef', whose 'resource' field refers to the resource entries.
The information can be exposed to pods via environment variables and volumes.
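For example, the pod name and namespace selectors above could be exposed as environment variables in a pod specification fragment like the following; the container and variable names are illustrative:

```yaml
spec:
  containers:
  - name: some-container
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name        # the pod name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace   # the pod namespace
```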
Secret
projected
NFS
These are the persistent volume mounts:
support.ocp36.local:/nfs/pv2 80G 7.0G 73G 9% \
  /var/lib/origin/openshift.local.volumes/pods/cbc6c6f4-fda5-11e7-a14c-525400360e56/volumes/kubernetes.io~nfs/pv2
Persistent Volume
Persistent volumes are created by the cluster administrators.
Persistent volumes can be listed with oc get pv.
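As a sketch, an NFS-backed persistent volume definition corresponding to the mount shown in the NFS section might look like this; the capacity, access mode, and reclaim policy are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 80Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # storage content is not cleared automatically on release
  nfs:
    server: support.ocp36.local
    path: /nfs/pv2
```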