OpenShift Volume Concepts
Revision as of 02:28, 10 February 2018
The Volume Mechanism
Volumes are mounted filesystems available to pods and their containers, and they may be backed by a number of host-local or network-attached storage endpoints. The simplest volume type is emptyDir, a temporary directory on a single machine. Administrators may also allow users to request and attach persistent volumes.
Volume Security
Volume Types
Persistent Volume Claim
A persistent volume claim is a request for a persistent storage resource with specific attributes, such as storage size. Persistent volume claims are created by the developers who manage the pods, and they are project-specific objects.
A persistent volume claim is matched to an available persistent volume and binds to it. This process allows a claim to be used as a volume in a pod: OpenShift finds the volume backing the claim and mounts it into the pod. Multiple PVCs within the same project can bind to the same PV. Once a PVC binds to a PV, that PV cannot be bound by a claim outside the first claim's project - the PV "belongs" to the project. If the underlying storage needs to be accessed by multiple projects, declare multiple PVs - one per project - all pointing to the same physical storage.
The pod can be disassociated from the persistent volume by deleting the persistent volume claim. The persistent volume then transitions from the "Bound" to the "Released" state. To make the persistent volume "Available" again, edit it and remove the persistent volume claim reference. Transitioning the persistent volume from "Released" to "Available" does not clear the storage content - that must be done manually.
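The claim reference can be removed either interactively with oc edit pv, or with a patch; a minimal sketch, assuming a persistent volume named blue-pv (a hypothetical name, not from this article):

```shell
# Drop the claimRef so the "Released" PV becomes "Available" again
oc patch pv blue-pv --type=json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```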
All persistent volume claims for the current project can be listed with:
oc get pvc
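A minimal claim manifest might look like the sketch below; the name and requested size are illustrative, not taken from this article:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blue-claim          # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # requested storage size attribute
```

The claim is created in the current project with oc create -f, after which it appears in the oc get pvc listing.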
EmptyDir
An "emptyDir", also known as a "temporary pod volume", is created when the pod is assigned to a node, and exists as long as the pod is running on the node. It is initially empty. Containers in a pod can all read and write the same files in the "emptyDir" volume, though the volume can be mounted at the same or different paths in each container.
When the pod is removed from the node, the data is deleted. A container crash does not remove the pod from the node, so data in an emptyDir volume survives container crashes.
The "emptyDir" volumes are stored on whatever medium backs the node (disk, network storage). The mapping onto the local file system backing the node can be discovered by identifying the container and then executing docker inspect:
"Mounts": [
{
"Source": "/var/lib/origin/openshift.local.volumes/pods/1806c74f-0ad4-11e8-85a1-525400360e56/volumes/kubernetes.io~empty-dir/emptydirvol1",
"Destination": "/something",
"Mode": "Z",
"RW": true,
"Propagation": "rprivate"
}
...
When an emptyDir is backed by a node directory, that directory is located under /var/lib/origin/openshift.local.volumes/pods/<pod-id>/volumes/kubernetes.io~empty-dir/, but the location can be configured in the Kubernetes node configuration file. How?
"emptyDir" volume storage may be restricted by a quota based on the pod’s FSGroup, if the FSGroup parameter is enabled by the cluster administrator.
This is how an emptyDir is declared as part of a deployment configuration:
spec:
  template:
    spec:
      containers:
      - ...
        volumeMounts:
        - name: blue
          mountPath: /blue
        ...
      volumes:
      - name: blue
        emptyDir:
          medium:
          sizeLimit:
medium - what type of storage medium should back this directory. The default (unspecified) is to use the node's default medium. The alternative is "Memory".
sizeLimit - the total amount of local storage allowed for this emptyDir volume. The size limit also applies to the "Memory" medium; in that case the effective size of the emptyDir is the minimum of the sizeLimit specified here and the sum of the memory limits of all containers in the pod. The default is nil, which means the limit is undefined (requires Kubernetes 1.8).
EmptyDir Operations
EmptyDir Organizatorium
- How to set quota for emptyDir volume usage on an Openshift Node? https://access.redhat.com/solutions/3110681
ConfigMap
A ConfigMap is a component that holds key/value pairs of configuration data, and that can be consumed by pods, or can be used to store configuration for OpenShift system components such as controllers. It is a mechanism to inject containers with configuration while keeping the containers agnostic of the OpenShift platform. Aside from fine-grained information like individual properties, ConfigMaps can also store coarse-grained information such as entire configuration files or JSON blobs. The ConfigMaps can populate environment variables, set command-line arguments in a container and populate configuration files in a volume.
A ConfigMap is similar to a secret, but designed to be more convenient when working with strings that do not contain sensitive information.
ConfigMap can be created from directories, files, literal values.
ConfigMaps must be created before they are consumed in pods, and they cannot be shared between projects. If a ConfigMap is updated, the pod must be redeployed for it to see the changes.
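As a sketch of the consumption mechanisms described above, assuming a ConfigMap named blue-config with a key color (both hypothetical names), the map can populate an environment variable and a file in a volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-example
spec:
  containers:
  - name: app
    image: busybox
    env:
    - name: COLOR                # populated from the ConfigMap key "color"
      valueFrom:
        configMapKeyRef:
          name: blue-config
          key: color
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config     # each key becomes a file in this directory
  volumes:
  - name: config-volume
    configMap:
      name: blue-config
```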
Downward API
The downward API allows containers to consume information about OpenShift objects. Fields within a pod are selected with the 'fieldRef' API type, which has two fields: 'fieldPath', the path of the field to select relative to the pod, and 'apiVersion'. The downward API exposes the following selectors:
- metadata.name: the pod name.
- metadata.namespace: pod namespace.
- metadata.labels
- metadata.annotations
- status.podIP
Container resource requests and limits are selected with 'resourceFieldRef', whose 'resource' field refers to the resource entries (e.g. limits.cpu, requests.memory).
The information can be exposed to pods via environment variables and volumes.
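Both exposure mechanisms can be sketched as follows; the pod and container names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-example
spec:
  containers:
  - name: app
    image: busybox
    env:
    - name: POD_NAME            # the pod name, exposed as an environment variable
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo   # the labels appear as the file /etc/podinfo/labels
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```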
hostPath
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        ...
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      ...
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
Secret
NFS
Persistent Volume
Persistent volumes are created by the cluster administrators.
Persistent volumes can be listed with oc get pv.
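A minimal persistent volume manifest, created by a cluster administrator, might look like the sketch below; the NFS server address, export path, name and size are all illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: blue-pv                             # hypothetical PV name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # released volumes keep their content
  nfs:
    server: nfs.example.com                 # hypothetical NFS server
    path: /exports/blue
```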