Persistent Volumes
A PersistentVolume (PV) is a cluster resource, just like a node, except that it is a storage resource. A PV is a piece of storage in the cluster that has already been provisioned by an administrator, and it has a lifecycle independent of any pod that uses it. The API supports several storage types, such as NFS, iSCSI, or storage from a specific cloud provider.
A PersistentVolumeClaim (PVC) is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. But what exactly is a PVC? It is simply a storage request created by a user.
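As a quick reference, you can list the PVs and PVCs in the cluster at any time with kubectl; we will use both commands throughout this chapter:
$ kubectl get pv
$ kubectl get pvc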
Local Persistent Volumes
A Local Persistent Volume (LPV) lets a pod use a disk that is physically attached to a specific node, while still going through the regular PV/PVC API.
- Difference from hostPath: hostPath is a Volume that lets a pod mount a file or directory from the host (if the path does not exist, it is created as a directory or file and then mounted). The biggest difference is whether the scheduler understands the mapping between disks and nodes: a pod using hostPath, when rescheduled, may well land on a node different from the original one, and the data inside the pod is then lost. A pod using an LPV will always be scheduled to the same node (otherwise scheduling fails), as the sketch below illustrates.
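To make that scheduling guarantee concrete, here is a minimal sketch of a Local Persistent Volume manifest, assuming the elliot-02 node from this chapter's cluster and a hypothetical disk path /mnt/disks/vol1; the required nodeAffinity block is what tells the scheduler which node the disk lives on:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: exemplo-local-pv        # hypothetical name, for illustration only
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1       # hypothetical path to a disk on the node
  nodeAffinity:                 # pods using this PV can only be scheduled to elliot-02
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - elliot-02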
NFS
We are going to set up an NFS server to act as our Persistent Volume. First, install the NFS packages on the elliot-01 node:
# Ubuntu
$ sudo apt-get install -y nfs-kernel-server
$ sudo apt-get install -y nfs-common
# CentOS
$ yum install -y nfs-utils
Now, still on the elliot-01 node, create the directory that will be shared via NFS, adjust its permissions, and add it to the NFS exports file:
$ sudo mkdir /opt/dados
$ sudo chmod 1777 /opt/dados/
$ sudo vim /etc/exports
# Add the following line
/opt/dados *(rw,sync,no_root_squash,subtree_check)
# Apply the NFS configuration on the elliot-01 node
$ sudo exportfs -a
# Restart the service
# Ubuntu
$ sudo systemctl restart nfs-kernel-server
# CentOS
$ systemctl restart nfs
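To double-check that the share is being exported, you can query the NFS server with showmount, which ships with the packages installed above; the output should list /opt/dados, roughly like this:
$ showmount -e localhost
Export list for localhost:
/opt/dados *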
Still on the elliot-01 node, let's create a file in the shared directory so that we can verify the share later:
$ sudo touch /opt/dados/FUNCIONA
Again on the elliot-01 node, create the manifest for our PersistentVolume:
$ sudo vim primeiro-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: primeiro-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/dados
    server: 10.138.0.2
    readOnly: false
Then create the PV:
$ kubectl create -f primeiro-pv.yaml
persistentvolume/primeiro-pv created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY ... AGE
primeiro-pv 1Gi RWX Retain ... 22s
$ kubectl describe pv primeiro-pv
Name: primeiro-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.138.0.2
Path: /opt/dados
ReadOnly: false
Events: <none>
Note that the Status is Available: the PV is not yet bound to any claim. Now we need to create our PersistentVolumeClaim (PVC), the storage request that pods will actually use:
$ vim primeiro-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: primeiro-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 800Mi
Then create the PVC:
$ kubectl create -f primeiro-pvc.yaml
persistentvolumeclaim/primeiro-pvc created
$ kubectl get pv
NAME CAPACITY ACCESS MOD ... CLAIM ... AGE
primeiro-pv 1Gi RWX ... default/primeiro-pvc ... 8m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES ... AGE
primeiro-pvc Bound primeiro-pv 1Gi RWX ... 3m
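Both objects now report each other: the PV shows the claim default/primeiro-pvc, and the PVC shows the volume primeiro-pv with status Bound. If you want the full detail of the claim, you can also describe it:
$ kubectl describe pvc primeiro-pvc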
Now we have a PV and a PVC bound to each other. Notice that even though the claim requested only 800Mi, it was bound to our 1Gi PV: a PVC binds to a PV whose capacity is at least the requested size. Next, let's create a Deployment that consumes this storage through the PVC:
$ vim nfs-pv.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: nfs-pv
          mountPath: /giropops
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: primeiro-pvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Create the Deployment:
$ kubectl create -f nfs-pv.yaml
deployment.apps/nginx created
$ kubectl describe deployment nginx
Name: nginx
Namespace: default
CreationTimestamp: Wed, 7 Jul 2018 22:01:49 +0000
Labels: run=nginx
Annotations: deployment.kubernetes.io/revision=1
Selector: run=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/giropops from nfs-pv (rw)
Volumes:
nfs-pv:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: primeiro-pvc
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-b4bd77674 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7s deployment-controller Scaled up replica set nginx-b4bd77674 to 1
As we can see in the Deployment details, our volume was mounted at /giropops from the PVC. Let's find the pod and describe it to confirm:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE ... NODE
nginx-b4b... 1/1 Running 0 28s elliot-02
$ kubectl describe pod nginx-b4bd77674-gwc9k
Name: nginx-b4bd77674-gwc9k
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: elliot-02/10.138.0.3
...
Mounts:
/giropops from nfs-pv (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-np77m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-pv:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: primeiro-pvc
ReadOnly: false
default-token-np77m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-np77m
Optional: false
...
Nice, right? Let's list the contents of the mounted directory inside the container:
$ kubectl exec -ti nginx-b4bd77674-gwc9k -- ls /giropops/
FUNCIONA
We can see that the file we created at the beginning is listed in the directory. Now let's create another file from inside the container, using the container's own touch command:
$ kubectl exec -ti nginx-b4bd77674-gwc9k -- touch /giropops/STRIGUS
$ kubectl exec -ti nginx-b4bd77674-gwc9k -- ls -la /giropops/
total 4
drwxr-xr-x. 2 root root 4096 Jul 7 23:13 .
drwxr-xr-x. 1 root root 44 Jul 7 22:53 ..
-rw-r--r--. 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r--. 1 root root 0 Jul 7 23:13 STRIGUS
Listing inside the container, we can see the file was created. But is it also in the shared directory on our NFS server? Let's check on the elliot-01 node:
$ ls -la /opt/dados/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS
See? Our NFS share is working as expected. Now let's delete the Deployment and see what happens to the files:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 28m
$ kubectl delete deployment nginx
deployment.extensions "nginx" deleted
$ ls -la /opt/dados/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS
As expected, the files are still there: deleting the Deployment does not delete the data stored in the persistent volume.
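If you want to clean up afterwards, a minimal sketch of the remaining steps would be deleting the claim and then the volume:
$ kubectl delete pvc primeiro-pvc
$ kubectl delete pv primeiro-pv
Because our PV was created with the Retain reclaim policy, the data in /opt/dados remains on the NFS server even after the PV object itself is deleted.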