Kubernetes Managed NFS

Entry

https://kubernetes.io/docs/concepts/storage/volumes/#nfs

This demo is interesting. Previously, to set up NFS for K8s we needed a physical NFS server, then created a PV backed by that server, and finally a PVC claiming from the PV. That meant maintaining the NFS server's health and high availability ourselves.

This example hands NFS entirely over to K8s to manage. It first takes a piece of storage and builds a PVC from it (the storage may come from a provisioner or from some other PV), then uses that PVC to back an NFS server pod and binds a cluster IP to that pod, which effectively gives us a virtual "physical NFS server node". Whenever we need a PV, we carve it out of this NFS server pod (essentially building new PVs on top of an existing PV).

Of course, to provide NFS semantics, the NFS server image has to be purpose-built: the NFS components are installed, the relevant ports are exposed, and the startup script configures the /etc/exports parameters. It also has to guarantee that existing shares are unaffected when the NFS server pod is recreated.

The benefit is that everything is managed by K8s: no need to worry about NFS high availability, and no need to build a physical NFS cluster yourself. Given a piece of storage, you can turn it into NFS.

Demo

At the time of writing, the demo YAML contains a few incorrect parameters; the versions in this blog are the corrected ones. The demo git repository: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Under the provisioner folder, it uses a StorageClass (internal provisioner) to create a PV, for example on GCE:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 20Gi
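
On GCE, the internal provisioner behind such a claim is typically backed by a StorageClass. A minimal sketch, assuming a GCE persistent-disk class (the name gce-pd-standard is illustrative, not part of the demo; the claim above picks it up when it is the default class, or via spec.storageClassName):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-standard  ## illustrative name, not from the demo
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard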

If no internal provisioner is available, you can create an external provisioner (for example, an NFS provisioner), or just create a PV with hostPath:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  capacity:
    storage: "20Gi"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs-server-pv"

The /nfs-server-pv folder will be created on the host where the NFS server pod resides.
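
Note that hostPath data does not move with the pod: if the NFS server pod gets rescheduled onto another node, it starts over an empty directory. A minimal sketch of pinning the pod to one node, assuming a nodeSelector added under the pod template's spec in the ReplicationController below (the hostname value is a placeholder for your own node):

      ## pin the NFS server pod to the node that owns /nfs-server-pv
      nodeSelector:
        kubernetes.io/hostname: node-1  ## placeholder hostname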

Then it creates a ReplicationController for the NFS server (which acts just like a physical NFS server):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        ## the nfs exports folder
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          ## mount the pvc from provisioner
          claimName: nfs-pv-provisioning-demo
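
ReplicationController is a legacy API; on current clusters the same pod template is usually managed by a Deployment. A sketch of the equivalent manifest (only the apiVersion/kind and the matchLabels selector differ; the pod template is unchanged):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:  ## Deployments require matchLabels instead of a bare selector
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: nfs-pv-provisioning-demo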

Notice that the image is purpose-built: nfs-utils is pre-installed, the NFS-specific ports are exposed, and the run_nfs.sh entrypoint sets up the export of /exports at startup. See the Dockerfile:

FROM centos
RUN yum -y install /usr/bin/ps nfs-utils && yum clean all
RUN mkdir -p /exports
ADD run_nfs.sh /usr/local/bin/
ADD index.html /tmp/index.html
RUN chmod 644 /tmp/index.html

## expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
## init script to set up this nfs server
ENTRYPOINT ["/usr/local/bin/run_nfs.sh", "/exports"]

Then create a service with a cluster IP to expose the NFS server pod:

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server

Then we can create the application PV and PVC on top of this service; refer to DNS for Services and Pods. The PV below needs the service's cluster IP, which you can look up with kubectl get svc nfs-server:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    ## a cluster IP is needed here; with suitable DNS configuration you can use the service name directly
    server: <nfs server service cluster IP>
    path: "/exports"
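
One caveat: the NFS mount is performed by the kubelet on the node, which does not necessarily resolve cluster DNS names, so the cluster IP is the safe choice here. If the nodes can resolve cluster DNS, the nfs stanza could presumably use the fully qualified service name instead (assuming the service lives in the default namespace):

  nfs:
    server: nfs-server.default.svc.cluster.local  ## assumes the default namespace
    path: "/exports"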

Then you can create a PVC that binds this PV for other workloads to use.
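
For completeness, a minimal sketch of such a PVC plus a consumer pod (the pod and its busybox image are illustrative, not part of the demo):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""  ## bind to pre-provisioned PVs only, skip dynamic provisioning
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo-pod  ## illustrative consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/nfs/hello.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /mnt/nfs
      name: nfs-volume
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: nfs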
