[Kubernetes (K8S)] Helm install nfs-server-provisioner within Kubernetes (K8S)

helm-nfs-server-provisioner-example

NFS Server Provisioner

nfs-server-provisioner - https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly and easily deploy shared storage that works almost anywhere through the Network File System (NFS).

This chart deploys the Kubernetes NFS provisioner. The provisioner includes a built-in NFS server and is not intended for connecting to a pre-existing NFS server. If you have a pre-existing NFS server, please consider using the nfs-client-provisioner - https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner instead.

This article shows how to use Helm to install nfs-server-provisioner on Kubernetes (K8S).


As of Nov 13, 2020, charts in this repo will no longer be updated. For more information, see the Helm Charts Deprecation and Archive Notice, and Update.


You can use the Azure Helm mirror - http://mirror.azure.cn/kubernetes/charts/ - in place of https://kubernetes-charts.storage.googleapis.com

Prerequisites

  • Kubernetes (K8S)
    Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

  • Helm
    Helm is the best way to find, share, and use software built for Kubernetes.

  • NFS packages and rpcbind.service
    A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally.

The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services (such as NFS).

    # CentOS
    # Install NFS utils.
    $ sudo yum install nfs-utils -y

    # Start and enable rpcbind.service
    $ sudo systemctl start rpcbind && sudo systemctl enable rpcbind

    # Start and enable rpc-statd.service
    $ sudo systemctl start rpc-statd && sudo systemctl enable rpc-statd
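
    To confirm the prerequisites on a node, you can check that both services are active; a quick check, expecting active for each:

    # Both services should report "active".
    $ systemctl is-active rpcbind rpc-statd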

Install

Use Helm to install nfs-server-provisioner into the nfs-server-provisioner namespace (or another namespace of your choice).

# Create the namespace:
$ kubectl create namespace nfs-server-provisioner

# Add the Stable Helm repository:
$ helm repo add stable http://mirror.azure.cn/kubernetes/charts/
# (Deprecation) $ helm repo add stable https://kubernetes-charts.storage.googleapis.com

# Update your local Helm chart repository cache:
$ helm repo update

# Install the Helm chart (values.yaml is described in the "Custom values.yaml" section below):
$ helm install nfs-server-provisioner stable/nfs-server-provisioner --namespace nfs-server-provisioner -f values.yaml

Check the Helm release of nfs-server-provisioner.

$ helm list --namespace nfs-server-provisioner
NAME                    NAMESPACE               REVISION  UPDATED                                 STATUS    CHART                         APP VERSION
nfs-server-provisioner  nfs-server-provisioner  1         2020-10-17 00:06:55.701473 +0800 +0800  deployed  nfs-server-provisioner-1.1.1  2.3.0

Check the nfs-server-provisioner Pod.

$ kubectl get pods -n nfs-server-provisioner
NAME                       READY   STATUS    RESTARTS   AGE
nfs-server-provisioner-0   1/1     Running   0          24h

Check the StorageClass.

$ kubectl get sc
NAME            PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs (default)   cluster.local/nfs-server-provisioner    Delete          Immediate           true                   53d
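
Because the chart sets storageClass.defaultClass: true, the nfs class is marked as the cluster default, so PersistentVolumeClaims that omit storageClassName are served by it. A quick check of the default-class annotation (the escaped dots are jsonpath syntax):

$ kubectl get sc nfs -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
true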

Custom values.yaml

With an existing storage class.

On many clusters, the cloud provider integration will create a storage class that provisions volumes (e.g. a Google Compute Engine Persistent Disk or an Amazon EBS gp2 volume) to provide persistence.

The following is a recommended configuration example when another storage class exists to provide persistence:

# cat values.yaml
persistence:
  enabled: true
  storageClass: "standard"
  size: 200Gi

storageClass:
  defaultClass: true

Without an existing storage class.

The following is a recommended configuration example for running on bare metal with a hostPath volume, when no other storage class exists to provide persistence. You need to:

  • specify nodeSelector with your node's host name or label (see the label lookup sketch after the values.yaml below);
  • create a PersistentVolume manually.

Edit values.yaml and replace the content within < and >.

# cat values.yaml

# nfs-server-provisioner 1.1.1 · helm/helm-stable
# https://artifacthub.io/packages/helm/helm-stable/nfs-server-provisioner

# Recommended configuration for running on bare metal, or when no other StorageClass exists.
persistence:
  enabled: true
  # "-" disables dynamic provisioning for the provisioner's own backing volume.
  storageClass: "-"
  size: 200Gi

storageClass:
  defaultClass: true

nodeSelector:
  # Remember to replace <Your Node Host Name or Label>.
  kubernetes.io/hostname: <Your Node Host Name or Label>
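
If you are unsure of the node's kubernetes.io/hostname value, you can list it for every node; a quick lookup using the standard label:

# Show each node's kubernetes.io/hostname label.
$ kubectl get nodes -L kubernetes.io/hostname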

In this configuration, a PersistentVolume must be created for each replica. After installing the Helm chart, inspect the PersistentVolumeClaims it creates to find the names that your PersistentVolumes must bind to.

Otherwise, the Pod nfs-server-provisioner-0 will fail with the error pod has unbound immediate PersistentVolumeClaims.
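
For example, to list the pending claims and the names your PersistentVolumes must match:

$ kubectl get pvc -n nfs-server-provisioner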

An example of the necessary PersistentVolume:

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    # Must be at least persistence.size (200Gi above) for the claim to bind.
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: nfs-server-provisioner
    name: data-nfs-server-provisioner-0
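
Before applying pv.yaml, make sure the hostPath directory exists on the node selected by nodeSelector; a sketch, assuming the path above:

# Run on the selected node.
$ sudo mkdir -p /srv/volumes/data-nfs-server-provisioner-0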

Apply pv.yaml to create the PersistentVolume.

$ kubectl apply -f pv.yaml

Then, check the PersistentVolume status.

$ kubectl get pv data-nfs-server-provisioner-0 -o wide
NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS   REASON   AGE     VOLUMEMODE
data-nfs-server-provisioner-0   200Gi      RWO            Retain           Bound    nfs-server-provisioner/data-nfs-server-provisioner-0                           7d23h   Filesystem

Dynamically Provision PersistentVolume

The following is an example of nfs-server-provisioner dynamically provisioning a PersistentVolume for a Deployment.
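
Dynamic provisioning starts from a PersistentVolumeClaim that references the nfs StorageClass; the provisioner then creates and exports a matching PersistentVolume automatically. Below is a minimal sketch of such a claim, named to match the claimName used by the Deployment that follows; the 10Gi size and ReadWriteMany access mode are illustrative assumptions:

# pvc.yaml (hypothetical file name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nfs-server-provisioner-0
spec:
  # The nfs StorageClass was marked as default above, so this line is optional.
  storageClassName: nfs
  accessModes:
    - ReadWriteMany   # NFS supports shared read-write access
  resources:
    requests:
      storage: 10Gi

Apply it with kubectl apply -f pvc.yaml before creating the Deployment.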

# nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-pvc
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: data-nfs-server-provisioner-0

Apply nginx-deployment.yaml to create the Deployment.

$ kubectl apply -f nginx-deployment.yaml

Check the Nginx Deployment.

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-64bf5ccdc8-mq26m   1/1     Running   0          13s

Then, the Pod of the Deployment mounts a volume served by nfs-server-provisioner.
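
To confirm that the volume was provisioned dynamically, list the claim and its bound volume; the NFS-backed PV name is generated by the provisioner, so yours will differ:

$ kubectl get pvc data-nfs-server-provisioner-0
$ kubectl get pv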

FAQs

rpcbind service not started

If the NFS packages are not installed or rpcbind.service is not started, the Pod cannot mount the PV and fails with an error such as:
Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[config storage nginx-deployment-token-fr5t7]: timed out waiting for the condition

You need to install the NFS packages, then start and enable rpcbind.service.

# Install NFS utils (CentOS).
$ sudo yum install nfs-utils -y

# Start and enable rpcbind.service
$ sudo systemctl start rpcbind && sudo systemctl enable rpcbind

Either use ‘-o nolock’ to keep locks local, or start statd

If the rpc-statd service is not started, you will encounter this error:

$ systemctl status
...
Dec 04 08:00:16 cloudolife-s2 kubelet[1544]: A dependency job for rpc-statd.service failed. See 'journalctl -xe' for details.
Dec 04 08:00:16 cloudolife-s2 kubelet[1544]: mount.nfs: rpc.statd is not running but is required for remote locking.
Dec 04 08:00:16 cloudolife-s2 kubelet[1544]: mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
Dec 04 08:00:16 cloudolife-s2 kubelet[1544]: mount.nfs: an incorrect mount option was specified
...

$ journalctl -u rpc-statd.service
...
Dec 04 08:00:16 cloudolife-s2 systemd[1]: Dependency failed for NFS status monitor for NFSv2/3 locking..
Dec 04 08:00:16 cloudolife-s2 systemd[1]: Job rpc-statd.service/start failed with result 'dependency'.
...

Enable and start rpc-statd to resolve it.

# Start and enable rpc-statd.service
$ sudo systemctl start rpc-statd && sudo systemctl enable rpc-statd

References

[1] nfs-server-provisioner 1.1.1 · helm/helm-stable - https://artifacthub.io/packages/helm/helm-stable/nfs-server-provisioner

[2] Helm - https://helm.sh/

[3] Kubernetes - https://kubernetes.io/

[4] nfs-client-provisioner - https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner