NFS Server Provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.
This chart deploys the Kubernetes external-storage project's NFS provisioner. The provisioner includes a built-in NFS server and is not intended for connecting to a pre-existing NFS server. If you have a pre-existing NFS server, please consider using the NFS Client Provisioner [charts/stable/nfs-client-provisioner at master · helm/charts · GitHub - https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner] instead.
This article shows how to use Pulumi, the Kubernetes (K8s) provider, a Helm Chart, and the TypeScript SDK to deploy nfs-server-provisioner within Kubernetes (K8s).
Pulumi - Modern Infrastructure as Code - https://www.pulumi.com/ is a modern Infrastructure as Code (IaC) platform to create, deploy, and manage infrastructure on any cloud using familiar programming languages and tools.
Pulumi’s Cloud Native SDK makes it easy to target any Kubernetes environment to provision a cluster, configure and deploy applications, and update them as required.
Helm Chart is a component representing a collection of resources described by an arbitrary Helm Chart.
The Chart can be fetched from any source that is accessible to the helm command line. Values in the values.yaml file can be overridden using ChartOpts.values (equivalent to --set or having multiple values.yaml files). Objects can be transformed arbitrarily by supplying callbacks to ChartOpts.transformations.
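Putting those options together, a hypothetical sketch might look like the following (the repository URL and the replicaCount override are illustrative assumptions, not taken from this article's actual configuration):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy a chart, override values, and transform rendered objects.
const chart = new k8s.helm.v3.Chart("nfs-server-provisioner", {
    chart: "nfs-server-provisioner",
    // Where helm should fetch the chart from (assumed here).
    fetchOpts: { repo: "https://charts.helm.sh/stable" },
    // Equivalent to `helm install --set replicaCount=1 ...`.
    values: { replicaCount: 1 },
    // Callbacks that may rewrite any object the chart renders.
    transformations: [
        (obj: any) => {
            if (obj.metadata !== undefined) {
                obj.metadata.namespace = "nfs-server-provisioner";
            }
        },
    ],
});
```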
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
See Getting started | Kubernetes - https://kubernetes.io/docs/setup/ to learn more.
Pulumi is a modern infrastructure-as-code platform that allows you to use common programming languages, tools, and frameworks, to provision, update, and manage cloud infrastructure resources.
Install the Pulumi - https://www.pulumi.com/ CLI.
Mac OS X
brew install pulumi
See Download and Install | Pulumi - https://www.pulumi.com/docs/get-started/install/ to learn how to install it on other operating systems.
Install the Node.js - https://nodejs.org/en/ CLI.
Mac OS X
brew install node
See Node.js - https://nodejs.org/en/ to learn how to install it on other operating systems.
Pre-existing NFS server
You must have a pre-existing NFS server installed according to NFS Server and Client Installation on CentOS 7 - https://www.howtoforge.com/nfs-server-and-client-on-centos-7, or use one provided by a third-party vendor (such as File Storage NAS: Reliable Network Attached Storage - Alibaba Cloud - https://www.alibabacloud.com/product/nas).
Create the workspace directory.
mkdir -p col-example-pulumi-typescript-nfs-server-provisioner
Log in to Pulumi, using the local file system as the backend.
pulumi login file://.
Create a new Pulumi project from the kubernetes-typescript template.
pulumi new kubernetes-typescript
The above command will create some files within the current directory.
tree . -L 1
Install the js-yaml package to load and parse YAML files.
npm i js-yaml
By default, Pulumi will look for a kubeconfig file in the following locations, just like kubectl:
The KUBECONFIG environment variable, if set.
Or the current user's default kubeconfig file: ~/.kube/config.
If the kubeconfig file is not in either of these locations, Pulumi will not find it and will fail to authenticate against the cluster. If you have not done so already, place a valid kubeconfig file in one of these locations.
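Alternatively, you can point Pulumi at a kubeconfig file explicitly instead of relying on the default lookup. A sketch (k8s.Provider's kubeconfig argument accepts either a path or the file contents):

```typescript
import * as os from "os";
import * as path from "path";
import * as k8s from "@pulumi/kubernetes";

// Prefer $KUBECONFIG if set, otherwise fall back to the default location.
const kubeconfigPath =
    process.env.KUBECONFIG ?? path.join(os.homedir(), ".kube", "config");

// An explicit provider that every resource can opt into.
const provider = new k8s.Provider("k8s", { kubeconfig: kubeconfigPath });
// Then pass { provider } in the resource options of each Kubernetes resource.
```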
Edit values.yaml and replace content within < and >.
Review and modify the main.ts file.
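A minimal main.ts for this deployment might look like the sketch below. This is a hypothetical reconstruction, not the article's exact file: the namespace name is taken from the kubectl commands in this article, while the chart repository URL is an assumption.

```typescript
import * as fs from "fs";
import * as yaml from "js-yaml";
import * as k8s from "@pulumi/kubernetes";

// Load the chart values edited in the previous step.
const values = yaml.load(fs.readFileSync("values.yaml", "utf8")) as Record<string, any>;

// Create the namespace the pods will run in.
const ns = new k8s.core.v1.Namespace("nfs-server-provisioner", {
    metadata: { name: "nfs-server-provisioner" },
});

// Deploy the nfs-server-provisioner chart with the overridden values.
const chart = new k8s.helm.v3.Chart("nfs-server-provisioner", {
    chart: "nfs-server-provisioner",
    namespace: ns.metadata.name,
    fetchOpts: { repo: "https://charts.helm.sh/stable" }, // assumed repo
    values: values,
});
```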
Run pulumi up to create the namespace and pods.
Check the nfs-server-provisioner pods.
kubectl get pods -n nfs-server-provisioner
Check the StorageClass status.
kubectl get sc
Run pulumi destroy to destroy all resources created by Pulumi.
Without another storage class
The following is a recommended configuration example for running on bare metal with a hostPath volume. You need to:
Specify nodeSelector with your node host name or label.
Create a PersistentVolume manually.
The following is a recommended configuration example when no other storage class exists to provide persistence:
Edit values.yaml and replace content within < and >.
# cat values.yaml
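A values.yaml along these lines fits this scenario. The keys are the chart's documented parameters (persistence.*, storageClass.*, nodeSelector); the size and the node host name are placeholders you must replace:

```yaml
persistence:
  enabled: true
  storageClass: "-"   # "-" disables dynamic provisioning; bind a PV manually
  size: 200Gi

storageClass:
  defaultClass: true

# Pin the NFS server pod to the node that holds the hostPath volume.
nodeSelector:
  kubernetes.io/hostname: <your-node-host-name>
```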
In this configuration, a PersistentVolume must be created for each replica. After installing the Helm chart, inspect the PersistentVolumeClaims it creates to get the names your PersistentVolumes must bind to.
Until the PersistentVolume exists, the pod nfs-server-provisioner-0 will fail with the error "pod has unbound immediate PersistentVolumeClaims".
An example of the necessary PersistentVolume:
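A sketch of such a PersistentVolume follows. The PV name matches the PVC the StatefulSet creates (data-<release-name>-0, as seen in the kubectl command below); the namespace and hostPath path are assumptions to adjust for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # Bind this PV to the PVC created by the StatefulSet.
  claimRef:
    namespace: nfs-server-provisioner   # assumed namespace
    name: data-nfs-server-provisioner-0
  hostPath:
    path: /srv/volumes/data-nfs-server-provisioner-0   # assumed path
```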
Apply pv.yaml to create a PersistentVolume.
kubectl apply -f pv.yaml
Then, check the PersistentVolume status.
kubectl get pv data-nfs-server-provisioner-0 -o wide
If mounting fails with an error like the following:

MountVolume.SetUp failed for volume "pvc-c10001c32-f2dd-4f8c-8d19-bd8c2e01b5cf" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs -o nolock,tcp,noresvport,vers=4.1 bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
Install the NFS client packages on every node (for example, nfs-utils on CentOS or nfs-common on Ubuntu/Debian) to solve that issue.
If mounting times out with an error like the following:

Unable to attach or mount volumes: unmounted volumes=[config storage xxxxxx-volume], unattached volumes=[config storage xxxxxx-volume]: timed out waiting for the condition

Add the nolock,tcp,noresvport mount options to fix that issue.
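One place to set these options is the chart's storageClass.mountOptions value, a sketch of which follows (re-run pulumi up after editing values.yaml):

```yaml
storageClass:
  mountOptions:
    - nolock
    - tcp
    - noresvport
```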
 [raphaelmonrouzeau/nfs-server-provisioner-chart: An Helm chart for kubernetes-sigs/nfs-ganesha-server-and-external-provisioner - https://github.com/raphaelmonrouzeau/nfs-server-provisioner-chart]
 [nfs-server-provisioner 1.3.0 · raphael/raphael - https://artifacthub.io/packages/helm/raphael/nfs-server-provisioner]