[Infrastructure as Code (IaC) Pulumi] Use the Pulumi Kubernetes (K8S) Helm Chart to deploy Rook Ceph for distributed storage

Rook Storage Operators for Kubernetes

Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

This article shows how to use Pulumi with the Kubernetes (K8S) provider, Helm Chart support, and the TypeScript SDK to deploy Rook Ceph on Kubernetes (K8S).

Prerequisites

  • The Pulumi CLI and Node.js (with npm) installed

  • A running Kubernetes (K8S) cluster and a kubeconfig that can access it

Usage

Pulumi New

Create the workspace directory.

$ mkdir -p col-example-pulumi-typescript-rook-ceph

$ cd col-example-pulumi-typescript-rook-ceph

Pulumi login into local file system.

$ pulumi login file://.
Logged in to cloudolife as cloudolife (file://.)
or visit https://pulumi.com/docs/reference/install/ for manual instructions and release notes.

Pulumi new a project with kubernetes-typescript SDK.

$ pulumi new kubernetes-typescript

The above command will create some files within the current directory.

$ tree . -L 1
.
├── node_modules/
├── package.json
├── package-lock.json
├── Pulumi.dev.yaml
├── Pulumi.yaml
└── main.ts

Install the js-yaml package to load and parse YAML files.

$ npm i js-yaml

Pulumi Configuration

Configure Kubernetes

By default, Pulumi will look for a kubeconfig file in the following locations, just like kubectl:

  • The environment variable: $KUBECONFIG,

  • Or in current user’s default kubeconfig directory: ~/.kube/config

If the kubeconfig file is not in either of these locations, Pulumi will not find it, and it will fail to authenticate against the cluster. Set one of these locations to a valid kubeconfig file, if you have not done so already.
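
If your kubeconfig lives somewhere else, you can also point Pulumi at it explicitly with a k8s.Provider and pass that provider to your resources. A minimal sketch (the path below is an example, not something the charts require):

// Hypothetical: use a kubeconfig from a non-default path.
import * as k8s from "@pulumi/kubernetes";
const fs = require('fs');

const provider = new k8s.Provider("k8s-provider", {
    // Accepts either the contents of a kubeconfig file or a path to one.
    kubeconfig: fs.readFileSync("/path/to/kubeconfig", "utf8"),
});

// Pass the provider explicitly via resource options, e.g.:
// new k8s.core.v1.Namespace("rook-ceph", { ... }, { provider });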

Ceph Operator Helm Chart

The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.

Ceph Operator Helm Chart | Ceph Docs - https://rook.io/docs/rook/v1.7/helm-operator.html

Configure Values.yaml

Create or edit values.rook-ceph.yaml, replacing any content within {{ }}.

# values.rook-ceph.yaml

# rook/values.yaml at master · rook/rook
# https://github.com/rook/rook/blob/master/cluster/charts/rook-ceph/values.yaml
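
The operator chart deploys with sensible defaults, so this file can stay empty apart from the comments above. If you do want overrides, a few commonly tuned keys look like the following sketch (key names taken from the rook-ceph chart's values.yaml for v1.7; verify them against the chart version you deploy):

# Example overrides only; remove what you do not need.
image:
  repository: rook/ceph
  tag: v1.7.4

csi:
  enableRbdDriver: true
  enableCephfsDriver: true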

Create or modify the main.ts file as follows.

// main.ts

import * as pulumi from "@pulumi/pulumi";

import * as k8s from "@pulumi/kubernetes";

const yaml = require('js-yaml');
const fs = require('fs');

const nameRookCeph = "rook-ceph"

// kubernetes.core/v1.Namespace | Pulumi
// https://www.pulumi.com/docs/reference/pkg/kubernetes/core/v1/namespace/
const namespaceRookCeph = new k8s.core.v1.Namespace(nameRookCeph, {
    metadata: {
        name: nameRookCeph,
    },
})

// Load the operator chart values from the local YAML file.
// Note: js-yaml v4 renamed safeLoad() to load(); adjust if you use v4.
// Declared with `let` because it is reassigned for the cluster chart later.
let values = yaml.safeLoad(fs.readFileSync("./values.rook-ceph.yaml", 'utf8'))

// Install Rook Ceph Operator
const chartNameRookCeph = "rook-ceph"

const chartRookCeph = new k8s.helm.v3.Chart(chartNameRookCeph, {
    chart: chartNameRookCeph,
    version: "1.7.4",
    fetchOpts: {
        repo: "https://charts.rook.io/release",
    },
    namespace: namespaceRookCeph.metadata.name,
    values: values,
});
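
Optionally, export the namespace as a stack output so it can be read later with pulumi stack output (a small convenience, not required by the charts):

// Export the namespace name as a stack output.
export const rookCephNamespace = namespaceRookCeph.metadata.name;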

Pulumi Up

Run pulumi up to create the namespace and pods.

$ pulumi up

Check the pods in the rook-ceph namespace.

$ kubectl get pods -n rook-ceph
NAME                                 READY   STATUS    RESTARTS      AGE
rook-ceph-operator-5597fdc77-zfzgb   1/1     Running   2 (22m ago)   81m

Ceph Cluster Helm Chart

This chart creates the Rook resources needed to configure a Ceph cluster, using the Helm package manager. It is a simple packaging of templates that will optionally create Rook resources such as:

  • CephCluster, CephFilesystem, and CephObjectStore CRs

  • Storage classes to expose Ceph RBD volumes, CephFS volumes, and RGW buckets

  • Ingress for external access to the dashboard

  • Toolbox

Ceph Cluster Helm Chart | Ceph Docs - https://rook.io/docs/rook/v1.7/helm-ceph-cluster.html

Edit values.rook-ceph-cluster.yaml and replace content within {{ }}.

# values.rook-ceph-cluster.yaml

# rook/values.yaml at master · rook/rook
# https://github.com/rook/rook/blob/master/cluster/charts/rook-ceph-cluster/values.yaml

# Installs a debugging toolbox deployment
toolbox:
  enabled: true

monitoring:
  # requires Prometheus to be pre-installed
  # enabling will also create RBAC rules to allow Operator to create ServiceMonitors
  enabled: true

# All values below are taken from the CephCluster CRD
# More information can be found at [Ceph Cluster CRD](/Documentation/ceph-cluster-crd.md)
cephClusterSpec:

  mgr:
    # When higher availability of the mgr is needed, increase the count to 2.
    # In that case, one mgr will be active and one in standby. When Ceph updates which
    # mgr is active, Rook will update the mgr services to match the active mgr.
    count: 2

  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true

  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: true

ingress:
  dashboard:
    annotations:
      kubernetes.io/ingress.class: nginx
      # external-dns.alpha.kubernetes.io/hostname: example.com
      # nginx.ingress.kubernetes.io/rewrite-target: /ceph-dashboard/$2
    host:
      name: {{ .Values.host }}
      # path: "/ceph-dashboard(/|$)(.*)"
    tls:
    - secretName: {{ .Values.tls.secretName }}
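
Rather than hand-editing the {{ }} placeholders, the host and TLS secret name can also be injected from Pulumi configuration after the YAML is loaded. A minimal sketch, where dashboardHost and dashboardTlsSecret are hypothetical config keys:

// Hypothetical: fill the ingress placeholders from Pulumi config,
// e.g. `pulumi config set dashboardHost ceph.example.com`.
const cfg = new pulumi.Config();
values.ingress.dashboard.host.name = cfg.require("dashboardHost");
values.ingress.dashboard.tls = [{ secretName: cfg.require("dashboardTlsSecret") }];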

Append the following code to the main.ts file.

// main.ts

// Install Rook Ceph Cluster
values = yaml.safeLoad(fs.readFileSync("./values.rook-ceph-cluster.yaml", 'utf8'))

const chartNameRookCephCluster = "rook-ceph-cluster"

const chartRookCephCluster = new k8s.helm.v3.Chart(chartNameRookCephCluster, {
    chart: chartNameRookCephCluster,
    version: "1.7.4",
    fetchOpts: {
        repo: "https://charts.rook.io/release",
    },
    namespace: namespaceRookCeph.metadata.name,
    values: values,
});

Pulumi Up

Run pulumi up again to create the Ceph cluster resources and pods.

$ pulumi up

Check the pods in the rook-ceph namespace again.

$ kubectl get pods -n rook-ceph
NAME                                                   READY   STATUS      RESTARTS       AGE
csi-cephfsplugin-4xqgf                                 3/3     Running     3 (49m ago)    103m
csi-cephfsplugin-8qsxx                                 3/3     Running     3 (46m ago)    103m
csi-cephfsplugin-mkmsz                                 3/3     Running     3 (44m ago)    103m
csi-cephfsplugin-provisioner-b54db7d9b-fhf7m           6/6     Running     6 (46m ago)    103m
csi-cephfsplugin-provisioner-b54db7d9b-qpw2q           6/6     Running     11 (44m ago)   103m
csi-rbdplugin-4mstv                                    3/3     Running     3 (49m ago)    102m
csi-rbdplugin-58c84                                    3/3     Running     3 (44m ago)    103m
csi-rbdplugin-9gbxk                                    3/3     Running     3 (46m ago)    103m
csi-rbdplugin-provisioner-5845579d68-d2lct             6/6     Running     6 (46m ago)    103m
csi-rbdplugin-provisioner-5845579d68-hgm5k             6/6     Running     7 (43m ago)    103m
rook-ceph-crashcollector-cloudolife-694f4bdf9f-vglpw   1/1     Running     0              31s
rook-ceph-crashcollector-cloudolife-5f6bf9b776-rs5gq   1/1     Running     0              31s
rook-ceph-crashcollector-cloudolife-5f48565757-lnkp7   1/1     Running     0              43s
rook-ceph-mgr-a-cb65bdc78-2x5sd                        2/2     Running     0              52s
rook-ceph-mgr-b-97cf45c4d-sp727                        2/2     Running     0              51s
rook-ceph-mon-a-5cff77b977-27s52                       1/1     Running     0              107s
rook-ceph-mon-b-595b488967-m7xch                       1/1     Running     0              93s
rook-ceph-mon-c-7bf54cd7c6-rfqvq                       1/1     Running     0              75s
rook-ceph-operator-5597fdc77-zfzgb                     1/1     Running     2 (44m ago)    103m
rook-ceph-osd-0-665f56588d-kwggn                       1/1     Running     0              31s
rook-ceph-osd-1-5f5857f647-xtl2s                       1/1     Running     0              31s
rook-ceph-osd-2-9c7b6dfb6-cq6hc                        1/1     Running     0              10s
rook-ceph-osd-prepare-cloudolife--1-mqvkk              0/1     Completed   0              44s
rook-ceph-osd-prepare-cloudolife--1-7ktxv              0/1     Completed   0              44s
rook-ceph-osd-prepare-cloudolife--1-8f2xt              0/1     Completed   0              43s
rook-ceph-tools-96c99fbf-lpjmb                         1/1     Running     0              2m1s

Then, you can visit the Ceph Dashboard at http://<Your Ceph Dashboard Host>.
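
Log in as the admin user. Rook stores the generated dashboard password in a Kubernetes secret, which can be read like this (per the Rook dashboard documentation):

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode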

Initialize RBD Pool

Remember to initialize the RBD pool before using it:

  • Execute the rbd pool init <pool_name> command from the toolbox or ceph-csi pods (see the example after this list).

  • Restart the csi-rbdplugin-provisioner-xxx pods.

    $ kubectl -n rook-ceph delete pods -l app=csi-rbdplugin-provisioner
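
For example, assuming the default pool name ceph-blockpool created by the rook-ceph-cluster chart, the init command can be run through the toolbox deployment (a sketch; substitute your pool name):

$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd pool init ceph-blockpool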

Test Ceph Block Storage Class

Create or edit a pvc.yaml manifest file.

# pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ceph-block
  labels:
    app: test
spec:
  storageClassName: ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply the manifest and check the result.

$ kubectl apply -f pvc.yaml

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-ceph-block   Bound    pvc-ce500448-7ce8-42a4-b9aa-f96bd005fd5b   1Gi        RWO            ceph-block     19
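
Since the rest of the stack is already managed by Pulumi, the same claim can alternatively be declared in main.ts (a sketch using the Pulumi Kubernetes SDK; ceph-block is the storage class created by the cluster chart):

// Equivalent PVC declared with Pulumi instead of kubectl apply.
const testPvc = new k8s.core.v1.PersistentVolumeClaim("test-ceph-block", {
    metadata: {
        name: "test-ceph-block",
        labels: { app: "test" },
    },
    spec: {
        storageClassName: "ceph-block",
        accessModes: ["ReadWriteOnce"],
        resources: { requests: { storage: "1Gi" } },
    },
});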

Pulumi Destroy

Destroy all resources created by Pulumi.

$ pulumi destroy

FAQs

cephobjectstores.ceph.rook.io “ceph-objectstore” already exists or cephfilesystems.ceph.rook.io “ceph-filesystem” already exists when running pulumi up

$ pulumi up
kubernetes:ceph.rook.io/v1:CephObjectStore (rook-ceph/ceph-objectstore):
error: resource rook-ceph/ceph-objectstore was not successfully created by the Kubernetes API server : object is being deleted: cephobjectstores.ceph.rook.io "ceph-objectstore" already exists

kubernetes:ceph.rook.io/v1:CephFilesystem (rook-ceph/ceph-filesystem):
error: resource rook-ceph/ceph-filesystem was not successfully created by the Kubernetes API server : object is being deleted: cephfilesystems.ceph.rook.io "ceph-filesystem" already exists

Clear the finalizers on the previous custom resources related to ceph.rook.io so they can be deleted.

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge -n rook-ceph CephObjectStore ceph-objectstore

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge -n rook-ceph CephFilesystem ceph-filesystem

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge -n rook-ceph CephCluster rook-ceph

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge -n rook-ceph CephBlockPool ceph-blockpool

Then clear the finalizers on the CRDs related to ceph.rook.io.

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge crd cephblockpools.ceph.rook.io

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge crd cephclusters.ceph.rook.io

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge crd cephfilesystems.ceph.rook.io

$ kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge crd cephobjectstores.ceph.rook.io

Then, run pulumi up.

failed to read label for lvm2

Some Linux distributions do not ship with the lvm2 package. This package is required on all storage nodes in your k8s cluster. Please install it using your Linux distribution’s package manager; for example:

# CentOS
$ sudo yum install -y lvm2

# Ubuntu
$ sudo apt-get install -y lvm2

PersistentVolumeClaim or PersistentVolume creation hangs and fails: failed to provision volume with StorageClass “ceph-block”: an operation with the given Volume ID pvc-ID already exists

In particular, ceph-csi v3.4.0 (built with the Ceph Pacific base image) and Rook v1.7.1 (which ships with cephcsi v3.4.0 by default) are affected by this issue.

See New BlockPool / SC + Parallel RBD Volume Creation hangs and fails · Issue #8696 · rook/rook - https://github.com/rook/rook/issues/8696 and failed to provision volume with StorageClass “ceph-block”: an operation with the given Volume ID pvc-ID already exists · Issue #8749 · rook/rook - https://github.com/rook/rook/issues/8749 to learn more.

  • Execute the rbd pool init <pool_name> command from the toolbox or ceph-csi pods, as described in Initialize RBD Pool above.

  • Restart the csi-rbdplugin-provisioner-xxx pods.

    $ kubectl -n rook-ceph delete pods -l app=csi-rbdplugin-provisioner

References

[1] Ceph Operator Helm Chart | Ceph Docs - https://rook.io/docs/rook/v1.7/helm-operator.html

[2] Ceph Cluster Helm Chart | Ceph Docs - https://rook.io/docs/rook/v1.7/helm-ceph-cluster.html

[3] Rook - https://rook.io/

[4] Ceph.io — Home - https://ceph.io/en/