What is OpenEBS?

OpenEBS is one of the leading open-source projects for container-native and container-attached storage on Kubernetes. By dedicating a storage controller to each workload, OpenEBS follows the Container Attached Storage (CAS) approach. It offers granular storage policies and isolation, which let users tune storage to the needs of each workload. The project does not depend on Linux kernel modules and runs entirely in userspace. It is a Cloud Native Computing Foundation Sandbox project and is useful in a variety of situations, including clusters running in the public cloud, air-gapped clusters running in isolated environments, and on-premises clusters.

What is CAS?

First and foremost, CAS is the acronym for Container Attached Storage. Traditionally, Kubernetes storage has been maintained outside the cluster, provided by an external resource or shared file system such as Amazon EBS, GCE PDs, NFS, GlusterFS, or Azure Disks. In most of these cases, storage is tied to the nodes in the form of OS kernel modules. The same applies to Persistent Volumes, which are tightly coupled with those modules and therefore behave like monolithic, legacy resources. What CAS provides is a way for Kubernetes to treat storage entities like microservices. As a whole, CAS is divided into two elements: the data plane and the control plane. The data plane is a collection of Pods that run close to the workload and translate the IO transactions carried out during reads and writes.

The control plane, on the other hand, is a set of Custom Resource Definitions (CRDs) that manage the low-level storage entities. This clean separation between the data plane and the control plane gives users the same kind of advantages that microservices bring to Kubernetes. The architecture decouples storage from persistence, which keeps workloads portable. It also allows operators and admins to dynamically shape a volume to fit the workload, which is commonly referred to as scale-out capability.

This structure puts the compute (the Pod) and the data (the PV) into a hyper-converged mode, which improves fault tolerance and throughput.

What makes OpenEBS different from other storage solutions?

Some of the qualities that make OpenEBS quite different from the traditional storage engines are:

  1. Much like the applications it serves, OpenEBS is built on a micro-service architecture. When OpenEBS is deployed, its components are installed as containers on the Kubernetes worker nodes, and Kubernetes itself is used to orchestrate and manage them.
  2. Portability is a strength of OpenEBS as an open-source storage option because it is built entirely in userspace, which keeps it free of cross-platform problems.
  3. The use of Kubernetes makes the system intent-driven, following the same principles that make Kubernetes easy for users to work with.
  4. On the usability front, another benefit of OpenEBS is that it lets users choose from a wide range of storage engines, so a person can pick the engine that matches the design and objectives of their application. Whatever engine is chosen, OpenEBS provides a strong framework for manageability, snapshots, availability, and clones. For example, Cassandra is a distributed application that requires low-latency writes, so it can use the Local PV engine. Similarly, the ZFS engine is recommended for monolithic applications such as PostgreSQL and MySQL that need resilience. For streaming applications, professionals often recommend the NVMe-based engine called MayaStor, which promises the best performance. A minimal StorageClass sketch showing how an engine is selected follows this list.
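
To make the engine choice concrete, here is a minimal StorageClass sketch that selects the cStor engine. The class name openebs-cstor-example and the pool name cstor-disk-pool are placeholder assumptions, not objects created by a default install; the annotation format follows the non-CSI cStor provisioner used by this OpenEBS release.

# Hypothetical StorageClass selecting the cStor engine (names are placeholders)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-example
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi

A workload then simply references this class in its PersistentVolumeClaim, so the engine choice stays a storage concern rather than an application concern.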

After deploying OpenEBS, you get a number of storage services, including:

  1. Storage management on the Kubernetes worker nodes is automated, allowing the storage to be used for dynamic provisioning of Local PVs and OpenEBS PVs.
  2. Data persistence across nodes is improved, saving the time that is often lost rebuilding, for example, Cassandra rings.
  3. Data is synchronized across availability zones and cloud providers, which improves the availability of the required data and reduces attach and detach times.
  4. It acts as a common layer, so the wiring and developer experience for storage services stays the same whether one is using bare metal, AKS, AWS, or GKE.
  5. Since OpenEBS is a Kubernetes-native solution, administrators and developers can interact with and manage it using familiar tools such as Helm, Prometheus, kubectl, Grafana, and Weave Scope.
  6. Tiering of data to and from S3 and other targets is managed properly.

OpenEBS Architecture

As already noted, OpenEBS follows the CAS (container attached storage) model. In keeping with this model, each OpenEBS volume has a dedicated controller pod and a set of replica pods. Having discussed the idea behind CAS, it is easy to see how this architecture makes OpenEBS so user friendly: operating OpenEBS feels much like operating any other cloud-native Kubernetes project.

Here is a diagram of the OpenEBS architecture. All the important functional components are shown, and we will discuss each of them briefly.


Figure 1: OpenEBS Architecture

There are many components in the OpenEBS system. As a whole, they can be categorized into the following groups.

  1. Control Plane: This includes the API server, provisioner, volume sidecars, and volume exporters.
  2. Data Plane: Jiva, cStor, and LocalPV
  3. Node Disk Manager: Monitoring, discovering, and managing the media that are attached to the nodes of Kubernetes.
  4. Cloud-native tools integrations: The integrations are done via Grafana, Jaeger, Prometheus, and Fluentd.

Control Plane of OpenEBS

The control plane of an OpenEBS cluster is also referred to as “Maya”. It has several functions, including provisioning volumes, volume-related actions such as making clones and taking snapshots, storage policy enforcement, storage policy creation, exporting volume metrics for consumption by Prometheus/Grafana, and many others.

Figure 2: Control Plane

The dynamic provisioner is the standard Kubernetes storage plugin, and the main task of the OpenEBS PV provisioner is to implement the Kubernetes specification for PVs and to initiate volume provisioning. The m-apiserver exposes the storage REST API and handles volume policy management and bulk processing. Looking at the connection between the data plane and the control plane, we see a sidecar pattern. Here are some examples of situations where the data plane has to communicate with the control plane:

  • For volume statistics such as throughput, latency, and IOPS, a volume-exporter sidecar is used.
  • For disk or pool management by the volume replica pod, and for volume policy enforcement by the volume controller pod, volume-management sidecars are used.

Let’s talk about the above-mentioned components of the control plane:

Figure 3: PV Provisioner

The main function of this component, which runs as a pod, is to make provisioning decisions. The mechanism is simple: a developer creates a claim with the required volume parameters, chooses the right storage class, and applies the YAML specification (for example with kubectl). The OpenEBS PV provisioner then interacts with the maya-apiserver to create the deployment specifications required for the volume controller pod and volume replica pods on the nodes. Scheduling of the volume pods can be controlled using annotations in the PVC specification. At present, OpenEBS supports only iSCSI binding.
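
As a rough sketch of that flow, the claim below asks for a 5Gi volume from the openebs-jiva-default storage class (one of the default classes); the claim name demo-vol-claim is a placeholder.

# Hypothetical PVC a developer might apply; the provisioner and maya-apiserver
# then create the controller and replica pod deployments for this volume.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Applying it, for example with kubectl apply -f pvc.yaml, is what triggers the interaction described above.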

Maya-ApiServer

Figure 4: Maya-Apiserver

The main task of the m-apiserver, which also runs as a pod, is to expose the OpenEBS REST APIs. When volume pods need to be created, the m-apiserver generates the required deployment specifications, schedules the pods appropriately, and invokes the kube-apiserver. Once this process finishes, a PV object is created and mounted on the application pod. The PV is hosted by the controller pod with the help of the replica pods; both the controller and replica pods are important parts of the data plane.

Another major task of the m-apiserver is volume policy management. OpenEBS uses granular specifications to express policies. The m-apiserver interprets these YAML specifications, converts them into enforceable components, and enforces them through the volume-management sidecars.

Maya Volume Exporter

Each storage controller pod (cStor and Jiva) has a sidecar called the Maya volume exporter. These sidecars connect the data plane with the control plane so that statistics can be retrieved. The granularity of the statistics is always at the volume level. Some examples are:

  1. Volume write latency
  2. Volume read latency
  3. Write IOPS
  4. Read IOPS
  5. Write block size
  6. Read block size
  7. Capacity status

Figure 5: Volume exporter data flow

Volume Management Sidecars

Volume-management sidecars have two main functions: one is to pass volume policies and controller configuration parameters to the volume controller pod (the data plane), and the other is to pass replica configuration parameters and replica data-protection parameters to the volume replica pod.

Figure 6: Volume management side-car

Data Plane of OpenEBS

One thing the OpenEBS architecture is loved for is the choice it gives users over storage engines. Users can configure the storage engine to match the configuration and characteristics of their workloads. For example, a high-IOPS database can use a different storage engine than a read-heavy, shared CMS workload. The data plane offers three storage engine choices: Jiva, cStor, and Local PV.

cStor is the most popular storage engine option in OpenEBS. It is a lightweight yet feature-rich engine, which makes it particularly well suited for HA workloads such as databases. Its features are enterprise-grade, including on-demand capacity and performance expansion, high data resilience, data consistency, synchronous data replication, clones, snapshots, and thin provisioning. Even with a single replica, cStor synchronous replication provides stateful Kubernetes Deployments with high availability. When an application requests high availability of data, cStor creates three replicas and writes the data to them synchronously; this replication protects against data loss.

Jiva is the earliest storage engine in OpenEBS and is very simple to use. Part of what makes it so convenient is that the engine runs entirely in user space and offers standard block storage capabilities such as synchronous replication. If you have a small application with no option for adding block storage devices, Jiva might be the right choice. The opposite is also true: the engine is not well suited to workloads that require advanced storage features or high performance.

The simplest storage engine in OpenEBS is the Local PV, or Local Persistent Volume. It is simply a disk directly attached to a single Kubernetes node. Using this familiar API, Kubernetes can deliver high-performance local storage. In short, an OpenEBS Local PV lets the user create a persistent volume out of local disks or paths on the nodes. This can be very useful for applications that do not need advanced storage features like clones, replication, and snapshots, such as cloud-native applications that handle their own HA and replication (for example, applications deployed as a StatefulSet).

Feature | Jiva | cStor | Local PV
Snapshots and cloning support | Basic | Advanced | No
Data consistency | Yes | Yes | NA
Backup and restore using Velero | Yes | Yes | Yes
Suitable for high capacity workloads | | Yes | Yes
Thin Provisioning | | Yes | No
Disk pool or aggregate support | | Yes | No
On demand capacity expansion | | Yes | Yes
Data resiliency (RAID Support) | | Yes | Yes*
Near disk performance | No | No | Yes

The three currently available storage engines are not the only options, though. The OpenEBS community is working on new engines; they are still prototypes and need proper testing before they reach general availability. For example, MayaStor is a data engine written in Rust that is expected to arrive soon: it is a low-latency engine and is especially useful for applications that need API access to block storage and near-disk performance. In addition, a variant called ZFS Local PV is gaining recognition for addressing the shortcomings of Local PV.

Node Device Manager

Managing persistent storage for stateful applications in Kubernetes involves a number of tools, and the Node Device Manager (NDM) fills an important gap among them. DevOps architects have to serve the infrastructure needs of application developers and the applications themselves in a consistent and resilient way. To make that possible, the storage stack must be flexible enough for the cloud-native ecosystem to consume it easily. NDM helps here by unifying disparate disks and providing the capacity to pool them, which it does by identifying disks as Kubernetes objects. It also helps manage the disk subsystems for Kubernetes PV provisioners such as OpenEBS, for Prometheus, and for other systems. Its functions include provisioning, managing, and monitoring the underlying disks.

Figure 7: NDM

Integrations with Cloud-Native Tools

Grafana and Prometheus: Prometheus is installed as a microservice during the initial setup of the OpenEBS operator. The volume policy controls Prometheus monitoring on a per-volume basis. Together, Prometheus and Grafana help the OpenEBS community monitor persistent data.

WeaveScope: WeaveScope is used to view tags, metadata, and metrics related to a container, process, host, or service, and is therefore regarded as an important part of cloud-native visualization within Kubernetes. For the WeaveScope integration, components such as volume pods, Node Disk Manager components, and other Kubernetes storage structures are enabled, making it possible to traverse and explore them.

How is data protected?

Kubernetes protects the data in several ways. For example, if the IO container with the iSCSI target fails, Kubernetes spins it back up. The same principle applies to the replica containers where the data is stored. OpenEBS secures multiple replicas with a configurable quorum, i.e. minimum requirements on the replicas. In addition, cStor checks for silent data corruption and can fix it in the background.

How to Install and Get Started

The first thing to do is confirm the iSCSI client settings. OpenEBS provides block volume support through the iSCSI protocol, so the iSCSI initiator must be present on all Kubernetes nodes during installation. The method for verifying the iSCSI client installation differs by operating system. If it is not installed yet, here is the whole process for Ubuntu users as an example.

As already discussed, making sure the iSCSI services are running on all the worker nodes is required for OpenEBS to function properly. Follow the steps below on a Linux (Ubuntu) platform.

Configuration:

In case you already have the iSCSI initiator installed on your system, use the commands below to check the initiator name configuration and the status of the iSCSI service:

sudo cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid

After you have successfully run the commands, the system will show whether the service is running. If the status shows “Inactive”, type the following command to restart the iscsid service:

sudo systemctl enable iscsid && sudo systemctl start iscsid 

If you provide the right commands then the system will give you the following output:

systemctl status iscsid
● iscsid.service - iSCSI initiator daemon (iscsid)
   Loaded: loaded (/lib/systemd/system/iscsid.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-02-18 11:00:07 UTC; 1min 51s ago
     Docs: man:iscsid(8)
  Process: 11185 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
  Process: 11170 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=0/SUCCESS)
 Main PID: 11187 (iscsid)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/iscsid.service
           ├─11186 /sbin/iscsid
           └─11187 /sbin/iscsid

If you do not have the iSCSI initiator installed on the node, install the “open-iscsi” packages with the following commands:

sudo apt-get update
sudo apt-get install open-iscsi
sudo systemctl enable iscsid && sudo systemctl start iscsid

If you already have a Kubernetes environment, you can easily deploy OpenEBS with the following command:

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

After that, you can start running workloads against OpenEBS. Many workloads simply use the default OpenEBS storage classes, and you do not have to be very specific about them. That is the easy way, but taking the time to pick specific storage classes will save time later and help with workload customization in the long run. The default OpenEBS reclaim policy is the same as the one used by Kubernetes: “Delete” is the default reclaim policy for dynamically provisioned PersistentVolumes, meaning that when the corresponding PersistentVolumeClaim is deleted, the dynamically provisioned volume is deleted automatically. For cStor volumes, the data is deleted along with it. For Jiva (version 0.8.0 onwards), scrub jobs do the work of data deletion. The completed jobs can easily be deleted with the following command:

kubectl delete job <job_name> -n <namespace>
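
To find the names of the completed scrub jobs to pass to that command, a simple listing in the OpenEBS namespace is usually enough (assuming the default openebs namespace):

kubectl get jobs -n openebs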

Before you provision Jiva and cStor volumes, the first thing to do is verify the iSCSI client. It is always recommended to finish setting up the iSCSI client and to ensure that the iscsid service is up and running on each worker node; this is required for a proper and smooth OpenEBS installation.

Also, remember that a cluster-admin user context is mandatory for installing OpenEBS. If you do not have one, create it and use it during the process. You can create one with the following command:

kubectl config set-context NAME [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]

Here is an example of the above command:

kubectl config set-context admin-ctx --cluster=gke_strong-eon-153112_us-central1-a_rocket-test2 --user=cluster-admin

After that type in the following command to set the newly created context or the existing one. See the example below:

kubectl config use-context admin-ctx

Installation Process through helm

Before initiating the process, check that Helm is installed on your system and that the helm repo is up to date.

For the second version of Helm:

First, run “helm init” to install the tiller pod under the “kube-system” namespace, and then follow the instructions given below to set up RBAC for tiller. To check the installed version of helm, type the following command:

helm version

Here is an example of the output:

Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

If you are using the default mode for installation, use the below-given command for installing OpenEBS in the “openebs” namespace:

helm install --namespace openebs --name openebs stable/openebs --version 1.10.0

For the third version of Helm:

You can check the installed version of Helm 3 with the following command:

helm version

Here is an example of the output:

version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

There are two ways by which you can install OpenEBS with the help of helm v3. Let’s discuss them one by one.

First Option: In this method, helm takes the current namespace from the local kube configuration and uses it when you run helm commands; if none is set, helm uses the default namespace. To get started, point your current context at the openebs namespace and then install OpenEBS into it with the commands below.

You can view your current context with the following line of code:

kubectl config current-context

Set the namespace of the current context to openebs:

kubectl config set-context <current_context_name> --namespace=openebs

For creating an OpenEBS namespace:

kubectl create ns openebs

Then install OpenEBS with openebs as the chart name. Use the following command:

helm install openebs stable/openebs --version 1.10.0

Finally, run the following to view the chart:

helm ls

By following the above-described steps, you will have installed OpenEBS in the openebs namespace with the chart name openebs.

Second Option: The second option is to mention the namespace directly in the helm command. Follow the steps below in order.

Creating namespace for OpenEBS 

kubectl create ns openebs

With the chart name openebs, install the openebs system. The command is given below:

helm install --namespace openebs openebs stable/openebs --version 1.10.0


To view the charts use the following code:

helm ls -n openebs

After this, you will have an installed version of the OpenEBS with chart name and namespace openebs.

There are some things you need to note:

  • From Kubernetes 1.12 onwards, a container must set its resource requests and limits, otherwise the pod risks eviction. Before installing, we suggest setting these values in the OpenEBS pod specs in the operator YAML first (see the sketch after this list).
  • Before you install the OpenEBS operator, check the mount status of the block device on Nodes.
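
As a minimal sketch of such settings, the snippet below adds requests and limits to an OpenEBS container in the operator YAML; the values shown are placeholder assumptions and should be sized for your own cluster.

# Hypothetical resource settings for an OpenEBS container in the operator YAML
resources:
  requests:
    memory: "200Mi"
    cpu: "250m"
  limits:
    memory: "500Mi"
    cpu: "500m"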

If you have continued with the custom installation mode, you will come across the following advanced configurations:

  • You can choose nodes for the OpenEBS control plane pods.
  • The node selection is available for the OpenEBS storage pool too.
  • If you do not need the disk filters, you can simply exclude them.
  • The OpenEBS operator YAML includes optional environment variable configuration.

If you would like to proceed with the custom installation, download openebs-operator-1.10.0, update the configurations, and then apply it with the “kubectl” command.

Setting up the node selectors for control plane

If you have a large Kubernetes cluster, you can deliberately limit scheduling of the OpenEBS control plane to only a few specific nodes. To do this, specify a map of key-value pairs and then attach the same key-value pairs as labels to the required cluster nodes.
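
As a sketch, you could label the chosen nodes and then add a matching nodeSelector to the control plane deployments in the operator YAML; the label key and value (node=openebs) are arbitrary examples, not required names.

kubectl label nodes <node-name> node=openebs

Then, in the pod spec of the maya-apiserver (and the other control plane deployments) inside the operator YAML:

# Hypothetical nodeSelector matching the label above
nodeSelector:
  node: "openebs"

The same pattern applies to the admission controller and Node Disk Manager sections discussed below.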

Setting up node selector for Admission Control

The admission controller intercepts requests to the Kubernetes API server before the object is persisted, but only after the request has been authenticated and authorized. The openebs admission controller adds custom admission policies to validate incoming requests. As examples, here are two admission policies in the latest version:

  • PersistentVolumeClaim delete: validates whether the PersistentVolumeClaim being deleted has any clones.
  • Clone PersistentVolumeClaim create: validates that the requested claim capacity equals the snapshot size.

We can use the node selector method to schedule the admission controller pod on a particular node.

Setting up node selectors for Node Disk Manager

OpenEBS cStorPools are constructed either from block device custom resources or from block devices created by the Node Disk Manager. If you want to use only some of the nodes in the Kubernetes cluster for OpenEBS storage, specify a key-value pair and attach the same key-value pair as labels to the required nodes.

Node Disk Manager disk filter setup

By default, NDM filters out the disk patterns given below and converts the remaining disks discovered on a particular node into DISK CRs, provided they are not mounted.

If your cluster has other disk types that are not yet filtered out, simply add the additional disk patterns to the exclude list, which is found in the YAML file:

"exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"

Configuring the Environmental Variable

The environment variables cover configurations related to the default cStor sparse pool, the Local PV basepath, the default storage configuration, and the cStor target.

Enabling the core dump:

For the NDM daemonset and cStor pool pods, core dumps are disabled by default. To enable them, set the ENV variable “ENABLE_COREDUMP” to 1. Add the ENV setting to the cStor pool deployment to enable core dumps in the cStor pool pod, and likewise add it to the ndm daemonset spec for core dumps in the daemonset pods.

- name: ENABLE_COREDUMP
  value: "1"

SparseDir:

SparseDir is a hostPath directory used to locate sparse files. By default, the value is set to “/var/openebs/sparse”. Before applying the OpenEBS operator YAML file, the following configuration should be added as an environment variable in the maya-apiserver specification:

# environment variable
 - name: SparseDir
   value: "/var/lib/"

cStorSparsePool default

Based on this configuration value, the OpenEBS installation process will create a default cStor sparse pool. The configuration takes the values true and false: if true, the cStor sparse pools are configured; if false, they are not. The default value is false, and the sparse pool is intended only for testing. If you want to install cStor using sparse disks, add this configuration as an environment variable in the maya-apiserver specification. Here is an example:

# environment variable
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
  value: "false"

TargetDir

TargetDir acts as the hostPath for the target pod and defaults to “/var/openebs”. The OPENEBS_IO_CSTOR_TARGET_DIR ENV can be added to the maya-apiserver deployment to override that default host path. This configuration is typically needed when the host OS cannot write to the default OpenEBS path (/var/openebs/). As with the cStor SparsePool, add the configuration to the maya-apiserver specification as an environment variable before applying the operator YAML file. Here is an example:

# environment variable
- name: OPENEBS_IO_CSTOR_TARGET_DIR
  value: "/var/lib/overlay/openebs"

OpenEBS Local PV basepath

For a hostpath-based Local PV, the default hostpath is /var/openebs/local. This can be changed during installation of the OpenEBS operator by passing the OPENEBS_IO_BASE_PATH ENV parameter:

# environment variable
 - name: OPENEBS_IO_BASE_PATH
   value: "/mnt/"

Default Storage Configuration:

Jiva and Local PV storage classes are among the default storage configurations that ship with OpenEBS. The storage engines in OpenEBS can be configured and customized to your needs through the associated custom resources and storage classes. You can change the default storage configuration after installation, but it will be overwritten by the API server, so we recommend creating your own storage configuration based on the default options. If you disable the default configuration during installation, you can create your own storage configuration from scratch. To disable the default configuration, add the following lines as an environment variable in the maya-apiserver specification:

# environment variable
- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
  value: "false"

Verifying the installation process

To list the pods in the openebs namespace, use the following command:

kubectl get pods -n openebs

If you have installed OpenEBS successfully, you will most likely see output like the example below:

[Example output of kubectl get pods -n openebs]

openebs-ndm is a daemon set that should be running on all nodes of the cluster, or at least on the nodes selected through the nodeSelector configuration. Likewise, the maya-apiserver, openebs-snapshot-operator, and openebs-provisioner control plane pods should be running. If you configured nodeSelectors, make sure the pods are scheduled on the intended nodes by listing them with “kubectl get pods -n openebs -o wide”.
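
For reference, that command is:

kubectl get pods -n openebs -o wide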

Verifying the Storage Classes:

First, check if the OpenEBS has installed the default Storage Classes by listing them:

kubectl get sc

For your reference, here is an example of the output you will see after a successful installation. You will find the given StorageClasses created:

[Example output of kubectl get sc]

Verifying Block Device CRs

The NDM daemon set creates a block device CR for each block device discovered on the nodes, with two exceptions:

  • Disks that match the “vendor-filter” and “path-filter” exclusions.
  • Disks that are already mounted on the node.

To check if the CRs are coming as we expect, use the following command to list the block device CRs.

kubectl get blockdevice -n openebs

If you have proceeded correctly, you will see similar output on your screen:

[Example output of kubectl get blockdevice -n openebs]

After that, use the following command to describe a block device CR and check its node label, which tells you which node the block device belongs to:

kubectl describe blockdevice <blockdevice-cr> -n openebs

To verify the Jiva default pool

kubectl get sp

You will likely see the following output after you run the above command:

[Example output of kubectl get sp]

Things to Consider After Installation

After installation, you can test OpenEBS using the following storage classes:

  • For provisioning a Jiva volume, use openebs-jiva-default. This uses the default pool, and the data replicas are created under the /mnt/openebs_disk directory of the Jiva replica pod.
  • For provisioning a Local PV on a hostpath, use openebs-hostpath.
  • For the provisioning of Local PV on a device, use openebs-device.

To use real disks, you first have to create Jiva pools, cStorPools, or OpenEBS Local PVs according to your requirements, and then create the required StorageClasses or use the default StorageClasses.
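
For a quick smoke test, a hostpath-backed claim and a pod that mounts it are usually enough. The names below (demo-hostpath-pvc, demo-pod) are placeholders, and busybox is used only as a throwaway test image.

# Hypothetical claim and pod for a quick post-install test
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: demo-hostpath-pvc

Once the pod is Running, the bound PV should appear in kubectl get pv, which confirms the hostpath provisioner is working.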

References:

1. MSV, J. and Gain, B., 2020. How OpenEBS Brings Container Attached Storage To Kubernetes – The New Stack. [online] The New Stack. Available at: <https://thenewstack.io/how-openebs-brings-container-attached-storage-to-kubernetes/> [Accessed 22 May 2020].

2. Docs.openebs.io, 2020. Welcome To OpenEBS Documentation. [online] Available at: <https://docs.openebs.io/> [Accessed 22 May 2020].

3. Figures 1 to 7 are reconstructed using OpenEBS.