Enterprise Kubernetes distributions such as OpenShift are deployed as part of a customer's application modernization journey, providing benefits in agility and speed of innovation. Containers make the unit of compute more efficient and more portable. Stateless containers have few dependencies and move easily; stateful containers add complexity because they depend on persistent volumes. It is not enough to deploy persistent volumes: the data must also be protected, available, and movable.
Hitachi Replication Plug-in for Containers (HRPC) addresses the challenges of running stateful containers in production. It provides replication data services for persistent volumes on Hitachi VSP storage platforms, covering use cases such as:
- Migration – Persistent volumes can be snapshotted and cloned locally, or to remote Kubernetes clusters with their own remote VSP storage platforms.
- Disaster Recovery – Persistent volumes can be protected against datacenter failures by replicating the data over long distances using Hitachi Universal Replicator.
- Backup – Persistent volumes can be protected with local point-in-time snapshots using Hitachi's CSI plug-in (Hitachi Storage Plug-in for Containers), or backed up to remote VSP storage using HRPC.
Hitachi Replication Plug-in for Containers (HRPC) supports any Kubernetes cluster configured with Hitachi Storage Plug-in for Containers (HSPC); this guide covers the installation on a Red Hat OpenShift Container Platform configured with Hitachi VSP storage. The infrastructure for this demo is based on the Hitachi Unified Compute Platform.
We will walk through a configuration of HRPC. Before installing HRPC, the remote paths between the two storage systems must be configured for Hitachi Universal Replicator, including journal volumes, and each storage system must be connected to its respective Kubernetes cluster through an HSPC StorageClass. After that, the process of deploying HRPC is as follows:
- Prepare Kubernetes manifest files and environmental variables.
- Install the Hitachi Replication Plug-in for Containers Operator.
After successfully deploying HRPC, volume replication can be configured for PVCs via kubectl or equivalent tools.
For configuration details, follow the Hitachi Replication Plug-in for Containers Configuration Guide.
Prerequisites
For demonstration purposes, we have completed the following requirements:
- Install two Kubernetes clusters, one in the primary site and the other in the secondary site. A single Kubernetes cluster is not supported.
- Configure Hitachi Universal Replicator (HUR)
- Install Hitachi Storage Plug-in for Containers in both Kubernetes or Red Hat OpenShift Container Platform clusters.
- Inter-site connectivity:
- Hitachi Replication Plug-in for Containers in the primary site must communicate with the Kubernetes cluster in the secondary site and vice versa.
- Hitachi Replication Plug-in for Containers in the primary site must communicate with the storage system in the secondary site and vice versa.
- Connection between the primary and secondary storage system REST APIs
- FC or iSCSI connection between the primary and secondary storage systems for data copy
Additional details are available in the HRPC Configuration Guide.
Installing and Configuring Hitachi Replication Plug-in for Containers
For this demo, we configured two OpenShift clusters (version 4.8), each with 3 masters and 3 workers, and each connected to a different Hitachi VSP storage system, as seen in the diagram below.
Configure the storage systems
Configure the storage system for replication:
- Configure the storage system as described in the Hitachi Storage Plug-in for Containers Quick Reference Guide.
- Configure the remote path between the primary site and secondary site storage systems. For details, see the Hitachi Universal Replicator (HUR) User Guide.
- Configure journal volumes. For details, see the Hitachi Universal Replicator (HUR) User Guide.
- Create StorageClass in both primary and secondary sites:
- The name and fstype of the StorageClass must be the same in both sites.
- StorageClass in the primary site must point to the storage in the primary site.
- StorageClass in the secondary site must point to the storage in the secondary site.
- Create a namespace in both primary and secondary sites:
- The namespace must have the same name in both sites.
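Putting the StorageClass and namespace requirements above together, a per-site manifest might look like the following sketch. The provisioner name and parameters are assumptions and must match your Hitachi Storage Plug-in for Containers configuration; the names vsp-hrpc-sc and demoapps are the ones used later in this demo.

```yaml
# Sketch only: provisioner and parameter names are assumptions; verify them
# against your Hitachi Storage Plug-in for Containers documentation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsp-hrpc-sc                  # must have the same name in both sites
provisioner: hspc.csi.hitachi.com    # HSPC CSI driver
parameters:
  serialNumber: "40016"              # this site's VSP serial number
  poolID: "1"                        # illustrative pool ID
  connectionType: fc                 # or iscsi
  csi.storage.k8s.io/fstype: ext4    # fstype must match in both sites
---
apiVersion: v1
kind: Namespace
metadata:
  name: demoapps                     # same namespace name in both sites
```

The manifest on the secondary site is identical except that its parameters point to the secondary VSP.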
The figure below shows the remote connection configured between the two Hitachi VSP storage systems: the first VSP connected to the primary Kubernetes cluster, and the second VSP connected to the secondary Kubernetes cluster.
Configure Hitachi Replication Plug-in for Containers (HRPC)
Installing HRPC requires a dedicated management workstation/VM that can access both the primary and secondary Kubernetes clusters.
The following tasks are only a summary of the installation and configuration steps; for details, follow the Hitachi Replication Plug-in for Containers Configuration Guide.
Part I: Prepare manifest files and environment variables:
- Download and extract the installation media for HRPC into the management workstation.
unzip hrpc_<version>.zip
- Get the kubeconfig files from both the primary and secondary sites and set environment variables pointing to them:
KUBECONFIG_P=/path/to/primary-kubeconfig
KUBECONFIG_S=/path/to/secondary-kubeconfig
# The following files will be created in later steps
SECRET_KUBECONFIG_P=/path/to/primary-kubeconfig-secret.yaml
SECRET_KUBECONFIG_S=/path/to/secondary-kubeconfig-secret.yaml
- Configure an environment variable for the secret file of the storage system
SECRET_STORAGE=/path/to/storage-secret.yaml
- Copy the namespace manifest file to the management machine. This file (hspc-replication-operator-namespace.yaml) is provided in the media kit. Do not edit it.
- Create a Secret manifest file with the secondary kubeconfig information to access the secondary Kubernetes cluster from Hitachi Replication Plug-in for Containers running in the primary Kubernetes cluster. For reference, see remote-kubeconfig-sample.yaml. Here is an example:
# base64 encoding
cat ${KUBECONFIG_S} | base64 -w 0
vi secondary-kubeconfig-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-remote-kubeconfig
  namespace: hspc-replication-operator-system
type: Opaque
data:
  remote-kubeconfig: <base64 encoded secondary kubeconfig>
- Create a Secret manifest file with the primary kubeconfig information to access the primary Kubernetes cluster from Hitachi Replication Plug-in for Containers running in the secondary Kubernetes cluster. For reference, see remote-kubeconfig-sample.yaml. Here is an example:
# base64 encoding
cat ${KUBECONFIG_P} | base64 -w 0
vi primary-kubeconfig-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-remote-kubeconfig
  namespace: hspc-replication-operator-system
type: Opaque
data:
  remote-kubeconfig: <base64 encoded primary kubeconfig>
- Create a Secret manifest file containing storage system information that enables access by Hitachi Replication Plug-in for Containers. For reference, see storage-secrets-sample.yaml. This manifest file includes information for both the primary and secondary storage systems. Here is an example:
vi ${SECRET_STORAGE}

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-storage-secrets
  namespace: hspc-replication-operator-system
type: Opaque
stringData:
  storage-secrets.yaml: |-
    storages:
      - serial: 40016 # Serial number, primary storage system
        url: https://172.25.47.x # URL for the REST API server
        user: UserPrimary # User, primary storage system
        password: PasswordPrimary # Password for user
        journal: 1 # Journal ID, HUR primary storage system
      - serial: 30595 # Serial number, secondary storage system
        url: https://172.25.47.y # URL for the REST API server
        user: UserSecondary # User, secondary storage system
        password: PasswordSecondary # Password for user
        journal: 1 # Journal ID, HUR secondary storage system
- Modify the Hitachi Replication Plug-in for Containers manifest file (hspc-replication-operator.yaml) provided in the media kit as required to use your private repository.
Part II: Install the Hitachi Replication Plug-in for Containers Operator
- From the management workstation, log in to both the primary and secondary clusters:
KUBECONFIG=${KUBECONFIG_P} oc login -u <admin user> -p <Password>
KUBECONFIG=${KUBECONFIG_S} oc login -u <admin user> -p <Password>
- Create Namespaces in the primary and secondary sites. Use the same manifest file in primary and secondary sites.
KUBECONFIG=${KUBECONFIG_P} oc create -f hspc-replication-operator-namespace.yaml
KUBECONFIG=${KUBECONFIG_S} oc create -f hspc-replication-operator-namespace.yaml
- Create Secrets containing kubeconfig information in the primary and secondary sites. Use different manifest files for the two sites: each site receives the Secret containing the other site's kubeconfig.
KUBECONFIG=${KUBECONFIG_P} oc create -f ${SECRET_KUBECONFIG_S}
KUBECONFIG=${KUBECONFIG_S} oc create -f ${SECRET_KUBECONFIG_P}
- Create Secrets containing storage system information in primary and secondary sites. Use the same manifest file in primary and secondary sites.
KUBECONFIG=${KUBECONFIG_P} oc create -f ${SECRET_STORAGE}
KUBECONFIG=${KUBECONFIG_S} oc create -f ${SECRET_STORAGE}
- Load the container image from hrpc_<version>.tar (for example, with docker load, or podman load for OpenShift) and push the loaded image to your private repository.
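As a sketch, that load-and-push step might look like the following; the loaded image name and registry URL are hypothetical, so substitute the values from your media kit and environment:

```shell
# Load the HRPC image from the media kit archive
podman load -i hrpc_<version>.tar

# Tag and push it to your private registry
# (image name and registry URL below are hypothetical)
podman tag hspc-replication-operator:<version> registry.example.com/hspc-replication-operator:<version>
podman push registry.example.com/hspc-replication-operator:<version>
```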
- Create Hitachi Replication Plug-in for Containers in primary and secondary sites. Use the same manifest file for both the primary and secondary sites.
KUBECONFIG=${KUBECONFIG_S} oc create -f hspc-replication-operator.yaml
KUBECONFIG=${KUBECONFIG_P} oc create -f hspc-replication-operator.yaml
- Confirm that Hitachi Replication Plug-in for Containers is running in the primary and secondary sites.
# Checking HRPC operator in primary site:
KUBECONFIG=${KUBECONFIG_P} oc get pods -n hspc-replication-operator-system
# Checking HRPC operator in secondary site:
KUBECONFIG=${KUBECONFIG_S} oc get pods -n hspc-replication-operator-system
At this point, the Hitachi Replication Plug-in for Containers operator is ready. The next step is to install and test a stateful app, or simply create a PVC and a Pod that consumes the PVC.
Installing a Stateful App for Replication Testing on the Primary Site
For this demo, we install a MySQL database using the Bitnami Helm chart.
- First, we need to add the Bitnami repository by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
- Search for the MySQL Helm chart by running the following command:
[ocpinstall@adminws ~]$ helm search repo mysql
NAME           CHART VERSION   APP VERSION   DESCRIPTION
bitnami/mysql  8.8.23          8.0.28        Chart to create a Highly available MySQL cluster
- Next, create a namespace (project) for the demo stateful app using the following command:
[ocpinstall@adminws ~]$ KUBECONFIG=${KUBECONFIG_P} oc new-project demoapps
- Verify the storage class
The StorageClass “vsp-hrpc-sc” was created following the requirements for Hitachi Storage Plug-in for Containers.
[ocpinstall@adminws ~]$ KUBECONFIG=${KUBECONFIG_P} oc get sc
- Customize and deploy the MySQL Helm chart with persistent storage
The following shows an example of the installation of MySQL on the primary site, using the StorageClass “vsp-hrpc-sc”, a database called “vsp_database”, and a Persistent Volume with 5Gi of capacity.
[ocpinstall@adminws ~]$ helm install mysql-hrpc-example \
> --set secondary.replicaCount=0 \
> --set global.storageClass=vsp-hrpc-sc \
> --set primary.persistence.size=5Gi \
> --set auth.rootPassword=Hitachi123,auth.database=vsp_database \
> --set primary.podSecurityContext.enabled=false \
> --set primary.containerSecurityContext.enabled=false \
> --set secondary.podSecurityContext.enabled=false \
> --set secondary.containerSecurityContext.enabled=false \
> bitnami/mysql
We can use the following commands to check the status of the MySQL pod and its corresponding Persistent Volume.
KUBECONFIG=${KUBECONFIG_P} oc get pods
KUBECONFIG=${KUBECONFIG_P} oc get pvc
- Inserting test data into the MySQL database
Once the MySQL pod is ready, a new table called “replication_cr_status” is created in the “vsp_database” database. Then a few records of test data are inserted into this new table. The create table and insert commands are not shown here.
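As a hedged sketch of those elided commands (the pod name is assumed from the PVC name data-mysql-hrpc-example-0, and the table columns are purely illustrative):

```shell
# Open a mysql shell in the primary-site MySQL pod (pod name assumed)
KUBECONFIG=${KUBECONFIG_P} oc exec -it mysql-hrpc-example-0 -- \
  mysql -u root -pHitachi123 vsp_database

# Then, at the mysql> prompt (schema is illustrative):
#   CREATE TABLE replication_cr_status (id INT PRIMARY KEY, note VARCHAR(64));
#   INSERT INTO replication_cr_status VALUES (1, 'written on primary site');
```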
The following data has been inserted into the MySQL database to test and verify the replicated PVC on the secondary site:
The next step is to replicate the Persistent Volume “data-mysql-hrpc-example-0” used by the MySQL pod to the secondary site.
Replicating Persistent Volumes
To replicate storage volumes, you must create a Replication custom resource (CR) object. Once the Replication CR has been created, HRPC starts replicating the specified PVC and triggers the creation of a matching PVC in the secondary site. The data in the target PVC from the primary site is copied and protected by HUR.
Creating a manifest file for the Replication CR
A Replication CR manifest file contains the name of the PVC and the StorageClass name. The manifest file below is created to replicate the PVC “data-mysql-hrpc-example-0” previously used by the MySQL Pod.
Note: A StorageClass with the same name must exist on the secondary site. Also, a namespace with the same name must be created on the secondary site before creating the Replication CR.
cat hspc_v1_msqldb_replication.yaml

apiVersion: hspc.hitachi.com/v1
kind: Replication
metadata:
  name: replication-mysqldb1
spec:
  persistentVolumeClaimName: data-mysql-hrpc-example-0
  storageClassName: vsp-hrpc-sc
Creating a Replication CR object
The Replication CR object is created in the primary site using the manifest file from the previous step. This triggers the creation of an HUR pair and the initial data copy. It also triggers the creation of a Replication CR object on the secondary site.
Use the following command to create the Replication CR:
KUBECONFIG=${KUBECONFIG_P} oc create -f hspc_v1_msqldb_replication.yaml
Verifying the status of Replication CR in primary and secondary site
The Replication CR status changes to Ready when the initial replication has been created and data protection has started. The transition to the Ready status depends on the data size of the target PVC and might take some time.
The following commands help to verify the status of the Replication CR objects in both the primary and secondary site.
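A sketch of those verification commands, assuming the Replication CRs and PVCs live in the demoapps project created earlier:

```shell
# Replication CR status on the primary site
KUBECONFIG=${KUBECONFIG_P} oc get replication -n demoapps

# Replication CR and the automatically created PVC on the secondary site
KUBECONFIG=${KUBECONFIG_S} oc get replication -n demoapps
KUBECONFIG=${KUBECONFIG_S} oc get pvc -n demoapps
```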
Also, we can see that a PVC “data-mysql-hrpc-example-0” has been automatically created on the secondary site.
On the VSP Storage systems we can verify that UR pairs have been automatically created as well:
UR pairs on primary storage system:
UR pairs on secondary storage system:
The following command provides more details from the Replication CR, such as the storage serial numbers and LDEV names for both the primary and secondary sites, which can easily be correlated with the LDEVs seen in the UR pairs.
KUBECONFIG=${KUBECONFIG_P} oc describe replication replication-mysqldb1
Checking replicated data on the Secondary Site
After the Replication CR is created and in “Ready” status, we can perform a “split” and resync operation to check the data from the secondary site. The data copy process from the primary site to the secondary site is stopped during the split and resync operations.
Splitting the Hitachi Universal Replicator pair
To split the HUR pair, from the primary site, change the status of Replication CR to perform the split operation. This triggers HRPC to split the HUR pair.
First, confirm that the Replication CR status is Ready and the Operation value is none.
Then use the command below to edit the Replication CR and change spec.desiredPairState to split.
KUBECONFIG=${KUBECONFIG_P} oc edit replication replication-mysqldb1
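The same edit can also be applied non-interactively; this is a sketch that assumes spec.desiredPairState accepts the value split, as described in the Configuration Guide:

```shell
KUBECONFIG=${KUBECONFIG_P} oc patch replication replication-mysqldb1 \
  --type merge -p '{"spec":{"desiredPairState":"split"}}'
```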
After the edit, make sure the Replication CR status is split and the Operation value is none.
Confirming the replicated data
To confirm the replicated data on the secondary site, we are going to deploy the same MySQL Helm chart, but this time with the existing Persistent Volume “data-mysql-hrpc-example-0” that was replicated from the primary site.
To accomplish this with the Helm chart, we customize it to indicate that MySQL must be installed with an existing PVC, as shown below.
[ocpinstall@jpc3-ocp-admin-ws]$ helm install mysql-hrpc-example \
> --set secondary.replicaCount=0 \
> --set global.storageClass=vsp-hrpc-sc \
> --set primary.persistence.size=5Gi \
> --set auth.rootPassword=Hitachi123,auth.database=vsp_database \
> --set primary.podSecurityContext.enabled=false \
> --set primary.containerSecurityContext.enabled=false \
> --set secondary.podSecurityContext.enabled=false \
> --set secondary.containerSecurityContext.enabled=false \
> --set primary.persistence.storageClass=vsp-hrpc-sc \
> --set primary.persistence.existingClaim=data-mysql-hrpc-example-0 \
> bitnami/mysql
We can use the following commands to check the status of the MySQL pod and its corresponding Persistent Volume on the secondary site.
KUBECONFIG=${KUBECONFIG_S} oc get pods
KUBECONFIG=${KUBECONFIG_S} oc get pvc
Or directly from the console of the secondary cluster:
The next step is to connect to the MySQL database and verify the same data that was created from the primary site.
The following query confirms that the PVC/MySQL database contains the same data that was inserted on the primary site.
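A sketch of such a query (pod name assumed from the PVC name, credentials taken from the Helm install above):

```shell
KUBECONFIG=${KUBECONFIG_S} oc exec -it mysql-hrpc-example-0 -- \
  mysql -u root -pHitachi123 vsp_database \
  -e 'SELECT * FROM replication_cr_status;'
```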
Now that we have confirmed the data on the PVC on the secondary site, we can uninstall the MySQL Helm chart. The PVC will remain because it is controlled by the Replication CR.
helm uninstall mysql-hrpc-example
oc get pvc
oc get pods
Resynchronizing HUR pair
To resync the HUR pair from the primary site, change the status of the Replication CR to perform the resync operation. This triggers Hitachi Replication Plug-in for Containers to resync the HUR pair.
Make sure no Pod is using the PVC; otherwise, the resync operation will not work.
KUBECONFIG=${KUBECONFIG_P} oc edit replication replication-mysqldb1
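As with the split operation, a non-interactive sketch (assuming spec.desiredPairState accepts the value resync):

```shell
KUBECONFIG=${KUBECONFIG_P} oc patch replication replication-mysqldb1 \
  --type merge -p '{"spec":{"desiredPairState":"resync"}}'
```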
After the edit, make sure the Replication CR status is Ready and the Operation value is none.
Supported Platforms
For details on supported platforms, see the Replication Plug-in for Containers Release Notes.