Blogs

Monitoring Kubernetes Resources and Hitachi storage with Hitachi Storage Plug-in for Prometheus

By Jose Perez posted 02-22-2022 18:34

  

Deploying and running Kubernetes in production also necessitates monitoring and observability of the container platform and the services it hosts.  There are a number of tools available to achieve this.  Two of the most common are Grafana, which provides a central dashboard, and Prometheus, which provides a time series database for telemetry data.

Hitachi Storage Plug-in for Prometheus enables the Kubernetes administrator to monitor the metrics of Kubernetes resources and Hitachi storage system resources within a single tool. Hitachi Storage Plug-in for Prometheus uses Prometheus to collect metrics and Grafana to visualize those metrics for easy evaluation by the Kubernetes administrator. Prometheus collects storage system metrics such as capacity, IOPS, and transfer rate in five-minute intervals.

The following diagram shows the flow of metric collection using Hitachi Storage Plug-in for Prometheus.

While Hitachi Storage Plug-in for Prometheus supports any Kubernetes cluster configured with Hitachi Storage Plug-in for Containers, this guide covers the installation on a Red Hat OpenShift Container Platform configured with Hitachi VSP Storage. The infrastructure for this demo is based on the Hitachi Unified Compute Platform.

This guide will not cover the setup of the Kubernetes cluster or installation of Hitachi Storage Plug-in for Containers.

For additional configuration details, follow the Hitachi Storage Plug-in for Prometheus Quick Reference Guide.

Requirements

  • Install Kubernetes or Red Hat OpenShift Container Platform.
  • Download the Storage Plug-in for Prometheus installation media kit from the Hitachi Support Connect Portal: https://support.hitachivantara.com/en/user/home.html. A Hitachi login credential is required.
  • Install Hitachi Storage Plug-in for Containers in Kubernetes or Red Hat OpenShift Container Platform.
  • Configure StorageClass for Hitachi Storage Plug-in for Containers in Kubernetes or Red Hat OpenShift Container Platform. 
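
For reference, a StorageClass for Hitachi Storage Plug-in for Containers generally follows the shape sketched below. The provisioner name and parameter keys shown here are assumptions; verify the exact values against your Storage Plug-in for Containers documentation.

```yaml
# Illustrative StorageClass sketch for Hitachi Storage Plug-in for Containers.
# The provisioner name below is an assumption; the storage-specific
# parameters (pool ID, connection type, secret references, etc.) are
# intentionally left out -- see the Storage Plug-in for Containers
# documentation for the exact keys.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hspc-vsp-sample
provisioner: hspc.csi.hitachi.com
parameters:
  # storage-specific parameters go here
reclaimPolicy: Delete
allowVolumeExpansion: true
```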

Installing Hitachi Storage Plug-in for Prometheus on OpenShift Cluster

The installation below is executed on an OpenShift cluster (version 4.8) configured with 3 masters and 3 workers, where the worker nodes are connected to Hitachi VSP storage.

oc get nodes

NAME            STATUS   ROLES    AGE   VERSION
jpc2-master-1   Ready    master   29d   v1.21.6+b4b4813
jpc2-master-2   Ready    master   29d   v1.21.6+b4b4813
jpc2-master-3   Ready    master   29d   v1.21.6+b4b4813
jpc2-worker-1   Ready    worker   29d   v1.21.6+b4b4813
jpc2-worker-2   Ready    worker   29d   v1.21.6+b4b4813
jpc2-worker-3   Ready    worker   28d   v1.21.6+b4b4813

 

  1. Download and extract the installation media.

tar zxvf storage-exporter.tar.gz

 

  2. Load the Storage Plug-in for Prometheus image into the repository.
  3. Update “exporter.yaml” with the corresponding registry hostname and port for the cluster.
  4. Update “secret-sample.yaml” using the information from the VSP storage system(s): serial number, storage system API URL, user, and password.
apiVersion: v1
kind: Secret
metadata:
  name: storage-exporter-secret
  namespace: hspc-monitoring-system
type: Opaque
stringData:
  storage-exporter.yaml: |-
    storages:
    - serial: 40016
      url: https://172.25.47.x
      user: MaintenanceUser
      password: PasswordForUser
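
The exporter.yaml change from step 3 usually amounts to pointing the Deployment's container image at your cluster's registry. A sketch of the relevant fragment is shown below; the registry hostname, port, and tag are placeholders, and the container name is illustrative.

```yaml
# Sketch: image reference inside exporter.yaml pointing at the cluster
# registry. <registry-hostname>, <port>, and <tag> are placeholders to
# replace with your environment's values.
spec:
  containers:
    - name: storage-exporter
      image: <registry-hostname>:<port>/hspc-monitoring-system/storage-exporter:<tag>
```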

  

  5. Create the namespace hspc-monitoring-system for Storage Plug-in for Prometheus:

 oc apply -f yaml/namespace.yaml

 

  6. Create security context constraints (SCC):

 oc apply -f yaml/scc-for-openshift.yaml

 

  7. Install Storage Plug-in for Prometheus and Prometheus Pushgateway.

oc apply -f yaml/secret-sample.yaml -f yaml/exporter.yaml

 

Verify the storage-exporter and pushgateway are running on namespace hspc-monitoring-system:

 

oc get pods

NAME                                READY   STATUS    RESTARTS   AGE
pushgateway-77b85489b9-4vnzt        1/1     Running   0          5d
storage-exporter-77b644b8b7-pzhdj   1/1     Running   0          5d

Installing Prometheus and Grafana

After installing Hitachi Storage Plug-in for Prometheus, install and configure Prometheus and Grafana. For more information, see https://prometheus.io/ and https://grafana.com/.

If installing Prometheus and Grafana manually, there are a couple of steps you’ll need to take; follow the Hitachi Storage Plug-in for Prometheus Reference Guide:

  • Connect Prometheus to the Pushgateway.
  • Import the sample dashboard grafana/sample.json into Grafana.
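
If you are wiring Prometheus to the Pushgateway by hand, the connection is a standard scrape job. The sketch below assumes the pushgateway service in the hspc-monitoring-system namespace (the quick-installer defaults used in this guide); adjust the target to match your deployment.

```yaml
# Minimal Prometheus scrape configuration sketch for the Pushgateway.
# The target assumes the pushgateway Service in the
# hspc-monitoring-system namespace on its default port 9091.
scrape_configs:
  - job_name: pushgateway
    honor_labels: true   # keep the labels pushed by the storage exporter
    static_configs:
      - targets:
          - pushgateway.hspc-monitoring-system.svc:9091
```

Setting honor_labels: true is important for Pushgateway jobs so that Prometheus keeps the labels attached by the exporter instead of overwriting them with the scrape target's own labels.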

 

For this test and demo, we are using the quick installer that comes with the Hitachi Storage Plug-in for Prometheus package.

  1. In the grafana-prometheus-sample.yaml file, replace the StorageClass with your own StorageClass.
  2. (Optional) Modify the Grafana service.

The grafana-prometheus-sample.yaml file exposes Grafana as a NodePort service with a randomly assigned node port. If you want to expose Grafana in a different way, modify the grafana-prometheus-sample.yaml file.
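
For example, to pin Grafana to a fixed node port instead of a random one, you could edit the Service definition along the lines sketched below. The service name, namespace, and selector labels are illustrative; match them to the ones already present in your grafana-prometheus-sample.yaml.

```yaml
# Sketch: pinning the Grafana NodePort to a fixed value.
# Name, namespace, and selector are illustrative -- align them with the
# Service already defined in grafana-prometheus-sample.yaml.
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: hspc-monitoring-system
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000   # fixed node port; must fall in the 30000-32767 range
```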

  3. Deploy Grafana and Prometheus:

oc apply -f yaml/grafana-prometheus-sample.yaml

Verify the Prometheus and Grafana pods are running:
oc get pods

NAME                                READY   STATUS    RESTARTS   AGE
grafana-0                           1/1     Running   0          5d
prometheus-0                        1/1     Running   0          5d
pushgateway-77b85489b9-4vnzt        1/1     Running   0          5d
storage-exporter-77b644b8b7-pzhdj   1/1     Running   0          5d

 Make sure all four pods are running.

  4. Access Grafana.
If you use NodePort, access Grafana with <Your Node IP Address>:<Grafana Port>. You can identify <Grafana Port> by using the following command.

 

oc get svc

NAME        TYPE        CLUSTER-IP     EXTERNAL-IP PORT(S)          AGE
grafana     NodePort    172.30.219.171 <none>      3000:31929/TCP   5d
prometheus  NodePort    172.30.180.214 <none>      9090:31661/TCP   5d
pushgateway ClusterIP   172.30.25.210  <none>      9091/TCP         5d
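
The Grafana node port can be pulled out of the service listing with a little shell. The snippet below runs against the sample output shown above; on a live cluster, replace the hard-coded line with the output of `oc get svc grafana --no-headers`.

```shell
#!/bin/sh
# Extract the Grafana node port from `oc get svc`-style output.
# The sample line below is taken from the listing above; on a real
# cluster, obtain it with:  oc get svc grafana --no-headers
svc_line='grafana     NodePort    172.30.219.171 <none>      3000:31929/TCP   5d'

# PORT(S) is the 5th column, formatted as <port>:<nodePort>/<proto>;
# split on ':' and '/' and take the middle piece.
grafana_port=$(printf '%s\n' "$svc_line" | awk '{split($5, p, /[:\/]/); print p[2]}')

echo "Grafana URL: http://<Your Node IP Address>:${grafana_port}"
```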

 

If you expose Grafana in another way (for example, through an OpenShift route as shown below), obtain the endpoint yourself. The Grafana user/password are admin/secret.
 
oc expose svc/grafana
 
oc get routes

NAME      HOST/PORT                                                  PATH   SERVICES   PORT      TERMINATION   WILDCARD
grafana   grafana-hspc-monitoring-system.apps.jpc2.ocp.hvlab.local          grafana    grafana                 None

Monitoring Dashboard with Hitachi Storage Plug-in for Prometheus

When using the quick installer for Grafana and Prometheus as in the steps above, there are no additional steps to configure Prometheus as a data source, since it is pre-configured.

The installer also includes a pre-built dashboard for “HSPC Volumes”.

Selecting the HSPC Volumes dashboard shows metrics for Persistent Volumes such as capacity, response time, IOPS, read/write transfer rate, and cache hit rate.

These metrics can be presented by namespace, Persistent Volume Claim (PVC), StorageClass, storage serial number, or storage pool ID.

 

When doing performance testing you can view specific metrics as shown below:

 

For an overview of our storage integrations with Kubernetes and Red Hat OpenShift, see Hitachi’s Storage Integration for Kubernetes.

Supported Platforms

For details on supported platforms, see the Hitachi Storage Plug-in for Prometheus Release Notes.



Comments

05-02-2022 01:59

Excellent writeup

04-26-2022 13:38

Very informative

There are a couple of errors in the documentation for the Prometheus plugin. Also after installing the container the container seems to be waiting for something, seems like there are some gaps either in the product or in the documentation.

For example the second step in the install guide:

docker load -i storage-exporter.tar doesn't work. I had to use docker import rather. 

Then when the pod starts it just hangs at:
kubelet Error: Error response from daemon: No command specified

EDIT: Jose showed me that once the tar ball is extracted you need to change to the program directory. In there you'll find the correct tar file to import.