Moving a CM-REST Server to a Clustered Environment
Because the Configuration Manager REST API (CM-REST API) is a commonly used management tool, it is a good idea to run it in a clustered environment to avoid disruption. However, moving an entire existing API environment from a non-clustered environment to a clustered one, rather than deploying a fresh API environment in a cluster, is a challenging task for a customer.
In this blog, we provide details for moving an existing API environment (Hitachi Ops Center API Configuration Manager) from a non-clustered environment to a clustered environment using the Microsoft Windows Server Failover Cluster feature. In our lab, we tested with a two-node cluster created on VMware virtual machines. CM-REST was initially installed on the first node, before the server was part of any cluster. The CM-REST and Windows platforms were then expanded into a two-node clustered environment using the Microsoft Windows Server Failover Cluster feature.
This blog consists of the following sections:
· Prerequisites for moving a single node CM-REST server to a dual node environment
· Deploying CM-REST in a clustered environment
Prerequisites for moving a single node CM-REST server to a dual node environment
First, we installed Ops Center CM-REST 10.9.0 on a virtual machine, which serves as the active node in the clustered environment. Next, we registered a storage system with the CM-REST server. Because these are well-known steps, they are not covered in this blog.
A shared disk is required when two virtual machines join the cluster. For configuring a WSFC clustered VM on a single ESXi host, we completed the procedure described in the following VMware article:
In addition, the two nodes participating in the cluster configuration must be a part of the Active Directory domain. In our case, the domain was ‘project.local’.
For configuring a Windows Server Failover Cluster, we completed the procedure in the following Microsoft article:
Deploying CM-REST in a clustered environment
This section shows how we built the Windows Server Failover Cluster and moved the CM-REST server from a non-clustered environment to a failover clustered environment. CM-REST 10.9.0 was already installed on the SISCMRSTND1.project.local node, the active node in the clustered environment. In the clustered environment, CM-REST 10.9.0 was also installed on the SISCMRSTND2.project.local node, the stand-by node. To install CM-REST in a clustered environment, complete the following steps:
1. Log in to the active node as an Administrator.
2. From the cluster roles, check the current owner of the resources (Disk and IP) between the two nodes. If the active node is not the current owner, make it the current owner of the resources.
Note: In this scenario, the active node was the owner of the resources. Therefore, no action was taken.
3. If the IP address and shared disk are not already online, bring them online.
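Steps 2 and 3 can also be performed from PowerShell with the FailoverClusters module. The following is a sketch; the role name "CMRESTRole" is a placeholder, and the actual role, node, and resource names depend on your environment:

```powershell
# Requires the FailoverClusters module (available on cluster nodes or via RSAT).
Import-Module FailoverClusters

# Check which node currently owns the resource group.
# "CMRESTRole" is a placeholder; substitute your own role name.
Get-ClusterGroup -Name "CMRESTRole" | Select-Object Name, OwnerNode, State

# If the active node is not the owner, move the group to it.
Move-ClusterGroup -Name "CMRESTRole" -Node "SISCMRSTND1"

# Bring any offline resources (shared disk, IP address) online.
Get-ClusterResource | Where-Object { $_.State -ne "Online" } | Start-ClusterResource
```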
4. Stop the API service on the active node.
5. Create a shared folder in the cluster shared disk for REST API.
6. Create a folder named “db” in the shared folder that you created in step 5 for storing the REST API database.
7. Copy the following two files from C:\Program Files\hitachi\ConfManager\data\db to the db folder created in step 6.
8. Navigate to C:\Program Files\hitachi\ConfManager\bin and run the configureCluster.bat file as follows:
configureCluster.bat -set <absolute_path_shared_folder> <virtual_IP_address>
Note: The cluster management IP address used when creating the cluster is set as the virtual IP address.
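For illustration, an invocation might look like the following; the shared-folder path and virtual IP address shown here are placeholders for your own values:

```shell
REM Example only: E:\CMREST_Shared and 192.0.2.10 are placeholder values.
cd "C:\Program Files\hitachi\ConfManager\bin"
configureCluster.bat -set "E:\CMREST_Shared" 192.0.2.10
```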
9. To ensure that the path and IP address are set correctly, run the following command:
10. Copy the following environment setting files from the active node to the shared folder.
The original path in the active node is as follows:
· For the StartupV.properties file: C:\Program Files\hitachi\ConfManager\data\properties\StartupV.properties
· For the remaining files: C:\Program Files\hitachi\ConfManager\oss\rabbitmq\etc\rabbitmq
11. Navigate to C:\Program Files\hitachi\ConfManager\bin and run the following command:
Note: Acomp is an arbitrary character string chosen by the user.
12. To see the changes, start the API service on the active node.
13. To verify whether the REST API is working correctly, run an API query and obtain the API version.
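For example, the version can be queried with a GET request to the CM-REST version endpoint through the cluster's virtual IP address (192.0.2.10 below is a placeholder; 23451 is the default HTTPS port for CM-REST, but confirm the port used in your environment):

```shell
# -k skips certificate validation for a self-signed certificate (lab use only).
curl -k https://192.0.2.10:23451/ConfigurationManager/configuration/version
```

A successful response confirms that the REST API is reachable through the virtual IP address.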
14. Stop the API service again on the active node.
15. To prevent REST API from starting automatically when the OS on the active node starts, navigate to C:\Program Files\hitachi\ConfManager\bin and run the following command:
16. Move the owner of the resource group from the active node to the stand-by node using the Failover Cluster Management application.
When the active node is the owner of the resource group:
When the ownership changed from active node to stand-by node:
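The same ownership move can be scripted instead of using the Failover Cluster Management GUI; a sketch with the FailoverClusters module, again using the placeholder role name "CMRESTRole":

```powershell
Import-Module FailoverClusters

# Move the resource group from the active node to the stand-by node.
Move-ClusterGroup -Name "CMRESTRole" -Node "SISCMRSTND2"
```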
17. Log in to the stand-by node as an administrator.
18. Install CM-REST 10.9.0 on the stand-by node.
19. Stop the API service on the stand-by node.
20. Copy (overwrite) the following environment setting files from the shared folder to the stand-by node.
· Copy the following files to the following directory on the stand-by node:
· Copy the StartupV.properties file to the following location on the stand-by node:
21. Navigate to C:\Program Files\hitachi\ConfManager\bin and run the following command. Note that the character string must be the same as in step 11.
22. Start the API service on the stand-by node.
23. To obtain the API version and ensure that it is working correctly, run the following API query:
24. Stop the API service on the stand-by node.
25. To prevent REST API from starting automatically when the OS on the stand-by node starts, navigate to C:\Program Files\hitachi\ConfManager\bin and run the following command:
26. Register the built-in script to the cluster roles and add this script as a resource. Select Roles > Add Resource > Generic Script.
27. Add the script path and click Next.
28. Right-click the newly added script and select Properties from the context menu.
29. Assign a new name to this script.
30. Select the Dependencies tab and add the Cluster shared disk and Cluster management IP address as the dependencies. Click Apply and then click OK.
After adding dependencies, the role stops.
31. Make the active node the owner of the resource roles and then bring the role online.
32. Select the clusterscript resource and bring it online by clicking Bring Online.
After the script is online, the cluster roles are up and running.
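Steps 26 through 32 can also be scripted with the FailoverClusters module. The sketch below uses placeholder names throughout: the role name, the script path, and the disk and IP resource names must all be replaced with the values from your own cluster (check Get-ClusterResource for the actual resource names):

```powershell
Import-Module FailoverClusters

# Register the built-in script as a Generic Script resource in the role.
Add-ClusterResource -Name "clusterscript" -ResourceType "Generic Script" -Group "CMRESTRole"

# Point the resource at the script file (path is a placeholder).
Get-ClusterResource -Name "clusterscript" |
    Set-ClusterParameter -Name "ScriptFilepath" -Value "C:\path\to\builtin_script.vbs"

# Add the shared disk and cluster IP address as dependencies
# (resource names are placeholders).
Add-ClusterResourceDependency -Resource "clusterscript" -Provider "Cluster Disk 1"
Add-ClusterResourceDependency -Resource "clusterscript" -Provider "Cluster IP Address"

# Make the active node the owner and bring the script online.
Move-ClusterGroup -Name "CMRESTRole" -Node "SISCMRSTND1"
Start-ClusterResource -Name "clusterscript"
```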
After the clusterscript resource and the cluster role (resource group) are up and running, the API service starts automatically on the active node but not on the stand-by node.
API service status on the active node:
API service status on the stand-by node:
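The service state on both nodes can also be checked from PowerShell. The service name below is a placeholder, since the actual CM-REST service name depends on the installation; list the services on a node to find the real name:

```powershell
# "ConfManagerRestService" is a placeholder service name; find the actual
# name first, for example with:
#   Get-Service | Where-Object DisplayName -like "*Conf*"
Invoke-Command -ComputerName SISCMRSTND1, SISCMRSTND2 -ScriptBlock {
    Get-Service -Name "ConfManagerRestService" |
        Select-Object @{n = "Node"; e = { $env:COMPUTERNAME }}, Status
}
```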
You can now run your CM-REST servers in failover cluster mode.
Although this blog is based mainly on virtual machines, the steps are also valid for physical servers. The entire configuration is time-consuming but straightforward, and we hope users will find this blog helpful.