GCP - Running Windows Server Failover Clustering Step by Step - Part 2

This is a hands-on guide to building Windows Server Failover Clustering and achieving high availability (HA) for an IIS web application on Google Cloud Platform. This part walks through configuring the failover cluster and routing IIS requests within GCP.

The architecture in this article follows Google's official document, Running Windows Server Failover Clustering. It mainly expands the overall procedure into a detailed, step-by-step walkthrough, and the order of the original has been adjusted slightly so readers can follow the whole tutorial more easily.

Part 1 covered building the hosts on GCP and joining them to AD for management; here we continue with the complete installation and configuration of the failover cluster.

Setting up failover clustering

Perform the following actions on both wsfc-1 and wsfc-2:

  • Log in to the host remotely.
  • Install Failover Clustering using the WSFC.TEST\clusteruser account, otherwise the installation will not complete. Note: do not install this feature on the wsfc-dc host.
  • wsfc-1 and wsfc-2 are the nodes within the cluster.
  • The validation step confirms that the current settings are correct; by checking Create the cluster now using the validated nodes, the cluster is created immediately after validation completes.
  • In the Create Cluster Wizard, on the Access Point for Administering the Cluster page, set the cluster name to testcluster.
  • Set the address to 10.0.0.8.
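The same steps can also be scripted. A minimal PowerShell sketch, run as WSFC.TEST\clusteruser on one of the nodes, using the standard FailoverClusters cmdlets with the names and addresses from this walkthrough:

```powershell
# Install the Failover Clustering feature locally (wsfc-1) and on wsfc-2.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Install-WindowsFeature Failover-Clustering -IncludeManagementTools -ComputerName wsfc-2

# Validate the candidate nodes before creating the cluster.
Test-Cluster -Node wsfc-1, wsfc-2

# Create the cluster "testcluster" with the access point 10.0.0.8.
New-Cluster -Name testcluster -Node wsfc-1, wsfc-2 -StaticAddress 10.0.0.8 -NoStorage
```

This mirrors the wizard steps described above; `-NoStorage` is passed because this setup uses no shared cluster disks.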

First, complete the Failover Clustering installation:

Image 061.jpg

Then start adding the nodes:

Image 062.jpg

Image 063.jpg

Image 064.jpg

Image 065.jpg

Image 066.jpg

Image 068.jpg

Image 069.jpg

After completion, you can see the two newly added nodes and their current voting configuration (Assigned Vote and Current Vote):

Image 067.jpg

For detailed steps, refer to Microsoft's official documentation, Create a Failover Cluster (section: Validate the configuration).

Creating the file share witness

Now that both hosts have been added as nodes, the cluster needs a fair voting mechanism to decide which node takes over the primary workload; for this we need to establish a quorum to act as the arbiter.

This is easily done with a shared folder on AD: it lets the cluster determine which host is currently offline and whether the other one needs to take over its work. On GCP, the file share witness is kept reliable through Live Migration and automatic restart.

Here we can set up a file share witness through the following steps; it governs how roles are allocated within the cluster:

Create the file share path

  1. Remote into wsfc-dc.
  2. Create a folder for file sharing on the host at C:\shares, right-click it, and choose to share it.
  3. After sharing succeeds, create a folder named clusterwitness-testcluster inside it.
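The same share can be created from PowerShell on wsfc-dc. A sketch; granting Everyone change access is a lab-only simplification (in production, grant access to the cluster computer account instead):

```powershell
# On wsfc-dc: create the witness folder.
New-Item -ItemType Directory -Path C:\shares\clusterwitness-testcluster -Force

# Share it so the cluster can reach \\10.0.0.6\clusterwitness-testcluster.
# Lab-only permission: Everyone has change access.
New-SmbShare -Name clusterwitness-testcluster `
    -Path C:\shares\clusterwitness-testcluster `
    -ChangeAccess Everyone
```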

Image 072.jpg

Image 073.jpg

Image 074.jpg

Image 075.jpg

Image 071.jpg

Adding the file share witness to the failover cluster configuration

  1. On cluster node wsfc-1 or wsfc-2, open Failover Cluster Manager.
  2. In the left pane, right-click the current cluster (testcluster.WSFC.TEST), then click More Actions > Configure Cluster Quorum Settings.
  3. In the setup wizard, press Next step by step to confirm.
  4. On the quorum configuration page, choose Select the quorum witness, then select Configure a file share witness.
  5. For the file share path, enter the share just created (\\10.0.0.6\clusterwitness-testcluster). Here 10.0.0.6 is the IP configured for the wsfc-dc VM.
  6. Press OK to complete the configuration.
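The wizard steps above correspond to a single FailoverClusters cmdlet; a sketch to run on either cluster node:

```powershell
# Point the cluster quorum at the file share witness on wsfc-dc.
Set-ClusterQuorum -FileShareWitness "\\10.0.0.6\clusterwitness-testcluster"

# Inspect the resulting quorum configuration.
Get-ClusterQuorum
```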

Image 076.jpg

Image 077.jpg

Image 078.jpg

Image 079.jpg

Image 080.jpg

Image 081.jpg

Image 082.jpg

Image 083.jpg

Testing the failover cluster

At this point the clustering configuration has been completed successfully; here we can manually test whether it takes effect.

  1. On wsfc-1 or wsfc-2, run Windows PowerShell as clusteruser.
  2. Enter the following PowerShell command:

    Move-ClusterGroup -Name "Cluster Group"
    

If you see the following screen, the configuration succeeded:

Image 085.jpg

Image 084.jpg
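Beyond the screenshots, you can confirm which node now owns the core cluster group from PowerShell (a sketch using the FailoverClusters module):

```powershell
# Show the current owner and state of the core cluster group.
Get-ClusterGroup -Name "Cluster Group" | Select-Object Name, OwnerNode, State
```

Running `Move-ClusterGroup` again and re-checking should show the OwnerNode alternating between wsfc-1 and wsfc-2.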

Adding a role for IIS

Next, add a new role to the cluster; this role will be responsible for the IIS workload:

  1. In Failover Cluster Manager's Actions pane, select Configure Role.
  2. Select Other Server on the Select Role page.
  3. Enter "IIS" on the Client Access Point page.
  4. Set the IP address that will receive incoming traffic to 10.0.0.9.
  5. Skip Select Storage and Select Resource Types.
  6. Confirm the current configuration and complete the creation.
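The Configure Role wizard steps above map onto one cmdlet; a sketch to run on a cluster node:

```powershell
# Create an "Other Server" style role named IIS with client access
# point 10.0.0.9 (no storage, no extra resource types).
Add-ClusterServerRole -Name IIS -StaticAddress 10.0.0.9
```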

Image 086.jpg

Image 087.jpg

Image 088.jpg

Image 089.jpg

Creating the internal load balancer

Next we have to go back to Google Cloud Platform and create an internal Load Balancer that uses the just-configured 10.0.0.9 as the entry point for distributing requests to the machines behind it. The configuration is divided into the frontend, which is responsible for handling incoming requests, and the backend, the machines that actually handle them; remember that both must be configured before the whole setup takes effect:

  1. Go to the Load balancing page in the GCP Console.
  2. Choose to create a new Load Balancer.
  3. Select the TCP Load Balancing card, select Only between my VMs, and set the name to wsfc-lb.

Image 090.jpg

Image 091.jpg

Image 092.jpg

Do not press Create yet; we need to switch to the Backend area to set its related values.

Configuring the backend

  • Click Backend configuration.
  • Select the Region.
  • Select wsfcnet.
  • Select wsfc-group and create a new health check.
  • Enter the name wsfc-hc.
  • Set the port to 59998, the default port on which the cluster host agent responds.
  • For Request, enter 10.0.0.9.
  • For Response, enter 1.
  • For Check interval, enter 2.
  • For Timeout, enter 1.
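The console steps above correspond roughly to these gcloud commands (a sketch; `[REGION]` and `[ZONE]` are placeholders, and flag details may vary by gcloud version):

```shell
# Health check on the cluster host agent port; the request/response
# strings make only the node that owns 10.0.0.9 answer "1".
gcloud compute health-checks create tcp wsfc-hc \
    --port 59998 --request "10.0.0.9" --response "1" \
    --check-interval 2s --timeout 1s

# Internal backend service wired to the health check and instance group.
gcloud compute backend-services create wsfc-lb \
    --load-balancing-scheme internal --protocol TCP \
    --region [REGION] --health-checks wsfc-hc
gcloud compute backend-services add-backend wsfc-lb \
    --instance-group wsfc-group --instance-group-zone [ZONE] \
    --region [REGION]
```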

Image 093.jpg

Image 094.jpg

Image 095.jpg

After the above settings, continue with the frontend configuration.

Configuring the frontend

  • Click Frontend configuration.
  • For Name, enter wsfc-lb-fe.
  • Select the subnetwork (wsfcnetsub1).
  • For IP, select Ephemeral (Custom) and enter 10.0.0.9.
  • For Ports, enter 80.
  • Click Finish.
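The frontend created in the console is equivalent to an internal forwarding rule; a gcloud sketch with `[REGION]` as a placeholder:

```shell
# Internal forwarding rule: 10.0.0.9:80 on wsfcnetsub1, sending
# traffic to the wsfc-lb backend service.
gcloud compute forwarding-rules create wsfc-lb-fe \
    --load-balancing-scheme internal \
    --network wsfcnet --subnet wsfcnetsub1 \
    --address 10.0.0.9 --ports 80 \
    --region [REGION] --backend-service wsfc-lb
```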

Image 096.jpg

Reviewing the entire setup and completing the configuration

Before finishing, confirm that the current configuration is correct:

Image 097.jpg

Create firewall rules for the health check

Note that the hosts in the backend configuration must open the corresponding port in the OS firewall, or the health check cannot take effect. For the current configuration this means:

The cluster nodes (wsfc-1 and wsfc-2) must allow inbound TCP connections on port 59998.
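The Windows firewall rule shown in the screenshots below can also be created from PowerShell on each node (a sketch; the rule name is an arbitrary choice):

```powershell
# On wsfc-1 and wsfc-2: let the load balancer's health checks reach
# the cluster host agent on TCP 59998.
New-NetFirewallRule -DisplayName "Allow WSFC health check" `
    -Direction Inbound -Protocol TCP -LocalPort 59998 -Action Allow
```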

Image 098.jpg

Image 099.jpg

Image 100.jpg

Image 101.jpg

Image 102.jpg

We can also create a GCP firewall rule through Cloud Shell so that the Load Balancer's health check mechanism can reach the nodes and confirm the service is operating normally:

    gcloud compute firewall-rules create allow-health-check --network wsfcnet --source-ranges 130.211.0.0/22,35.191.0.0/16 --allow tcp:59998

Image 103.jpg

Validating the load balancer

The cluster health checks are now in place; in the GCP Load Balancing page we can see that only one cluster node is healthy at a time:

Image 104.jpg

If you want to test it, go to Failover Cluster Manager, right-click the IIS role, select Move, then click Best Possible Node, and you can watch the Owner Node switch:

Image 105.jpg

Image 106.jpg

Image 107.jpg

This test simulates what happens when the node currently serving requests runs into a problem and the cluster has to decide which other node takes over. You can see that the site is quickly failed over from the problem node to a healthy node, which takes over handling the requests.

We can also confirm this through Cloud Shell with the following command:

    gcloud compute backend-services get-health wsfc-lb --region=[REGION]

Image 108.jpg

Then we only need to install IIS on the two cluster nodes and launch the corresponding web application.

Installing your application

Here we can quickly add the IIS role service from the Add Roles and Features Wizard:

Image 109.jpg

Image 110.jpg
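The same installation can be done from PowerShell on each node; a sketch:

```powershell
# On wsfc-1 and wsfc-2: install IIS with its management tools.
Install-WindowsFeature Web-Server -IncludeManagementTools
```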

Here we create a simple web page that displays the host's IP on the page:

Image 111.jpg
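The original post does not show the page's source, but a hypothetical stand-in that shows the node name and IP might look like this (the `10.0.0.*` filter matches this walkthrough's subnet and is an assumption):

```powershell
# Write a trivial page identifying which cluster node served the request.
$ip = (Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.IPAddress -like '10.0.0.*' }).IPAddress
$html = "<html><body><h1>$env:COMPUTERNAME ($ip)</h1></body></html>"
Set-Content -Path C:\inetpub\wwwroot\index.html -Value $html
```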

Checking from the wsfc-1 machine itself:

Image 112.jpg

With the same steps, we also install it on wsfc-2 and check from that machine:

Image 113.jpg

And when the configuration is done, browsing from wsfc-dc to 10.0.0.9 (the internal Load Balancer address on our network), the page that appears is the one from the current owner cluster node, wsfc-2:

Image 114.jpg

Image 115.jpg

With this, our setup tutorial is completely finished.

Costs

Let's take a look at the costs:

Image 116.jpg

Image 117.jpg

The POC ran for a total of five days (5/27-5/31), costing around $10 a day. Most of the cost went to the expensive Windows VMs...

Finally, remember to delete the resources created for this POC, to avoid Compute Engine continuing to charge your GCP account... For how to remove them, refer to the Cleaning up steps in the official document.

Epilogue

The result of this whole walkthrough looks quite simple, but it is a big help for reusing cloud services internally. In the past, building this kind of failover cluster was quite difficult and complex; now, through GCP and the setup wizards in the Windows UI, we can complete the relevant settings very quickly and provide better high availability for web applications.

And this mechanism is not only for internal networks; public-facing sites can also use it to build clusters with better availability!


Origin www.cnblogs.com/dajunjun/p/11698809.html