RabbitMQ(11) Mirror HA with Cluster

 
We can set up the mirroring policy from the command line with rabbitmqctl.
We can also do it from the RabbitMQ client through the management HTTP API.
We can also do it on the management UI console.
 
There are several policies.

ha-mode    ha-params     Result
all        (absent)      The queue is mirrored to all nodes in the cluster; when a node is added, the queue is mirrored to it as well.
exactly    count         The queue is mirrored to exactly this number of nodes (the master plus count-1 mirrors).
nodes      node names    The queue is mirrored to the nodes listed in the parameter.
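As a sketch, the three ha-mode values above can be applied with rabbitmqctl set_policy; the policy names, the "^ha\." queue-name pattern, and the node names here are only examples:

```shell
# Mirror every queue whose name starts with "ha." to all nodes
rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'

# Keep the queue on exactly 2 nodes (the master plus one mirror)
rabbitmqctl set_policy ha-two "^ha\." '{"ha-mode":"exactly","ha-params":2}'

# Mirror only to the listed nodes
rabbitmqctl set_policy ha-nodes "^ha\." '{"ha-mode":"nodes","ha-params":["rabbit1@sparkworker1","rabbit2@sparkworker1"]}'
```

Each set_policy call takes a policy name, a regular expression matched against queue names, and a JSON definition; these commands need a running broker, so they are shown here only as a reference.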
 
Let's try to implement it in Scala.
 
Lyra, a High Availability RabbitMQ Client
It helps us achieve high availability in our services by embracing failure, allowing AMQP resources such as connections, channels, and consumers to be recovered automatically when server or network failures occur.
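A minimal sketch of wiring up Lyra in Scala might look like the following (it assumes the net.jodah.lyra artifact on the classpath and a broker on localhost:5672; the recovery and retry choices are only examples):

```scala
import com.rabbitmq.client.{Channel, Connection}
import net.jodah.lyra.{ConnectionOptions, Connections}
import net.jodah.lyra.config.{Config, RecoveryPolicies, RetryPolicies}

object LyraExample extends App {
  // Recover connections, channels and consumers after any failure,
  // and retry failed invocations until they succeed
  val config = new Config()
    .withRecoveryPolicy(RecoveryPolicies.recoverAlways())
    .withRetryPolicy(RetryPolicies.retryAlways())

  val options = new ConnectionOptions().withHost("localhost").withPort(5672)

  // The returned Connection transparently recovers after broker or network failures
  val connection: Connection = Connections.create(options, config)
  val channel: Channel = connection.createChannel()
}
```

With this in place, a node going down in the mirrored cluster causes Lyra to re-establish the connection and its channels instead of surfacing the failure to our service code.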
 
 
msgpack 
 
or 
 
msgpack4s
 
 
Some Examples
Project Configuration
For the RabbitMQ load-balancing and cluster solution, I read some documents. Actually, there are several levels.
1. Cluster
We can have a cluster of nodes in which the routing and exchange metadata is duplicated on every node,
so the client can publish to any node. But the persistent queue contents are stored on only one node, so if
that node is down, we need to wait until it recovers.

2. HA based on Cluster
If we have configured mirrors in the cluster, the persistent queue contents are replicated to all the nodes.
We lose some performance here, but the good thing is that when one node is down, we can keep serving from
the other nodes.

3. Sharding based on Cluster
From the document, it is said, we will not just mirror all the queue information to all the nodes, the more nodes
we have, we will lost more performance.
Instead of simple mirror, we will sharding different data into different queues in different nodes.

I did not read more about that, since we are using mirroring in production. I will begin from there.

Set up RabbitMQ Cluster
details here
http://sillycat.iteye.com/blog/2066116

>chmod 400 ~/.erlang.cookie
>sudo chown -R carl ~/.erlang.cookie
>sudo chgrp -R staff ~/.erlang.cookie

Enable the management plugin
>sudo rabbitmq-plugins enable rabbitmq_management

Start the first node as rabbit1 on the local Mac
>sudo RABBITMQ_NODE_PORT=5672 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15672}]" RABBITMQ_NODENAME=rabbit1 sbin/rabbitmq-server -detached

Start the second node
>sudo RABBITMQ_NODE_PORT=5673 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}]" RABBITMQ_NODENAME=rabbit2 sbin/rabbitmq-server -detached

Check the status of these 2 nodes to make sure they are running fine separately.
>sbin/rabbitmqctl -n rabbit2 cluster_status
>sbin/rabbitmqctl -n rabbit1 cluster_status

Stop the app on the second node, then make it join the first node as a cluster
>sudo sbin/rabbitmqctl -n rabbit2 stop_app
>sbin/rabbitmqctl -n rabbit2 join_cluster rabbit1@sparkworker1
>sudo sbin/rabbitmqctl -n rabbit2 start_app

Check the status again; it will show a running cluster.
 
In the basic class, we set the policy through the management HTTP API:

import java.net.{HttpURLConnection, URL}
import javax.xml.bind.DatatypeConverter

def setPolicy(): Unit = {
  // Basic-auth header for the management HTTP API
  val auth = "Basic " + DatatypeConverter.printBase64Binary((username + ":" + password).getBytes)
  // rabbitAdmin holds a comma-separated list of management hosts; PUT the
  // policy named "high-availability" on the default vhost ("%2f") of each one
  rabbitAdmin.split(",").map(h => new URL("http://" + h.trim + "/api/policies/%2f/high-availability")).
    foreach { url =>
      logger.debug("setting policy for " + url)
      val conn = url.openConnection().asInstanceOf[HttpURLConnection]
      conn.setDoOutput(true)
      conn.setRequestMethod("PUT")
      List(
        "Content-Type" -> "application/json",
        "Content-Length" -> haDefinition.length.toString,
        "Authorization" -> auth
      ).foreach { case (name, value) => conn.setRequestProperty(name, value) }
      val os = conn.getOutputStream
      os.write(haDefinition.getBytes)
      os.close()
      logger.debug("response " + conn.getResponseCode + " " + conn.getResponseMessage)
    }
}
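The haDefinition string above is the JSON body expected by the management API's /api/policies endpoint. A minimal sketch of such a body (the empty pattern, priority, and ha-sync-mode here are just example choices):

```json
{
  "pattern": "",
  "definition": { "ha-mode": "all", "ha-sync-mode": "automatic" },
  "priority": 0,
  "apply-to": "queues"
}
```

With "ha-sync-mode": "automatic", a newly added mirror synchronizes the existing messages by itself instead of waiting for an explicit sync command.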
 
 
 
 
 

Reposted from sillycat.iteye.com/blog/2183555