Mesos: Service Discovery and Load Balancing

Mesos: Service Discovery & Load Balancing

This chapter discusses Mesos solutions for service discovery and application load balancing. Keep in mind that Mesos has a two-tier architecture in which Marathon plays a role comparable to systemd: the service discovery function only needs to be provided for Marathon, while frameworks started by Marathon, such as Kubernetes and Cloud Foundry, use their own service discovery mechanisms.

1. Service Discovery with Marathon-Bridge and HAProxy

Service discovery exists to provide convenient network communication for the services in the DCOS system; its focus is on registering, updating, and querying services. There are several ways to implement service discovery, chiefly DNS-based discovery, centralized service routing, and application-level service registration/discovery. DCOS adopts the Mesos-DNS strategy; for a detailed introduction to Mesos-DNS, please refer to the previous article.
On top of the Mesos-DNS service discovery strategy, a helper script can use the Marathon REST API to periodically (via the Linux crond service) generate an HAProxy configuration file, then diff the generated file against the existing HAProxy configuration to decide whether an HAProxy reload is needed.

By default, each mesos-slave offers the port range 31000-32000 as resources to the master. When Marathon starts a task instance, the mesos-slave it lands on binds one or more arbitrary ports within that range. Note the difference between the actual port the application is bound to (the port assigned by the mesos-slave) and the formal port specified in the application's configuration (the application port that will later be accessed directly). The formal (application) port is a namespace for the application within Marathon; it is not bound directly, which means other services can also declare such a port. It is used indirectly, by the load balancer.

The service discovery function allows applications on Marathon to communicate with other Marathon applications through their configured ports, without having to know the ports that were actually assigned. For example, if a Python WSGI service (configured with port 80) needs to talk to MySQL (configured with port 327), it can simply communicate with localhost:327.
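
For illustration, a minimal Marathon app definition declaring such a port might look like the sketch below (the app id and command are made up; in older Marathon app definitions the top-level ports field declares the application port, while Mesos still binds the task to a random host port in the 31000-32000 range):

{
  "id": "mysql",
  "cmd": "mysqld",
  "cpus": 1,
  "mem": 512,
  "instances": 1,
  "ports": [327]
}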

HAProxy routes each request to a concrete service node; if that node is unreachable, it routes the request on to the next node. Note that the service discovery function currently only supports applications running on Marathon.

Using HAProxy

Marathon ships with a simple shell script called haproxy-marathon-bridge, as well as a more advanced Python script, servicerouter.py (both live under marathon/bin). Both scripts can push the list of running tasks from Marathon's REST API into an HAProxy configuration file; HAProxy is a lightweight TCP/HTTP proxy. haproxy-marathon-bridge provides a minimal feature set, while servicerouter.py supports more advanced features such as SSL offloading, sticky connections, and virtual-host load balancing.

The load-balancing mechanism is the one described above: a helper script (here haproxy-marathon-bridge) uses the Marathon REST API to periodically (via the Linux crond service) generate an HAProxy configuration file, then diffs the generated file against the existing one to decide whether to reload HAProxy.
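
A minimal sketch of that regenerate/diff/reload cycle, as it might be run from cron (the paths and the Marathon address are assumptions taken from the examples below):

#!/bin/sh
# Regenerate the HAProxy config from Marathon, then reload only on change.
NEW_CFG=/tmp/haproxy.cfg.new
CUR_CFG=/etc/haproxy/haproxy.cfg

haproxy-marathon-bridge leader.mesos:8080 > "$NEW_CFG"

# Reload only when the generated configuration actually differs.
if ! diff -q "$NEW_CFG" "$CUR_CFG" >/dev/null 2>&1; then
    cp "$NEW_CFG" "$CUR_CFG"
    # -sf tells the new HAProxy process to let the old one finish existing connections gracefully.
    haproxy -f "$CUR_CFG" -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
fi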

The figure below shows the same service installed on two nodes of a cluster as SVC1 and SVC2; the configured application ports are 1111 and 2222, and you can see that the ports actually assigned to them are 31100 and 31200.

Existing cluster applications

When the SVC2 service on node slave2 connects to the SVC1 service through localhost:2222, HAProxy forwards the request to the slave1 node, the first configured entry for SVC1.

HAProxy request forwarding

If node slave1 goes down, the next request to localhost:2222 is forwarded to slave2.

HAProxy forwarding
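
A sketch of the HAProxy section the bridge script would generate for application port 2222 in this example (the section name and balance algorithm are assumptions; the host ports come from the figures above):

listen svc-2222
  bind 0.0.0.0:2222
  mode tcp
  option tcplog
  balance leastconn
  server svc-1 slave1:31100 check
  server svc-2 slave2:31200 check

Because each server line carries the check option, HAProxy's health checks stop sending traffic to a node that disappears, which is the failover behavior shown above.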

Bridging HAProxy and Marathon

To generate an HAProxy configuration from the Marathon instance running at leader.mesos:8080, run the haproxy-marathon-bridge script:

$ ./bin/haproxy-marathon-bridge leader.mesos:8080 > /etc/haproxy/haproxy.cfg

Reload the HAProxy configuration without interrupting existing connections:

$ haproxy -f haproxy.cfg -p haproxy.pid -sf $(cat haproxy.pid)

Configuration generation and reloading can be triggered frequently by cron to track topology changes. If a node disappears between reloads, HAProxy's health checks will catch it and stop sending traffic to that node.

To make this setup convenient, the haproxy-marathon-bridge script can alternatively be invoked to install itself, HAProxy, and a cron job that pings Marathon once a minute and refreshes HAProxy immediately if anything has changed:

$ ./bin/haproxy-marathon-bridge install_haproxy_system leader.mesos:8080

  • The list of Marathon servers to ping is stored one per line in /etc/haproxy-marathon-bridge/marathons (see the example after this list)
  • The script itself is installed at /usr/local/bin/haproxy-marathon-bridge
  • The cron job is installed at /etc/cron.d/haproxy-marathon-bridge; note that it needs to run as root
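
For the setup above, /etc/haproxy-marathon-bridge/marathons would contain a single line:

leader.mesos:8080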

What is provided here is only a basic example script.

servicerouter.py

To generate an HAProxy configuration from the Marathon instance running at leader.mesos:8080 with the servicerouter.py script:

$ ./bin/servicerouter.py --marathon http://leader.mesos:8080 --haproxy-config /etc/haproxy/haproxy.cfg

If anything has changed, haproxy.cfg is refreshed and HAProxy reloads automatically.
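
As with the bridge script, the call above can be placed in a cron entry to track topology changes; a minimal /etc/cron.d sketch (the one-minute interval and script location are assumptions):

* * * * * root /path/to/marathon/bin/servicerouter.py --marathon http://leader.mesos:8080 --haproxy-config /etc/haproxy/haproxy.cfg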

servicerouter.py has many additional features, such as sticky sessions, HTTP-to-HTTPS redirection, SSL offloading, virtual-host (VHost) support, and templating.

2. Service Discovery with Bamboo and HAProxy

Scenario: you have deployed a series of microservices on a Mesos cluster, and these services serve external traffic or communicate internally over HTTP through specific URLs.

  • Applications (services) are started on the Mesos cluster through the Marathon framework, and Marathon tracks their state through health checks
  • Bamboo listens for Marathon events and updates the HAProxy configuration file accordingly
  • HAProxy ACL rules are configured through Bamboo and match requests to application services by characteristics such as URL patterns, hostname, and HTTP headers (see the sketch after this list)
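
A sketch of the kind of ACL rule Bamboo feeds into HAProxy (the hostname comes from the example later in this section; the ACL and backend names are assumptions):

# Route requests whose Host header is ghost.local to the ghost-0 backend.
acl host_ghost hdr(host) -i ghost.local
use_backend ghost-0 if host_ghost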

Processing flow

Bamboo's processing flow achieves the same effect as the scheme described above.

Advantages:
1. Arbitrary URLs can be mapped to services
2. Services can be matched through HTTP headers
3. Marathon events promptly trigger HAProxy changes
4. HAProxy does the heavy lifting
Drawbacks:
1. Not applicable to non-HTTP traffic
2. An internal HAProxy failover mechanism is needed, unless services can implement a SmartStack-style architecture
3. All internal traffic takes an extra hop (HAProxy)

Implementation

1. Install HAProxy and Bamboo

HAProxy

HAProxy can be installed as follows:
apt-get install haproxy

Bamboo

From the Bamboo project repository, you can build deb or rpm packages with the provided build scripts, or build the deb package via a build container:

docker build -f Dockerfile-deb -t bamboo-build .
docker run -it -v $(pwd)/output:/output bamboo-build
# package ends up as output/bamboo_1.0.0-1_all.deb

Note that you need to edit /var/bamboo/production.json to set the hostnames for Marathon, HAProxy, and Zookeeper, and then restart Bamboo with restart bamboo-server.
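
A sketch of the relevant fields in production.json (the hostnames are placeholders, and the exact schema may differ between Bamboo versions, so check the README of the version you installed):

{
  "Marathon": {
    "Endpoint": "http://marathon.mesos:8080"
  },
  "Bamboo": {
    "Endpoint": "http://haproxy:8000",
    "Zookeeper": {
      "Host": "zk.mesos:2181",
      "Path": "/marathon-haproxy/state"
    }
  },
  "HAProxy": {
    "TemplatePath": "/var/bamboo/haproxy_template.cfg",
    "OutputPath": "/etc/haproxy/haproxy.cfg",
    "ReloadCommand": "haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)"
  }
}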

2. Deploy an application on Marathon

Edit a ghost.json file and fill in the following configuration:

{"id":"ghost-0","container":{"type":"DOCKER","docker":{"image":"ghost","network":"BRIDGE","portMappings":[{"containerPort":2368}]}},"env":{},"instances":1,"cpus":0.5,"mem":256,"healthChecks":[{"path":"/"}]}

Then deploy the application with curl -X POST -H "Content-Type: application/json" http://marathon.mesos:8080/v2/apps -d@ghost.json, and it will show up in the Marathon UI:

Marathon UI

3. Configure rules in Bamboo

You can define rules that tell HAProxy how to proxy requests:

rules

First, add a line to /etc/hosts so that the Host header can be matched:

# IP of HAProxy
192.168.99.100 ghost.local

Visit the Bamboo UI, normally at http://haproxy:8000, and add an entry for the app named ghost-0, as shown below:

Bamboo

Check whether it was added successfully:


OK! You can now visit http://ghost.local/



 

 
