Simple Deployment of SaltStack, an Automation and Ops Tool

SaltStack overview

What is SaltStack?

• SaltStack is a C/S (client/server) configuration management tool built on Python

• Authentication is managed through signed SSL certificates

• The transport layer uses ZeroMQ message queues in pub/sub mode

– ZeroMQ, billed as the world's fastest message queue, lets SaltStack run operations on thousands of hosts quickly

– RSA keys are used to confirm identity

Main features

• SaltStack's two core functions are configuration management and remote execution

• SaltStack is more than a configuration management tool; it is also a powerful orchestrator for cloud and data-center architectures

• SaltStack already ships Docker-related modules

• With good support for the major cloud platforms, SaltStack's Mine real-time discovery feature enables automatic scaling of cloud workloads

SaltStack architecture

• SaltStack uses a C/S architecture

– the server side is called the Master

– the client side is called the Minion

• It supports the traditional model: a client sends a request, the server processes it and returns the result

• It can also use the publish/subscribe (pub/sub) pattern of a message queue

How SaltStack works

• Both the Master and Minions run as daemons

• The Master listens on the ports defined in its config file: ret_port (receives minion requests) and publish_port (publishes messages)

• When a Minion starts, it automatically connects to the Master address and ret_port defined in its config file and authenticates

• Once the Master and Minions can communicate, all kinds of configuration management work can begin
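The pub/sub pattern these bullets describe can be sketched as a toy in Python (an illustration of the messaging pattern only, not SaltStack's or ZeroMQ's actual API): the master publishes one job, and every subscribed minion receives its own copy.

```python
class Bus:
    """Toy publish/subscribe bus: every subscriber receives every message."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, name):
        inbox = []                       # each minion gets its own inbox
        self.subscribers.append((name, inbox))
        return inbox

    def publish(self, msg):
        for _, inbox in self.subscribers:
            inbox.append(msg)            # one publish fans out to all

bus = Bus()
inbox_a = bus.subscribe("minion-a")
inbox_b = bus.subscribe("minion-b")
bus.publish("run: test.ping")
```

This fan-out is why one command on the master can reach many minions at once, rather than the master contacting each host in turn.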

Environment setup

1. Prepare the following packages and place the whole directory under the physical host's default Apache document root (not every package will be used; copying the whole directory over is just convenient):

libyaml-0.1.3-4.el6.x86_64.rpm
python-babel-0.9.4-5.1.el6.noarch.rpm
python-backports-1.0-5.el6.x86_64.rpm
python-backports-ssl_match_hostname-3.4.0.2-2.el6.noarch.rpm
python-chardet-2.2.1-1.el6.noarch.rpm
python-cherrypy-3.2.2-4.el6.noarch.rpm
python-crypto-2.6.1-3.el6.x86_64.rpm
python-crypto-debuginfo-2.6.1-3.el6.x86_64.rpm
python-enum34-1.0-4.el6.noarch.rpm
python-futures-3.0.3-1.el6.noarch.rpm
python-impacket-0.9.14-1.el6.noarch.rpm
python-jinja2-2.8.1-1.el6.noarch.rpm
python-msgpack-0.4.6-1.el6.x86_64.rpm
python-ordereddict-1.1-2.el6.noarch.rpm
python-requests-2.6.0-3.el6.noarch.rpm
python-setproctitle-1.1.7-2.el6.x86_64.rpm
python-six-1.9.0-2.el6.noarch.rpm
python-tornado-4.2.1-1.el6.x86_64.rpm
python-urllib3-1.10.2-1.el6.noarch.rpm
python-zmq-14.5.0-2.el6.x86_64.rpm
PyYAML-3.11-1.el6.x86_64.rpm
repodata
salt-2016.11.3-1.el6.noarch.rpm
salt-api-2016.11.3-1.el6.noarch.rpm
salt-cloud-2016.11.3-1.el6.noarch.rpm
salt-master-2016.11.3-1.el6.noarch.rpm
salt-minion-2016.11.3-1.el6.noarch.rpm
salt-ssh-2016.11.3-1.el6.noarch.rpm
salt-syndic-2016.11.3-1.el6.noarch.rpm
zeromq-4.0.5-4.el6.x86_64.rpm

Apache access test (screenshot omitted).

2. On the virtual machine server3, add a yum repository that includes these packages:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.17.250/source6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[saltstack]
name=saltstack
baseurl=http://172.25.17.250/rhel6
gpgcheck=0

Refresh with yum repolist; the 29 new packages show up:

[root@server3 yum.repos.d]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel-source                                              | 3.9 kB     00:00     
saltstack                                                | 2.9 kB     00:00     
saltstack/primary_db                                     |  16 kB     00:00     
repo id          repo name                                                status
rhel-source      Red Hat Enterprise Linux 6Server - x86_64 - Source       3,690
saltstack        saltstack                                                   29  ### the 29 packages here
repolist: 3,719

Use yum list salt* to view the salt packages:

[root@server3 yum.repos.d]# yum list salt*
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Available Packages
salt.noarch                          2016.11.3-1.el6                   saltstack
salt-api.noarch                      2016.11.3-1.el6                   saltstack
salt-cloud.noarch                    2016.11.3-1.el6                   saltstack
salt-master.noarch                   2016.11.3-1.el6                   saltstack
salt-minion.noarch                   2016.11.3-1.el6                   saltstack
salt-ssh.noarch                      2016.11.3-1.el6                   saltstack
salt-syndic.noarch                   2016.11.3-1.el6                   saltstack

Configure the same yum repository on server4.

3. Use server3 as the master: install salt-master and start the service:

[root@server3 yum.repos.d]# yum install salt-master -y
[root@server3 yum.repos.d]# /etc/init.d/salt-master start
Starting salt-master daemon:                               [  OK  ]

Use server4 as the minion: install salt-minion and edit the minion config file:

[root@server4 ~]# yum install salt-minion -y
[root@server4 salt]# vim /etc/salt/minion

Changes to the config file on server4:
At line 17, set the master to 172.25.17.3 (server3). Note that the file format is strict: the colon must be followed by a space before the value will be parsed.

15 # resolved, then the minion will fail to start.
16 #master: salt
17 master: 172.25.17.3    # set the master to 172.25.17.3
18 

Then start the service on server4:

[root@server4 salt]# /etc/init.d/salt-minion start
Starting salt-minion:root:server4 daemon: OK

4. On the master, manage the keys and grant server4 access:
salt-key -L lists key status; server4 is not yet accepted:

[root@server3 yum.repos.d]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
server4
Rejected Keys:

salt-key -A accepts all minion keys; salt-key -a server4 accepts a single host:

[root@server3 yum.repos.d]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
server4
Proceed? [n/Y] y
Key for minion server4 accepted.

With the key accepted, server3 can operate on server4 over the persistent connection:

[root@server3 yum.repos.d]# salt server4 test.ping   # check that server4 responds
server4:
    True
[root@server3 yum.repos.d]# salt server4 cmd.run 'df -h'   # run df on server4
server4:
    Filesystem                    Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup-lv_root   19G  971M   17G   6% /
    tmpfs                         499M   16K  499M   1% /dev/shm
    /dev/vda1                     485M   33M  427M   8% /boot

Key storage and ports on the master and minion

1. Keys:
The master's key pair lives in /etc/salt/pki/master/; the public key is master.pub, and its md5 checksum is 9499c73e8f719cc852cf92a376eb19a2:

[root@server3 httpd]# cd /etc/salt/pki/master/
[root@server3 master]# ls
master.pem  minions           minions_denied  minions_rejected
master.pub  minions_autosign  minions_pre
[root@server3 master]# md5sum master.pub 
9499c73e8f719cc852cf92a376eb19a2  master.pub

Once the public key has been sent to the minion, it is stored on the minion as /etc/salt/pki/minion/minion_master.pub; its md5 checksum matches the master's.

[root@server4 salt]# cd /etc/salt/pki/minion/
[root@server4 minion]# ls
minion_master.pub  minion.pem  minion.pub
[root@server4 minion]# md5sum minion_master.pub 
9499c73e8f719cc852cf92a376eb19a2  minion_master.pub
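The md5sum comparison above can be reproduced in Python with the standard hashlib module (a sketch; the temp-file demo below hashes made-up content, not the real key):

```python
import hashlib
import tempfile

def md5_of(path):
    """md5 checksum of a file, read in chunks (same digest md5sum prints)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# demo on a throwaway file; comparing md5_of() of master.pub and
# minion_master.pub the same way shows the minion holds the master's key
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
demo_digest = md5_of(f.name)
```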

2. Ports:
The master opens two ports, 4505 and 4506: 4505 carries the persistent publish connection, and 4506 handles requests and returns results.

[root@server3 minions]# netstat -antlp |grep salt
tcp        0      0 0.0.0.0:4505                0.0.0.0:*                   LISTEN      6029/salt-master -d 
tcp        0      0 0.0.0.0:4506                0.0.0.0:*                   LISTEN      6036/salt-master -d 
tcp        0      0 172.25.17.3:4505            172.25.17.4:51921           ESTABLISHED 6029/salt-master -d 

The lsof command shows that port 4505 holds the persistent connection from server3 to server4:

[root@server3 minions]# lsof -i :4505
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
/usr/bin/ 6029 root   16u  IPv4  24138      0t0  TCP *:4505 (LISTEN)
/usr/bin/ 6029 root   18u  IPv4  24883      0t0  TCP server3:4505->server4:51921 (ESTABLISHED)
[root@server3 minions]# lsof -i :4506
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
/usr/bin/ 6036 root   24u  IPv4  24149      0t0  TCP *:4506 (LISTEN)

The same is visible from the minion side:
the minion opens no listening port of its own; it only connects out to the master's persistent-connection port 4505 (screenshot omitted).
lsof shows it as well:

[root@server4 minion]# lsof -i:4505
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
salt-mini 1716 root   21u  IPv4  15383      0t0  TCP server4:51921->server3:4505 (ESTABLISHED)   # persistent connection from server4 to server3

Deploying services from master to minion with YAML state files

1. On server3, install python-setproctitle and restart the service:

[root@server3 yum.repos.d]# yum install python-setproctitle -y
[root@server3 yum.repos.d]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]

2. Edit the master config file:

[root@server3 salt]# vim /etc/salt/master 

Uncomment the file_roots section to enable the state file root directory. All YAML state files must live under this directory:

 534 file_roots:
 535   base:
 536     - /srv/salt

Create the directory:

[root@server3 salt]# mkdir /srv/salt
[root@server3 salt]# cd /srv/salt/
[root@server3 salt]# mkdir httpd  # directory for the httpd states
[root@server3 salt]# cd httpd
[root@server3 httpd]# vim install.sls  # the state file for httpd; the name must end in .sls

Write the state that installs httpd and php on the minion:

Note that YAML has strict syntax requirements, chiefly:
    1. Every colon must be followed by a space; the value cannot start right after the colon
    2. Every hyphen (-) must be surrounded by spaces
    3. Indentation must never use tabs; each level is two spaces
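A rough line checker illustrates the first and third rules (an illustrative sketch only, not part of salt; it inspects just the first colon on a line, which is the key/value separator, so URLs in values are not flagged):

```python
def check_line(line):
    """Return a problem description for one .sls line, or None if it looks fine."""
    if "\t" in line:
        return "tab indentation (use two spaces per level)"
    body = line.strip()
    i = body.find(":")                    # first colon = key/value separator
    if i != -1 and i + 1 < len(body) and body[i + 1] != " ":
        return "missing space after colon"
    return None

# "master:172.25.17.3" is rejected; "master: 172.25.17.3" is accepted
assert check_line("master:172.25.17.3") == "missing space after colon"
assert check_line("master: 172.25.17.3") is None
```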

State file contents:

apache-install:
  pkg.installed:
    - pkgs:
      - httpd
      - php

Run it:
Note that state.sls is the function being called, and httpd.install means /srv/salt/httpd/install.sls: the base directory defaults to /srv/salt, and the .sls suffix is omitted on the command line.
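The dotted-name-to-path mapping can be sketched as follows (illustration only; the file_root default mirrors the file_roots setting above, and salt would also check a <target>/init.sls form, which is omitted here):

```python
from pathlib import PurePosixPath

def sls_to_path(target, file_root="/srv/salt"):
    """Map a state target like 'httpd.install' to the .sls file it names."""
    return str(PurePosixPath(file_root, *target.split("."))) + ".sls"

print(sls_to_path("httpd.install"))   # /srv/salt/httpd/install.sls
```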

[root@server3 httpd]# salt server4 state.sls httpd.install
server4:
----------
          ID: apache-install
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: httpd, php
     Started: 10:54:37.612103
    Duration: 11969.859 ms
     Changes:   
              ----------
              apr:
                  ----------
                  new:
                      1.3.9-5.el6_2
                  old:
              apr-util:
                  ----------
                  new:
                      1.3.9-3.el6_0.1
                  old:
              apr-util-ldap:
                  ----------
                  new:
                      1.3.9-3.el6_0.1
                  old:
              httpd:
                  ----------
                  new:
                      2.2.15-29.el6_4
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.2.15-29.el6_4
                  old:
              mailcap:
                  ----------
                  new:
                      2.1.31-2.el6
                  old:
              php:
                  ----------
                  new:
                      5.3.3-26.el6
                  old:
              php-cli:
                  ----------
                  new:
                      5.3.3-26.el6
                  old:
              php-common:
                  ----------
                  new:
                      5.3.3-26.el6
                  old:

Summary for server4
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  11.970 s

Result: httpd and php are now installed on server4:

[root@server4 salt]# rpm -qa httpd
httpd-2.2.15-29.el6_4.x86_64
[root@server4 salt]# rpm -qa php
php-5.3.3-26.el6.x86_64

Other minion-side service configuration can be added to the state as well:

For example, the minion's apache port can be changed from the master. The idea is to copy the minion's apache config file to the master, make the desired edits there, then use the state to push the file back and reload the service; the minion's apache port is changed. Applied to every member of a group under the master, a single state run changes the apache port on all of them with no per-host work, which is what makes managing large numbers of hosts practical.

On server3, create the directory /srv/salt/httpd/files and copy the apache config file over from server4:

[root@server4 minion]# scp /etc/httpd/conf/httpd.conf server3:/srv/salt/httpd/files

On server3, edit this config file and change the port to 8080:

 135 #Listen 12.34.56.78:80
 136 Listen 8080
 137 

Edit the earlier install.sls state:

apache-install:
  pkg.installed:
    - pkgs:
      - httpd
      - php

  service.running:
    - name: httpd      # the service to manage
    - enable: True     # enable at boot
    - reload: True     # reload instead of restart on change
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf   # source path (on the master)
    - mode: 644                               # permissions
    - user: root                              # owner
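The watch requisite's effect on service.running can be modeled as a toy decision (a sketch of the behavior, not salt's internals): when the watched file changes, the service is reloaded rather than fully restarted, because reload: True is set.

```python
def service_action(file_changed, reload_enabled=True):
    """What a watch trigger does to the service (simplified model)."""
    if not file_changed:
        return "no-op"                    # nothing watched changed
    return "reload" if reload_enabled else "restart"

assert service_action(False) == "no-op"          # config untouched
assert service_action(True) == "reload"          # config pushed: graceful reload
```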

Push again:

[root@server3 httpd]# salt server4 state.sls httpd.install

After the push succeeds, apache on server4 listens on 8080:

[root@server4 minion]# netstat -antlp |grep httpd
tcp        0      0 :::8080                     :::*                        LISTEN      2124/httpd          

Building nginx from source with YAML states

1. Bring up a new virtual machine, server5, and set it up as a minion just like server4.
2. On server3, create the directory /srv/salt/nginx and place the nginx tarball, the nginx init script, and the state files in it, laid out as follows:

[root@server3 nginx]# tree .
.
├── files
│   ├── nginx
│   ├── nginx-1.14.0.tar.gz
│   └── nginx.conf
├── install.sls
├── make.sls
├── nginx.sls
└── service.sls

Contents of the nginx init script:

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

lockfile="/var/lock/subsys/nginx"
pidfile="/usr/local/nginx/logs/${prog}.pid"

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"


start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest_q || return 6
    stop
    start
}

reload() {
    configtest_q || return 6
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $prog -HUP
    echo
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

configtest_q() {
    $nginx -t -q -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

# Upgrade the binary with no downtime.
upgrade() {
    local oldbin_pidfile="${pidfile}.oldbin"

    configtest_q || return 6
    echo -n $"Upgrading $prog: "
    killproc -p $pidfile $prog -USR2
    retval=$?
    sleep 1
    if [[ -f ${oldbin_pidfile} && -f ${pidfile} ]];  then
        killproc -p $oldbin_pidfile $prog -QUIT
        success $"$prog online upgrade"
        echo 
        return 0
    else
        failure $"$prog online upgrade"
        echo
        return 1
    fi
}

# Tell nginx to reopen logs
reopen_logs() {
    configtest_q || return 6
    echo -n $"Reopening $prog logs: "
    killproc -p $pidfile $prog -USR1
    retval=$?
    echo
    return $retval
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest|reopen_logs)
        $1
        ;;
    force-reload|upgrade) 
        rh_status_q || exit 7
        upgrade
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    status|status_q)
        rh_$1
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|reload|configtest|status|force-reload|upgrade|restart|reopen_logs}"
        exit 2
esac

The following states drive the source build.
make.sls installs the packages the build depends on:

make-depends:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - openssl-devel
      - gcc

service.sls builds nginx from source:

include:
  - nginx.make
nginx-install:
  file.managed:
    - name: /mnt/nginx-1.14.0.tar.gz
    - source: salt://nginx/files/nginx-1.14.0.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf nginx-1.14.0.tar.gz && cd nginx-1.14.0 && sed -i.bak 's/#define NGINX_VER          "nginx\/" NGINX_VERSION/#define NGINX_VER          "nginx"/g' src/core/nginx.h && sed -i.bak 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc && ./configure --prefix=/usr/local/nginx --with-threads --with-file-aio --with-http_ssl_module --with-http_stub_status_module &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/nginx
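The creates: /usr/local/nginx line is what keeps this long cmd.run idempotent: once the path exists, the whole build command is skipped on later runs (the state output further down shows exactly this, with "Comment: /usr/local/nginx exists"). A minimal model of that guard:

```python
import os
import tempfile

def should_run(creates_path):
    """cmd.run with `creates:` only executes while the path does not exist."""
    return not os.path.exists(creates_path)

existing = tempfile.gettempdir()          # stands in for /usr/local/nginx
assert should_run(existing) is False      # already built: skip the command
```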

nginx.sls deploys the config file and init script and manages the service:

include:
  - nginx.service    # pulls in service.sls here
/usr/local/nginx/conf/nginx.conf:
  file.managed:
    - source: salt://nginx/files/nginx.conf

nginx-service:
  file.managed:
    - name: /etc/init.d/nginx
    - source: salt://nginx/files/nginx
    - mode: 755
  service.running:
    - name: nginx
    - reload: True
    - watch:
      - file: /usr/local/nginx/conf/nginx.conf

With all three states written, apply nginx.sls to server5 to build and configure nginx:

[root@server3 nginx]# salt server5 state.sls nginx.nginx
server5:
----------
          ID: make-depends
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 14:50:01.438011
    Duration: 390.974 ms
     Changes:   
----------
          ID: nginx-install
    Function: file.managed
        Name: /mnt/nginx-1.14.0.tar.gz
      Result: True
     Comment: File /mnt/nginx-1.14.0.tar.gz is in the correct state
     Started: 14:50:01.830617
    Duration: 66.473 ms
     Changes:   
----------
          ID: nginx-install
    Function: cmd.run
        Name: cd /mnt && tar zxf nginx-1.14.0.tar.gz && cd nginx-1.14.0 && sed -i.bak  's/#define NGINX_VER          "nginx\/" NGINX_VERSION/#define NGINX_VER          "nginx"/g' src/core/nginx.h && sed -i.bak 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g '  auto/cc/gcc && ./configure --prefix=/usr/local/nginx --with-threads --with-file-aio --with-http_ssl_module  --with-http_stub_status_module &> /dev/null && make &> /dev/null && make install &> /dev/null
      Result: True
     Comment: /usr/local/nginx exists
     Started: 14:50:01.897787
    Duration: 0.363 ms
     Changes:   
----------
          ID: /usr/local/nginx/conf/nginx.conf
    Function: file.managed
      Result: True
     Comment: File /usr/local/nginx/conf/nginx.conf is in the correct state
     Started: 14:50:01.898241
    Duration: 29.181 ms
     Changes:   
----------
          ID: nginx-service
    Function: file.managed
        Name: /etc/init.d/nginx
      Result: True
     Comment: File /etc/init.d/nginx updated
     Started: 14:50:01.927546
    Duration: 59.4 ms
     Changes:   
              ----------
              diff:
                  Replace binary file with text file
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: The service nginx is already running
     Started: 14:50:01.987791
    Duration: 28.952 ms
     Changes:   

Summary for server5
------------
Succeeded: 6 (changed=1)
Failed:    0
------------
Total states run:     6
Total run time: 575.343 ms

nginx on server5 is now configured.

One-command load-balancer deployment with SaltStack

To set up haproxy load balancing, first give server3 a yum repository that also carries the LoadBalancer channel:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.17.250/source6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[saltstack]
name=saltstack
baseurl=http://172.25.17.250/rhel6
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.17.250/source6.5/LoadBalancer
gpgcheck=0

After refreshing with yum repolist, haproxy can be installed.

1. Set up server3 itself as a minion as well, and start the salt-minion service.
2. Next, write the states:
Tree of the /srv/salt directory:

[root@server3 salt]# tree .
.
├── files
├── haproxy
│   ├── files
│   │   └── haproxy.cfg
│   └── install.sls
├── httpd
│   ├── files
│   │   └── httpd.conf
│   └── install.sls
├── nginx
│   ├── files
│   │   ├── nginx
│   │   ├── nginx-1.14.0.tar.gz
│   │   └── nginx.conf
│   ├── install.sls
│   ├── make.sls
│   ├── nginx.sls
│   └── service.sls
└── top.sls

3. Building on the earlier exercises, write an index page into apache's default document root on server4.
First, the state that configures haproxy:

haproxy-install:
  pkg.installed:
    - pkgs:
      - haproxy

Start with just this much and apply it to server3 to install haproxy:

salt server3 state.sls haproxy.install

Then create a files directory under /srv/salt/haproxy and copy the haproxy config file into it:

[root@server3 haproxy]# mkdir files
[root@server3 haproxy]# cp /etc/haproxy/haproxy.cfg /srv/salt/haproxy/files/

Edit the copied config file:

[root@server3 ~]# vim /srv/salt/haproxy/files/haproxy.cfg 

Configure the load balancing:

 60 #---------------------------------------------------------------------
 61 # main frontend which proxys to the backends
 62 #---------------------------------------------------------------------
 63 frontend  main *:80
 64     default_backend             app
 65 
 66 backend app
 67     balance     roundrobin
 68     server  app1 172.25.17.4:80 check
 69     server  app2 172.25.17.5:80 check
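balance roundrobin hands successive requests to the backends in turn. A toy model with itertools (illustration only; real haproxy also skips backends that fail the check health checks):

```python
import itertools

backends = ["172.25.17.4:80", "172.25.17.5:80"]    # app1 and app2 above
rr = itertools.cycle(backends)
first_four = [next(rr) for _ in range(4)]          # alternates between the two
```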

Finally, complete haproxy/install.sls:

haproxy-install:
  pkg.installed:
    - pkgs:
      - haproxy

  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://haproxy/files/haproxy.cfg

  service.running:
    - name: haproxy
    - reload: True
    - watch:
      - file: haproxy-install

4. Create top.sls under /srv/salt:

base:
  'server3':
    - haproxy.install   # server3 runs haproxy/install.sls to configure haproxy
  'server4':
    - httpd.install     # server4 runs httpd/install.sls to configure httpd
  'server5':
    - nginx.service     # server5 runs nginx/service.sls to configure nginx
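Top-file targets are matched against minion IDs as shell-style globs, which is also why salt server* reaches all three minions. The matching can be sketched with fnmatch (an illustration, not salt's matcher):

```python
from fnmatch import fnmatch

top = {
    "server3": ["haproxy.install"],
    "server4": ["httpd.install"],
    "server5": ["nginx.service"],
}

def states_for(minion_id):
    """Collect the states every matching target pattern assigns to a minion."""
    out = []
    for pattern, states in top.items():
        if fnmatch(minion_id, pattern):
            out.extend(states)
    return out

assert states_for("server4") == ["httpd.install"]
```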

5. Push top.sls to all minions:
state.highstate applies the top.sls file:

[root@server3 salt]# salt server* state.highstate

6. Test:
Load balancing works. Since no index page was created in nginx's default document root, the nginx backend shows the nginx welcome page, which is fine for demonstrating the effect (screenshots omitted).

Storing static data in grains

1. On server3, create the directory _grains under /srv/salt, enter it, and create a script:

[root@server3 salt]# mkdir _grains
[root@server3 salt]# cd _grains/
[root@server3 _grains]# vim my_grains.py

Script contents:

#!/usr/bin/env python
def my_grains():
    grains = {}
    grains['hello'] = 'world'
    grains['salt'] = 'stack'
    return grains

2. Sync to server4:

[root@server3 _grains]# salt server4 saltutil.sync_grains
server4:
    - grains.my_grains

View the data:

[root@server3 _grains]# salt '*' grains.item hello
server3:
    ----------
    hello:
server5:
    ----------
    hello:
server4:
    ----------
    hello:
        world
[root@server3 _grains]# salt '*' grains.item salt
server5:
    ----------
    salt:
server4:
    ----------
    salt:
        stack
server3:
    ----------
    salt:

Storing static data in Pillar

1. On server3, edit the master config file:

[root@server3 _grains]# vim /etc/salt/master

Configuration:

 693 # highstate format, and is generally just key/value pairs.
 694 pillar_roots:
 695   base:
 696     - /srv/pillar

2. Create the directory and an install.sls state:

[root@server3 _grains]# mkdir /srv/pillar
[root@server3 _grains]# cd /srv/pillar/
[root@server3 pillar]# mkdir web
[root@server3 pillar]# cd web/
[root@server3 web]# vim install.sls

Script contents:

{% if grains['fqdn'] == 'server4' %}
webserver: httpd
{% elif grains['fqdn'] == 'server5' %}
webserver: nginx
{% endif %}
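The jinja conditional renders different pillar data per minion, keyed on the fqdn grain. Its effect can be modeled in plain Python (a sketch of the rendered result, not salt's renderer):

```python
def pillar_for(fqdn):
    """Pillar data the template above yields for a given minion."""
    if fqdn == "server4":
        return {"webserver": "httpd"}
    elif fqdn == "server5":
        return {"webserver": "nginx"}
    return {}                      # any other minion gets no webserver key

assert pillar_for("server4") == {"webserver": "httpd"}
assert pillar_for("server3") == {}
```

This matches the pillar.items output further down: server4 and server5 each get a webserver value, while server3 gets nothing.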

3. Go up one directory and create top.sls:

[root@server3 web]# cd ..
[root@server3 pillar]# vim top.sls

Script contents:

base:
  '*':
    - web.install

Then restart the master service:

[root@server3 pillar]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                               [  OK  ]

4. Refresh pillar:

[root@server3 pillar]# salt '*' saltutil.refresh_pillar
server3:
    True
server5:
    True
server4:
    True

View the data:

[root@server3 pillar]# salt '*' pillar.items
server3:
    ----------
server4:
    ----------
    webserver:
        httpd
server5:
    ----------
    webserver:
        nginx

Variable references with Jinja templates (changing the httpd port as an example)

Method one: set variables in the state file and reference them in the config file.
Edit the state file install.sls:

[root@server3 salt]# vim httpd/install.sls 

Add the Jinja template settings and define a port variable:

apache-install:
  pkg.installed:
    - pkgs:
      - httpd
      - php

  service.running:
    - name: httpd
    - enable: True
    - reload: True
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - mode: 644
    - user: root
    - template: jinja    # render the file with jinja
    - context:
      bind: 172.25.17.4  # bind address
      port: 8080         # set the port variable to 8080

Edit httpd.conf on the master:

[root@server3 httpd]# vim files/httpd.conf 

Reference the variable:

 134 #
 135 #Listen 12.34.56.78:80
 136 Listen {{ port }}
 137 
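With template: jinja set, salt renders the file before delivering it, substituting each {{ var }} from the context. A minimal stand-in for that substitution step, using only the standard library (illustration only; salt uses the real jinja2 engine):

```python
import re

def render(template, **context):
    """Replace each {{ name }} with its value from the context (toy renderer)."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]),
                  template)

line = render("Listen {{ port }}", port=8080)
assert line == "Listen 8080"
```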

Push:

[root@server3 httpd]# salt server4 state.sls httpd.install

The variable is rendered and server4's port changes:

[root@server4 minion]# netstat -antlp |grep httpd
tcp        0      0 :::8080                     :::*                        LISTEN      2124/httpd    

The config file on server4 changes as well:

 134 #
 135 #Listen 12.34.56.78:80
 136 Listen 8080

Method two: set the variable in a separate file and import it into the config file.
Create the file:

[root@server3 httpd]# vim lib.sls

File contents:

{% set port = 80 %}

Reference the variable in the httpd config file:

{% from 'httpd/lib.sls' import port with context %}
# This is the main Apache server configuration file.  It contains the

Push again:

[root@server3 httpd]# salt server4 state.sls httpd.install

server4's port changes from 8080 back to 80; the import works:

[root@server4 minion]# netstat -antlp |grep httpd
tcp        0      0 :::80                       :::*                        LISTEN      2124/httpd  

Method three: take an address from the list returned by a salt command

Use a salt command to export the IPs; the result is a list:

[root@server3 httpd]# salt server4 grains.item ipv4
server4:
    ----------
    ipv4:
        - 127.0.0.1
        - 172.25.17.4

The install.sls file: (screenshot omitted)
The httpd config file:

134 #
135 #Listen 12.34.56.78:80
136 Listen {{ grains['ipv4'][-1] }}:{{ port }}
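grains['ipv4'] is a list, so index -1 picks its last element: here the non-loopback address. In Python terms (the list literal mirrors the grains.item output above; the port value assumes port is still set to 8080 in install.sls, whose contents were only shown in the omitted screenshot):

```python
ipv4 = ["127.0.0.1", "172.25.17.4"]            # grains['ipv4'] as shown above
port = 8080                                     # assumed from the earlier install.sls
listen_line = "Listen {}:{}".format(ipv4[-1], port)
assert listen_line == "Listen 172.25.17.4:8080"
```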

Then push again.


Reposted from blog.csdn.net/letter_A/article/details/81775211