3.30 Automated operations with SaltStack --- introduction, installation and authentication, deploying httpd, deploying nginx, using variables, keepalived + haproxy, recording operations in MySQL, remote execution, API

I. Introduction

https://www.jianshu.com/p/624b9cf51c64

SaltStack overview

What is SaltStack?

• SaltStack is a C/S-architecture configuration management tool developed in Python

• Authentication is managed by signing SSL certificates

• The transport layer uses the ZeroMQ message queue in pub/sub mode

    – ZeroMQ, billed as the world's fastest message queue, lets Salt run operations across thousands of hosts quickly

    – Minions are identified by RSA key pairs


Main features

• SaltStack's two core functions are configuration management and remote execution

• SaltStack is not just a configuration management tool; it is also a powerful orchestrator for cloud and data-center architectures

• SaltStack already ships with Docker-related modules

• With good support for the major cloud platforms, SaltStack's Mine real-time discovery feature enables automatic scaling of cloud workloads


SaltStack architecture

• SaltStack uses a C/S architecture

    – the server side is called the Master

    – the client side is called the Minion

• It supports the traditional request/response model: the client sends a request, the server processes it and returns the result

• It can also use the publish/subscribe (pub/sub) pattern of the message queue


How SaltStack works

• Both the Master and the Minion run as daemons

• The Master listens on the ret_port (receives minion requests) and publish_port (publishes messages) defined in its configuration file

• When a Minion starts, it connects to the Master address defined in its configuration file on ret_port and authenticates

• Once the Master and Minion can communicate, all kinds of configuration management work can begin
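By default ret_port is 4506 and publish_port is 4505; both can be overridden in the master configuration. A minimal sketch of the relevant settings in /etc/salt/master (the values shown are the defaults):

```yaml
# /etc/salt/master -- transport ports (defaults shown)
publish_port: 4505   # minions subscribe here for published jobs
ret_port: 4506       # minions authenticate and return job results here
```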

II. Installation and authentication

Environment:

server1 172.25.38.1 salt-master    haproxy    keepalived

server2 172.25.38.2 minion    httpd    

server3 172.25.38.3 minion    nginx

server4 172.25.38.4 minion     haproxy    keepalived

1. Configure the yum repositories on server1

vim /etc/yum.repos.d/yum.repo

[dvd]
name=rhel7.3
baseurl=http://172.25.38.250/7.3yumpak
gpgcheck=0


[salt]
name=salt
baseurl=http://172.25.38.250/salt
gpgcheck=0

2. Install the packages

server1

yum install -y salt-minion.noarch
yum install -y salt-master.noarch

server2, server3, server4

yum install -y salt-minion.noarch

3. Edit the minion configuration


cd /etc/salt/
ls
vim minion

Change the following line:

master: 172.25.38.1                 ##point the minion at its master host

4. Start the services

server1
systemctl start salt-master
systemctl start salt-minion

server2, server3, server4
systemctl start salt-minion
 

5. Key authentication

[root@server1 salt]# salt-key -L      ##list key states
Accepted Keys:
Denied Keys:
Unaccepted Keys:
server1
server2
server3
Rejected Keys:
[root@server1 salt]# salt-key -A   ##accept all pending keys
The following keys are going to be accepted:
Unaccepted Keys:
server1
server2
server3
Proceed? [n/Y] Y
Key for minion server1 accepted.
Key for minion server2 accepted.
Key for minion server3 accepted.
[root@server1 salt]# salt-key -L
Accepted Keys:
server1
server2
server3
Denied Keys:
Unaccepted Keys:
Rejected Keys:

Test:

[root@server1 salt]# salt '*' cmd.run hostname
server2:
    server2
server3:
    server3
server1:
    server1

[root@server1 salt]# salt '*' test.ping
server2:
    True
server3:
    True
server1:
    True

III. Deploying httpd

1. Simple deployment

[root@server1 httpd]# pwd
/srv/salt/httpd
[root@server1 httpd]# cat apache.sls
install-httpd:
  pkg.installed:
    - pkgs:
      - httpd
      - php
      - php-mysql

  service.running:
    - name: httpd
    - enable: True

[root@server1 httpd]# salt server2 state.sls httpd.apache

Test:

[root@server2 salt]# netstat -antlp
tcp6       0      0 :::80                   :::*                    LISTEN      3319/httpd

2. Splitting installation and service into separate states

1) Directory tree

[root@server1 salt]# tree .
.
├── httpd
│   ├── files
│   │   └── httpd.conf
│   ├── install.sls
│   └── service.sls
├── nginx
└── top.sls

3 directories, 4 files

2) Installation state

[root@server1 salt]# cat httpd/install.sls
httpd:
  pkg.installed:
    - pkgs:
      - httpd
      - php
      - php-mysql    

3) Service state

[root@server1 salt]# cat httpd/service.sls
include:                                 ##include another state file
 - httpd.install

/etc/httpd/conf/httpd.conf:     ##manage the configuration file
  file.managed:
    - source: salt://httpd/files/httpd.conf       ##source path of the file to copy, relative to /srv/salt

httpd-service:            ##run the service
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:                  ##trigger: re-run when the configuration file changes
        - file: /etc/httpd/conf/httpd.conf

4) Apply the states

salt server2 state.sls httpd.install           ##uses the state.sls module; the state file path is relative to /srv/salt, with . as the separator
salt server2 state.sls httpd.service


Test:


[root@server2 salt]# netstat -antlp
tcp6       0      0 :::8080                 :::*                    LISTEN

IV. Deploying nginx

[root@server1 salt]# tree .
.
├── httpd
│   ├── apache.sls
│   ├── files
│   │   └── httpd.conf
│   ├── install.sls
│   └── service.sls
├── nginx
│   ├── files
│   │   ├── nginx-1.15.8.tar.gz
│   │   ├── nginx.conf
│   │   └── nginx.service
│   ├── install.sls
│   └── service.sls
└── users
    └── nginx.sls

5 directories, 10 files

Installation state

[root@server1 salt]# cat nginx/install.sls
nginx-install:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - zlib-devel
      - gcc
      - make

  file.managed:
    - name: /mnt/nginx-1.15.8.tar.gz
    - source: salt://nginx/files/nginx-1.15.8.tar.gz

  cmd.run:
    - name: cd /mnt && tar zxf nginx-1.15.8.tar.gz && cd nginx-1.15.8 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && ./configure --prefix=/usr/local/nginx &> /dev/null && make &> /dev/null && make install &> /dev/null && cd .. && rm -rf nginx-1.15.8
    - creates: /usr/local/nginx

Service state

[root@server1 salt]# cat nginx/service.sls
include:
  - nginx.install
  - users.nginx


/usr/local/nginx/conf/nginx.conf:
    file.managed:
      - source: salt://nginx/files/nginx.conf

nginx-service:
  file.managed:
    - name: /etc/systemd/system/nginx.service
    - source: salt://nginx/files/nginx.service

  service.running:
    - name: nginx
    - reload: True
    - watch:
      - file: /usr/local/nginx/conf/nginx.conf

systemd unit file

[root@server1 salt]# cat nginx/files/nginx.service
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

User the service runs as

[root@server1 salt]# cat users/nginx.sls
nginx:
  user.present:
    - uid: 1000
    - shell: /sbin/nologin

Exercise:

1. Install

2. Start

3. Test

[root@server3 ~]# netstat -antlp | grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2252/nginx: master  
tcp6       0      0 :::8080                 :::*                    LISTEN      614/httpd 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Addendum: pushing states to multiple nodes

The differences between state.sls and state.highstate are roughly:

  • state.highstate reads the top.sls file of every environment (including base) and executes the sls files defined in it; sls files not listed in top.sls are not executed;
  • state.sls reads the base environment by default, but it does not read top.sls. You can tell state.sls which sls file to run, as long as that file exists in the base environment;
  • state.sls can also be pointed at a specific environment: state.sls salt_env='prod' xxx.sls, where xxx.sls need not appear in top.sls;
  • the xxx.sls run by state.sls is pushed down to the minion, while state.highstate does not do this
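The contrast above can be sketched with illustrative commands (the prod environment and the web.deploy state are hypothetical names; depending on the Salt version the environment argument is spelled saltenv= or salt_env=):

```shell
# Run everything mapped to the targets in top.sls
salt 'server[2,3]' state.highstate

# Run one specific state file from base, ignoring top.sls
salt server2 state.sls httpd.service

# Run a state file from another environment (hypothetical 'prod')
salt server2 state.sls web.deploy saltenv=prod
```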

1) Write the top file

[root@server1 salt]# pwd
/srv/salt
[root@server1 salt]# cat top.sls
base:
  'server2':
    - httpd.service

  'server3':
    - nginx.service

2) Push to multiple nodes


[root@server1 salt]# salt 'server[2,3]' state.highstate
server3:
----------
          ID: nginx-install
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 17:11:44.501636
    Duration: 676.997 ms
     Changes:   
----------

...................

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

V. Working with variables

1. grains: static variables

Static data collected about the minion when it starts (OS version, kernel version, CPU, memory, disk, device model, and so on).

Note: this data does not change unless the minion is restarted.

1) Information gathering, including asset management

Examples:

salt 'linux-node1*' grains.ls  # list all grains keys of the host whose ID is linux-node1
salt 'linux-node1*' grains.items  # list detailed host information; useful for asset management
salt '*' grains.item os  # list the OS of every host
salt '*' grains.item fqdn_ip4  # list the IP address of every host

[root@server1 ~]# salt server2 grains.item fqdn_ip4   # list server2's IP addresses
server2:
    ----------
    fqdn_ip4:
        - 172.25.85.2
[root@server1 ~]# salt server2 grains.item fqdn  # list server2's FQDN
server2:
    ----------
    fqdn:
        server2
[root@server1 ~]# salt server2 grains.item os # list server2's OS
server2:
    ----------
    os:
        RedHat

2) Target selection (e.g. matching a host by ID, or all hosts whose OS is CentOS)

Examples:

salt -G 'os:Centos' test.ping  # ping test against all hosts whose OS is CentOS
salt -G 'os:Centos' cmd.run 'echo 123'  # run 'echo 123' on all hosts whose OS is CentOS

[root@server1 ~]# salt -G os:RedHat test.ping
server3:
    True
server4:
    True
server2:
    True
server1:
    True
[root@server1 ~]# salt -G os:RedHat cmd.run 'echo 123'
server3:
    123
server4:
    123
server2:
    123
server1:
    123

3) Use in configuration management

Customizing grains items

Method 1: edit the minion-side configuration file

First, check the existing roles grain; it is empty

[root@server1 salt]# salt server2 grains.item roles
server2:
    ----------
    roles:

On server2, edit the configuration file and restart the service


[root@server2 ~]# cd /etc/salt/
[root@server2 salt]# vim minion

grains:
  roles:
    - httpd
[root@server2 salt]# systemctl restart salt-minion

Check server2's roles grain again; it now has a value

[root@server1 salt]# salt server2 grains.item roles
server2:
    ----------
    roles:
        - httpd

Method 2: the /etc/salt/grains file (used in production)

Edit the file with vim /etc/salt/grains on each minion, e.g.:

server1

cloud: openstack

server3

cloud: open

Restart with systemctl restart salt-minion

salt '*' grains.item cloud  # get the cloud grain of every host

[root@server1 ~]# salt '*' grains.item cloud
server4:
    ----------
    cloud:
server2:
    ----------
    cloud:
server3:
    ----------
    cloud:
        open
server1:
    ----------
    cloud:
        openstack

To apply changes to /etc/salt/grains without restarting the service, run the refresh command below (note: this works for both Method 1 and Method 2)

salt '*' saltutil.sync_grains

[root@server1 ~]# salt '*' saltutil.sync_grains

server3:
server2:
server1:
server4:
[root@server1 ~]#
[root@server1 ~]# salt '*' grains.item cloud
server2:
    ----------
    cloud:
        132
server4:
    ----------
    cloud:
server3:
    ----------
    cloud:
        open
server1:
    ----------
    cloud:

4) Defining grains with Python on the master

1) Create the directory for custom grains modules on the master

[root@server1 pillar]# cd /srv/salt/
[root@server1 salt]# ls
httpd  nginx  users
[root@server1 salt]# mkdir _grains
[root@server1 salt]# cd _grains/

2) Write the grains module
[root@server1 _grains]# vim my_grains.py

[root@server1 _grains]# cat my_grains.py  ##written in Python
#!/usr/bin/env python
def my_grains():
     grains = {}
     grains['roles'] = 'nginx'       ##the roles grain will be nginx, and the hello grain will be world
     grains['hello'] = 'world'
     return grains

3) Sync the grains and check
[root@server1 _grains]# salt server3 saltutil.sync_grains
server3:
    - grains.my_grains
[root@server1 _grains]# salt server3 grains.item roles
server3:
    ----------
    roles:
        nginx

4) The module is also visible on the minion (under the minion cache)

[root@server3 salt]# cat minion/files/base/_grains/my_grains.py
#!/usr/bin/env python
def my_grains():
     grains = {}
     grains['roles'] = 'nginx'
     grains['hello'] = 'world'
     return grains

5) Using grains to target a multi-node highstate

[root@server1 salt]# salt '*' grains.item roles    ##check the roles grain
server1:
    ----------
    roles:
server3:
    ----------
    roles:
        nginx
server2:
    ----------
    roles:
        - httpd
[root@server1 salt]# cat top.sls  ##top file for pushing to multiple nodes
base:
  'roles:httpd':
    - match: grain
    - httpd.service

  'roles:nginx':
    - match: grain
    - nginx.service
[root@server1 salt]# salt 'server[2,3]' state.highstate   ##push to multiple nodes
server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 17:18:09.251559
    Duration: 708.559 ms
     Changes:   
----------
          ID: /etc/httpd/conf/httpd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf is in the correct state
     Started: 17:18:09.962935
    Duration: 40.924 ms
     Changes:  

2. pillar

Dynamic data: specific data assigned to specific minions. Only the targeted minion can see its own data.

salt '*' pillar.items  ##view every minion's pillar data from the master

1) Enable the built-in pillar data by editing the master configuration

[root@server1 ~]# vim /etc/salt/master

pillar_opts: True

[root@server1 ~]# systemctl restart salt-master   ##restart
[root@server1 ~]# salt '*' pillar.items   ##view all pillar data

server3:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:


.......

2) Set the pillar file root (pillar_roots)

[root@server1 ~]# vim /etc/salt/master

[root@server1 ~]# systemctl restart salt-master

3) Pillar workflow

1) Create a pillar file, using Jinja2 templating

mkdir /srv/pillar
cd /srv/pillar

[root@server1 pillar]# vim web.sls
[root@server1 pillar]# cat web.sls
{% if grains['fqdn'] == 'server2' %}
webserver: httpd
{% elif grains['fqdn'] == 'server3' %}
webserver: nginx
{% endif %}

 

2) Create the top file

[root@server1 pillar]# vim top.sls
[root@server1 pillar]# cat top.sls
base:
  '*':
    - web

3) View the pillar data

[root@server1 pillar]# salt '*' pillar.items
server1:
    ----------
server2:
    ----------
    webserver:
        httpd
server3:
    ----------
    webserver:
        nginx

4) Refresh and test

[root@server1 pillar]# salt '*' saltutil.refresh_pillar
server1:
    True
server2:
    True
server3:
    True
[root@server1 pillar]# salt -I 'webserver:nginx' test.ping  ## -I matches minions by a pillar value
server3:
    True

[root@server1 pillar]# salt -I 'webserver:httpd' test.ping
server2:
    True
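A single pillar key can also be read directly with pillar.item, which is handy when checking what a template will receive (output follows from the web.sls assignments above):

```shell
[root@server1 pillar]# salt server3 pillar.item webserver
server3:
    ----------
    webserver:
        nginx
```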

VI. Using variables in templates

1. Plain Jinja variables

1) Define the variables port = 8080 and host = 172.25.85.2

[root@server1 pillar]# cd /srv/salt/httpd/
[root@server1 httpd]# vim service.sls

include:
 - httpd.install

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - template: jinja
      port: 8080
      host: 172.25.85.2

httpd-service:
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:
        - file: /etc/httpd/conf/httpd.conf

2) Use the Jinja variables in the configuration file

[root@server1 httpd]# vim files/httpd.conf

Listen {{ host }}:{{ port }}

3) Apply
[root@server1 httpd]# salt server2 state.sls httpd.service
server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 18:17:10.570361
    Duration: 619.464 ms
     Changes:   
----------
          ID: /etc/httpd/conf/httpd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf is in the correct state
     Started: 18:17:11.192441
    Duration: 58.49 ms
     Changes:   

..........

Test:

[root@server2 srv]# netstat -antlp |grep 8080
tcp        0      0 172.25.85.2:8080        0.0.0.0:*               LISTEN      3423/httpd   

2. Using grains variables

1) Check the ipv4 grain

[root@server1 httpd]# salt server2 grains.item ipv4
server2:
    ----------
    ipv4:
        - 127.0.0.1
        - 172.25.85.2

2) Edit the sls file

[root@server1 httpd]# vim service.sls
[root@server1 httpd]# cat service.sls

include:
 - httpd.install

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - template: jinja
      port: 80
      host: {{ grains['ipv4'][-1] }}

httpd-service:
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:
        - file: /etc/httpd/conf/httpd.conf

3) Run and test


[root@server1 httpd]# salt server2 state.sls httpd.service
server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 18:34:35.971556
    Duration: 604.266 ms
     Changes:   
----------
          ID: /etc/httpd/conf/httpd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf updated
     Started: 18:34:36.578450
    Duration: 59.253 ms
     Changes:   
              ----------
              diff:
                  ---
                  +++
                  @@ -39,7 +39,7 @@
                   # prevent Apache from glomming onto all bound IP addresses.
                   #
                   #Listen 12.34.56.78:80
                  -Listen [u'127.0.0.1', u'172.25.85.2']:80
                  +Listen 172.25.85.2:80
                   
                   #
                   # Dynamic Shared Object (DSO) Support
----------
          ID: httpd-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service reloaded
     Started: 18:34:36.681019
    Duration: 101.146 ms
     Changes:   
              ----------
              httpd:
                  True

Summary for server2
------------
Succeeded: 3 (changed=2)
Failed:    0
------------
Total states run:     3
Total run time: 764.665 ms

Test:

[root@server2 srv]# netstat -antlp |grep 80
tcp        0      0 172.25.85.2:80          0.0.0.0:*               LISTEN      3423/httpd  

3. Using pillar variables

1) Edit the pillar data

[root@server1 salt]# pwd
/srv/salt
[root@server1 salt]# cd ..
[root@server1 srv]# cd pillar/
[root@server1 pillar]# vim web.sls
[root@server1 pillar]# cat web.sls
{% if grains['fqdn'] == 'server2' %}
webserver: httpd
IP: 172.25.85.2
{% elif grains['fqdn'] == 'server3' %}
webserver: nginx
IP: 172.25.85.3
{% endif %}

2) Edit the sls file


[root@server1 pillar]# cd ..
[root@server1 srv]# cd salt/httpd/
[root@server1 httpd]# vim service.sls
[root@server1 httpd]# cat service.sls
include:
 - httpd.install

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - template: jinja
      port: 80
      host: {{ pillar['IP'] }}

httpd-service:
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:
        - file: /etc/httpd/conf/httpd.conf

3) Apply


[root@server1 httpd]# salt server2 state.sls httpd.service
server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 19:06:41.493002
    Duration: 636.539 ms
     Changes:   
----------
          ID: /etc/httpd/conf/httpd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf is in the correct state
     Started: 19:06:42.132310
    Duration: 53.062 ms
     Changes:   
----------
          ID: httpd-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 19:06:42.187206
    Duration: 42.661 ms
     Changes:   

Summary for server2
------------
Succeeded: 3
Failed:    0
------------
Total states run:     3
Total run time: 732.262 ms

4. Using lib.sls to set a variable (the imported variable takes precedence)

1) Write the lib file

[root@server1 httpd]# pwd
/srv/salt/httpd
[root@server1 httpd]# vim lib.sls
[root@server1 httpd]# cat lib.sls
{% set host = '172.25.85.2' %}

2) Import it in the template file (the import line and Listen directive below are what httpd.conf contains after the edit)


[root@server1 httpd]# vim files/httpd.conf

{% from 'httpd/lib.sls' import host %}

Listen {{ host }}:{{ port }}

3) Also set host in the sls file, to show which one wins

[root@server1 httpd]# vim service.sls

[root@server1 httpd]# cat service.sls
include:
 - httpd.install

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - template: jinja
      port: 80
      host: 172.25.85.3

httpd-service:
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:
        - file: /etc/httpd/conf/httpd.conf

4) Apply

[root@server1 httpd]# salt server2 state.sls httpd.service
server2:
----------
          ID: httpd
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 19:31:06.049226
    Duration: 627.351 ms
     Changes:   
----------
          ID: /etc/httpd/conf/httpd.conf
    Function: file.managed
      Result: True
     Comment: File /etc/httpd/conf/httpd.conf updated
     Started: 19:31:06.679294
    Duration: 84.947 ms
     Changes:   
              ----------
              diff:
                  ---
                  +++
                  @@ -1,3 +1,4 @@
                  +
                   #
                   # This is the main Apache HTTP server configuration file.  It contains the
                   # configuration directives that give the server its instructions.
----------
          ID: httpd-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service reloaded
     Started: 19:31:06.806071
    Duration: 119.771 ms
     Changes:   
              ----------
              httpd:
                  True

Summary for server2
------------
Succeeded: 3 (changed=2)
Failed:    0
------------
Total states run:     3
Total run time: 832.069 ms

Test:

[root@server2 srv]# netstat -antlp |grep 80
tcp        0      0 172.25.85.2:80          0.0.0.0:*               LISTEN      3423/httpd    

5. Using Jinja in an ordinary state file

1) Define the variable

[root@server1 nginx]# pwd
/srv/salt/nginx

[root@server1 nginx]# vim install.sls
[root@server1 nginx]# cat install.sls
{% set nginx_ver = '1.15.8' %}

nginx-install:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - zlib-devel
      - gcc
      - make

  file.managed:
    - name: /mnt/nginx-{{ nginx_ver }}.tar.gz
    - source: salt://nginx/files/nginx-{{ nginx_ver }}.tar.gz

  cmd.run:
    - name: cd /mnt && tar zxf nginx-{{ nginx_ver }}.tar.gz && cd nginx-{{ nginx_ver }} && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && ./configure --prefix=/usr/local/nginx &> /dev/null && make &> /dev/null && make install &> /dev/null && cd .. && rm -rf nginx-{{ nginx_ver }}
    - creates: /usr/local/nginx

2) Apply


[root@server1 nginx]# salt server3 state.sls nginx.service
server3:
----------
          ID: nginx-install
    Function: pkg.installed
      Result: True
     Comment: All specified packages are already installed
     Started: 19:59:56.268737
    Duration: 625.204 ms
     Changes:   
----------
          ID: nginx-install
    Function: file.managed
        Name: /mnt/nginx-1.15.8.tar.gz
      Result: True
     Comment: File /mnt/nginx-1.15.8.tar.gz is in the correct state
     Started: 19:59:56.896449
    Duration: 45.446 ms
     Changes:   
----------
          ID: nginx-install
    Function: cmd.run
        Name: cd /mnt && tar zxf nginx-1.15.8.tar.gz && cd nginx-1.15.8 && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && ./configure --prefix=/usr/local/nginx &> /dev/null && make &> /dev/null && make install &> /dev/null && cd .. && rm -rf nginx-1.15.8
      Result: True
     Comment: /usr/local/nginx exists
     Started: 19:59:56.943484
    Duration: 0.875 ms
     Changes:   
----------
          ID: nginx
    Function: user.present
      Result: True
     Comment: User nginx is present and up to date
     Started: 19:59:56.945224
    Duration: 1.763 ms
     Changes:   
----------
          ID: /usr/local/nginx/conf/nginx.conf
    Function: file.managed
      Result: True
     Comment: File /usr/local/nginx/conf/nginx.conf is in the correct state
     Started: 19:59:56.947158
    Duration: 19.158 ms
     Changes:   
----------
          ID: nginx-service
    Function: file.managed
        Name: /etc/systemd/system/nginx.service
      Result: True
     Comment: File /etc/systemd/system/nginx.service is in the correct state
     Started: 19:59:56.966607
    Duration: 19.309 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: The service nginx is already running
     Started: 19:59:56.987615
    Duration: 39.15 ms
     Changes:   

Summary for server3
------------
Succeeded: 7
Failed:    0
------------
Total states run:     7
Total run time: 750.905 ms
 

VII. keepalived + haproxy


1. Directory tree

[root@server1 salt]# tree .
.
├── _grains
│   └── my_grains.py
├── haproxy
│   ├── files
│   │   └── haproxy.cfg
│   └── install.sls
├── httpd
│   ├── apache.sls
│   ├── files
│   │   └── httpd.conf
│   ├── install.sls
│   ├── lib.sls
│   └── service.sls
├── keepalived
│   ├── files
│   │   └── keepalived.conf
│   └── install.sls
├── nginx
│   ├── files
│   │   ├── nginx-1.15.8.tar.gz
│   │   ├── nginx.conf
│   │   └── nginx.service
│   ├── install.sls
│   └── service.sls
├── top.sls
└── users
    └── nginx.sls

10 directories, 17 files

2. haproxy configuration

1) State file

[root@server1 salt]# cat haproxy/install.sls
haproxy-install:        
  pkg.installed:
    - pkgs:
      - haproxy

  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://haproxy/files/haproxy.cfg

  service.running:
    - name: haproxy
    - reload: True
    - watch:
      - file: haproxy-install

2) Configuration file

[root@server1 salt]# cat haproxy/files/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

    stats uri /status            ##status page URI
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main *:80
    default_backend             app

backend app        
    balance     roundrobin
    server  app1 172.25.85.2:80 check  
    server  app2 172.25.85.3:80 check

3. keepalived

1) State file

[root@server1 salt]# cat keepalived/install.sls
kp-install:
  pkg.installed:
    - pkgs:
      - keepalived

  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja
      {% if grains['fqdn'] == 'server1' %}
      STATE: MASTER
      VRID: 51
      PRIORITY: 100
      {% elif grains['fqdn'] == 'server4' %}
      STATE: BACKUP
      VRID: 51
      PRIORITY: 50
      {% endif %}

  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: kp-install

2) Configuration file

[root@server1 salt]# cat keepalived/files/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.85.100
    }
}

4. httpd files

[root@server1 salt]# cat httpd/install.sls
httpd:
  pkg.installed:
    - pkgs:
      - httpd
      - php
      - php-mysql
[root@server1 salt]# cat httpd/service.sls
include:
 - httpd.install

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/files/httpd.conf
    - template: jinja
      port: 80

httpd-service:
  service.running:
    - name: httpd
    - enable: False
    - reload: True
      watch:
        - file: /etc/httpd/conf/httpd.conf
[root@server1 salt]# cat httpd/lib.sls
{% set host = '172.25.85.2' %}

[root@server1 salt]# cat httpd/files/httpd.conf

{% from 'httpd/lib.sls' import host %}

Listen {{ host }}:{{ port }}

5. nginx files

[root@server1 salt]# cat nginx/install.sls
{% set nginx_ver = '1.15.8' %}

nginx-install:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - zlib-devel
      - gcc
      - make

  file.managed:
    - name: /mnt/nginx-{{ nginx_ver }}.tar.gz
    - source: salt://nginx/files/nginx-{{ nginx_ver }}.tar.gz

  cmd.run:
    - name: cd /mnt && tar zxf nginx-{{ nginx_ver }}.tar.gz && cd nginx-{{ nginx_ver }} && sed -i 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/' auto/cc/gcc && ./configure --prefix=/usr/local/nginx &> /dev/null && make &> /dev/null && make install &> /dev/null && cd .. && rm -rf nginx-{{ nginx_ver }}
    - creates: /usr/local/nginx
[root@server1 salt]# cat nginx/service.sls
include:
  - nginx.install
  - users.nginx


/usr/local/nginx/conf/nginx.conf:
    file.managed:
      - source: salt://nginx/files/nginx.conf

nginx-service:
  file.managed:
    - name: /etc/systemd/system/nginx.service
    - source: salt://nginx/files/nginx.service

  service.running:
    - name: nginx
    - reload: True
    - watch:
      - file: /usr/local/nginx/conf/nginx.conf

6. Top file

[root@server1 salt]# cat top.sls
base:
  'server1':
    - haproxy.install
    - keepalived.install

  'server4':
    - haproxy.install
    - keepalived.install

  'server2':
    - httpd.service

  'server3':
    - nginx.service

Apply

[root@server1 salt]# salt '*' state.highstate

1. Round-robin across the backends works

2. Stop keepalived on server1; the virtual IP disappears

[root@server1 salt]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cd:f9:bc brd ff:ff:ff:ff:ff:ff
    inet 172.25.85.1/24 brd 172.25.85.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.85.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fecd:f9bc/64 scope link
       valid_lft forever preferred_lft forever


[root@server1 salt]# systemctl stop keepalived


[root@server1 salt]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cd:f9:bc brd ff:ff:ff:ff:ff:ff
    inet 172.25.85.1/24 brd 172.25.85.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fecd:f9bc/64 scope link
       valid_lft forever preferred_lft forever

3. The VIP is now on server4, and round-robin still works

[root@server4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:a3:77:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.25.85.4/24 brd 172.25.85.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.85.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea3:77b3/64 scope link
       valid_lft forever preferred_lft forever
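The takeover can also be checked programmatically. A minimal sketch (plain Python; `has_vip` is an illustrative helper, not part of Salt or keepalived) that scans `ip addr` output for the virtual IP:

```python
def has_vip(ip_addr_output, vip):
    """Return True if vip appears as an inet address in `ip addr` output."""
    for line in ip_addr_output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "inet":
            if fields[1].split("/")[0] == vip:
                return True
    return False

# Trimmed sample of the server4 output shown above
server4_output = """\
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.25.85.4/24 brd 172.25.85.255 scope global eth0
    inet 172.25.85.100/32 scope global eth0
"""
print(has_vip(server4_output, "172.25.85.100"))  # True
```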

八、Installing the database

The database is created to record the commands executed through Salt.

1) Recording on the minion side

1. Install MariaDB on server1 and grant privileges

yum install -y mariadb-server
systemctl start mariadb


[root@server1 salt]# mysql
MariaDB [(none)]> grant all on salt.* to salt@'%' identified by 'salt';
Query OK, 0 rows affected (0.01 sec)

2. Import the table schema

[root@server1 salt]# vim test.sql


CREATE DATABASE  `salt`
  DEFAULT CHARACTER SET utf8
  DEFAULT COLLATE utf8_general_ci;

USE `salt`;

--
-- Table structure for table `jids`
--

DROP TABLE IF EXISTS `jids`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE INDEX jid ON jids(jid) USING BTREE;

--
-- Table structure for table `salt_returns`
--

DROP TABLE IF EXISTS `salt_returns`;
CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

--
-- Table structure for table `salt_events`
--

DROP TABLE IF EXISTS `salt_events`;
CREATE TABLE `salt_events` (
`id` BIGINT NOT NULL AUTO_INCREMENT,
`tag` varchar(255) NOT NULL,
`data` mediumtext NOT NULL,
`alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`master_id` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

[root@server1 salt]# mysql < test.sql
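To see the shape of what the returner stores, here is a small sketch that mimics one row of `salt_returns` using Python's built-in sqlite3 as a stand-in for MariaDB (the jid and values are made up; in the real setup the insert is done by Salt's mysql returner):

```python
import json
import sqlite3

# In-memory stand-in for the `salt` database created above
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE salt_returns (
    fun TEXT, jid TEXT, "return" TEXT, id TEXT, success TEXT, full_ret TEXT)""")

# Roughly the row the mysql returner writes for one job on one minion
row = ("cmd.run",                                # function that ran
       "20190506103000000000",                   # job id (made up)
       json.dumps("Filesystem  1K-blocks ..."),  # JSON-serialised return value
       "server4",                                # minion id
       "1",
       json.dumps({"fun": "cmd.run", "id": "server4", "success": True}))
conn.execute("INSERT INTO salt_returns VALUES (?, ?, ?, ?, ?, ?)", row)

for fun, minion, ret in conn.execute('SELECT fun, id, "return" FROM salt_returns'):
    print(minion, fun, json.loads(ret))
```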


3. Edit the minion config on server4 and restart

[root@server4 keepalived]# cd /etc/salt/
[root@server4 salt]# vim minion


return: mysql
mysql.host: '172.25.38.1'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@server4 salt]# systemctl restart salt-minion

4. Install the Python MySQL binding on server4, otherwise results cannot be written

yum install -y MySQL-python


5. Test by sending a command

[root@server1 salt]# salt server4 cmd.run df --return mysql    ## also write the result to MySQL
server4:
    Filesystem            1K-blocks    Used Available Use% Mounted on
    /dev/mapper/rhel-root  17811456 1193856  16617600   7% /
    devtmpfs                 239252       0    239252   0% /dev
    tmpfs                    250224      12    250212   1% /dev/shm
    tmpfs                    250224   33148    217076  14% /run
    tmpfs                    250224       0    250224   0% /sys/fs/cgroup
    /dev/sda1               1038336  141508    896828  14% /boot
    tmpfs                     50048       0     50048   0% /run/user/0


Check the database

[root@server1 salt]# mysql salt
MariaDB [salt]> select * from salt_returns;

2) Recording actively on the master side

1. Install the dependency

yum install -y MySQL-python

2. Edit the master config and restart the service

[root@server1 salt]# vim /etc/salt/master

return: mysql
mysql.host: 'localhost'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306

master_job_cache: mysql

systemctl restart salt-master

3. Grant MySQL privileges

[root@server1 salt]# mysql
MariaDB [(none)]> grant all on salt.* to salt@'localhost' identified by 'salt';
Query OK, 0 rows affected (0.00 sec)


4. Test by sending a command

[root@server1 salt]# systemctl restart salt-master
[root@server1 salt]# salt server2 test.ping
server2:
    True

5. Check the result


[root@server1 salt]# mysql salt
MariaDB [salt]> select * from salt_returns;


九、Remote execution

A custom execution module works much like a function.

1. Create the module on server1 (create /srv/salt/_modules first if it does not exist)

[root@server1 ~]# cd /srv/salt/_modules/
[root@server1 _modules]# vim my_disk.py

with the following content:

def df():
    cmd = 'df -h'
    return __salt__['cmd.run'](cmd)
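Inside a custom module, `__salt__` is a dictionary of all loaded execution functions that the Salt loader injects, so `__salt__['cmd.run'](cmd)` is an ordinary cross-call. A sketch of the mechanism in plain Python (`fake_salt` and the canned output are illustrative, not Salt internals):

```python
# Stand-in for the __salt__ dict that the Salt loader injects into modules
def cmd_run(cmd):
    # pretend to execute the shell command and return its output
    canned = {"df -h": "Filesystem Size Used Avail Use% Mounted on"}
    return canned.get(cmd, "")

fake_salt = {"cmd.run": cmd_run}

def df():
    """Same shape as my_disk.df above: delegate to cmd.run."""
    cmd = "df -h"
    return fake_salt["cmd.run"](cmd)

print(df())
```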


2. Push the module from server1 to server2

[root@server1 _modules]# salt server2 saltutil.sync_modules
server2:
    - modules.my_disk

3. Check on server2

[root@server2 salt]# pwd
/var/cache/salt
[root@server2 salt]# tree
.
└── minion
    ├── accumulator
    ├── extmods
    │   └── modules
    │       └── my_disk.py

    ├── files
    │   └── base
    │       ├── httpd
    │       │   ├── apache.sls
    │       │   ├── files
    │       │   │   └── httpd.conf
    │       │   ├── install.sls
    │       │   └── service.sls
    │       ├── _modules
    │       │   └── my_disk.py
    │       └── top.sls
    ├── highstate.cache.p
    ├── module_refresh
    ├── pkg_refresh
    ├── proc
    └── sls.p

4. Test from server1 that the module works

[root@server1 _modules]# salt server2 my_disk.df
server2:
    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/rhel-root   17G  1.5G   16G   9% /
    devtmpfs               360M     0  360M   0% /dev
    tmpfs                  371M   12K  371M   1% /dev/shm
    tmpfs                  371M  9.7M  361M   3% /run
    tmpfs                  371M     0  371M   0% /sys/fs/cgroup
    /dev/sda1             1014M  139M  876M  14% /boot
    tmpfs                   75M     0   75M   0% /run/user/0


十、API authentication and usage (starting nginx via the API)

1. Install salt-api

[root@server1 ~]# yum install -y salt-api

2. Generate a certificate

[root@server1 ~]# cd /etc/pki/
[root@server1 pki]# ls
CA        consumer     java   product          rpm-gpg  tls
ca-trust  entitlement  nssdb  product-default  rsyslog
[root@server1 pki]# cd tls/
[root@server1 tls]# cd certs/
[root@server1 certs]# ls
ca-bundle.crt  ca-bundle.trust.crt  make-dummy-cert  Makefile  renew-dummy-cert

[root@server1 certs]# cd ../private/
[root@server1 private]# openssl genrsa 1024
Generating RSA private key, 1024 bit long modulus
...................................++++++
...++++++
e is 65537 (0x10001)
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQDbJUlTDI3rBafPuYxtqG2/z6/6j/V647+zFzr5gkvyIKBIlhVE
tdGLydc+G4y4dPJ9FZpkklXI6y6CiupkDGqBqsJSM/oTFj0O+8+WBCtsR326OAXk
xckImUQp0V9wg19PGu+M6P8F92B6qAYd80OrOuXegEcBMJQeVahH48QckwIDAQAB
AoGBAL47Pc1j5oYPoL6HOUmvnaWV6hM9iECnFy+liMIywy5p9/lKnyfIFSCdk8UM
MTml+yFt8VpAVUtWLEeRwyoaRRCyQsFdZJkt2G3HaSVZF1j6/HHtnliMcKazsVUx
HCr5o2/9zYzv2MUQDozl3xaO+CHq9KT54bqXoGAY9c3HX1TRAkEA/S4lVmlO/4NK
58ThDTGRmpCQgLVmHmaVnEcSRbv08hLzktV4GVPHVSnHS3x2Xmdke5b85GpjzjIL
bMsZdcxk3QJBAN2WGlfa8u5M2gwuSe3EG1+srNNB9ZLwyQZm/So8g+uXdnZpCI5g
Y5eayIuP1edAMDvc+Idt0ybz8bryhEFueC8CQE4r0FV04HJeDGOxUzdqpaVOm39S
AvzB+dGt1AN5/DA+D7y3coSHbJVr99/jxvxw+gJ65Qx1mOlSZFqr/ulzOXUCQQC8
x5bl5nk1IHBcFEuTr5GKrzgGO5mWeGErfS6Of0P8wOuB8fYCJohyrsHQdNhNzdfK
CHYMGzrbYtU86kRW4mCtAkEA83jFPjRwrgdY9vJe1QSBlOzE9RPyNLzdATK6ZdlL
1c8GZLb0gfMT53EsXllV8vahAqf+BBzgyRORusIhiqwlHg==
-----END RSA PRIVATE KEY-----
The first run only printed the key to the terminal; redirect the output into localhost.key:

[root@server1 private]# openssl genrsa 1024 > localhost.key
Generating RSA private key, 1024 bit long modulus
.......++++++
...........++++++
e is 65537 (0x10001)
[root@server1 private]# cd ../certs/
[root@server1 certs]# make testcert
umask 77 ; \
/usr/bin/openssl req -utf8 -new -key /etc/pki/tls/private/localhost.key -x509 -days 365 -out /etc/pki/tls/certs/localhost.crt -set_serial 0
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:cn
State or Province Name (full name) []:shannxi
Locality Name (eg, city) [Default City]:xi'an
Organization Name (eg, company) [Default Company Ltd]:westos
Organizational Unit Name (eg, section) []:linux
Common Name (eg, your name or your server's hostname) []:server1
Email Address []:localhost@server
[root@server1 certs]# ll localhost.crt
-rw------- 1 root root 1034 May  6 10:33 localhost.crt

3. Point salt-api at the certificate

[root@server1 certs]# cd /etc/salt/
[root@server1 salt]# cd master.d/
[root@server1 master.d]# ls
[root@server1 master.d]# rpm -ql salt-api
/usr/bin/salt-api
/usr/lib/systemd/system/salt-api.service
/usr/share/man/man1/salt-api.1.gz
[root@server1 master.d]# vim api.conf
[root@server1 master.d]# cat api.conf
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/private/localhost.key

4. Configure external authentication and the user

[root@server1 master.d]# vim auth.conf
[root@server1 master.d]# cat auth.conf
external_auth:
  pam:
    saltapi:
      - '.*'
      - '@wheel'
      - '@runner'
      - '@jobs'

5. Add the user and set a password

[root@server1 master.d]# useradd saltapi
[root@server1 master.d]# passwd saltapi
Changing password for user saltapi.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

6. Start the API

[root@server1 master.d]# systemctl restart salt-api
[root@server1 master.d]# systemctl status salt-api
● salt-api.service - The Salt API
   Loaded: loaded (/usr/lib/systemd/system/salt-api.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-05-06 10:36:26 CST; 8s ago
     Docs: man:salt-api(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltstack.com/en/latest/contents.html
 Main PID: 19465 (salt-api)
   CGroup: /system.slice/salt-api.service
           ├─19465 /usr/bin/python /usr/bin/salt-api
           └─19476 /usr/bin/python /usr/bin/salt-api

May 06 10:36:26 server1 systemd[1]: Starting The Salt API...
May 06 10:36:26 server1 systemd[1]: Started The Salt API.
      

Check port 8000

[root@server1 master.d]# netstat -antlp|grep 8000
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      19476/python        
tcp        0      0 127.0.0.1:59338         127.0.0.1:8000          TIME_WAIT   -                   
tcp        0      0 127.0.0.1:59332         127.0.0.1:8000          TIME_WAIT   -    

7. Restart salt-master


[root@server1 master.d]# systemctl restart salt-master

8. Test with curl

Generate a token


[root@server1 master.d]# curl -sSk https://localhost:8000/login -d username=saltapi -d password=westos -d eauth=pam
{"return": [{"perms": [".*", "@wheel", "@runner", "@jobs"], "start": 1557110710.097285, "token": "8a6a44f3de7bb8fe4a1079c2530420fdaaa80dec", "expire": 1557153910.097286, "user": "saltapi", "eauth": "pam"}]}

The same request with 'Accept: application/x-yaml' returns the token in YAML:

[root@server1 master.d]# curl -sSk https://localhost:8000/login -H 'Accept: application/x-yaml' -d username=saltapi -d password=westos -d eauth=pam
return:
- eauth: pam
  expire: 1557153945.858572
  perms:
  - .*
  - '@wheel'
  - '@runner'
  - '@jobs'
  start: 1557110745.858571
  token: 00616a6998c64f5fd91cdb1edbfc24a6bcd64326
  user: saltapi

Test with the token


[root@server1 master.d]# curl -sSk https://localhost:8000 -H 'Accept: application/x-yaml' -H 'X-Auth-Token: 00616a6998c64f5fd91cdb1edbfc24a6bcd64326' -d client=local -d tgt='*' -d fun=test.ping
return:
- server1: true
  server2: true
  server3: true
  server4: true
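The two calls above are plain HTTP POSTs: one to /login with PAM credentials, one to / carrying the token header. A Python 3 sketch that only builds those requests without contacting a server (the helper names and the token value are illustrative):

```python
from urllib.parse import urlencode

API = "https://localhost:8000"

def build_login(username, password):
    """URL and POST body for /login, mirroring the curl -d flags above."""
    body = urlencode({"eauth": "pam", "username": username, "password": password})
    return API + "/login", body.encode()

def build_command(token, tgt, fun):
    """URL, headers and body for running a function through the API."""
    headers = {"X-Auth-Token": token, "Accept": "application/x-yaml"}
    body = urlencode({"client": "local", "tgt": tgt, "fun": fun})
    return API, headers, body.encode()

login_url, login_body = build_login("saltapi", "westos")
cmd_url, cmd_headers, cmd_body = build_command("00616a69...", "*", "test.ping")
print(login_url, login_body)
print(cmd_headers)
```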

9. Stop nginx on server3

[root@server3 salt]# systemctl stop nginx
[root@server3 salt]# systemctl status nginx
● nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/etc/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

[root@server3 salt]# netstat -antlp | grep 80


10. Write an API script that starts nginx


[root@server1 master.d]# cd
[root@server1 ~]# vim saltapi.py

[root@server1 ~]# cat saltapi.py
# -*- coding: utf-8 -*-

import urllib2,urllib
import time

try:
    import json
except ImportError:
    import simplejson as json

class SaltAPI(object):
    __token_id = ''
    def __init__(self,url,username,password):
        self.__url = url.rstrip('/')
        self.__user = username
        self.__password = password

    def token_id(self):
        ''' user login and get token id '''
        params = {'eauth': 'pam', 'username': self.__user, 'password': self.__password}
        encode = urllib.urlencode(params)
        obj = urllib.unquote(encode)
        content = self.postRequest(obj,prefix='/login')
        try:
            self.__token_id = content['return'][0]['token']
        except KeyError:
            raise

    def postRequest(self,obj,prefix='/'):
        import ssl
        url = self.__url + prefix
        headers = {'X-Auth-Token'   : self.__token_id}
        req = urllib2.Request(url, obj, headers)
        # skip certificate verification: the API serves the self-signed
        # localhost.crt, which newer Python 2.7 builds reject by default
        opener = urllib2.urlopen(req, context=ssl._create_unverified_context())
        content = json.loads(opener.read())
        return content

    def list_all_key(self):
        params = {'client': 'wheel', 'fun': 'key.list_all'}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        minions = content['return'][0]['data']['return']['minions']
        minions_pre = content['return'][0]['data']['return']['minions_pre']
        return minions,minions_pre

    def delete_key(self,node_name):
        params = {'client': 'wheel', 'fun': 'key.delete', 'match': node_name}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0]['data']['success']
        return ret

    def accept_key(self,node_name):
        params = {'client': 'wheel', 'fun': 'key.accept', 'match': node_name}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0]['data']['success']
        return ret

    def remote_noarg_execution(self,tgt,fun):
        ''' Execute commands without parameters '''
        params = {'client': 'local', 'tgt': tgt, 'fun': fun}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0][tgt]
        return ret

    def remote_execution(self,tgt,fun,arg):
        ''' Command execution with parameters '''        
        params = {'client': 'local', 'tgt': tgt, 'fun': fun, 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0][tgt]
        return ret

    def target_remote_execution(self,tgt,fun,arg):
        ''' Use targeting for remote execution '''
        params = {'client': 'local', 'tgt': tgt, 'fun': fun, 'arg': arg, 'expr_form': 'nodegroup'}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

    def deploy(self,tgt,arg):
        ''' Module deployment '''
        params = {'client': 'local', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        return content

    def async_deploy(self,tgt,arg):
        ''' Asynchronously send a command to connected minions '''
        params = {'client': 'local_async', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

    def target_deploy(self,tgt,arg):
        ''' Based on the node group forms deployment '''
        params = {'client': 'local_async', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg, 'expr_form': 'nodegroup'}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

def main():
    sapi = SaltAPI(url='https://172.25.85.1:8000',username='saltapi',password='westos')  ## adjust the URL, username and password
    #sapi.token_id()
    #print sapi.list_all_key()
    #sapi.delete_key('test-01')
    #sapi.accept_key('test-01')
    sapi.deploy('server3','nginx.service')     ## call the nginx.service state to start nginx
    #print sapi.remote_noarg_execution('test-01','grains.items')

if __name__ == '__main__':
    main()
[root@server1 ~]# python saltapi.py    ## run the script
 

Test: check nginx on server3

[root@server3 salt]# systemctl status nginx
● nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/etc/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-05-06 11:11:30 CST; 58s ago
  Process: 45

[root@server3 salt]# netstat -antlp | grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      4563/nginx: master  
tcp        0      0 172.25.85.3:80          172.25.85.4:46390       SYN_RECV    -         


Reposted from blog.csdn.net/qq_41627390/article/details/88831433