Automated Deployment
Ansible
Ansible overview:
Deploying the automated deployment tool Ansible on CentOS 7
Stages of operations work:
Stage 1: manual deployment
Stage 2: scripts, enabling batch installation
Stage 3: automated deployment tools
    ansible
    puppet
    saltstack
Features:
Batch system configuration, batch application deployment, batch command execution
How it works:
Ansible's core components:
Host Inventory: the host list
    defines the list of managed hosts
Connection Plugins: the connection layer
    ssh; nodes must provide Ansible with an account and password (root)
Playbooks: describe the operations the nodes should perform
    YAML scripts; like Python, they are indentation-sensitive
Modules: carry out the tasks orchestrated in the playbook
Output colors:
    red: error
    pink: warning
    yellow or green: OK
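A minimal sketch of how these pieces fit together: a hypothetical inventory fragment plus an ad-hoc call. The group name and IPs here are placeholders, not the lab addresses used below:

```
# /etc/ansible/hosts -- inventory fragment (placeholder group and IPs)
[web]
192.0.2.10
192.0.2.11
```

An ad-hoc command such as `ansible web -m ping` then runs the ping module against every host in the [web] group.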
Typical company use:
- pushing war packages
- git + maven (assembles the war package) + jenkins + ansible (automated deployment)
1. ansible
Lab setup:
ansible 172.16.0.19
node1   172.16.0.31
node2   172.16.0.32
Install Ansible
[root@ansible ~]# lftp 172.16.0.99
lftp 172.16.0.99:~> cd release/ # install the EPEL repo from an rpm package
lftp 172.16.0.99:/release> get epel-release-7-6.noarch.rpm
[root@ansible ~]# rpm -ivh epel-release-7-6.noarch.rpm
# install ansible
[root@ansible ~]# yum install -y ansible
[root@ansible ~]# cd /etc/ansible/
[root@ansible /etc/ansible]# ls
ansible.cfg hosts roles
hosts is the host inventory
[root@ansible /etc/ansible]# vim hosts
172.16.0.31
Test:
[root@ansible ~]# ansible 172.16.0.31 -m ping -k
SSH password:
-m specifies the Ansible module to invoke
ping tests whether the remote node is alive
-k prompts interactively for the remote node's password
172.16.0.31 | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
The error message shows that the local known_hosts file has no fingerprint entry for the remote node.
Fix:
[root@ansible ~]# ssh 172.16.0.31
The authenticity of host '172.16.0.31 (172.16.0.31)' can't be established.
ECDSA key fingerprint is SHA256:2XbadPSEyc/rLTTQeUjJ7fgeX93S+eOEbMcKVBXEipc.
ECDSA key fingerprint is MD5:a3:b6:36:34:9a:95:52:78:b3:f1:ab:23:64:ee:13:15.
Are you sure you want to continue connecting (yes/no)? yes
[root@ansible ~/.ssh]# ls
known_hosts
[root@ansible ~/.ssh]# cat known_hosts
172.16.0.31 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNNfFHh+mBmim3B9rbBuxntwnzhOZjYkMjRH7mz56LGvRcl7zUGwjembfFw1V/QbfY97ZRTAzY2AiU6/a3xhZcs=
[root@ansible ~]# ansible 172.16.0.31 -m ping -k
SSH password:
[root@ansible /etc/ansible]# vim hosts
[node]
172.16.0.31
172.16.0.32
[root@ansible /etc/ansible]# vim hosts
[node]
172.16.0.31 ansible_ssh_user=root ansible_ssh_pass=123
172.16.0.32 ansible_ssh_user=root ansible_ssh_pass=123
[root@ansible ~]# ansible node -m ping
Problem: recording the nodes' root passwords in the hosts file is insecure.
Solution:
Have Ansible operate on the nodes without a password,
i.e., set up passwordless SSH to the nodes.
Generate the SSH key pair
[root@ansible ~]# ssh-keygen
Use ssh-copy-id -i to push the public key to 172.16.0.31
[root@ansible ~]# ssh-copy-id -i .ssh/id_rsa.pub 172.16.0.31
[root@ansible ~]# cat ipfile
172.16.0.31
172.16.0.32
[root@ansible ~]# vim scp-sshpubkey.sh
#!/bin/bash
# Distribute ansible's ssh public key (when all nodes share the same password)
# requires the sshpass package: yum install -y sshpass
pass="123"
pubkey="/root/.ssh/id_rsa.pub"
file="/root/ipfile"
while read -r ip
do
    sshpass -p "$pass" ssh-copy-id -i "$pubkey" "$ip" &> /dev/null && echo "$ip: public key installed."
done < "$file"
Think about it: how would you write this if the nodes' root passwords are not all the same?
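One answer, sketched as a dry run: read an "ip password" pair from each line and build the per-host command. The hostfile contents and passwords below are made-up placeholders, and the real `sshpass ... ssh-copy-id` call is only echoed so the parsing logic can be shown without touching real hosts:

```shell
#!/bin/bash
# Dry-run sketch: key distribution when every node has a different password.
# hostfile format: "ip password" per line (placeholder values).
hostfile=$(mktemp)
cat > "$hostfile" <<'EOF'
172.16.0.31 pass31
172.16.0.32 pass32
EOF

pubkey="/root/.ssh/id_rsa.pub"
out=""
while read -r ip pass
do
    # real version: sshpass -p "$pass" ssh-copy-id -i "$pubkey" "$ip"
    cmd="sshpass -p $pass ssh-copy-id -i $pubkey $ip"
    echo "$cmd"
    out="$out$cmd
"
done < "$hostfile"
rm -f "$hostfile"
```

Replacing the echo with the real sshpass call gives essentially the script shown in the reference section at the end of these notes.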
Common Ansible modules:
1. ping
Purpose: test whether the remote nodes are alive.
[root@ansible ~]# ansible node -m ping -k
SSH password:
172.16.14.33 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
172.16.14.32 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
2. command
Purpose: run a command directly on the remote nodes; pipes (|) are not supported.
[root@ansible ~]# ansible node -m command -a "mkdir /tmp/dir1"
[root@ansible ~]# ansible node -m command -a "ls /tmp"
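When a pipe or redirection is needed, Ansible's shell module (which runs the command through a shell on the node) can be used instead of command; the command below is only an illustration:

```
[root@ansible ~]# ansible node -m shell -a "ls /tmp | wc -l"
```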
3. copy
Purpose: copy files from the Ansible host to the remote nodes.
Common options:
src: source file
dest: destination file
backup: back up the destination file before overwriting it
owner: set the owner of the destination file
mode: set the permissions of the destination file
[root@ansible ~]# ansible node -m copy -a "src=/etc/hosts dest=/tmp/hosts"
[root@ansible ~]# ansible node -m copy -a "src=/etc/hosts dest=/tmp/hosts backup=yes"
[root@ansible ~]# ansible node -m copy -a "src=/root/1.sh dest=/tmp/1.sh mode=755"
[root@ansible ~]# ansible node -m command -a "/tmp/1.sh"
4. file
Purpose: manage files on the remote nodes.
Common options:
path: path of the file
state: desired state of the file
    touch: create a file
    directory: create a directory (if it does not exist)
    absent: delete
[root@ansible ~]# ansible node -m file -a "path=/tmp/file1 state=touch"
[root@ansible ~]# ansible node -m file -a "path=/tmp/dir1 state=directory"
[root@ansible ~]# ansible node -m file -a "path=/tmp/dir2 state=directory"
[root@ansible ~]# ansible node -m file -a "path=/tmp/f1 state=absent"
[root@ansible ~]# ansible node -m file -a "path=/tmp/dir1 state=absent"
[root@ansible ~]# ansible node -m file -a "path=/tmp/file2 owner=user1 group=user1 mode=755 state=touch"
[root@ansible ~]# ansible node -m command -a "ls -l /tmp/file2"
172.16.0.31 | CHANGED | rc=0 >>
-rwxr-xr-x 1 user1 user1 0 Dec 4 13:47 /tmp/file2
172.16.0.32 | CHANGED | rc=0 >>
-rwxr-xr-x 1 user1 user1 0 Dec 4 13:47 /tmp/file2
5. user
Purpose: manage users on the remote nodes
6. group
Purpose: manage groups on the remote nodes
[root@ansible ~]# ansible node -m user -a "name=user1"
== useradd user1
[root@ansible ~]# ansible node -m user -a "name=user2 uid=2000 shell=/sbin/nologin"
== useradd -u 2000 -s /sbin/nologin user2
[root@ansible ~]# ansible node -m user -a "name=user1 state=absent"
== userdel user1
[root@ansible ~]# ansible node -m user -a "name=user2 state=absent remove=yes"
== userdel -r user2
[root@ansible ~]# ansible node -m group -a "name=group1"
== groupadd group1
[root@ansible ~]# ansible node -m group -a "name=group2 gid=5000"
== groupadd -g 5000 group2
[root@ansible ~]# ansible node -m group -a "name=group1 state=absent"
[root@ansible ~]# ansible node -m group -a "name=group2 state=absent"
== groupdel <group name>
7. get_url
Purpose: download a file to the remote nodes
[root@ansible ~]# ansible node -m get_url -a "url=ftp://172.16.0.99/scripts/nginx-1.13-clean.sh dest=/tmp/"
[root@ansible ~]# ansible node -m get_url -a "url=ftp://172.16.0.99/scripts/mysql-5.7.18.sh dest=/tmp/ mode=755"
8. yum
Purpose: install and remove software distributed as rpm packages
[root@ansible ~]# ansible node -m yum -a "name=vsftpd"
== yum install -y vsftpd
[root@ansible ~]# ansible node -m yum -a "name=vsftpd state=absent"
== rpm -e vsftpd
[root@ansible ~]# ansible node -m yum -a "name=vsftpd,httpd"
Check whether vsftpd is installed on the remote hosts
[root@ansible ~]# ansible node -m command -a "rpm -q vsftpd"
9. systemd
Purpose: start and stop services on the remote nodes
[root@ansible ~]# ansible node -m systemd -a "name=httpd state=started"
== systemctl start httpd
[root@ansible ~]# ansible node -m systemd -a "name=httpd state=stopped"
== systemctl stop httpd
restarted: restart the service; reloaded: reload its configuration file
[root@ansible ~]# ansible node -m systemd -a "name=httpd enabled=yes"
== systemctl enable httpd
Hints on the output:
red: the service is not installed; the Ansible operation failed
yellow: the change was applied successfully
green: already in the desired state (nothing to do)
10. cron
Purpose: manage scheduled (cron) jobs on the remote nodes
Example: stop firewalld every 5 minutes
*/5 * * * * systemctl stop firewalld
[root@ansible ~]# ansible node -m cron -a 'name="stop firewalld" minute=*/5 job="systemctl stop firewalld"'
[root@ansible ~]# ansible node -m command -a "crontab -l"
[root@ansible ~]# ansible node -m cron -a 'name="stop firewalld" state=absent'
Deletes the scheduled job.
Cron fields:
minute   0-59, *, */n
hour     0-23
day      day of month
month
weekday  day of week
then the command to run
name identifies the cron entry (used to update or delete it later)
Example: run the MySQL backup script mysql_backup.sh at 1:30 every morning
30 1 * * * /path/mysql_backup.sh
[root@ansible ~]# ansible node -m cron -a 'name="mysql backup" minute=30 hour=1 job="/root/mysql_backup.sh"'
Writing Ansible playbooks:
YAML scripts
Pay close attention to formatting and alignment!
In "- name:" there is a space after the dash, and a space after every colon.
Playbook: deploy Apache on the remote nodes
    install: yum
    configure: copy (the config file and a test page)
    start the service
[root@ansible ~]# yum install -y httpd # install httpd on the Ansible host first, just to have a config file to edit (any other source works too)
[root@ansible /etc/ansible]# mkdir playbooks
[root@ansible /etc/ansible/playbooks]# vim apache.yml
- name: install and config apache
  hosts: node
  user: root
  tasks:
    - name: install apache
      yum: name=httpd
    - name: copy conf
      copy: src=files/httpd.conf dest=/etc/httpd/conf/httpd.conf backup=yes
      notify: restart httpd
    - name: copy index.html
      copy: src=files/index.html dest=/var/www/html/index.html
  handlers:
    - name: restart httpd
      systemd: name=httpd state=restarted enabled=yes
[root@ansible /etc/ansible/playbooks]# mkdir files
[root@ansible /etc/ansible/playbooks]# cp /etc/httpd/conf/httpd.conf files/
[root@ansible /etc/ansible/playbooks]# echo "apache" > files/index.html
[root@ansible /etc/ansible/playbooks]# vim files/httpd.conf
Add the virtual host configuration
[root@ansible /etc/ansible/playbooks]# ansible-playbook apache.yml
- name: install and config apache
    ## the description printed while the play runs
  hosts: node
    ## the inventory group the playbook targets
  user: root
    ## the user that executes the playbook
  tasks:
    ## the actual work of the playbook
    - name: install apache
      yum: name=httpd
        ## install the package via the yum module
    - name: copy conf
      copy: src=files/httpd.conf dest=/etc/httpd/conf/httpd.conf backup=yes
      notify: restart httpd
        ## when the config file changes, notify the handler to restart Apache
    - name: copy index.html
      copy: src=files/index.html dest=/var/www/html/index.html
  handlers:
    ## define how the service is (re)started
    - name: restart httpd
      systemd: name=httpd state=restarted enabled=yes
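Before a playbook touches real nodes, it can be validated first; both flags below are standard ansible-playbook options:

```
# check the playbook's syntax without contacting any host
[root@ansible /etc/ansible/playbooks]# ansible-playbook --syntax-check apache.yml
# dry run: show what would change on the nodes, without changing it
[root@ansible /etc/ansible/playbooks]# ansible-playbook --check apache.yml
```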
Exercise: write a YAML playbook for FTP:
    install: yum
    config file: copy (allow anonymous uploads)
    create a directory for the anonymous user to receive uploads: file
    start the service
- name: install and config ftp
  hosts: node
  user: root
  tasks:
    - name: install ftp
      yum: name=vsftpd
    - name: copy conf
      copy: src=files/vsftpd.conf dest=/etc/vsftpd/vsftpd.conf backup=yes
      notify: restart vsftpd
    - name: create ftp directory
      file: path=/var/ftp/upload owner=ftp state=directory
  handlers:
    - name: restart vsftpd
      systemd: name=vsftpd state=restarted enabled=yes
The notify value must exactly match the name under handlers:, otherwise the service will never be restarted.
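The playbook assumes files/vsftpd.conf already permits anonymous uploads. A sketch of the relevant directives — these are standard vsftpd options, but the exact set needed depends on the vsftpd version:

```
anonymous_enable=YES
write_enable=YES
anon_upload_enable=YES
anon_mkdir_write_enable=YES
```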
Puppet
Overview
Deploying the automated deployment tool Puppet on CentOS 7
Written in Ruby
Features:
A centralized configuration management system.
Puppet uses a client/server star topology
    C = client
    S = server
Each client periodically (every half hour by default) sends a request to the server, fetches its latest configuration, and keeps itself in sync with it.
It is particularly strong at keeping configuration files in sync.
How Puppet works:
- On first contact, the node sends the server a certificate signing request together with its own information;
- the server signs the node's certificate; once signed, node and server communicate over an SSL-encrypted connection;
- the server looks up the node's definition, collects the node's configuration, compiles it into a catalog (pseudo-code), and sends it to the node;
- the node checks its current configuration and, if it differs from the catalog, syncs the server's data or configuration;
- the node reports the result back to the server.
Configuration:
Set up hostname resolution first, via
/etc/hosts
[root@puppet ~]# tail -3 /etc/hosts
172.16.0.60 puppet.up.com puppet
172.16.0.61 node1.up.com node1
172.16.0.62 node2.up.com node2
[root@puppet ~]# scp /etc/hosts 172.16.0.61:/etc
[root@puppet ~]# scp /etc/hosts 172.16.0.62:/etc
1. server
puppet-master
[root@puppet ~]# lftp 172.16.0.99
lftp 172.16.0.99:~> cd release/
lftp 172.16.0.99:/release> get epel-release-7-6.noarch.rpm
Add the EPEL repo
[root@puppet ~]# rpm -ivh epel-release-7-6.noarch.rpm
Install Puppet, the language runtime, and related tools
[root@puppet ~]# yum install -y ruby ruby-libs puppet puppet-server facter
ruby: the Ruby runtime
facter: a system inventory tool that collects system facts
[root@puppet /etc/puppet]# ls
auth.conf fileserver.conf manifests modules puppet.conf
2. node1
[root@node1 ~]# lftp 172.16.0.99
lftp 172.16.0.99:~> cd release/
lftp 172.16.0.99:/release> get epel-release-7-6.noarch.rpm
[root@node1 ~]# rpm -ivh epel-release-7-6.noarch.rpm
The client does not need the server package
[root@node1 ~]# yum install -y ruby ruby-libs puppet facter
3. Configure the server
[root@puppet /etc/puppet]# touch manifests/site.pp
site.pp tells the server where to find and load the node definitions
[root@puppet ~]# systemctl start puppetmaster
[root@puppet ~]# systemctl enable puppetmaster
[root@puppet ~]# cd /var/lib/puppet/ssl/ca/signed/
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
puppet.pem
The certificate the server issued to itself
4. node1
Edit the config file and set the server's hostname
[root@node1 /etc/puppet]# vim puppet.conf
[main]
......
server = puppet.up.com
[root@node1 /etc/puppet]# ping puppet.up.com
PING puppet.up.com (172.16.0.60) 56(84) bytes of data.
64 bytes from puppet.up.com (172.16.0.60): icmp_seq=1 ttl=64 time=0.497 ms
[root@node1 ~]# systemctl start puppet
[root@node1 ~]# systemctl enable puppet
5. server
List the nodes' pending certificate signing requests
[root@puppet ~]# puppet cert -l
"node1.up.com" (SHA256) 31:34:6A:AA:98:97:0E:01:96:44:31:D2:7D:37:0F:85:13:0E:80:A4:1E:36:84:4D:D8:6A:45:C2:2C:E8:31:61
[root@puppet ~]# puppet cert -s node1.up.com
Sign node1's certificate request
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
node1.up.com.pem puppet.pem
---------------------------------------------
If signing fails:
1. check hostname resolution
2. check that the clocks are in sync
Recovery:
on the node, delete /var/lib/puppet/ssl
on the server, delete the node's signed certificate:
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
node1.up.com.pem
restart puppet on the node
and request signing again
---------------------------------------------
Start syncing configuration:
[root@puppet /etc/puppet]# vim manifests/site.pp
import "nodes.pp"
## point at the node manifest file
$puppetserver="puppet.up.com"
## define the puppet server variable
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node 'node1.up.com' {
    ## define the node
    include hosts
    ## the module whose files will be synced
}
[root@puppet /etc/puppet/modules]# mkdir -p hosts/{manifests,files}
[root@puppet /etc/puppet/modules/hosts/manifests]# vim init.pp
class hosts {
    package {"setup":
        ensure => present,
        allow_virtual => true
    }
    # the package resource names the package that owns the hosts file:
    # if it is already installed, nothing happens; if not, it is installed
    file {"/etc/hosts":
        owner => root,
        group => root,
        mode => 0644,
        source => "puppet://$puppetserver/modules/hosts/etc/hosts",
        require => Package["setup"],
    }
    # the file resource sets the synced file's attributes: owner, group, mode
    # source: where on the puppet server the file is fetched from
    # require: depends on the package declared above
}
Preparation:
/etc/hosts --> node1
[root@puppet ~]# rpm -qf /etc/hosts
setup-2.8.71-7.el7.noarch
[root@puppet ~]# ll /etc/hosts
-rw-r--r-- 1 root root 253 Dec 5 10:00 /etc/hosts
Copy the file that will be synced:
[root@puppet /etc/puppet/modules/hosts]# mkdir files/etc
[root@puppet /etc/puppet/modules/hosts]# cp /etc/hosts files/etc/
[root@puppet /etc/puppet/modules/hosts]# vim files/etc/hosts
Edit it so that it differs from the copy on the nodes
[root@puppet /etc/puppet/modules]# tree .
.
└── hosts
├── files
│ └── etc
│ └── hosts
└── manifests
└── init.pp
[root@puppet ~]# systemctl restart puppetmaster
Node sync, triggered manually:
[root@node1 ~]# puppet agent --test
[root@node1 ~]# cat /etc/hosts
Exercise: sync the Apache configuration file
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node 'node1.up.com' {
    include hosts
    include httpd    <-- added
}
Install Apache on the puppet server to obtain a config file
[root@puppet ~]# yum install -y httpd
Reuse the hosts module's directory structure, renamed for Apache
[root@puppet /etc/puppet/modules]# cp -r hosts/ httpd
Enter the httpd module directory and edit its deployment manifest
[root@puppet /etc/puppet/modules/httpd]# vim manifests/init.pp
class httpd {
    package {"httpd":
        ensure => present,
        allow_virtual => true
    }
    file {"/etc/httpd/conf/httpd.conf":
        owner => root,
        group => root,
        mode => 0644,
        source => "puppet://$puppetserver/modules/httpd/etc/httpd.conf",
        require => Package["httpd"],
    }
}
package resources map to the packages to manage, service resources map to services, and each class is attached to nodes in nodes.pp
[root@puppet /etc/puppet/modules/httpd/files/etc]# cp /etc/httpd/conf/httpd.conf .
[root@puppet /etc/puppet/modules/httpd]# vim files/etc/httpd.conf
# add the virtual host configuration
[root@puppet /etc/puppet/modules]# tree httpd/
httpd/
├── files
│ └── etc
│ └── httpd.conf
└── manifests
    └── init.pp
[root@puppet ~]# systemctl restart puppetmaster
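Note that the httpd class above installs the package and syncs the config file, but nothing restarts Apache when the file changes. A sketch of the usual Puppet pattern — a service resource plus the notify metaparameter (the counterpart of Ansible's handlers); the extension is my addition, not part of the original lab:

```puppet
class httpd {
    package {"httpd":
        ensure => present,
        allow_virtual => true
    }
    file {"/etc/httpd/conf/httpd.conf":
        owner => root,
        group => root,
        mode => 0644,
        source => "puppet://$puppetserver/modules/httpd/etc/httpd.conf",
        require => Package["httpd"],
        notify => Service["httpd"],    # restart httpd when this file changes
    }
    service {"httpd":
        ensure => running,
        enable => true,
    }
}
```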
Exercise:
if /etc/motd does not exist, it can be created
Add motd to the node definitions
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node 'node1.up.com' {
    include hosts
    include httpd
    include motd
}
Copy the hosts module's directory structure, renamed for motd
[root@puppet /etc/puppet/modules]# cp -r hosts/ motd
Edit the motd module's manifest
[root@puppet /etc/puppet/modules/motd]# vim manifests/init.pp
class motd {
    file {"/etc/motd":
        owner => root,
        group => root,
        mode => 0644,
        source => "puppet://$puppetserver/modules/motd/etc/motd",
    }
}
[root@puppet /etc/puppet/modules/motd/files/etc]# cp /etc/motd .
[root@puppet /etc/puppet/modules/motd/files/etc]# cat motd
hello all
[root@node1 ~]# puppet agent --test
[root@node1 ~]# cat /etc/motd
hello all
Adding a node:
node2
[root@node2 ~]# lftp 172.16.0.99
lftp 172.16.0.99:~> cd release/
lftp 172.16.0.99:/release> get epel-release-7-6.noarch.rpm
[root@node2 ~]# rpm -ivh epel-release-7-6.noarch.rpm
[root@node2 ~]# yum install -y ruby ruby-libs puppet facter
[root@node2 /etc/puppet]# scp 172.16.0.61:/etc/puppet/puppet.conf .
[root@node2 ~]# systemctl start puppet
[root@node2 ~]# systemctl enable puppet
[root@puppet ~]# puppet cert -l
"node2.up.com" (SHA256) 78:46:02:C6:7D:BC:82:E7:AA:98:88:EE:51:55:B1:B3:A2:98:0F:07:5A:22:A1:20:E3:89:25:AA:AF:C3:00:98
[root@puppet ~]# puppet cert -s node2.up.com
[root@puppet ~]# cd /var/lib/puppet/ssl/ca/signed/
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
node1.up.com.pem node2.up.com.pem <--
Sync the same files to node2 that node1 receives:
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node 'node1.up.com' {
    include hosts
    include httpd
    include motd
}
node 'node2.up.com' {
    include hosts
    include httpd
    include motd
}
[root@puppet ~]# systemctl restart puppetmaster
[root@node2 ~]# puppet agent --test
Configure automatic syncing:
[root@node1 ~]# vim /etc/puppet/puppet.conf
[agent]
......
report = true
runinterval = 5
[root@node2 ~]# vim /etc/puppet/puppet.conf
[agent]
......
report = true
runinterval = 5
This makes the agent sync data files with the server every 5 seconds.
[root@node1 ~]# systemctl restart puppet
[root@node2 ~]# systemctl restart puppet
[root@puppet ~]# vim test-puppet.sh
#!/bin/bash
# every 3 seconds, append the current date to the motd source file on the server
file="/etc/puppet/modules/motd/files/etc/motd"
for i in $(seq 1 10)
do
    echo "$(date)" >> "$file"
    sleep 3
done
[root@puppet ~]# chmod +x test-puppet.sh
[root@puppet ~]# ./test-puppet.sh
[root@node1 ~]# watch -n 1 cat /etc/motd
[root@node2 ~]# watch -n 1 cat /etc/motd
Exercise: sync a cron job to the nodes
Sync the time with an NTP server every hour:
0 * * * * ntpdate ntp1.aliyun.com
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node 'node1.up.com' {
    include hosts
    include httpd
    include motd
    include crontab
}
node 'node2.up.com' {
    include hosts
    include httpd
    include motd
    include crontab
}
[root@puppet /etc/puppet/modules]# cp -r hosts/ crontab
[root@puppet /etc/puppet/modules/crontab]# vim manifests/init.pp
class crontab {
    package {"ntpdate":
        ensure => present,
        allow_virtual => true
    }
    service {"crond":
        ensure => running,
        enable => true,
        require => Package["ntpdate"],
    }
    cron {"ntpdate":
        command => "/usr/sbin/ntpdate ntp1.aliyun.com",
        user => root,
        hour => "*",
        minute => "1",
        require => Service["crond"]
    }
}
[root@puppet /etc/puppet/modules/crontab]# tree .
.
└── manifests
└── init.pp
[root@puppet ~]# systemctl restart puppetmaster
[root@node1 ~]# crontab -l
[root@node2 ~]# crontab -l
Automatic certificate signing:
[root@puppet /etc/puppet]# vim puppet.conf
[main]
......
autosign = true                       # sign every request, or:
autosign = /etc/puppet/autosign.conf  # sign only requests matching this whitelist file
[root@puppet /etc/puppet]# vim autosign.conf
*.up.com
[root@puppet ~]# systemctl restart puppetmaster
[root@puppet ~]# vim /etc/hosts
172.16.0.60 puppet.up.com puppet
172.16.0.61 node1.up.com node1
172.16.0.62 node2.up.com node2
172.16.0.63 node3.up.com node3
Add node3:
[root@puppet ~]# scp /etc/hosts 172.16.0.63:/etc/
[root@node3 ~]# lftp 172.16.0.99
lftp 172.16.0.99:~> cd release/
lftp 172.16.0.99:/release> get epel-release-7-6.noarch.rpm
[root@node3 ~]# rpm -ivh epel-release-7-6.noarch.rpm
[root@node3 ~]# yum install -y ruby ruby-libs puppet facter
[root@node3 ~]# cd /etc/puppet/
[root@node3 /etc/puppet]# scp 172.16.0.61:/etc/puppet/puppet.conf .
[root@node3 ~]# systemctl start puppet
[root@node3 ~]# systemctl enable puppet
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
node1.up.com.pem node2.up.com.pem node3.up.com.pem <-- certificate auto-signed by puppet
[root@puppet /etc/puppet]# vim manifests/nodes.pp
node "node1.up.com" {
    include hosts
    include httpd
    include motd
    include crontab
}
node "node2.up.com" {
    include hosts
    include httpd
    include motd
    include crontab
}
node "node3.up.com" {
    include hosts
    include httpd
    include motd
    include crontab
}
[root@puppet /etc/puppet]# systemctl restart puppetmaster
Troubleshooting reference - script tools
Ansible
Public key distribution
Generate the key pair (default public key path: /root/.ssh/id_rsa.pub)
# ssh-keygen -t rsa
Create a hostfile under /test, one "ip password" pair per line:
172.16.14.22 123456
172.16.14.23 123456
#!/bin/bash
# public key distribution script (for nodes with different passwords)
# requires the sshpass package: yum install -y sshpass
keyfile="/root/.ssh/id_rsa.pub"
host="/test/hostfile"
while read -r line
do
    ip=$(echo "$line" | awk '{print $1}')
    pass=$(echo "$line" | awk '{print $2}')
    sshpass -p "$pass" ssh-copy-id -i "$keyfile" "$ip" &> /dev/null && echo "$ip: public key installed."
done < "$host"
A YAML playbook fails to start:
check that the file is formatted correctly; follow the error hint to the indicated line and look for misspelled words nearby.
A YAML playbook cannot start a remote service:
check that the notify: value matches the handler's name:; otherwise the service will never start.
Puppet
If certificate signing fails:
- check hostname resolution
- check that the clocks are in sync
Recovery:
on the node, delete /var/lib/puppet/ssl
on the server, delete the node's signed certificate:
[root@puppet /var/lib/puppet/ssl/ca/signed]# ls
node1.up.com.pem
restart puppet on the node
and request signing again
Running SQL commands from an Ansible YAML playbook:
use the mysql -e option.
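A sketch of such a task; the database name and credentials are made-up placeholders, and the command module suffices here since a simple -e call needs no shell features:

```yaml
- name: create the app database
  command: mysql -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS app;"
```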