1. Basic Environment Configuration
1.1 OpenStack Environment Configuration
Hostname | Memory | Disk | NICs |
---|---|---|---|
controller | 8 GB | 30 GB + 1024 GB | VNet1: 192.168.100.100, NAT: 20.0.0.100 |
compute1 | 8 GB | 30 GB + 1024 GB | VNet1: 192.168.100.101, NAT: 20.0.0.101 |
compute2 | 8 GB | 30 GB + 1024 GB | VNet1: 192.168.100.102, NAT: 20.0.0.102 |
[Deployment Plan]
1. Configure the operating system and the OpenStack runtime environment
2. Configure the OpenStack platform's base services (RabbitMQ, MariaDB, Memcached, Apache)
3. Configure the OpenStack Keystone component
4. Configure the OpenStack Glance component
5. Configure the Placement service
6. Configure the OpenStack Nova component
7. Configure the OpenStack Neutron component
8. Configure the OpenStack Dashboard component
9. Configure the OpenStack Cinder component
10. Common cloud-instance operations
Linux OS: minimal installation of CentOS 7.6
1.2 Basic Environment Configuration
Configuration items (all nodes):
- Hostname
hostnamectl set-hostname <hostname>
bash ' //start a new shell so the prompt picks up the new hostname '
- Firewall and SELinux
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled ' //make the change persistent across reboots '
- Passwordless, non-interactive access between nodes
- Add entries to the hosts file
vi /etc/hosts
192.168.100.100 controller
192.168.100.101 compute1
192.168.100.102 compute2
Note: the addresses above are the LAN (VNet1) IPs.
- Asymmetric (SSH) key pair
ssh-keygen -t rsa
ssh-copy-id controller
ssh-copy-id compute1
ssh-copy-id compute2
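The three ssh-copy-id calls above can also be wrapped in a small loop and verified in one pass. A minimal sketch, assuming the key pair exists and the node names resolve via /etc/hosts:

```shell
# Push the public key to every node, then confirm passwordless login works.
for node in controller compute1 compute2; do
  ssh-copy-id -o StrictHostKeyChecking=no "$node"
done
for node in controller compute1 compute2; do
  ssh "$node" hostname   # should print the remote hostname with no password prompt
done
```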
- Base dependency packages
yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre pcre-devel expat-devel cmake bzip2
yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils
- Time synchronization
- Configuration on the controller node
yum install chrony -y
vim /etc/chrony.conf
3 #server 0.centos.pool.ntp.org iburst
4 #server 1.centos.pool.ntp.org iburst
5 #server 2.centos.pool.ntp.org iburst
6 #server 3.centos.pool.ntp.org iburst
7 server ntp.aliyun.com iburst ' //iburst: send a burst of packets at startup instead of one, for faster initial sync '
17 rtcsync ' keep the hardware clock (RTC) in sync automatically; more convenient than ntpdate '
27 allow 192.168.100.0/24 ' allow NTP clients from the management subnet '
' Comment out lines 3-6 and add line 7 '
systemctl enable chronyd
systemctl restart chronyd
- Query time-synchronization status with the chronyc sources command
chronyc sources
The compute1 and compute2 nodes are configured almost identically; just point their sync source at controller:
yum install chrony -y
vim /etc/chrony.conf
server controller iburst
systemctl enable chronyd
systemctl restart chronyd
chronyc sources
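Once chronyd has restarted, the currently selected source is marked "^*" in the first column of chronyc sources. A quick scripted check, assuming chronyd is running and has had a few seconds to reach its upstream server:

```shell
# Succeeds only when chrony has actually selected a sync source ("^*").
chronyc sources | grep -q '\^\*' \
  && echo "time is synchronized" \
  || echo "not synchronized yet - wait a few seconds and retry"
```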
2. OpenStack System Environment Configuration
Services to configure (controller node):
2.1 Install and Configure MariaDB
- Install MariaDB and the required packages
yum -y install mariadb mariadb-server python2-PyMySQL
#python2-PyMySQL provides the module the OpenStack control node uses to connect to MySQL; without it the services cannot reach the database. Install it only on the controller.
yum -y install libibverbs
' // A library and drivers for direct userspace use of RDMA (InfiniBand/iWARP/RoCE) hardware '
- Add a MySQL drop-in configuration file with the following content
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.100
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
- Enable at boot and start the service
systemctl enable mariadb
systemctl start mariadb
- Run the MariaDB security script to initialize MariaDB interactively
mysql_secure_installation
Enter current password for root (enter for none): # press Enter
OK, successfully used password, moving on...
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
... Success!
Disallow root login remotely? [Y/n] N
... skipping.
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
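The interactive script above boils down to a handful of SQL statements, so on a scripted install the same hardening (minus setting the root password) can be done non-interactively. A sketch, where DB_ROOT_PASS is a placeholder for the root password chosen above:

```shell
# Non-interactive equivalent of the mysql_secure_installation answers above
# (anonymous users and the test database are removed; remote root login is kept,
# matching the "N" answer in the transcript). DB_ROOT_PASS is a placeholder.
mysql -u root -p"DB_ROOT_PASS" <<'SQL'
DELETE FROM mysql.user WHERE User='';                  -- remove anonymous users
DROP DATABASE IF EXISTS test;                          -- remove the test database
DELETE FROM mysql.db WHERE Db='test' OR Db='test\\_%'; -- and its access grants
FLUSH PRIVILEGES;
SQL
```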
2.2 Install and Configure RabbitMQ
Every instance-creation command issued by the control node is sent to RabbitMQ; the compute nodes listen on RabbitMQ for these messages.
- 安装RabbitMQ
yum -y install rabbitmq-server
- Start the RabbitMQ service and enable it at boot.
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
- Create the MQ (message queue) user "openstack", used by the controller and compute nodes to authenticate to RabbitMQ
rabbitmqctl add_user openstack RABBIT_PASS
- Grant the openstack user configure/write/read permissions (regular expressions)
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
' //rabbitmqctl set_permissions [-p <vhost>] <user> <conf> <write> <read> '
#Ports 25672 and 5672 should now be listening (5672 is RabbitMQ's default AMQP port; 25672 is used by the CLI tools and inter-node communication)
- List the RabbitMQ plugins and enable the web management UI plugin
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
netstat -anptu | grep 5672
The RabbitMQ web UI can be reached at 192.168.100.100:15672 or 20.0.0.100:15672.
The default username and password are both guest.
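Before moving on, it is worth confirming that the openstack user and its permissions were actually created, and that the management API answers. A quick check, assuming the service is up:

```shell
rabbitmqctl list_users             # should list both guest and openstack
rabbitmqctl list_permissions -p /  # openstack should show ".*" ".*" ".*"
# The management API answers on 15672 once the plugin is enabled:
curl -s -u guest:guest http://127.0.0.1:15672/api/overview | head -c 200
```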
2.3 Install Memcached
- Purpose
- Memcached stores session information
- The identity service uses Memcached to cache tokens; the session data produced when you log in to the OpenStack dashboard is also stored in Memcached
- Install Memcached
yum install -y memcached python-memcached
' //python-memcached is the module the OpenStack services use to talk to Memcached '
- Edit the Memcached configuration file
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller" ' //just append controller '
- Enable at boot, start the service, and check the port
systemctl enable memcached
systemctl start memcached
netstat -nautp | grep 11211
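Memcached speaks a simple text protocol, so it can be smoke-tested directly with nc. A sketch, assuming nc (nmap-ncat) is installed and memcached is listening on 11211:

```shell
# Ask the daemon for its statistics; a healthy instance answers with
# "STAT ..." lines (uptime, curr_connections, and so on).
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep -m1 '^STAT uptime'
```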
2.4 Install and Configure etcd
- Install etcd
yum -y install etcd
- Edit the etcd configuration file
vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.100.100:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.100:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.100:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.100.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01" # unique identifier for the cluster
ETCD_INITIAL_CLUSTER_STATE="new" # "new" bootstraps a fresh cluster; "existing" makes this member join an already-running cluster
- Enable at boot, start the service, and check the ports
systemctl enable etcd.service
systemctl start etcd.service
netstat -anutp |grep 2379
netstat -anutp |grep 2380
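With the service listening on 2379, a simple put/get round-trip confirms etcd is actually serving requests. A sketch; note that on CentOS 7 the bundled etcdctl defaults to the v2 API, so the v3 API is selected explicitly:

```shell
export ETCDCTL_API=3
ENDPOINT=http://192.168.100.100:2379
etcdctl --endpoints="$ENDPOINT" put testkey "hello"   # prints: OK
etcdctl --endpoints="$ENDPOINT" get testkey           # prints the key, then its value
etcdctl --endpoints="$ENDPOINT" del testkey           # clean up the test key
```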