OpenStack Rocky Release

Contents
1. Prepare the environment

Hostname

hosts

Firewall

IPv6

NIC name

2. Install the Network Time Protocol (NTP)

Master node configuration

Slave node

Restart

3. OpenStack packages

4. Install the database [controller node]

5. Install the message queue service [controller node]

6. Install Memcached [controller node]

7. Install etcd [controller node]

8. Install Keystone

9. Configure Apache

10. Install Glance

11. Install Nova (controller node)

12. Nova (compute node)

13. Install the Neutron networking service (controller node)

14. Install the Neutron networking service (compute node)

15. Network configuration

16. Install the Horizon service

17. VXLAN

18. Launch an instance (controller node)

19. Resize an instance

OpenStack VM bandwidth limiting

20. Install the Cinder block storage service

21. Custom storage backend

22. Add NFS

23. Other tests

24. Create a volume type in the dashboard


 

1. Prepare the environment

Hostname, hosts resolution, firewall, IPv6, NIC name eth0, Aliyun yum repo (optional)

Hostname

hostnamectl set-hostname controller   # on the controller node
hostnamectl set-hostname compute1     # on the compute node

hosts

cat >> /etc/hosts << EOF
172.16.30.4 controller
172.16.30.5 compute1
EOF
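A quick sanity check, sketched under the assumption that the entries above are in place on both nodes — getent consults /etc/hosts directly:

```shell
# Both names should resolve to the addresses added above.
getent hosts controller compute1
```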

Firewall

systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld



setenforce 0

Permanently disable SELinux (takes effect after reboot):
Edit /etc/selinux/config and change SELINUX from the default enforcing to disabled; SELinux will then no longer start on the next boot.
Note: once disabled this way, setenforce 1 can no longer re-enable it temporarily.

IPv6

echo "net.ipv6.conf.all.disable_ipv6 = 1" >>/etc/sysctl.conf
sysctl -p

NIC name

1. Edit the NIC configuration

[root@linux-node2 ~]# cd /etc/sysconfig/network-scripts/  # enter the NIC config directory

[root@linux-node2 network-scripts]# mv ifcfg-eno16777728 ifcfg-eth0  # rename the NIC config file

[root@linux-node2 network-scripts]# cat ifcfg-eth0  # review the NIC configuration

TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eth0  # change NAME to eth0
ONBOOT=yes
IPADDR=192.168.56.12
NETMASK=255.255.255.0
GATEWAY=192.168.56.2
DNS1=192.168.56.2

2. Modify GRUB

[root@linux-node2 ~]# cat /etc/sysconfig/grub
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb net.ifnames=0 biosdevname=0 quiet"
GRUB_DISABLE_RECOVERY="true"

[root@linux-node2 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg  # regenerate the boot menu
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-229.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-229.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-1100f7e6c97d4afaad2e396403ba7f61
Found initrd image: /boot/initramfs-0-rescue-1100f7e6c97d4afaad2e396403ba7f61.img
Done

3. Verify the change

[root@linux-node2 ~]# reboot  # a reboot is required for the change to take effect

[root@linux-node2 ~]# yum install net-tools  # CentOS 7 does not ship ifconfig by default; install net-tools

[root@linux-node2 ~]# ifconfig eth0  # check the NIC again

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.12  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::20c:29ff:fe5c:7bb1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5c:7b:b1  txqueuelen 1000  (Ethernet)
        RX packets 152  bytes 14503 (14.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 98  bytes 14402 (14.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

2. Install the Network Time Protocol (NTP)

yum install chrony -y

Master node (controller) configuration

vim /etc/chrony.conf
server ntp.aliyun.com iburst

allow 172.16.30.0/24

Slave node (compute)

vim /etc/chrony.conf
server controller iburst

Restart

systemctl restart chronyd.service
systemctl enable chronyd.service
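To confirm time sync is actually working (a sketch; chronyc ships with the chrony package installed above), check the source list on each node — the controller should show the Aliyun server, and the compute node should show controller, with `^*` marking the selected source:

```shell
chronyc sources -v   # '*' marks the currently selected time source
chronyc tracking     # offset and stratum relative to the upstream server
```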

3. OpenStack packages

yum update -y
yum install centos-release-openstack-rocky python-openstackclient openstack-selinux openstack-utils -y

4. Install the database [controller node]

yum install mariadb mariadb-server python2-PyMySQL -y

Create and edit the /etc/my.cnf.d/openstack.cnf file (back up any existing configuration under /etc/my.cnf.d/ first if needed) and complete the following:
add a [mysqld] section and set bind-address to the controller's management IP so other nodes can reach the database.

cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 172.16.30.4  # the controller's management IP (from /etc/hosts above)

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF

Start

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account:

# mysql_secure_installation
Press Enter (no current password), answer n (keep no password), then y to all remaining prompts.

5. Install the message queue service [controller node]

yum install rabbitmq-server -y

Start the message queue service and configure it to start at system boot:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service


rabbitmq-plugins enable rabbitmq_management  # optional: web management UI on port 15672

Add the openstack user and grant it full permissions:

rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

After installation, check with netstat -tnlup; if ports 25672 and 5672 are listening, the installation succeeded.
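Beyond the port check, rabbitmqctl can confirm the user and its permissions (a sketch; both commands come from the rabbitmq-server package installed above):

```shell
rabbitmqctl list_users                 # should list: openstack, guest
rabbitmqctl list_permissions -p /      # openstack should show ".*" ".*" ".*"
ss -tnlp | grep -E ':(5672|25672)\b'   # AMQP and clustering listeners
```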

6. Install Memcached [controller node]

yum install memcached python-memcached -y

Edit /etc/sysconfig/memcached

sed -i '/OPTIONS/c\OPTIONS "-l 0.0.0.0"' /etc/sysconfig/memcached

Start the Memcached service

systemctl enable memcached.service
systemctl start memcached.service

After installing and starting it, check with netstat -tnlup again; a process listening on port 11211 means Memcached is up.

7. Install etcd [controller node]

yum install etcd -y

Edit /etc/etcd/etcd.conf

cp -a /etc/etcd/etcd.conf{,.bak}
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.70.13:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.70.13:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.70.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.70.13:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.70.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Start

systemctl enable etcd
systemctl start etcd

After installing and starting it, check with netstat -tnlup again; processes listening on ports 2379 and 2380 mean etcd is up.

8. Install Keystone

Use the database access client to connect to the database server as root:

$ mysql -u root -p
Create the keystone database:
MariaDB [(none)]> CREATE DATABASE keystone;
Grant proper access to the keystone database (replace KEYSTONE_DBPASS with a suitable password):
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Install the packages

yum install openstack-keystone httpd mod_wsgi -y

Edit /etc/keystone/keystone.conf

cp -a /etc/keystone/keystone.conf{,.bak}
grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf



openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
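openstack-config --set FILE SECTION KEY VALUE (from the openstack-utils package installed earlier) simply writes KEY = VALUE under [SECTION], creating the section if it does not exist. A minimal pure-shell illustration of the resulting file, using a throwaway path rather than the real keystone.conf:

```shell
cfg=$(mktemp)
# Equivalent end state of the two openstack-config calls above:
cat > "$cfg" << 'EOF'
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
provider = fernet
EOF
# Show just the keys that were set:
settings=$(grep -E '^(connection|provider)' "$cfg")
echo "$settings"
rm -f "$cfg"
```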

Populate the Identity service database

su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet key repositories

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

9. Configure Apache

Edit /etc/httpd/conf/httpd.conf

echo "ServerName controller" >> /etc/httpd/conf/httpd.conf

Create a link to the /usr/share/keystone/wsgi-keystone.conf file

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Start the Apache HTTP service

systemctl enable httpd.service
systemctl start httpd.service

Configure the admin account

cat > admin-openrc.sh << EOF
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
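To use these credentials, source the file and request a token (a sketch; run on the controller once Keystone and Apache are up):

```shell
source admin-openrc.sh
openstack token issue   # a table containing a token id confirms Keystone auth works
openstack user list
```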

Create users, bind them to projects, and grant roles

openstack domain create --description "Domain" example
# create the project used by the services
openstack project create --domain default --description "Service Project" service
# create a demo project
openstack project create --domain default --description "Demo Project" demo
# create a demo user
openstack user create --domain default --password DEMO_PASS demo
# create the user role
openstack role create user
# bind the role to the project and user
openstack role add --project demo --user demo user



# optional: create a personal user and grant it the admin role
openstack user create --domain default --password kaikai136 kaikai
openstack role add --project admin --user kaikai admin

10. Install Glance

Configure the database

Use the database access client to connect to the database server as root:
$ mysql -u root -p
Create the glance database:
MariaDB [(none)]> CREATE DATABASE glance;
Grant proper access to the glance database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';

Create the glance user, add the admin role, and create the image service and endpoints:

# either prompt for the password interactively:
openstack user create --domain default --password-prompt glance
# or pass it on the command line (run only one of the two):
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

Install the packages:

yum install openstack-glance -y

Edit /etc/glance/glance-api.conf

cp -a /etc/glance/glance-api.conf{,.bak}
cp -a /etc/glance/glance-registry.conf{,.bak}
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-api.conf database connection   mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri    http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type   password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name   Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name   Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name   service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username   glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password   GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor   keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores   file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store   file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir   /var/lib/glance/images/

Edit /etc/glance/glance-registry.conf

openstack-config --set /etc/glance/glance-registry.conf database connection   mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type   password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name   Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name   Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name   service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username   glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password   GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor   keystone

Populate the Image service database

su -s /bin/sh -c "glance-manage db_sync" glance

Start the Image services

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service

Verify Glance

glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public
glance image-list

Export an image

glance image-download --file /root/cirros.img 319c7fb7-4237-41e5-bad5-241e5025931a
Explanation: /root/cirros.img is the destination path and filename for the exported image; 319c7fb7-4237-41e5-bad5-241e5025931a is the ID of the image to export.

11. Install Nova (controller node)

Database configuration

Use the database access client to connect to the database server as root:
$ mysql -u root -p
Create the nova_api, nova, nova_cell0, and placement databases:
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;

Grant proper access to the databases

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
exit;

Create the nova user:

openstack user create --domain default --password NOVA_PASS nova

Add the admin role to the nova user

openstack role add --project service --user nova admin

Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API service endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create a Placement service user with a password of your choice, and add it to the service project with the admin role

openstack user create --domain default --password PLACEMENT_PASS placement
openstack role add --project service --user placement admin

Create the Placement API service entity and endpoints

openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

Install the packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

Edit /etc/nova/nova.conf

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis   osapi_compute,metadata

openstack-config --set /etc/nova/nova.conf api_database connection   mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection   mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection   mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

openstack-config --set /etc/nova/nova.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api auth_strategy   keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url   http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type   password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name   Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name   Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name   service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username   nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password   NOVA_PASS

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip   172.16.70.13
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron   true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver   nova.virt.firewall.NoopFirewallDriver


openstack-config --set /etc/nova/nova.conf vnc enabled   true
openstack-config --set /etc/nova/nova.conf vnc server_listen   '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address   '$my_ip'

openstack-config --set /etc/nova/nova.conf glance api_servers   http://controller:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path   /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf placement region_name   RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name   Default
openstack-config --set /etc/nova/nova.conf placement project_name   service
openstack-config --set /etc/nova/nova.conf placement auth_type   password
openstack-config --set /etc/nova/nova.conf placement user_domain_name   Default
openstack-config --set /etc/nova/nova.conf placement auth_url   http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username   placement
openstack-config --set /etc/nova/nova.conf placement password   PLACEMENT_PASS

Enable access to the Placement API by adding the following to the Apache configuration:

vim /etc/httpd/conf.d/00-nova-placement-api.conf

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service

systemctl restart httpd

Populate the nova-api and placement databases and create the cells

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Verify

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | 646b8425-7d9b-4781-acd7-0a379e8e894d | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |  False   |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

Start

systemctl enable openstack-nova-api.service openstack-nova-consoleauth openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

12. Nova (compute node)

Install the packages

yum install openstack-nova-compute -y

Edit /etc/nova/nova.conf

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis   osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller

openstack-config --set /etc/nova/nova.conf api auth_strategy   keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url   http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type   password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name   Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name   Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name   service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username   nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password   NOVA_PASS

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip   172.16.70.12
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron   true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver   nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf vnc enabled   true
openstack-config --set /etc/nova/nova.conf vnc server_listen   0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address   '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url   http://controller:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance api_servers   http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path   /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf placement region_name   RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name   Default
openstack-config --set /etc/nova/nova.conf placement project_name   service
openstack-config --set /etc/nova/nova.conf placement auth_type   password
openstack-config --set /etc/nova/nova.conf placement user_domain_name   Default
openstack-config --set /etc/nova/nova.conf placement auth_url   http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username   placement
openstack-config --set /etc/nova/nova.conf placement password   PLACEMENT_PASS

Determine whether the compute node supports hardware acceleration for virtual machines

egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value other than 0, the compute node supports hardware acceleration and the setting below is not needed.
If it returns 0, the node does not support hardware acceleration; you must configure libvirt to use QEMU instead of KVM by editing the [libvirt] section of /etc/nova/nova.conf:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
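The check above can be folded into a small conditional (a sketch; virt_type is only echoed here so you can see the decision before applying it with openstack-config):

```shell
# Count vmx/svm CPU flags; 0 means no hardware virtualization support.
accel=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
accel=${accel:-0}
if [ "$accel" -eq 0 ]; then
    virt_type=qemu   # full emulation: slower, but works anywhere
else
    virt_type=kvm    # hardware-accelerated (the default)
fi
echo "virt_type=$virt_type"
```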

Start the Compute service and its dependencies, and enable them to start at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service

Test on the controller node


openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary       | Host     | Zone | Status  | State | Updated At                 |
+----+--------------+----------+------+---------+-------+----------------------------+
| 11 | nova-compute | compute1 | nova | enabled | up    | 2021-01-20T08:55:21.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 70ad8af0-77de-4f55-8c04-6cc3cc1ee7f1
Checking host mapping for compute host 'compute1': 59ee0013-487a-49d6-ab27-d7f2100eaf61
Creating host mapping for compute host 'compute1': 59ee0013-487a-49d6-ab27-d7f2100eaf61
Found 1 unmapped computes in cell: 70ad8af0-77de-4f55-8c04-6cc3cc1ee7f1

Set an appropriate host-discovery interval (optional)

vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
systemctl restart openstack-nova-api.service
openstack compute service list --service nova-compute
openstack compute service list
openstack catalog list
nova-status upgrade check

---------------------
By default, OpenStack overcommits CPU at a 1:16 ratio and memory at a 1:1.5 ratio.
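If those defaults are too aggressive for your workload, they can be tuned in nova.conf (a sketch; cpu_allocation_ratio and ram_allocation_ratio are standard Nova options, and the values below are only illustrative):

```shell
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.0
systemctl restart openstack-nova-scheduler.service
```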

13. Install the Neutron networking service (controller node)

Use the database access client to connect to the database server as root and create the neutron database

mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
exit;

Create the service credentials

openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Install the packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure /etc/neutron/neutron.conf

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf  database connection   mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set  /etc/neutron/neutron.conf  DEFAULT core_plugin   ml2
openstack-config --set  /etc/neutron/neutron.conf  DEFAULT service_plugins
openstack-config --set  /etc/neutron/neutron.conf  DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set  /etc/neutron/neutron.conf  DEFAULT auth_strategy   keystone

openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken auth_url   http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken memcached_servers   controller:11211
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken auth_type   password
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken project_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken user_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken project_name   service
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken username   neutron
openstack-config --set  /etc/neutron/neutron.conf  keystone_authtoken password   NEUTRON_PASS


openstack-config --set  /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_status_changes   true
openstack-config --set  /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_data_changes   true
openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp

Configure the [nova] section so Neutron can notify Compute of network topology changes:
openstack-config --set  /etc/neutron/neutron.conf nova auth_url   http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf nova auth_type   password
openstack-config --set  /etc/neutron/neutron.conf nova project_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf nova user_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf nova region_name   RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova project_name   service
openstack-config --set  /etc/neutron/neutron.conf nova username   nova
openstack-config --set  /etc/neutron/neutron.conf nova password   NOVA_PASS

Configure /etc/neutron/plugins/ml2/ml2_conf.ini

cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6   = 1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p
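To confirm the bridge settings took effect (a sketch; these sysctls only exist once the br_netfilter module is loaded):

```shell
lsmod | grep br_netfilter                    # the module must be loaded
sysctl net.bridge.bridge-nf-call-iptables    # expect: = 1
sysctl net.bridge.bridge-nf-call-ip6tables   # expect: = 1
```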

Configure the DHCP agent (dhcp_agent.ini)

cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

Configure the metadata agent so it can communicate with Nova

cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

openstack-config --set  /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host  controller
openstack-config --set  /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret  METADATA_SECRET

Modify the Nova configuration so it can interact with Neutron

openstack-config --set  /etc/nova/nova.conf neutron url   http://controller:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url   http://controller:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type   password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name   default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name   default
openstack-config --set  /etc/nova/nova.conf neutron region_name   RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name   service
openstack-config --set  /etc/nova/nova.conf neutron username   neutron
openstack-config --set  /etc/nova/nova.conf neutron password   NEUTRON_PASS
openstack-config --set  /etc/nova/nova.conf neutron service_metadata_proxy   true
openstack-config --set  /etc/nova/nova.conf neutron metadata_proxy_shared_secret   METADATA_SECRET

Create the ML2 plugin symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the Neutron database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service

Start the Neutron services

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service

14. Install the Neutron networking service (compute node)

yum install openstack-neutron-linuxbridge ebtables ipset -y

Modify the main Neutron configuration file

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf



openstack-config --set  /etc/neutron/neutron.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy   keystone
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type   password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name   default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name   service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username   neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password   NEUTRON_PASS
openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path   /var/lib/neutron/tmp

Configure the Linux bridge agent

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings   provider:eth0
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan   false
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group   true
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver   neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p
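Re-running the `echo ... >> /etc/sysctl.conf` step above appends the same keys again; a guarded variant (my own sketch, shown against a temporary file rather than the real `/etc/sysctl.conf`) only adds keys that are missing:

```shell
#!/bin/sh
# sysctl_set: append "key = value" to a sysctl-style file only when the
# key is not already present, so re-running the step stays idempotent.
sysctl_set() {
    key=$1; val=$2; conf=$3
    grep -q "^$key" "$conf" || echo "$key = $val" >> "$conf"
}

conf=$(mktemp)   # stand-in for /etc/sysctl.conf
sysctl_set net.bridge.bridge-nf-call-iptables 1 "$conf"
sysctl_set net.bridge.bridge-nf-call-ip6tables 1 "$conf"
sysctl_set net.bridge.bridge-nf-call-iptables 1 "$conf"   # no duplicate added
cat "$conf"
```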

Modify the nova configuration file

openstack-config --set /etc/nova/nova.conf neutron url   http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url   http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type   password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name   default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name   default
openstack-config --set /etc/nova/nova.conf neutron region_name   RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name   service
openstack-config --set /etc/nova/nova.conf neutron username   neutron
openstack-config --set /etc/nova/nova.conf neutron password   NEUTRON_PASS

Restart the services

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service

Verify (controller node)

openstack compute service list

openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 9f24a0b7-5878-4386-abde-cb2de7e558ab | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| bf6a201c-093c-4d5f-b45e-f7625948de26 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| c3ffe06a-b3ef-4dad-9b17-c1f4196701dd | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c8ffbcf6-3ded-478e-92f8-f16e70a39381 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
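Every row in the listing above should show `:-)` in the Alive column; an awk one-liner can flag dead agents. Here it runs against a pasted sample of the table so the logic can be checked offline; against a live cloud you would pipe `openstack network agent list` into the same awk program:

```shell
#!/bin/sh
# Count agents whose Alive column (6th "|"-separated field) is not ":-)".
dead=$(cat <<'EOF' | awk -F'|' 'NF > 2 && $2 !~ /ID/ { gsub(/ /, "", $6); if ($6 != ":-)") n++ } END { print n + 0 }'
| 9f24a0b7 | Metadata agent     | controller | None | :-) | UP | neutron-metadata-agent    |
| bf6a201c | DHCP agent         | controller | nova | :-) | UP | neutron-dhcp-agent        |
| c3ffe06a | Linux bridge agent | compute1   | None | :-) | UP | neutron-linuxbridge-agent |
EOF
)
echo "dead agents: $dead"
```

A non-zero count usually means the agent on that host is stopped or cannot reach RabbitMQ.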

15. Network configuration

Add a second NIC (controller node)

# to add a second flat network net_1 on eth1:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider,net_1
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eth0,net_1:eth1

# to revert to the single provider network:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eth0

systemctl restart neutron-server.service neutron-linuxbridge-agent.service

Compute node

# to add net_1:
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings   provider:eth0,net_1:eth1
# to revert to the single provider mapping:
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings   provider:eth0
systemctl restart neutron-linuxbridge-agent.service


openstack network create  --share --external --provider-physical-network net_1 --provider-network-type flat net_1

openstack subnet create --network net_1 --allocation-pool start=172.16.30.68,end=172.16.30.70 --dns-nameserver 114.114.114.114 --gateway 172.16.30.254 --subnet-range 172.16.30.0/24 net_1

16. Install the Horizon dashboard service

yum install openstack-dashboard -y
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"


vim /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}

systemctl restart httpd.service memcached.service

17. VXLAN

(controller node)

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin    ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins    router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips    true

openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers    flat,vlan,vxlan
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types    vxlan
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers    linuxbridge,l2population
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges    1:10000


openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan    true
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip    172.16.0.4
openstack-config --set  /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population    true

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service

(compute node)

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan    true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip    172.16.71.12
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population    true

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  enable_security_group    true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  firewall_driver    neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

systemctl restart openstack-nova-compute.service

systemctl restart neutron-linuxbridge-agent.service


vim /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': True,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

systemctl restart httpd.service memcached.service

18. Launch an instance (controller node)

1. Verify network connectivity between the nodes with ping.

2. Remove the NetworkManager package on both the controller and compute nodes:

yum remove NetworkManager -y
yum install conntrack-tools -y
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano


ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

openstack flavor list
openstack image list
openstack network list
openstack security group list

openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider

# pick the subnet range that matches your provider network (one of the two):
openstack subnet create --network provider --allocation-pool start=172.16.30.10,end=172.16.30.15 --dns-nameserver 114.114.114.114 --gateway 172.16.30.254 --subnet-range 172.16.30.0/24 provider
openstack subnet create --network provider --allocation-pool start=172.16.70.50,end=172.16.70.100 --dns-nameserver 114.114.114.114 --gateway 172.16.70.254 --subnet-range 172.16.70.0/24 provider

Create a self-service network:

openstack network create selfservice
openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.71.254 --subnet-range 172.16.71.0/24 selfservice

Create a router:

openstack router create router

openstack router add subnet router selfservice
openstack router set router --external-gateway provider

19. Resizing instances in OpenStack

Modify nova.conf on the controller and on every compute node:

openstack-config --set /etc/nova/nova.conf  DEFAULT allow_resize_to_same_host True
openstack-config --set /etc/nova/nova.conf  filter_scheduler enabled_filters RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Then restart the services.
Controller node:

systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node

systemctl restart libvirtd.service openstack-nova-compute.service

Set the instance password from the dashboard:

vim /etc/openstack-dashboard/local_settings

# The OPENSTACK_HYPERVISOR_FEATURES settings can be used to enable optional
# services provided by hypervisors.
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    'can_set_password': True,
}

Note: the launch-instance dialog was redesigned in newer Horizon releases and no longer exposes the set-password field. To bring back the legacy dialog, edit the same configuration file and set:

LAUNCH_INSTANCE_LEGACY_ENABLED = True
LAUNCH_INSTANCE_NG_ENABLED = True
systemctl restart httpd.service

On the compute nodes:

openstack-config --set /etc/nova/nova.conf libvirt inject_password True
systemctl restart libvirtd.service openstack-nova-compute.service

Instance boot configuration (user-data scripts)

CentOS:
#!/bin/sh
mv /root/.ssh/authorized_keys /root/.ssh/authorized_keys.old
cp  /home/centos/.ssh/authorized_keys /root/.ssh/
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
passwd root<<EOF
kaikai136
kaikai136
EOF
Ubuntu:
#!/bin/sh
sed -i '/^$/d' /etc/ssh/sshd_config
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
cp -f /home/ubuntu/.ssh/authorized_keys /root/.ssh/
passwd root<<EOF
kaikai136
kaikai136
EOF
service ssh restart
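The `passwd root<<EOF` heredoc in both scripts assumes `passwd` will read the new password from stdin, which many builds only honour interactively (or via `--stdin` on CentOS). `chpasswd`, which reads `user:password` pairs from stdin, is the more portable choice in user-data scripts; a sketch that just builds the input line (password reused from the scripts above):

```shell
#!/bin/sh
# Build the "user:password" pair chpasswd expects; in a real user-data
# script the next step would be:  echo "$line" | chpasswd
user=root
pass=kaikai136
line="$user:$pass"
echo "$line"
```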

Bandwidth limiting for OpenStack instances (QoS)

openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins   router,qos
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers   port_security,qos
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent extensions qos

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

On the compute node:

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent extensions qos
systemctl restart neutron-linuxbridge-agent.service


# create the QoS policy first, then attach a bandwidth-limit rule to it
neutron qos-policy-create bw-limiter
neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
# to change an existing rule later, pass its ID:
# neutron qos-bandwidth-limit-rule-update <rule-id> bw-limiter --max-kbps 3000 --max-burst-kbps 300

20. Install the Cinder block storage service

(controller node)

mysql
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
exit;
openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
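The six endpoint registrations above differ only in API version and interface; a loop sketch that prints the equivalent commands for review (drop the surrounding `echo` to execute them; note the original escapes `%\(project_id\)s` only because the argument is unquoted there):

```shell
#!/bin/sh
# Print the six cinder endpoint-create commands (2 API versions x 3 interfaces).
cmds=$(
  for ver in 2 3; do
    for iface in public internal admin; do
      echo "openstack endpoint create --region RegionOne volumev$ver $iface http://controller:8776/v$ver/%(project_id)s"
    done
  done
)
echo "$cmds"
```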
yum install openstack-cinder -y

Install and configure the components

cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf database connection   mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy   keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type   password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_id   default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_id   default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name   service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username   cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password   CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip   172.16.70.13
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path   /var/lib/cinder/tmp
su -s /bin/sh -c "cinder-manage db sync" cinder


openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

(storage node)

yum install centos-release-openstack-rocky python-openstackclient openstack-selinux openstack-utils -y

yum install lvm2 device-mapper-persistent-data -y

systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service
# rescan the SCSI bus so newly attached disks (sdb, sdc) are detected
echo '- - -' >/sys/class/scsi_host/host0/scan

pvcreate /dev/sdb
pvcreate /dev/sdc

vgcreate cinder-ssd /dev/sdb
vgcreate cinder-sata /dev/sdc
vim /etc/lvm/lvm.conf
# around line 130, set the device filter
# storage node (accept the OS PV plus the cinder disks sdb and sdc):
filter = [ "a/sda4/", "a/sdb/", "a/sdc/", "r/.*/"]
# compute nodes without LVM-backed cinder (accept only the OS PV):
filter = [ "a/sda4/", "r/.*/"]
yum install openstack-cinder targetcli python-keystone openstack-utils -y
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set  /etc/cinder/cinder.conf database connection   mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy   keystone

openstack-config --set  /etc/cinder/cinder.conf DEFAULT my_ip   172.16.70.11

21. Custom storage backends

openstack-config --set  /etc/cinder/cinder.conf DEFAULT enabled_backends   ssd,sata
openstack-config --set  /etc/cinder/cinder.conf DEFAULT glance_api_servers   http://controller:9292
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_type   password
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_domain_id   default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken user_domain_id   default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_name   service
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken username   cinder
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken password   CINDER_PASS
openstack-config --set  /etc/cinder/cinder.conf ssd volume_driver   cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set  /etc/cinder/cinder.conf ssd volume_group   cinder-ssd
openstack-config --set  /etc/cinder/cinder.conf ssd iscsi_protocol   iscsi
openstack-config --set  /etc/cinder/cinder.conf ssd iscsi_helper   lioadm
openstack-config --set  /etc/cinder/cinder.conf ssd volume_backend_name   ssd
openstack-config --set  /etc/cinder/cinder.conf sata volume_driver   cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set  /etc/cinder/cinder.conf sata volume_group   cinder-sata
openstack-config --set  /etc/cinder/cinder.conf sata iscsi_protocol   iscsi
openstack-config --set  /etc/cinder/cinder.conf sata iscsi_helper   lioadm
openstack-config --set  /etc/cinder/cinder.conf sata volume_backend_name   sata
openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path   /var/lib/cinder/tmp
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
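The `ssd` and `sata` stanzas above are identical apart from the volume group and backend name; the per-backend settings can be generated for any list of backends, assuming each maps to a volume group named `cinder-<backend>`. This sketch only prints the `openstack-config` calls (drop the `echo` to apply them):

```shell
#!/bin/sh
# Emit the per-backend cinder.conf settings for LVM backends.
CONF=/etc/cinder/cinder.conf
out=$(
  for be in ssd sata; do
    echo "openstack-config --set $CONF $be volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver"
    echo "openstack-config --set $CONF $be volume_group cinder-$be"
    echo "openstack-config --set $CONF $be iscsi_protocol iscsi"
    echo "openstack-config --set $CONF $be iscsi_helper lioadm"
    echo "openstack-config --set $CONF $be volume_backend_name $be"
  done
)
echo "$out"
```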

Verify

openstack volume service list

22. Add an NFS backend

yum install nfs-utils -y
mkdir /data
vim /etc/exports
/data 172.16.30.0/24(rw,async,no_root_squash,no_all_squash)
systemctl enable rpcbind
systemctl enable nfs
systemctl restart rpcbind
systemctl restart nfs
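Re-running the setup above should not duplicate the export line; a guarded append (demonstrated on a temporary file standing in for `/etc/exports`) keeps the step idempotent:

```shell
#!/bin/sh
# Append an NFS export only if the exact line is not already present.
exports=$(mktemp)   # stand-in for /etc/exports
line='/data 172.16.30.0/24(rw,async,no_root_squash,no_all_squash)'

add_export() {
    grep -qxF "$line" "$1" || echo "$line" >> "$1"
}

add_export "$exports"
add_export "$exports"   # second call is a no-op
cat "$exports"
```

After editing the real /etc/exports, `exportfs -r` reloads the export table without restarting nfs.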

23. Other tests

yum install nfs-utils -y

showmount -e 172.16.30.5
Export list for 172.16.30.5:
/data 172.16.30.0/24

(storage node)

# add nfs to whichever LVM backends this node serves, for example:
openstack-config --set  /etc/cinder/cinder.conf DEFAULT enabled_backends   ssd,sata,nfs
# or, without the ssd backend:
openstack-config --set  /etc/cinder/cinder.conf DEFAULT enabled_backends   sata,nfs

openstack-config --set  /etc/cinder/cinder.conf nfs volume_driver cinder.volume.drivers.nfs.NfsDriver
openstack-config --set  /etc/cinder/cinder.conf nfs nfs_shares_config /etc/cinder/nfs_shares
openstack-config --set  /etc/cinder/cinder.conf nfs volume_backend_name nfs

Configure /etc/cinder/nfs_shares

vim /etc/cinder/nfs_shares
172.16.30.5:/data
systemctl restart openstack-cinder-volume.service
openstack volume service list


showmount -e 172.16.30.5

24. Create volume types in the dashboard

Volume Types  >> Name (ssd, sata, nfs)  >> Volume Type Extra Specs (key, value)

volume_backend_name ssd
volume_backend_name sata
volume_backend_name nfs

Using an NFS-backed volume (inside the VM)

mkfs.ext4 /dev/vdb
mount /dev/vdb /mnt

On the NFS server

mount -o loop /data/volume-2a4b2afc-5aa5-424f-948c-2ba62d301c24 /srv


********************************************openstack service restart cheat sheet******************************************************

Controller node

# cinder
systemctl restart openstack-cinder-volume.service target.service
---------compute
# compute  neutron
systemctl restart neutron-linuxbridge-agent.service
systemctl restart openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service

systemctl enable rpcbind
systemctl enable nfs
systemctl restart rpcbind
systemctl restart nfs

----------controller

# nova
systemctl restart openstack-nova-api.service openstack-nova-consoleauth openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# neutron
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# glance
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl restart rabbitmq-server.service
systemctl restart httpd.service
systemctl restart memcached.service
systemctl restart mariadb.service
systemctl restart chronyd.service
systemctl restart neutron-l3-agent.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
openstack volume service list
openstack compute service list
openstack compute service list --service nova-compute
openstack compute service list
openstack catalog list
nova-status upgrade check
nova service-list
neutron agent-list
glance image-list
cinder service-list
cinder-manage service list

Controller node

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip   172.16.30.1
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip   172.16.30.1

Compute node

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip   172.16.30.2

(Storage node)

openstack-config --set  /etc/cinder/cinder.conf DEFAULT my_ip   172.16.30.3

cat >> /etc/hosts << EOF
172.16.30.1 controller
172.16.30.2 compute1
EOF

+++++++++++++++++++++++++++++++++tacker+++++++++++++++++++++++++++++++++++++++++++++++++++++++++

mysql
CREATE DATABASE tacker;
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' IDENTIFIED BY 'TACKER_DBPASS';
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' IDENTIFIED BY 'TACKER_DBPASS';
exit;
openstack user create --domain default --password TACKER_PASS tacker
openstack role add --project service --user tacker admin

openstack service create --name tacker --description "Tacker Project" nfv-orchestration
openstack endpoint create --region RegionOne nfv-orchestration public http://<TACKER_NODE_IP>:9890/
openstack endpoint create --region RegionOne nfv-orchestration internal http://<TACKER_NODE_IP>:9890/
openstack endpoint create --region RegionOne nfv-orchestration admin http://<TACKER_NODE_IP>:9890/


git clone https://github.com/openstack/tacker -b <branch_name>
cd tacker
sudo pip install -r requirements.txt
sudo python setup.py install
sudo mkdir /var/log/tacker

**********************************manila***************************************************************

mysql
CREATE DATABASE manila;

GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'localhost' IDENTIFIED BY 'MANILA_DBPASS';
GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'%' IDENTIFIED BY 'MANILA_DBPASS';
exit;
openstack user create --domain default --password MANILA_PASS manila
openstack role add --project service --user manila admin
openstack service create --name manila --description "OpenStack Shared File Systems" share
openstack service create --name manilav2 --description "OpenStack Shared File Systems V2" sharev2
openstack endpoint create --region RegionOne share public http://controller:8786/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne share internal http://controller:8786/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne share admin http://controller:8786/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne sharev2 public http://controller:8786/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne sharev2 internal http://controller:8786/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne sharev2 admin http://controller:8786/v2/%\(tenant_id\)s
yum install openstack-manila python-manilaclient -y
cp -a /etc/manila/manila.conf{,.bak}
grep -Ev '^$|#' /etc/manila/manila.conf.bak > /etc/manila/manila.conf
openstack-config --set /etc/manila/manila.conf database connection  mysql+pymysql://manila:MANILA_DBPASS@controller/manila
openstack-config --set /etc/manila/manila.conf DEFAULT transport_url  rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/manila/manila.conf DEFAULT default_share_type  default_share_type
openstack-config --set /etc/manila/manila.conf DEFAULT share_name_template  share-%s
openstack-config --set /etc/manila/manila.conf DEFAULT rootwrap_config  /etc/manila/rootwrap.conf
openstack-config --set /etc/manila/manila.conf DEFAULT api_paste_config  /etc/manila/api-paste.ini
openstack-config --set /etc/manila/manila.conf DEFAULT auth_strategy  keystone
openstack-config --set /etc/manila/manila.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/manila/manila.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set /etc/manila/manila.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set /etc/manila/manila.conf keystone_authtoken auth_type  password
openstack-config --set /etc/manila/manila.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/manila/manila.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/manila/manila.conf keystone_authtoken project_name  service
openstack-config --set /etc/manila/manila.conf keystone_authtoken username  manila
openstack-config --set /etc/manila/manila.conf keystone_authtoken password  MANILA_PASS
openstack-config --set /etc/manila/manila.conf DEFAULT my_ip  172.16.30.1
openstack-config --set /etc/manila/manila.conf oslo_concurrency lock_path  /var/lock/manila
su -s /bin/sh -c "manila-manage db sync" manila

mkdir -p /var/lock/manila/tmp
chown manila:manila /var/lock/manila/tmp
systemctl enable openstack-manila-api.service openstack-manila-scheduler.service
systemctl start openstack-manila-api.service openstack-manila-scheduler.service
*** manila share node
yum install openstack-neutron openstack-neutron-linuxbridge ebtables -y
cp -a /etc/manila/manila.conf{,.bak}
grep -Ev '^$|#' /etc/manila/manila.conf.bak > /etc/manila/manila.conf
openstack-config --set /etc/manila/manila.conf database connection   mysql+pymysql://manila:MANILA_DBPASS@controller/manila

openstack-config --set /etc/manila/manila.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller

openstack-config --set /etc/manila/manila.conf DEFAULT default_share_type   default_share_type
openstack-config --set /etc/manila/manila.conf DEFAULT rootwrap_config   /etc/manila/rootwrap.conf

openstack-config --set /etc/manila/manila.conf DEFAULT auth_strategy   keystone

openstack-config --set /etc/manila/manila.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set /etc/manila/manila.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set /etc/manila/manila.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set /etc/manila/manila.conf keystone_authtoken auth_type   password
openstack-config --set /etc/manila/manila.conf keystone_authtoken project_domain_name   Default
openstack-config --set /etc/manila/manila.conf keystone_authtoken user_domain_name   Default
openstack-config --set /etc/manila/manila.conf keystone_authtoken project_name   service
openstack-config --set /etc/manila/manila.conf keystone_authtoken username   manila
openstack-config --set /etc/manila/manila.conf keystone_authtoken password   MANILA_PASS

openstack-config --set /etc/manila/manila.conf DEFAULT my_ip   172.16.30.3

openstack-config --set /etc/manila/manila.conf oslo_concurrency lock_path   /var/lib/manila/tmp




openstack-config --set /etc/manila/manila.conf DEFAULT enabled_share_backends   generic
openstack-config --set /etc/manila/manila.conf DEFAULT enabled_share_protocols   NFS


openstack-config --set /etc/manila/manila.conf neutron url   http://controller:9696
openstack-config --set /etc/manila/manila.conf neutron www_authenticate_uri   http://controller:5000
openstack-config --set /etc/manila/manila.conf neutron auth_url   http://controller:5000
openstack-config --set /etc/manila/manila.conf neutron memcached_servers   controller:11211
openstack-config --set /etc/manila/manila.conf neutron auth_type   password
openstack-config --set /etc/manila/manila.conf neutron project_domain_name   Default
openstack-config --set /etc/manila/manila.conf neutron user_domain_name   Default
openstack-config --set /etc/manila/manila.conf neutron region_name   RegionOne
openstack-config --set /etc/manila/manila.conf neutron project_name   service
openstack-config --set /etc/manila/manila.conf neutron username   neutron
openstack-config --set /etc/manila/manila.conf neutron password   NEUTRON_PASS


openstack-config --set /etc/manila/manila.conf nova www_authenticate_uri   http://controller:5000
openstack-config --set /etc/manila/manila.conf nova auth_url   http://controller:5000
openstack-config --set /etc/manila/manila.conf nova memcached_servers   controller:11211
openstack-config --set /etc/manila/manila.conf nova auth_type   password
openstack-config --set /etc/manila/manila.conf nova project_domain_name   Default
openstack-config --set /etc/manila/manila.conf nova user_domain_name   Default
openstack-config --set /etc/manila/manila.conf nova region_name   RegionOne
openstack-config --set /etc/manila/manila.conf nova project_name   service
openstack-config --set /etc/manila/manila.conf nova username   nova
openstack-config --set /etc/manila/manila.conf nova password   NOVA_PASS


openstack-config --set /etc/manila/manila.conf cinder www_authenticate_uri   http://controller:5000
openstack-config --set /etc/manila/manila.conf cinder auth_url   http://controller:5000
openstack-config --set /etc/manila/manila.conf cinder memcached_servers   controller:11211
openstack-config --set /etc/manila/manila.conf cinder auth_type   password
openstack-config --set /etc/manila/manila.conf cinder project_domain_name   Default
openstack-config --set /etc/manila/manila.conf cinder user_domain_name   Default
openstack-config --set /etc/manila/manila.conf cinder region_name   RegionOne
openstack-config --set /etc/manila/manila.conf cinder project_name   service
openstack-config --set /etc/manila/manila.conf cinder username   cinder
openstack-config --set /etc/manila/manila.conf cinder password   CINDER_PASS


openstack-config --set /etc/manila/manila.conf generic share_backend_name   GENERIC
openstack-config --set /etc/manila/manila.conf generic share_driver   manila.share.drivers.generic.GenericShareDriver
openstack-config --set /etc/manila/manila.conf generic driver_handles_share_servers   True
openstack-config --set /etc/manila/manila.conf generic service_instance_flavor_id   100
openstack-config --set /etc/manila/manila.conf generic service_image_name   manila-service-image
openstack-config --set /etc/manila/manila.conf generic service_instance_user   manila
openstack-config --set /etc/manila/manila.conf generic service_instance_password   manila
openstack-config --set /etc/manila/manila.conf generic interface_driver   manila.network.linux.interface.BridgeInterfaceDriver
systemctl enable openstack-manila-share.service
systemctl restart openstack-manila-share.service
manila type-create default_share_type True
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| required_extra_specs | driver_handles_share_servers : True  |
| Name                 | default_share_type                   |
| Visibility           | public                               |
| is_default           | YES                                  |
| ID                   | 92bdd1fd-1e8f-4c64-9d33-28f1ffe0c9a1 |
| optional_extra_specs |                                      |
| Description          | None                                 |
+----------------------+--------------------------------------+
curl https://tarballs.openstack.org/manila-image-elements/images/manila-service-image-master.qcow2 | \
glance image-create \
--name "manila-service-image" \
--disk-format qcow2 \
--container-format bare \
--visibility public --progress
openstack-config --set  /etc/cinder/cinder.conf database connection   mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url   rabbit://openstack:RABBIT_PASS@controller
openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy   keystone

openstack-config --set  /etc/cinder/cinder.conf DEFAULT my_ip   172.16.70.11
# Custom storage back ends: enabled_backends is a comma-separated list of section names
openstack-config --set  /etc/cinder/cinder.conf DEFAULT enabled_backends   ssd,sata
openstack-config --set  /etc/cinder/cinder.conf DEFAULT glance_api_servers   http://controller:9292
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri   http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_url   http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken memcached_servers   controller:11211
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_type   password
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_domain_id   default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken user_domain_id   default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_name   service
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken username   cinder
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken password   CINDER_PASS
# Custom storage back ends: one [section] per back end, named in enabled_backends
openstack-config --set  /etc/cinder/cinder.conf ssd volume_driver   cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set  /etc/cinder/cinder.conf ssd volume_group   cinder-ssd
openstack-config --set  /etc/cinder/cinder.conf ssd iscsi_protocol   iscsi
openstack-config --set  /etc/cinder/cinder.conf ssd iscsi_helper   lioadm
openstack-config --set  /etc/cinder/cinder.conf ssd volume_backend_name   ssd

openstack-config --set  /etc/cinder/cinder.conf sata volume_driver   cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set  /etc/cinder/cinder.conf sata volume_group   cinder-sata
openstack-config --set  /etc/cinder/cinder.conf sata iscsi_protocol   iscsi
openstack-config --set  /etc/cinder/cinder.conf sata iscsi_helper   lioadm
openstack-config --set  /etc/cinder/cinder.conf sata volume_backend_name   sata
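The openstack-config calls above are just shorthand for editing /etc/cinder/cinder.conf directly; after they run, the two back-end sections should look roughly like this (values taken from the commands above):

```
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd

[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
```

Each section name must be listed in enabled_backends, and volume_backend_name is the label that volume types match against.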
openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path   /var/lib/cinder/tmp

systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
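With the volume service restarted, each back end should appear as its own cinder-volume@&lt;backend&gt; service, and a volume type can be bound to each via volume_backend_name. A sketch, run on the controller with admin credentials (names are illustrative):

```shell
# Each enabled back end shows up as a separate service entry, e.g. host@ssd / host@sata
openstack volume service list
# Create volume types and pin them to back ends via volume_backend_name
openstack volume type create ssd
openstack volume type set --property volume_backend_name=ssd ssd
openstack volume type create sata
openstack volume type set --property volume_backend_name=sata sata
# A volume created with --type ssd is scheduled onto the cinder-ssd volume group
openstack volume create --type ssd --size 1 test-ssd
```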


Reposted from blog.csdn.net/kaikai136412162/article/details/113678238