Optimizations and troubleshooting notes from an OpenStack Ocata deployment (continuously updated...)

1. When installing Ocata on a compute node, openstack-nova-compute fails to install with the error: Requires: qemu-kvm-rhev >= 2.9.0
Solution:
Run the following on the compute node:

cat >> /etc/yum.repos.d/CentOS-Base.repo << 'EOF'
[Virt]
name=CentOS-$releasever - Virt kvm-common
baseurl=http://mirrors.163.com/centos/7.6.1810/virt/x86_64/kvm-common/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
EOF

Then rerun the install:

yum -y install openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils ntpdate

2. After a host node reboots, fetching volumes in the dashboard fails
Diagnosis and fix:
On the controller1 node:

systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

All of these processes are in the dead state. Start them, enable them at boot just to be safe, and check their status:

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
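The start/enable/status sequence above can be sketched as a small helper. `restart_if_dead` is a hypothetical function, not part of OpenStack or systemd; the service names are the ones from the commands above.

```shell
# restart_if_dead: hypothetical helper — starts and enables a unit only
# when `systemctl is-active` does not report it as active, then prints
# its status either way.
restart_if_dead() {
    svc="$1"
    state="$(systemctl is-active "$svc" 2>/dev/null || true)"
    if [ "$state" != "active" ]; then
        systemctl start "$svc"
        systemctl enable "$svc"
    fi
    systemctl status "$svc" --no-pager
}

# On controller1:
#   for s in openstack-cinder-api.service openstack-cinder-scheduler.service; do
#       restart_if_dead "$s"
#   done
```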

On the cinder node:

systemctl status openstack-cinder-volume.service target.service lvm2-lvmetad.service

These processes are in the dead state as well; likewise start them, enable them at boot, and check their status again:

systemctl start openstack-cinder-volume.service target.service lvm2-lvmetad.service
systemctl enable openstack-cinder-volume.service target.service lvm2-lvmetad.service
systemctl status openstack-cinder-volume.service target.service lvm2-lvmetad.service
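After restarting the services on both nodes, you can confirm that every cinder service reports "up" via the openstack CLI. `all_up` is a hypothetical helper that just scans the State column; it assumes admin credentials are sourced on controller1.

```shell
# all_up: hypothetical helper — reads one service state per line on stdin
# (the output of `openstack volume service list -f value -c State`)
# and succeeds only when every line is exactly "up".
all_up() {
    ! grep -qv '^up$'
}

# On controller1, with admin credentials sourced:
#   openstack volume service list -f value -c State | all_up \
#       && echo "all cinder services up" || echo "some service is down"
```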

Relevant logs:
controller1 node:
/var/log/cinder/api.log
/var/log/cinder/cinder-image.log
/var/log/cinder/scheduler.log
cinder node:
/var/log/cinder/volume.log
Check the volumes in the dashboard again; their status is now normal.

3. After a node reboot, VMs fail to obtain an IP address via DHCP
Since the neutron main node is deployed on controller1, check the dhcp-agent status there:

openstack service list
neutron agent-list

Neither shows any error, so tail the dhcp-agent log:

tail -f /var/log/neutron/dhcp-agent.log

If all else fails, check whether the nodes' clocks are synchronized; with unsynchronized clocks, IP addresses will certainly not be handed out.
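A quick sanity check is to compare the compute node's clock with controller1's. `clock_skew_ok` and the 30-second tolerance are illustrative assumptions, not an OpenStack requirement, and the usage line assumes passwordless ssh to controller1.

```shell
# clock_skew_ok: hypothetical helper — succeeds when two epoch-second
# timestamps differ by no more than a tolerance (default: 30 seconds).
clock_skew_ok() {
    local a="$1" b="$2" tol="${3:-30}"
    local d=$(( a - b ))
    if [ "$d" -lt 0 ]; then d=$(( -d )); fi
    [ "$d" -le "$tol" ]
}

# Compare this node's clock with controller1's (assumes ssh access):
#   clock_skew_ok "$(date +%s)" "$(ssh controller1 date +%s)" \
#       && echo "clocks in sync" || echo "clock skew: resync with ntpdate"
```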

Reposted from blog.51cto.com/12114052/2424882