Contents:
Section 1 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–1–OpenStack Charms Deployment Guide
Section 2 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–2–Install MAAS
Section 3 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–3–Install Juju
Section 4 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–4–Install OpenStack
Section 8 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–8–Configure OpenStack
Section 9 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–9–Network Topology
Section 11 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223–11–Access Juju Dashboard
Reference documents:
《Specific series upgrade procedures-percona-cluster charm: series upgrade to Focal》
《OpenStack Charms Deployment Guide 0.0.1.dev276》
《ReleaseNotes1501》
[BUG] openstack hacluster apache2 service not running, wrong ssl cert name
Background note: This article describes the steps for manually configuring infrastructure HA after deploying the openstack-base #70 bundle.
According to the "Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223-Appendix T-OpenStack High Availability", HA falls into two categories: native HA and non-native HA.
Native HA includes:
Service | Application/Charm | Remarks |
---|---|---|
Ceph | ceph-mon, ceph-osd | |
MySQL | percona-cluster | MySQL 5.x; external high-availability technology required for client access; available before Ubuntu 20.04 LTS |
MySQL | mysql-innodb-cluster | MySQL 8.x; used starting with Ubuntu 20.04 LTS |
OVN | ovn-central, ovn-chassis | OVN is highly available by design; applies to OpenStack Ussuri, starting from Ubuntu 18.04 LTS and Ubuntu 20.04 LTS |
RabbitMQ | rabbitmq-server | |
Swift | swift-storage | |
Deploy the rabbitmq-server cluster:
In the original guide, the HA deployment command for rabbitmq-server is:
juju deploy -n 3 --to lxd,lxd,lxd --config min-cluster-size=3 rabbitmq-server
Because rabbitmq-server is already deployed by the bundle, this article changes it to:
juju add-unit --to lxd:0 rabbitmq-server
juju add-unit --to lxd:1 rabbitmq-server
For the other, non-native HA applications, the general deployment commands for a three-unit cluster are:
juju deploy -n 3 --config vip=<ip-address> <charm-name>
juju deploy --config cluster_count=3 hacluster <charm-name>-hacluster
juju add-relation <charm-name>-hacluster:ha <charm-name>:ha
Deploy the keystone cluster:
The keystone high-availability configuration method is:
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.7.12 keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
Because keystone is already installed in the openstack-base #70 bundle, the above command fails.
After rereading the document, the keystone cluster can instead be built by adding units to the existing application:
juju add-unit --to lxd:1 keystone
juju add-unit --to lxd:2 keystone
juju set keystone vip=10.0.7.12
juju deploy --config cluster_count=3 --series focal hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
The juju set command is no longer supported as of Juju 2.8; it has been replaced by juju config, so the commands become:
juju add-unit --to lxd:1 keystone
juju add-unit --to lxd:2 keystone
juju config keystone vip=10.0.7.12
juju deploy --config cluster_count=3 --series focal hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
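The same four-step pattern (add units, set the VIP, deploy a subordinate hacluster, relate it) is repeated below for placement, ceph-radosgw, cinder, glance, neutron-api, nova-cloud-controller and openstack-dashboard. As a sketch, the pattern can be parameterized; this helper only prints the juju commands (a dry run), and the application name, VIP and target machines are placeholders to adapt:

```shell
# Dry-run sketch: print the HA commands for an application that the bundle
# has already deployed. Does NOT call juju; copy/paste or pipe the output.
ha_existing_app() {
  app=$1; vip=$2; shift 2
  for target in "$@"; do          # one add-unit per extra target machine
    echo "juju add-unit --to $target $app"
  done
  echo "juju config $app vip=$vip"
  echo "juju deploy --config cluster_count=3 --series focal hacluster ${app}-hacluster"
  echo "juju add-relation ${app}-hacluster:ha ${app}:ha"
}

ha_existing_app keystone 10.0.7.12 lxd:1 lxd:2
```

Running the example reproduces exactly the five keystone commands listed above.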
#Rebuilding the keystone cluster is not recommended; the hooks will fail:
#juju remove-unit keystone/0 --force --no-wait
#juju remove-application keystone --force --no-wait
#juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.7.13 --series focal ./openstack-base-1/keystone --debug
#juju deploy --config cluster_count=3 hacluster keystone-hacluster
#juju add-relation keystone-hacluster:ha keystone:ha
Deploy the vault cluster:
The method in the original "OpenStack Charms Deployment Guide 0.0.1.dev276 - Infrastructure high availability" is:
In addition to hacluster and MySQL, a vault HA deployment requires the etcd and easyrsa applications. In addition, each vault unit in the cluster must have its own unsealed vault instance.
In these example commands, for simplicity, a single percona-cluster unit is used:
juju deploy --to lxd:1 percona-cluster mysql
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.246.114.11 vault
juju deploy --config cluster_count=3 hacluster vault-hacluster
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 etcd
juju deploy --to lxd:0 cs:~containers/easyrsa
juju add-relation vault:ha vault-hacluster:ha
juju add-relation vault:shared-db percona-cluster:shared-db
juju add-relation etcd:db vault:etcd
juju add-relation etcd:certificates easyrsa:client
However, in openstack-base the database uses mysql-innodb-cluster and is already clustered, because on focal percona-cluster has been replaced by mysql-innodb-cluster.
The juju commands are therefore changed as follows to match the actual situation:
#juju remove-unit vault/0 --force --no-wait
#juju remove-application vault --force --no-wait
#juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.7.22 --series focal vault --debug
juju add-unit --to lxd:1 vault
juju add-unit --to lxd:2 vault
juju config vault vip=10.0.7.21
juju deploy --config cluster_count=3 --series focal hacluster vault-hacluster
juju add-relation vault:ha vault-hacluster:ha
Screenshot before vault HA:
Unseal the three vault units one by one:
Unseal vault/0:
export VAULT_ADDR="http://10.0.1.248:8200"
vault operator init -key-shares=5 -key-threshold=3
vault operator unseal FyoFAkE7rlqfVSnDwm4943tYAwx51UfSntW73rQdK7SX
vault operator unseal sj38M2qmnOAegNijJ1XYtxer17rGqtrJP7OPCeG8Tq1Q
vault operator unseal /s5IYKaUo4u4vvkP6fUEDwxtHjHdtek6HIgQ+GQ4okaG
export VAULT_TOKEN=s.YpBOElRdghjenojFo4YrXNPe
vault token create -ttl=720h
juju run-action --wait vault/leader authorize-charm token=s.ajIKkgKxDjy28EqiRqZWgkS5
juju run-action --wait vault/leader 'generate-root-ca'
View vault status:
juju status vault
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-controller mymaas/default 2.8.7 unsupported 14:42:06+08:00
App Version Status Scale Charm Store Rev OS Notes
vault 1.5.4 blocked 3 vault local 0 ubuntu
vault-hacluster active 3 hacluster jujucharms 72 ubuntu
vault-mysql-router 8.0.23 active 3 mysql-router local 0 ubuntu
Unit Workload Agent Machine Public address Ports Message
vault/0* active idle 0/lxd/7 10.0.1.248 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-hacluster/0* active idle 10.0.1.248 Unit is ready and clustered
vault-mysql-router/0* active idle 10.0.1.248 Unit is ready
vault/1 blocked idle 1/lxd/8 10.0.2.12 8200/tcp Unit is sealed
vault-hacluster/1 active idle 10.0.2.12 Unit is ready and clustered
vault-mysql-router/1 active idle 10.0.2.12 Unit is ready
vault/2 blocked idle 2/lxd/7 10.0.2.11 8200/tcp Unit is sealed
vault-hacluster/2 active idle 10.0.2.11 Unit is ready and clustered
vault-mysql-router/2 active idle 10.0.2.11 Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 10.0.0.159 node4 focal default Deployed
0/lxd/7 started 10.0.1.248 juju-2c0e84-0-lxd-7 focal default Container started
1 started 10.0.0.156 node2 focal default Deployed
1/lxd/8 started 10.0.2.12 juju-2c0e84-1-lxd-8 focal default Container started
2 started 10.0.0.157 node1 focal default Deployed
2/lxd/7 started 10.0.2.11 juju-2c0e84-2-lxd-7 focal default Container started
juju run-action vault/0 pause --wait # optional; can be skipped
juju status vault
Then unseal vault/1:
export VAULT_ADDR="http://10.0.2.12:8200"
vault operator unseal FyoFAkE7rlqfVSnDwm4943tYAwx51UfSntW73rQdK7SX
vault operator unseal sj38M2qmnOAegNijJ1XYtxer17rGqtrJP7OPCeG8Tq1Q
vault operator unseal /s5IYKaUo4u4vvkP6fUEDwxtHjHdtek6HIgQ+GQ4okaG
juju status vault
juju status --format=yaml vault | grep public-address | awk '{print $2}'
juju run-action vault/0 resume --wait
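The grep/awk pipeline above just pulls the unit addresses out of the YAML status. Shown on a small sample of `juju status --format=yaml` output (indentation is illustrative; the addresses are the ones from the status listing above):

```shell
# Sample fragment of `juju status --format=yaml vault` output.
sample='    vault/0:
      public-address: 10.0.1.248
    vault/1:
      public-address: 10.0.2.12
    vault/2:
      public-address: 10.0.2.11'

# Same filter as in the text: keep public-address lines, print field 2.
printf '%s\n' "$sample" | grep public-address | awk '{print $2}'
```

This prints the three addresses, one per line, which is handy for looping over units when unsealing.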
Then unseal vault/2:
export VAULT_ADDR="http://10.0.2.11:8200"
vault operator unseal FyoFAkE7rlqfVSnDwm4943tYAwx51UfSntW73rQdK7SX
vault operator unseal sj38M2qmnOAegNijJ1XYtxer17rGqtrJP7OPCeG8Tq1Q
vault operator unseal /s5IYKaUo4u4vvkP6fUEDwxtHjHdtek6HIgQ+GQ4okaG
Resume the three vault units:
juju run-action vault/0 resume --wait
juju run-action vault/1 resume --wait
juju run-action vault/2 resume --wait
juju status vault
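The per-unit unseal procedure above can be consolidated into one loop. This is a dry-run sketch that only prints the commands instead of executing them; the addresses are the example values from this deployment, and KEY1..KEY3 are placeholders for the real unseal keys returned by `vault operator init`:

```shell
# Dry run: print the export + three unseal commands for each vault unit.
# Replace KEY1..KEY3 with your actual unseal keys before executing anything.
for addr in 10.0.1.248 10.0.2.12 10.0.2.11; do
  echo "export VAULT_ADDR=http://$addr:8200"
  for key in KEY1 KEY2 KEY3; do
    echo "vault operator unseal $key"
  done
done
```

Three units with three keys each plus one export per unit gives twelve printed commands.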
Deploy etcd as the vault storage backend, and easyrsa as the source of TLS certificates for etcd.
Note:
Deploy easyrsa after etcd; intermediate error states before the relations are added can be ignored.
juju deploy -n 3 --config channel=3.1/stable --to lxd:0,lxd:1,lxd:2 --series focal cs:etcd-546
juju add-relation vault:shared-db mysql-innodb-cluster:shared-db
juju add-relation etcd:db vault:etcd
juju deploy --to lxd:0 --series focal cs:~containers/easyrsa
juju add-relation etcd:certificates easyrsa:client
Show vault etcd easyrsa status:
juju status vault etcd easyrsa
Show all status:
Deploy the placement cluster:
juju add-unit --to lxd:0 placement
juju add-unit --to lxd:1 placement
juju config placement vip=10.0.7.32
juju deploy --config cluster_count=3 --series focal hacluster placement-hacluster
juju add-relation placement-hacluster:ha placement:ha
Deploy the ceph-radosgw cluster:
juju add-unit --to lxd:1 ceph-radosgw
juju add-unit --to lxd:2 ceph-radosgw
juju config ceph-radosgw vip=10.0.7.42
juju deploy --config cluster_count=3 --series focal hacluster ceph-radosgw-hacluster
juju add-relation ceph-radosgw-hacluster:ha ceph-radosgw:ha
Deploy the cinder cluster:
juju add-unit --to lxd:0 cinder
juju add-unit --to lxd:2 cinder
juju config cinder vip=10.0.7.47
juju deploy --config cluster_count=3 --series focal hacluster cinder-hacluster
juju add-relation cinder-hacluster:ha cinder:ha
Deploy the glance cluster:
juju add-unit --to lxd:0 glance
juju add-unit --to lxd:1 glance
juju config glance vip=10.0.7.52
juju deploy --config cluster_count=3 --series focal hacluster glance-hacluster
juju add-relation glance-hacluster:ha glance:ha
Deploy the neutron-api cluster:
juju add-unit --to lxd:0 neutron-api
juju add-unit --to lxd:1 neutron-api
juju config neutron-api vip=10.0.7.57
juju deploy --config cluster_count=3 --series focal hacluster neutron-api-hacluster
juju add-relation neutron-api-hacluster:ha neutron-api:ha
Deploy the nova-cloud-controller cluster:
juju add-unit --to lxd:1 nova-cloud-controller
juju add-unit --to lxd:2 nova-cloud-controller
juju config nova-cloud-controller vip=10.0.7.62
juju deploy --config cluster_count=3 --series focal hacluster nova-cloud-controller-hacluster
juju add-relation nova-cloud-controller-hacluster:ha nova-cloud-controller:ha
After the deployment completed, nova-cloud-controller was found to be blocked, showing a missing relation with memcached.
According to "ReleaseNotes1501", memcached must be deployed and the relation added as follows:
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series focal memcached --debug
juju add-relation nova-cloud-controller memcached
Deploy the openstack-dashboard cluster:
juju add-unit --to lxd:0 openstack-dashboard
juju add-unit --to lxd:2 openstack-dashboard
juju config openstack-dashboard vip=10.0.7.67
juju deploy --config cluster_count=3 --series focal hacluster openstack-dashboard-hacluster --debug
juju add-relation openstack-dashboard-hacluster:ha openstack-dashboard:ha
Except for easyrsa, all HA applications are now deployed; check with juju status:
If the status of any of the above applications is blocked with the message "Services not running that should be: apache2", the certificates should be re-imported:
juju run-action --wait vault/0 reissue-certificates