Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223, Part 4: Install OpenStack

Contents:
Section 1: OpenStack Charms Deployment Guide overview
Section 2: Install MAAS
Section 3: Install Juju
Section 4: Install OpenStack (this page)
Section 5: Install OpenStack from a bundle
Section 6: Configure Vault and manage the TLS certificate lifecycle
Section 7: Offline deployment of bundles with Juju
Section 8: Configure OpenStack
Section 9: Network topology
Section 10: OpenStack highly available infrastructure in practice
Section 11: Access the Juju dashboard

Install OpenStack

In the previous part, we installed Juju and created the Juju controller and model. We are now going to use Juju to install OpenStack itself. There are two methods to choose from:

1. Via individual charms. This method gives you a deeper understanding of how Juju works and of how the OpenStack components fit together. If you have never installed OpenStack with Juju, choose this option.

2. Via a charm bundle. This method provides an automated way to install OpenStack. If you are already familiar with how OpenStack is built with Juju, choose this option.

The current page covers method #1. For method #2, see Deploying OpenStack from a bundle.

Important: regardless of the installation method used, the following management practices relating to charm versions and machine series are recommended once the cloud is deployed:

1. The charms used to manage the cloud should be upgraded to their latest stable revision before making any major change to the cloud (such as migrating to new charms, upgrading cloud services, or upgrading the machine series). See Charm Upgrades for details.
2. The Juju machines that make up the cloud should all run the same series (e.g. "bionic" or "focal", but not a mixture of the two). See Series Upgrade for details (a quick way to check is shown below).
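
You can check the series each machine is currently running at any time; the output of juju machines includes a Series column:

juju machines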

Although this page is long, only three distinct Juju commands are used: juju deploy, juju add-unit, and juju add-relation. You may want to read the relevant sections of the Juju documentation before continuing.

This page will show how to install a minimal, non-highly-available OpenStack cloud. For high availability, see OpenStack high availability.

OpenStack release

As mentioned in the overview section of this guide, OpenStack Ussuri will be deployed on Ubuntu 20.04 LTS (Focal) cloud nodes. Each OpenStack application will therefore be installed from the cloud nodes' default package archive ("distro"). Note that a few of the applications are not part of the OpenStack project proper, so the option does not apply to them (exceptionally, the Ceph applications do use this method).
See Perform the upgrade in the OpenStack Upgrades appendix for more information on cloud archive releases and how they are used when upgrading OpenStack.
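
For illustration only (not used in this deployment): selecting a newer OpenStack release from the Ubuntu Cloud Archive is just a different value for the same option. For example, a hypothetical keystone deployment on Focal could use:

keystone:
  openstack-origin: cloud:focal-victoria

The Ceph charms expose the equivalent setting as the source option.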

Important: the OpenStack release chosen can affect the installation and configuration steps.

Installation progress

Installing OpenStack charm by charm involves many moving parts. For much of the process some components will have unmet dependencies, which causes error-like messages to appear in the output of the juju status command. Do not panic: these are in fact opportunities to learn about the interdependencies of the various pieces of software. Messages such as "Missing relation" and "blocked" will go away once the appropriate applications and relations have been added and processed.

Tip: a convenient way to monitor the progress of the installation is to keep the command watch -n 5 -c juju status --color running in a separate terminal.

Deploy OpenStack

Assuming you have followed the instructions on the Install Juju page exactly, there should now be a Juju controller called "maas-controller" and an empty Juju model called "openstack". Switch to that context now:

juju switch maas-controller:openstack
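
To confirm that the intended model is now active, list the models; the current one is marked with an asterisk:

juju models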

In the following sections, the various OpenStack components will be added to the "openstack" model. Each application will be installed from the online Charm Store, and many will have configuration options specified via a YAML file.

Note that you do not need to wait for a Juju command to complete before issuing further ones. However, it is very instructive to understand the effect each command has on the current state of the cloud.

Ceph OSD

The ceph-osd application is deployed to four nodes using the ceph-osd charm. The names of the block devices backing the OSDs depend on the hardware in each node. All devices to be used across the nodes should be given as the value of the osd-devices option (space-separated). Here, we will use the same device on every cloud node: /dev/sdb. The file ceph-osd.yaml contains the configuration:

vim ceph-osd.yaml
ceph-osd:
  osd-devices: /dev/sdb
  source: distro

To deploy the application, we will use the "compute" tag we placed on each node on the Install MAAS page.

juju deploy -n 4 --config ceph-osd.yaml --constraints tags=compute ceph-osd

If a message from a ceph-osd unit such as "non-pristine devices detected" appears in the output of juju status, you will need the zap-disk and add-disk actions provided by the ceph-osd charm. The zap-disk action is destructive by nature: only use it when you want to purge a disk of all its data and signatures for use by Ceph.
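
A sketch of those actions, assuming the affected unit is ceph-osd/1 and the device is /dev/sdb (adjust both to your environment):

juju run-action --wait ceph-osd/1 zap-disk devices=/dev/sdb i-really-mean-it=true
juju run-action --wait ceph-osd/1 add-disk osd-devices=/dev/sdb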

Note: since ceph-osd is deployed to four nodes and only four nodes are available in this environment, the use of the "compute" tag is, strictly speaking, unnecessary.

Nova compute

The nova-compute application will be deployed to three nodes using the nova-compute charm. The file nova-compute.yaml contains the following configuration:

vim nova-compute.yaml 
nova-compute:
  enable-live-migration: true
  enable-resize: true
  migration-auth-type: ssh
  openstack-origin: distro

Since there are no more free Juju machines (MAAS nodes) available, the application must be targeted at machines that are already in use; that is, multiple applications will now be co-located on those nodes. We choose machines 1, 2, and 3:

juju deploy -n 3 --to 1,2,3 --config nova-compute.yaml nova-compute

Note: the nova-compute charm is designed to support one image format type per application at any given time. Changing the format (see the charm option libvirt-image-backend) while existing instances are using the previous format will require manual image conversion for each instance. See bug LP #1826888.
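
Of the three Juju commands mentioned at the start of this page, juju add-unit does not otherwise appear. For illustration only: if a fourth compute node were commissioned later (a hypothetical machine 4), nova-compute could be scaled out to it with:

juju add-unit --to 4 nova-compute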

Swift storage

The swift-storage application is deployed to three nodes (machines 0, 2, and 3) using the swift-storage charm. The file swift-storage.yaml contains the following configuration:

vim swift-storage.yaml
swift-storage:
  block-device: sdc
  overwrite: "true"
  openstack-origin: distro

This configuration points at block device /dev/sdc. Adjust it according to the hardware available. In a production environment, avoid using a loopback device.
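
If you are unsure which block devices a node actually exposes, you can list them over SSH before editing the file (machine 2 is used here purely as an example):

juju ssh 2 -- lsblk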

Deploy to three machines:

juju deploy -n 3 --to 0,2,3 --config swift-storage.yaml swift-storage

MySQL InnoDB Cluster

A MySQL InnoDB cluster always requires at least three database units. They will be containerised on machines 0, 1, and 2:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 mysql-innodb-cluster

Vault

Vault is needed to manage the TLS certificates that will enable encrypted communication between the cloud applications.

Deploy it this way:

juju deploy --to lxd:3 vault

This is the first application to be joined to the cloud database that was set up in the previous section. The procedure is:

  1. Create an application-specific instance of mysql-router (a subordinate application)
  2. Add a relation between that mysql-router instance and the database
  3. Add a relation between the application and its mysql-router instance

The combination of steps 2 and 3 joins the application to the cloud database.

The following are the corresponding commands for Vault:

juju deploy mysql-router vault-mysql-router
juju add-relation vault-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation vault-mysql-router:shared-db vault:shared-db

Vault now needs to be initialised and unsealed. The charm also needs to be authorised to carry out certain tasks. These steps are documented on the Vault page; perform them now.
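For orientation only, a condensed sketch of those steps, assuming the vault unit is reachable at 10.0.0.217 as in the status output below (the Vault page remains the authoritative reference; the <...> values come from the init output):

sudo snap install vault
export VAULT_ADDR="http://10.0.0.217:8200"
vault operator init -key-shares=5 -key-threshold=3
vault operator unseal    # run three times, with three of the five unseal keys printed by init
export VAULT_TOKEN=<root token printed by init>
vault token create -ttl=10m
juju run-action --wait vault/leader authorize-charm token=<token from the previous command>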
Once that is done, the Unit section of the juju status output should look similar to this:

Unit                     Workload  Agent  Machine  Public address  Ports     Message
ceph-osd/0*              blocked   idle   0        10.0.0.206                Missing relation: monitor
ceph-osd/1               blocked   idle   1        10.0.0.208                Missing relation: monitor
ceph-osd/2               blocked   idle   2        10.0.0.209                Missing relation: monitor
ceph-osd/3               blocked   idle   3        10.0.0.213                Missing relation: monitor
mysql-innodb-cluster/0*  active    idle   0/lxd/0  10.0.0.211                Unit is ready: Mode: R/W
mysql-innodb-cluster/1   active    idle   1/lxd/0  10.0.0.212                Unit is ready: Mode: R/O
mysql-innodb-cluster/2   active    idle   2/lxd/0  10.0.0.214                Unit is ready: Mode: R/O
nova-compute/0*          blocked   idle   1        10.0.0.208                Missing relations: image, messaging
nova-compute/1           blocked   idle   2        10.0.0.209                Missing relations: image, messaging
nova-compute/2           blocked   idle   3        10.0.0.213                Missing relations: messaging, image
swift-storage/0*         blocked   idle   0        10.0.0.206                Missing relations: proxy
swift-storage/1          blocked   idle   2        10.0.0.209                Missing relations: proxy
swift-storage/2          blocked   idle   3        10.0.0.213                Missing relations: proxy
vault/0*                 active    idle   3/lxd/0  10.0.0.217      8200/tcp  Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*  active    idle            10.0.0.217                Unit is ready

Neutron network

The Neutron network is implemented by four applications:

  1. neutron-api
  2. neutron-api-plugin-ovn (subordinate)
  3. ovn-central
  4. ovn-chassis (subordinate)

The file neutron.yaml contains the configuration required by three of them:

vim neutron.yaml
ovn-chassis:
  bridge-interface-mappings: br-ex:eth1
  ovn-bridge-mappings: physnet1:br-ex
neutron-api:
  neutron-security-groups: true
  flat-network-providers: physnet1
  openstack-origin: distro
ovn-central:
  source: distro

The bridge-interface-mappings setting refers to the network interface that the OVN Chassis will bind to. In the example above it is 'eth1', and it should be an unused interface. In MAAS this interface must be given the "unconfigured" IP mode (see Post-commission configuration in the MAAS documentation). All four nodes should have this interface, so that any node can host an OVN Chassis.

The flat-network-providers setting enables the Neutron flat network provider used in this example scenario and gives it the name "physnet1". The flat network provider and its name will be referenced when we set up the public network on the next page.

The ovn-bridge-mappings setting maps the data-port interface to the flat network provider.

The main OVN application is ovn-central, and it requires at least three units. They will be containerised on machines 0, 1, and 2:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config neutron.yaml ovn-central

The neutron-api application will be deployed to a container on machine 1:

juju deploy --to lxd:1 --config neutron.yaml neutron-api

Deploy the subordinate charm applications:

juju deploy neutron-api-plugin-ovn
juju deploy --config neutron.yaml ovn-chassis

Add the necessary relationships:

juju add-relation neutron-api-plugin-ovn:neutron-plugin neutron-api:neutron-plugin-api-subordinate
juju add-relation neutron-api-plugin-ovn:ovsdb-cms ovn-central:ovsdb-cms
juju add-relation ovn-chassis:ovsdb ovn-central:ovsdb
juju add-relation ovn-chassis:nova-compute nova-compute:neutron-plugin
juju add-relation neutron-api:certificates vault:certificates
juju add-relation neutron-api-plugin-ovn:certificates vault:certificates
juju add-relation ovn-central:certificates vault:certificates
juju add-relation ovn-chassis:certificates vault:certificates

Add neutron-api to the cloud database:

juju deploy mysql-router neutron-api-mysql-router
juju add-relation neutron-api-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation neutron-api-mysql-router:shared-db neutron-api:shared-db

Keystone

The keystone application will be deployed to a container on machine 0.
Deploy:

juju deploy --to lxd:0 --config openstack-origin=distro keystone

Add keystone to the cloud database:

juju deploy mysql-router keystone-mysql-router
juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation keystone-mysql-router:shared-db keystone:shared-db

You can also add two relationships at this time:

juju add-relation keystone:identity-service neutron-api:identity-service
juju add-relation keystone:certificates vault:certificates

RabbitMQ

The rabbitmq-server application will be deployed to a container on machine 2 using the rabbitmq-server charm:

juju deploy --to lxd:2 rabbitmq-server

Two relationships can be added at this time:

juju add-relation rabbitmq-server:amqp neutron-api:amqp
juju add-relation rabbitmq-server:amqp nova-compute:amqp

At this point the juju status output should look similar to this:

Unit                           Workload  Agent  Machine  Public address  Ports              Message
ceph-osd/0*                    blocked   idle   0        10.0.0.206                         Missing relation: monitor
ceph-osd/1                     blocked   idle   1        10.0.0.208                         Missing relation: monitor
ceph-osd/2                     blocked   idle   2        10.0.0.209                         Missing relation: monitor
ceph-osd/3                     blocked   idle   3        10.0.0.213                         Missing relation: monitor
keystone/0*                    active    idle   0/lxd/2  10.0.0.223      5000/tcp           Unit is ready
  keystone-mysql-router/0*     active    idle            10.0.0.223                         Unit is ready
mysql-innodb-cluster/0*        active    idle   0/lxd/0  10.0.0.211                         Unit is ready: Mode: R/W
mysql-innodb-cluster/1         active    idle   1/lxd/0  10.0.0.212                         Unit is ready: Mode: R/O
mysql-innodb-cluster/2         active    idle   2/lxd/0  10.0.0.214                         Unit is ready: Mode: R/O
neutron-api/0*                 active    idle   1/lxd/2  10.0.0.220      9696/tcp           Unit is ready
  neutron-api-mysql-router/0*  active    idle            10.0.0.220                         Unit is ready
  neutron-api-plugin-ovn/0*    active    idle            10.0.0.220                         Unit is ready
nova-compute/0*                blocked   idle   1        10.0.0.208                         Missing relations: image
  ovn-chassis/1                active    idle            10.0.0.208                         Unit is ready
nova-compute/1                 blocked   idle   2        10.0.0.209                         Missing relations: image
  ovn-chassis/0*               active    idle            10.0.0.209                         Unit is ready
nova-compute/2                 blocked   idle   3        10.0.0.213                         Missing relations: image
  ovn-chassis/2                active    idle            10.0.0.213                         Unit is ready
ovn-central/0*                 active    idle   0/lxd/1  10.0.0.218      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1                  active    idle   1/lxd/1  10.0.0.221      6641/tcp,6642/tcp  Unit is ready
ovn-central/2                  active    idle   2/lxd/1  10.0.0.219      6641/tcp,6642/tcp  Unit is ready
rabbitmq-server/0*             active    idle   2/lxd/2  10.0.0.222      5672/tcp           Unit is ready
swift-storage/0*               blocked   idle   0        10.0.0.206                         Missing relations: proxy
swift-storage/1                blocked   idle   2        10.0.0.209                         Missing relations: proxy
swift-storage/2                blocked   idle   3        10.0.0.213                         Missing relations: proxy
vault/0*                       active    idle   3/lxd/0  10.0.0.217      8200/tcp           Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*        active    idle            10.0.0.217                         Unit is ready

Nova cloud controller

The nova-cloud-controller application, which includes the nova-scheduler, nova-api, and nova-conductor services, will be deployed to a container on machine 3 using the nova-cloud-controller charm. The file nova-cloud-controller.yaml contains the following configuration:

vim nova-cloud-controller.yaml
nova-cloud-controller:
  network-manager: Neutron
  openstack-origin: distro

Deploy:

juju deploy --to lxd:3 --config nova-cloud-controller.yaml nova-cloud-controller

Add nova-cloud-controller to the cloud database:

juju deploy mysql-router ncc-mysql-router
juju add-relation ncc-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation ncc-mysql-router:shared-db nova-cloud-controller:shared-db

Note: to keep the juju status output compact, the expected application name nova-cloud-controller-mysql-router has been shortened to ncc-mysql-router.

Five other relationships can be added at this time:

juju add-relation nova-cloud-controller:identity-service keystone:identity-service
juju add-relation nova-cloud-controller:amqp rabbitmq-server:amqp
juju add-relation nova-cloud-controller:neutron-api neutron-api:neutron-api
juju add-relation nova-cloud-controller:cloud-compute nova-compute:cloud-compute
juju add-relation nova-cloud-controller:certificates vault:certificates

Placement

The placement application will be deployed to a container on machine 3 using the placement charm.
Deploy:

juju deploy --to lxd:3 --config openstack-origin=distro placement

Add placement to the cloud database:

juju deploy mysql-router placement-mysql-router
juju add-relation placement-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation placement-mysql-router:shared-db placement:shared-db

Three other relationships can be added at this time:

juju add-relation placement:identity-service keystone:identity-service
juju add-relation placement:placement nova-cloud-controller:placement
juju add-relation placement:certificates vault:certificates

OpenStack dashboard

The openstack-dashboard application (Horizon) will be deployed to a container on machine 1 using the openstack-dashboard charm.
Deploy:

juju deploy --to lxd:1 --config openstack-origin=distro openstack-dashboard

Add openstack-dashboard to the cloud database:

juju deploy mysql-router dashboard-mysql-router
juju add-relation dashboard-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation dashboard-mysql-router:shared-db openstack-dashboard:shared-db

Note: to keep the juju status output compact, the expected application name openstack-dashboard-mysql-router has been shortened to dashboard-mysql-router.

Two more relationships are needed:

juju add-relation openstack-dashboard:identity-service keystone:identity-service
juju add-relation openstack-dashboard:certificates vault:certificates

Glance

The glance application will be deployed to a container on machine 3 using the glance charm.
Deploy:

juju deploy --to lxd:3 --config openstack-origin=distro glance

Add glance to the cloud database:

juju deploy mysql-router glance-mysql-router
juju add-relation glance-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation glance-mysql-router:shared-db glance:shared-db

Four relationships can be added at this time:

juju add-relation glance:image-service nova-cloud-controller:image-service
juju add-relation glance:image-service nova-compute:image-service
juju add-relation glance:identity-service keystone:identity-service
juju add-relation glance:certificates vault:certificates

At this point the juju status output should look similar to this:

Unit                           Workload  Agent  Machine  Public address  Ports              Message
ceph-osd/0*                    blocked   idle   0        10.0.0.206                         Missing relation: monitor
ceph-osd/1                     blocked   idle   1        10.0.0.208                         Missing relation: monitor
ceph-osd/2                     blocked   idle   2        10.0.0.209                         Missing relation: monitor
ceph-osd/3                     blocked   idle   3        10.0.0.213                         Missing relation: monitor
glance/0*                      active    idle   3/lxd/3  10.0.0.224      9292/tcp           Unit is ready
  glance-mysql-router/0*       active    idle            10.0.0.224                         Unit is ready
keystone/0*                    active    idle   0/lxd/2  10.0.0.223      5000/tcp           Unit is ready
  keystone-mysql-router/0*     active    idle            10.0.0.223                         Unit is ready
mysql-innodb-cluster/0*        active    idle   0/lxd/0  10.0.0.211                         Unit is ready: Mode: R/W
mysql-innodb-cluster/1         active    idle   1/lxd/0  10.0.0.212                         Unit is ready: Mode: R/O
mysql-innodb-cluster/2         active    idle   2/lxd/0  10.0.0.214                         Unit is ready: Mode: R/O
neutron-api/0*                 active    idle   1/lxd/2  10.0.0.220      9696/tcp           Unit is ready
  neutron-api-mysql-router/0*  active    idle            10.0.0.220                         Unit is ready
  neutron-api-plugin-ovn/0*    active    idle            10.0.0.220                         Unit is ready
nova-cloud-controller/0*       active    idle   3/lxd/1  10.0.0.216      8774/tcp,8775/tcp  Unit is ready
  ncc-mysql-router/0*          active    idle            10.0.0.216                         Unit is ready
nova-compute/0*                active    idle   1        10.0.0.208                         Unit is ready
  ovn-chassis/1                active    idle            10.0.0.208                         Unit is ready
nova-compute/1                 active    idle   2        10.0.0.209                         Unit is ready
  ovn-chassis/0*               active    idle            10.0.0.209                         Unit is ready
nova-compute/2                 active    idle   3        10.0.0.213                         Unit is ready
  ovn-chassis/2                active    idle            10.0.0.213                         Unit is ready
openstack-dashboard/0*         active    idle   1/lxd/3  10.0.0.210      80/tcp,443/tcp     Unit is ready
  dashboard-mysql-router/0*    active    idle            10.0.0.210                         Unit is ready
ovn-central/0*                 active    idle   0/lxd/1  10.0.0.218      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1                  active    idle   1/lxd/1  10.0.0.221      6641/tcp,6642/tcp  Unit is ready
ovn-central/2                  active    idle   2/lxd/1  10.0.0.219      6641/tcp,6642/tcp  Unit is ready
placement/0*                   active    idle   3/lxd/2  10.0.0.215      8778/tcp           Unit is ready
  placement-mysql-router/0*    active    idle            10.0.0.215                         Unit is ready
rabbitmq-server/0*             active    idle   2/lxd/2  10.0.0.222      5672/tcp           Unit is ready
swift-storage/0*               blocked   idle   0        10.0.0.206                         Missing relations: proxy
swift-storage/1                blocked   idle   2        10.0.0.209                         Missing relations: proxy
swift-storage/2                blocked   idle   3        10.0.0.213                         Missing relations: proxy
vault/0*                       active    idle   3/lxd/0  10.0.0.217      8200/tcp           Unit is ready (active: true, mlock: disabled)
  vault-mysql-router/0*        active    idle            10.0.0.217                         Unit is ready

Ceph monitor

The ceph-mon application will be deployed to containers on machines 0, 1, and 2 using the ceph-mon charm.
Deploy:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=distro ceph-mon

Three relationships can be added at this time:

juju add-relation ceph-mon:osd ceph-osd:mon
juju add-relation ceph-mon:client nova-compute:ceph
juju add-relation ceph-mon:client glance:ceph

Regarding the relations above:

The nova-compute:ceph relation makes Ceph the storage backend for Nova non-bootable disk images. The nova-compute charm option libvirt-image-backend must be set to "rbd" for this to take effect.
The glance:ceph relation makes Ceph the storage backend for Glance.
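
If you do want Ceph-backed Nova disks, the option can be set at any time (optional, and subject to the image-format caveat noted earlier):

juju config nova-compute libvirt-image-backend=rbd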

Cinder

The cinder application will be deployed to a container on machine 1 using the cinder charm. The file cinder.yaml contains the configuration:

vim cinder.yaml
cinder:
  glance-api-version: 2
  block-device: None
  openstack-origin: distro

Deploy:

juju deploy --to lxd:1 --config cinder.yaml cinder

Add cinder to the cloud database:

juju deploy mysql-router cinder-mysql-router
juju add-relation cinder-mysql-router:db-router mysql-innodb-cluster:db-router
juju add-relation cinder-mysql-router:shared-db cinder:shared-db

Four relationships can be added at this time:

juju add-relation cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service
juju add-relation cinder:identity-service keystone:identity-service
juju add-relation cinder:amqp rabbitmq-server:amqp
juju add-relation cinder:image-service glance:image-service

The glance:image-service relation above will enable Cinder to use the Glance API (for example, to allow Cinder to perform volume snapshots of Glance images).
Like Glance, Cinder will use Ceph as its storage backend (hence block-device: None in the configuration file). This will be implemented via the cinder-ceph subordinate charm:

juju deploy cinder-ceph

Four relationships need to be added:

juju add-relation cinder-ceph:storage-backend cinder:storage-backend
juju add-relation cinder-ceph:ceph ceph-mon:client
juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
juju add-relation cinder:certificates vault:certificates

Swift proxy

The swift-proxy application will be deployed to a container on machine 3 using the swift-proxy charm. The file swift-proxy.yaml contains the configuration:

vim swift-proxy.yaml
swift-proxy:
  zone-assignment: auto
  swift-hash: "<uuid>"

swift-proxy needs to be supplied with a unique identifier (UUID) via the swift-hash option. Generate one with the uuid -v 4 command (you may need to install the uuid deb package first) and insert it into the file.
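
For example (the generated value will differ each time; paste your own into swift-proxy.yaml):

sudo apt install uuid
uuid -v 4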

Deploy:

juju deploy --to lxd:3 --config swift-proxy.yaml swift-proxy

We need two relationships:

juju add-relation swift-proxy:swift-storage swift-storage:swift-storage
juju add-relation swift-proxy:identity-service keystone:identity-service

NTP

The final component is the NTP client, used to synchronise the time on each cloud node. This is done via the ntp subordinate charm:

juju deploy ntp

The following relation will add one ntp unit alongside each ceph-osd unit, and thereby one on each of the four cloud nodes:

juju add-relation ceph-osd:juju-info ntp:juju-info

Final results and dashboard access

Once all the applications have been deployed and the relations between them added, we need to wait for the output of juju status to settle. The final result should be free of any error-like messages. Here is sample output (including relations) for a successful cloud deployment.
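
To include the relations section in the status output, use the --relations flag:

juju status --relations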
One of the milestones in an OpenStack deployment is logging in to the Horizon dashboard for the first time. You will need its IP address and the admin password.
Obtain the address like so:

juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' | head -1

The password is queried from Keystone:

juju run --unit keystone/0 leader-get admin_passwd

In this example the address is '10.0.0.210' and the password is 'kohy6shoh3diWav5'.

The dashboard URL is then:

http://10.0.0.210/horizon

The credentials are:

Domain: admin_domain
User name: admin
Password: kohy6shoh3diWav5

Once logged in, you should see something like this:
[Screenshot: Horizon dashboard landing page]
Enable access to the console:

juju config nova-cloud-controller console-access-protocol=novnc

Next step

You have successfully deployed OpenStack using Juju and MAAS. The next step is to make the cloud functional for users. This will involve setting up networks, images, and a user environment. Proceed now to Configure OpenStack.
