Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--5--Install OpenStack from the bundle

Contents:
Section 1 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--1--OpenStack Charms Deployment Guide

Section 2 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--2--Install MAAS

Section 3 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--3--Install Juju

Section 4 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--4--Install OpenStack

Section 5 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--5--Install OpenStack from the bundle

Section 6 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--6--Configure Vault and set the life cycle of digital certificates

Section 7 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--7--Juju offline deployment of bundles

Section 8 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--8--Configure OpenStack

Section 9 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--9--Network topology

Section 10 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--10--OpenStack highly available infrastructure in practice

Section 11 Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223--11--Access the Juju Dashboard

Install OpenStack from the bundle

A Juju charm bundle packages many charm deployments together, including all the required relations and configuration (see Charm bundles in the Juju documentation). OpenStack can therefore be installed from a bundle.

Tips

The Install OpenStack page shows how to use Juju to deploy, configure, and relate each application individually. That installation method is recommended for gaining a higher-level understanding of how OpenStack fits together; it also provides an opportunity to gain experience with Juju, which will in turn prepare you for managing the cloud after deployment.

The bundle shown here provides a minimal OpenStack cloud and assumes that MAAS is used as the backing cloud for Juju. Due to factors specific to the local environment (usually hardware related), the bundle may need to be modified before deployment. The bundle and its deployment are described in detail in the Charm Store entry openstack-base.
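If you want to edit the bundle locally before deploying, one way to fetch it, assuming the charm client snap is available (the target directory name is arbitrary), is roughly:

sudo snap install charm --classic
charm pull cs:openstack-base ./openstack-base     # download the bundle from the Charm Store
# edit ./openstack-base/bundle.yaml to suit your hardware before deploying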

Once the bundle configuration is confirmed, OpenStack can be deployed:

juju deploy /path/to/bundle/file

The time required to complete the installation depends on the hardware capabilities of the underlying MAAS nodes. Once it completes, continue with configuring OpenStack if you have not already done so.

Finally, once the cloud functionality is verified, please refer to the OpenStack Administrator's Guide for long-term guidance.

The following is the actual installation process:
Basic OpenStack Cloud
OpenStack Base #70
This bundle deploys a basic OpenStack cloud (Ussuri with Ceph Octopus) on Ubuntu 20.04 LTS (Focal), providing the Dashboard, Compute, Network, Block Storage, Object Storage, Identity, and Image services. See: Stable bundle.

This example bundle is designed to run on bare metal using Juju 2.x and MAAS (Metal as a Service); before using this bundle you need a MAAS deployment with at least 3 physical servers.

Some configuration options in the bundle may need to be adjusted before deployment to suit the specific hardware setup. For example, network device names and block device names can differ between servers, and the passwords should be changed to your own.

For example, the bundle.yaml file contains a section similar to the one below. The third "column" is the value to be set. Some servers may not have an eno2 device; they may have eth2 or some other network device name, and this needs to be adjusted before deployment. The same applies to the osd-devices entry, whose third column is the whitelist of devices used by ceph-osd. Make these adjustments by editing bundle.yaml before deploying.

variables:
  openstack-origin:    &openstack-origin     distro
  data-port:           &data-port            br-ex:eno2
  worker-multiplier:   &worker-multiplier    0.25
  osd-devices:         &osd-devices          /dev/sdb /dev/vdb
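
For instance, on servers whose second NIC is named eth2 and whose spare disk is /dev/sdb, the variables might be edited as follows (the values are illustrative; use the device names your own hardware reports):

variables:
  openstack-origin:    &openstack-origin     distro
  data-port:           &data-port            br-ex:eth2
  worker-multiplier:   &worker-multiplier    0.25
  osd-devices:         &osd-devices          /dev/sdb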

The server should have:

  • At least 8 GB of physical memory
  • Enough CPU cores to meet your capacity needs
  • Two disks (identified by /dev/sda and /dev/sdb); the first disk is used by MAAS for OS installation, the second is used for Ceph storage
  • Two wired network ports, on eno1 and eno2 (see below)

The server should have two physical network ports connected; the first port is used for general communication between services in the cloud, and the second port is used for "public" network traffic that comes from instances running in the cloud (North/South flow).

3 nodes are used for Nova Compute and Ceph, with RabbitMQ, MySQL, Keystone, Glance, Neutron, OVN, Nova Cloud Controller, Ceph RADOS Gateway, Cinder, and Horizon running in LXC containers.

All physical servers (but not the LXC containers) also have NTP installed and are configured to keep time synchronized.

Deploy

With a Juju controller bootstrapped on a MAAS cloud that has no network spaces defined, a basic non-HA cloud can be deployed with the following command:

juju deploy bundle.yaml

When the MAAS cluster does have network spaces, the spaces in which the charm applications are to be deployed must be specified explicitly. This is achieved with an overlay bundle. An example overlay YAML file is provided; it will most likely need to be edited before deployment so that it reflects the spaces that actually exist in the MAAS cluster. Example usage:

juju deploy bundle.yaml --overlay openstack-base-spaces-overlay.yaml
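
Purely as an illustration, such an overlay typically binds application endpoints to spaces defined in MAAS; the fragment below is a hypothetical sketch and the space names are placeholders, not values from this guide:

applications:
  keystone:
    bindings:
      "": oam-space            # default binding for all endpoints
      public: public-space
      internal: internal-space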

Issue a certificate

This bundle uses Vault to provide certificates to the services that support them. This secures communication between end users and cloud services, as well as communication between the cloud services themselves. Before configuration is complete and the cloud can be used, Vault needs to be unsealed and provided with a CA certificate. If this is not done, the following message will remain in juju status after deployment:

'certificates' missing, 'ovsdb' incomplete

For more information, please refer to the Vault and certificate lifecycle management appendix in the OpenStack Charms Deployment Guide. Example steps are also provided in the OpenStack high availability guide.
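
As a condensed sketch only (the referenced appendix is the authoritative procedure), the typical sequence looks roughly like this; the unit IP, key counts, and token placeholders are examples:

sudo snap install vault                                   # vault client used to talk to the vault unit
export VAULT_ADDR="http://<vault-unit-ip>:8200"
vault operator init -key-shares=5 -key-threshold=3        # prints unseal keys and an initial root token
vault operator unseal <unseal-key>                        # repeat with three different unseal keys
export VAULT_TOKEN=<root-token>
vault token create -ttl=10m                               # short-lived token for the charm
juju run-action --wait vault/leader authorize-charm token=<token-from-previous-step>
juju run-action --wait vault/leader generate-root-ca      # have Vault act as a self-signed root CA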

Scale out

The Nova Compute and Ceph services are designed to scale horizontally.
To scale Nova Compute horizontally:

juju add-unit nova-compute # Add one more unit
juju add-unit -n5 nova-compute # Add 5 more units

Scaling Ceph horizontally:

juju add-unit ceph-osd # Add one more unit
juju add-unit -n50 ceph-osd # add 50 more units

Note: Ceph can be scaled together with Nova Compute by using the --to option of add-unit:

juju add-unit --to <machine-id-of-compute-service> ceph-osd
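
For example (the machine number 3 is hypothetical), to grow compute and Ceph together on one new MAAS node:

juju add-unit nova-compute           # allocates a new machine, e.g. machine 3
juju add-unit --to 3 ceph-osd        # place the new ceph-osd unit on that same machine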

Note: The other services in this bundle can be scaled out in conjunction with the hacluster charm to produce scalable, highly available services; these are covered in separate bundles.

Make sure it works

To verify that your cloud is functioning correctly, download this bundle and run through the rest of this section.
All commands are executed from within the expanded bundle directory.
Install OpenStack client tools

In order to configure and use your cloud, you need to install the appropriate client tools:

sudo snap install openstackclients
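
If snaps are not an option in your environment, the client can usually also be installed from the Ubuntu archive instead (package name as found in Ubuntu 20.04):

sudo apt install python3-openstackclient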

Issue a certificate for the deployment

This bundle uses Vault to provide certificates to the services that support them. This secures communication between end users and cloud services, as well as communication between the cloud services themselves. Before configuration is complete and the cloud can be used, Vault needs to be unsealed and provided with a CA certificate.
For details, please refer to the Vault and certificate lifecycle management appendix in the OpenStack Charms Deployment Guide.

Access cloud

Check if you can access your cloud through the command line:

source openrc
openstack catalog list

You should get a complete list of all services registered in the cloud, including the identity, compute, image, and network services.

Upload an image

To run instances on your cloud, you need to upload an image to boot them from:

curl https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \
    openstack image create --public --container-format=bare \
        --disk-format=qcow2 focal

Note: In China, the Tsinghua mirror can be used to speed up the download:

curl https://mirrors.tuna.tsinghua.edu.cn/ubuntu-cloud-images/focal/current/focal-server-cloudimg-amd64.img | \
    openstack image create --public --container-format=bare \
        --disk-format=qcow2 focal

Images for other architectures can be obtained from Ubuntu cloud images. Make sure to use the image appropriate for your CPU architecture.

Note: The Tsinghua (China) mirror base URL is: https://mirrors.tuna.tsinghua.edu.cn/ubuntu-cloud-images/focal/current/

Note: For ARM 64-bit (arm64) images, you also need to configure the image to boot in UEFI mode:

curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-arm64.img | \
    openstack image create --public --container-format=bare \
        --disk-format=qcow2 --property hw_firmware_type=uefi focal
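
Whichever image you upload, you can confirm that it was registered with:

openstack image list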

Configure the network

For a quick test, we will set up an "external" network and a shared router ("provider-router"), which all tenants can use for public access to their instances:

For example (for private cloud):

openstack network create --external --provider-network-type flat \
    --provider-physical-network physnet1 ext_net

openstack subnet create --subnet-range 192.168.1.0/24 --no-dhcp \
    --gateway 192.168.1.152 --network ext_net \
    --allocation-pool start=192.168.1.190,end=192.168.1.230 ext

You will need to adjust these network configuration parameters to match the network that eno2 is connected to on each server; in a public cloud deployment these ports would be connected to a publicly addressable portion of the Internet.

We also need an internal network for the admin user, which instances will actually be attached to:

openstack network create internal

openstack subnet create --network internal \
    --subnet-range 172.16.16.0/24 \
    --dns-nameserver 8.8.8.8 \
    internal_subnet

openstack router create provider-router

openstack router set --external-gateway ext_net provider-router

openstack router add subnet provider-router internal_subnet
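
The resulting topology can be sanity-checked with, for example:

openstack network list
openstack router show provider-router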

Neutron provides a wide range of configuration options; please refer to the OpenStack Neutron documentation for details.

Configure a flavor

As of the OpenStack Newton release, default flavors are no longer created during installation. Therefore, before booting an instance, you need to create at least one flavor:

openstack flavor create --ram 2048 --disk 20 --ephemeral 20 m1.small

Boot example

First generate an SSH key pair so that you can access the instance after launching it:

mkdir -p ~/.ssh
touch ~/.ssh/id_rsa_cloud
chmod 600 ~/.ssh/id_rsa_cloud
openstack keypair create mykey > ~/.ssh/id_rsa_cloud

Note: You can also upload an existing public key to the cloud instead of generating a new one:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Now you can launch an instance on your cloud:

openstack server create --image focal --flavor m1.small --key-name mykey \
    --network internal focal-test
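
The instance should reach the ACTIVE state after a short while, which can be checked with:

openstack server show focal-test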

Attach a volume

First, create a 10 GiB volume in Cinder:

openstack volume create --size=10 <name-of-volume>

Then attach it to the instance we just booted:

openstack server add volume focal-test <name-of-volume>

Once you log in to the instance (see below for how to access it), you can use the attached volume; it will need to be formatted and mounted first.
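
As a rough sketch, once you are logged in (see the SSH steps below), formatting and mounting might look like this; the device name /dev/vdc is only an example, so check the actual name with lsblk first:

lsblk                           # identify the newly attached device
sudo mkfs.ext4 /dev/vdc         # format it (this erases any existing data on the device)
sudo mkdir -p /mnt/data
sudo mount /dev/vdc /mnt/data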

In order to access the instance you just booted in the cloud, you need to assign a floating IP address to this instance:

FIP=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack server add floating ip focal-test $FIP

Then allow access via SSH (and ping); you only need to perform the following steps once:

PROJECT_ID=$(openstack project list -f value -c ID \
       --domain admin_domain)

SECGRP_ID=$(openstack security group list --project $PROJECT_ID \
    | awk '/default/{print$2}')

openstack security group rule create $SECGRP_ID \
    --protocol icmp --ingress --ethertype IPv4

openstack security group rule create $SECGRP_ID \
    --protocol icmp --ingress --ethertype IPv6

openstack security group rule create $SECGRP_ID \
    --protocol tcp --ingress --ethertype IPv4 --dst-port 22

openstack security group rule create $SECGRP_ID \
    --protocol tcp --ingress --ethertype IPv6 --dst-port 22

After running these commands, you should be able to access the instance:

ssh ubuntu@$FIP

Log in to OpenStack Dashboard

First determine the IP address of OpenStack Dashboard:

juju status openstack-dashboard

Enter the following URL in your web browser: https://dashboard-ip/horizon/

Print your credentials:

source openrc
env | grep OS_
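
Assuming openrc exports the standard OS_* variables, the values needed for the Dashboard login form (domain, user name, password) can be pulled out like this:

echo "domain:   $OS_USER_DOMAIN_NAME"
echo "user:     $OS_USERNAME"
echo "password: $OS_PASSWORD"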

What's next?

Configuring and managing services on an OpenStack cloud is complicated; please refer to the OpenStack Management Guide for a complete reference on how to configure an OpenStack cloud according to your needs.


The commands I used for configuration are as follows:

juju deploy cs:bundle/openstack-base-70

The output looks approximately as follows:

Located bundle "cs:bundle/openstack-base-70"
Resolving charm: cs:ceph-mon-49
Resolving charm: cs:ceph-osd-304
Resolving charm: cs:ceph-radosgw-290
Resolving charm: cs:cinder-304
Resolving charm: cs:cinder-ceph-257
Resolving charm: cs:mysql-router-3
Resolving charm: cs:mysql-router-3
Resolving charm: cs:glance-298
Resolving charm: cs:mysql-router-3
Resolving charm: cs:keystone-317
Resolving charm: cs:mysql-router-3
Resolving charm: cs:mysql-innodb-cluster-1
Resolving charm: cs:neutron-api-287
Resolving charm: cs:neutron-api-plugin-ovn-1
Resolving charm: cs:mysql-router-3
Resolving charm: cs:nova-cloud-controller-346
Resolving charm: cs:nova-compute-319
Resolving charm: cs:mysql-router-3
Resolving charm: cs:ntp-41
Resolving charm: cs:openstack-dashboard-305
Resolving charm: cs:ovn-central-1
Resolving charm: cs:ovn-chassis-3
Resolving charm: cs:placement-12
Resolving charm: cs:mysql-router-3
Resolving charm: cs:rabbitmq-server-104
Resolving charm: cs:vault-40
Resolving charm: cs:mysql-router-3
Executing changes:
- upload charm cs:ceph-mon-49 for series focal
- deploy application ceph-mon on focal using cs:ceph-mon-49
- set annotations for ceph-mon
- upload charm cs:ceph-osd-304 for series focal
- deploy application ceph-osd on focal using cs:ceph-osd-304
- set annotations for ceph-osd
- upload charm cs:ceph-radosgw-290 for series focal
- deploy application ceph-radosgw on focal using cs:ceph-radosgw-290
- set annotations for ceph-radosgw
- upload charm cs:cinder-304 for series focal
- deploy application cinder on focal using cs:cinder-304
  added resource policyd-override
- set annotations for cinder
- upload charm cs:cinder-ceph-257 for series focal
- deploy application cinder-ceph on focal using cs:cinder-ceph-257
- set annotations for cinder-ceph
- upload charm cs:mysql-router-3 for series focal
- deploy application cinder-mysql-router on focal using cs:mysql-router-3
- set annotations for cinder-mysql-router
- deploy application dashboard-mysql-router on focal using cs:mysql-router-3
- set annotations for dashboard-mysql-router
- upload charm cs:glance-298 for series focal
- deploy application glance on focal using cs:glance-298
  added resource policyd-override
- set annotations for glance
- deploy application glance-mysql-router on focal using cs:mysql-router-3
- set annotations for glance-mysql-router
- upload charm cs:keystone-317 for series focal
- deploy application keystone on focal using cs:keystone-317
  added resource policyd-override
- set annotations for keystone
- deploy application keystone-mysql-router on focal using cs:mysql-router-3
- set annotations for keystone-mysql-router
- upload charm cs:mysql-innodb-cluster-1 for series focal
- deploy application mysql-innodb-cluster on focal using cs:mysql-innodb-cluster-1
  added resource mysql-shell
- set annotations for mysql-innodb-cluster
- upload charm cs:neutron-api-287 for series focal
- deploy application neutron-api on focal using cs:neutron-api-287
  added resource policyd-override
- set annotations for neutron-api
- upload charm cs:neutron-api-plugin-ovn-1 for series focal
- deploy application neutron-api-plugin-ovn on focal using cs:neutron-api-plugin-ovn-1
- set annotations for neutron-api-plugin-ovn
- deploy application neutron-mysql-router on focal using cs:mysql-router-3
- set annotations for neutron-mysql-router
- upload charm cs:nova-cloud-controller-346 for series focal
- deploy application nova-cloud-controller on focal using cs:nova-cloud-controller-346
  added resource policyd-override
- set annotations for nova-cloud-controller
- upload charm cs:nova-compute-319 for series focal
- deploy application nova-compute on focal using cs:nova-compute-319
- set annotations for nova-compute
- deploy application nova-mysql-router on focal using cs:mysql-router-3
- set annotations for nova-mysql-router
- upload charm cs:ntp-41 for series focal
- deploy application ntp on focal using cs:ntp-41
- set annotations for ntp
- upload charm cs:openstack-dashboard-305 for series focal
- deploy application openstack-dashboard on focal using cs:openstack-dashboard-305
  added resource policyd-override
  added resource theme
- set annotations for openstack-dashboard
- upload charm cs:ovn-central-1 for series focal
- deploy application ovn-central on focal using cs:ovn-central-1
- set annotations for ovn-central
- upload charm cs:ovn-chassis-3 for series focal
- deploy application ovn-chassis on focal using cs:ovn-chassis-3
- set annotations for ovn-chassis
- upload charm cs:placement-12 for series focal
- deploy application placement on focal using cs:placement-12
- set annotations for placement
- deploy application placement-mysql-router on focal using cs:mysql-router-3
- set annotations for placement-mysql-router
- upload charm cs:rabbitmq-server-104 for series focal
- deploy application rabbitmq-server on focal using cs:rabbitmq-server-104
- set annotations for rabbitmq-server
- upload charm cs:vault-40 for series focal
- deploy application vault on focal using cs:vault-40
  added resource core
  added resource vault
- set annotations for vault
- deploy application vault-mysql-router on focal using cs:mysql-router-3
- set annotations for vault-mysql-router
- add new machine 0
- add new machine 1
- add new machine 2
- add relation nova-compute:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:identity-service - keystone:identity-service
- add relation glance:identity-service - keystone:identity-service
- add relation neutron-api:identity-service - keystone:identity-service
- add relation neutron-api:amqp - rabbitmq-server:amqp
- add relation glance:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:image-service - glance:image-service
- add relation nova-compute:image-service - glance:image-service
- add relation nova-cloud-controller:cloud-compute - nova-compute:cloud-compute
- add relation nova-cloud-controller:amqp - rabbitmq-server:amqp
- add relation openstack-dashboard:identity-service - keystone:identity-service
- add relation nova-cloud-controller:neutron-api - neutron-api:neutron-api
- add relation cinder:image-service - glance:image-service
- add relation cinder:amqp - rabbitmq-server:amqp
- add relation cinder:identity-service - keystone:identity-service
- add relation cinder:cinder-volume-service - nova-cloud-controller:cinder-volume-service
- add relation cinder-ceph:storage-backend - cinder:storage-backend
- add relation ceph-mon:client - nova-compute:ceph
- add relation nova-compute:ceph-access - cinder-ceph:ceph-access
- add relation ceph-mon:client - cinder-ceph:ceph
- add relation ceph-mon:client - glance:ceph
- add relation ceph-osd:mon - ceph-mon:osd
- add relation ntp:juju-info - nova-compute:juju-info
- add relation ceph-radosgw:mon - ceph-mon:radosgw
- add relation ceph-radosgw:identity-service - keystone:identity-service
- add relation placement - keystone
- add relation placement - nova-cloud-controller
- add relation keystone:shared-db - keystone-mysql-router:shared-db
- add relation cinder:shared-db - cinder-mysql-router:shared-db
- add relation glance:shared-db - glance-mysql-router:shared-db
- add relation nova-cloud-controller:shared-db - nova-mysql-router:shared-db
- add relation neutron-api:shared-db - neutron-mysql-router:shared-db
- add relation openstack-dashboard:shared-db - dashboard-mysql-router:shared-db
- add relation placement:shared-db - placement-mysql-router:shared-db
- add relation vault:shared-db - vault-mysql-router:shared-db
- add relation keystone-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation cinder-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation nova-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation glance-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation neutron-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation dashboard-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation placement-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation vault-mysql-router:db-router - mysql-innodb-cluster:db-router
- add relation neutron-api-plugin-ovn:neutron-plugin - neutron-api:neutron-plugin-api-subordinate
- add relation ovn-central:certificates - vault:certificates
- add relation ovn-central:ovsdb-cms - neutron-api-plugin-ovn:ovsdb-cms
- add relation neutron-api:certificates - vault:certificates
- add relation ovn-chassis:nova-compute - nova-compute:neutron-plugin
- add relation ovn-chassis:certificates - vault:certificates
- add relation ovn-chassis:ovsdb - ovn-central:ovsdb
- add relation vault:certificates - neutron-api-plugin-ovn:certificates
- add relation vault:certificates - cinder:certificates
- add relation vault:certificates - glance:certificates
- add relation vault:certificates - keystone:certificates
- add relation vault:certificates - nova-cloud-controller:certificates
- add relation vault:certificates - openstack-dashboard:certificates
- add relation vault:certificates - placement:certificates
- add relation vault:certificates - ceph-radosgw:certificates
- add unit ceph-osd/0 to new machine 0
- add unit ceph-osd/1 to new machine 1
- add unit ceph-osd/2 to new machine 2
- add unit nova-compute/0 to new machine 0
- add unit nova-compute/1 to new machine 1
- add unit nova-compute/2 to new machine 2
- add lxd container 0/lxd/0 on new machine 0
- add lxd container 1/lxd/0 on new machine 1
- add lxd container 2/lxd/0 on new machine 2
- add lxd container 0/lxd/1 on new machine 0
- add lxd container 1/lxd/1 on new machine 1
- add lxd container 2/lxd/1 on new machine 2
- add lxd container 0/lxd/2 on new machine 0
- add lxd container 0/lxd/3 on new machine 0
- add lxd container 1/lxd/2 on new machine 1
- add lxd container 2/lxd/2 on new machine 2
- add lxd container 1/lxd/3 on new machine 1
- add lxd container 0/lxd/4 on new machine 0
- add lxd container 1/lxd/4 on new machine 1
- add lxd container 0/lxd/5 on new machine 0
- add lxd container 1/lxd/5 on new machine 1
- add lxd container 2/lxd/3 on new machine 2
- add lxd container 2/lxd/4 on new machine 2
- add lxd container 2/lxd/5 on new machine 2
- add lxd container 0/lxd/6 on new machine 0
- add unit ceph-mon/0 to 0/lxd/0
- add unit ceph-mon/1 to 1/lxd/0
- add unit ceph-mon/2 to 2/lxd/0
- add unit ceph-radosgw/0 to 0/lxd/1
- add unit cinder/0 to 1/lxd/1
- add unit glance/0 to 2/lxd/1
- add unit keystone/0 to 0/lxd/2
- add unit mysql-innodb-cluster/0 to 0/lxd/3
- add unit mysql-innodb-cluster/1 to 1/lxd/2
- add unit mysql-innodb-cluster/2 to 2/lxd/2
- add unit neutron-api/0 to 1/lxd/3
- add unit nova-cloud-controller/0 to 0/lxd/4
- add unit openstack-dashboard/0 to 1/lxd/4
- add unit ovn-central/0 to 0/lxd/5
- add unit ovn-central/1 to 1/lxd/5
- add unit ovn-central/2 to 2/lxd/3
- add unit placement/0 to 2/lxd/4
- add unit rabbitmq-server/0 to 2/lxd/5
- add unit vault/0 to 0/lxd/6
Deploy of bundle completed.
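
Note that the deploy command returns as soon as the model changes have been queued; the charms continue installing and configuring on the MAAS nodes for some time afterwards. Progress can be followed with, for example:

watch -n 10 --color juju status --color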
