OpenStack Private Cloud Deployment Based on MOS 9.0

The private cloud deployment procedures in this manual assume that the MOS 9.0 source (the Fuel master node) has already been built. If no MOS 9.0 Fuel master is available, complete that step first.

1.  Network environment requirements

1.1 Network List

1. PXE network: used by the other nodes in the OpenStack cluster to fetch image files from the master node over the network and to complete OS installation and OpenStack deployment.

2. Storage network: carries data-storage traffic within the OpenStack environment.

3. Management network: used for managing cloud instances in the OpenStack environment.

4. Private network: tenant networks created by users themselves.

5. Public network: the externally facing network through which OpenStack can offer public cloud services. Since this guide builds a private cloud, it is not covered in detail.

1.2 Network Equipment List

1. TL-R483 router, quantity 1: provides the public network gateway.

2. H3C SMB-S1824G gigabit switch, quantity 2 or more: used to segment the networks and to configure gateways and trunk ports.

3. Network cables: as needed.

Note:

- In the private cloud, the router only provides the public network gateway; its model can be chosen according to the actual situation.

- At minimum, all switch ports must be gigabit and managed (trunk ports must be configurable).

1.3 Network Deployment

Network wiring:

- Switch A carries the cluster's PXE network over dedicated lines. The PXE network must be reachable at Layer 2 and should generally be interconnected with the network where the user's host resides, for convenient management.

- Switch B carries the cluster's user, storage, and management networks. The user network must additionally be interconnected with the user host's network (routing is usually set up on the top-level switch, which may require support from the network department).

- Each port on switch B carries multiple networks, so configure trunk ports as required.

- The router only provides the external network gateway. The control node must be connected to the external network; the compute nodes need not be. In addition, the router must map the OpenStack external IP to an address on the user host's network; otherwise users cannot reach the Horizon dashboard.

- On every server, wire the networks in the same order: the first NIC carries the PXE network, the second carries the management, storage, and private networks, and the third carries the external (public) network.

With this layout, the user's host can reach every server in the OpenStack cluster, access the Fuel interface, and manage the cloud environment; it can also reach the cloud instances directly over the OpenStack user network, and, through the router's IP mapping, open the Horizon dashboard to manage them. Following this scheme, users can deploy the OpenStack private cloud network with minimal changes to their existing network while still meeting future management and usage needs.
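If the gateway supports custom firewall rules (for example, a Linux box acting as the router), the IP mapping described above can be expressed as DNAT rules. The following is only a sketch: the interface name and the office-network address 192.168.1.100 are hypothetical, and 172.16.0.3 is the OpenStack external address used later in this guide.

```shell
# Sketch: DNAT on a Linux gateway, forwarding an office-network address
# to the OpenStack external (Horizon) address. All values are examples.
EXT_IF=eth0                 # interface facing the office network (assumption)
OFFICE_IP=192.168.1.100     # address users will browse to (assumption)
HORIZON_IP=172.16.0.3       # OpenStack external address from this guide

# Forward HTTP traffic arriving at the office address to Horizon
iptables -t nat -A PREROUTING -i "$EXT_IF" -d "$OFFICE_IP" -p tcp --dport 80 \
  -j DNAT --to-destination "$HORIZON_IP":80

# Rewrite the source so replies return through this gateway
iptables -t nat -A POSTROUTING -d "$HORIZON_IP" -p tcp --dport 80 -j MASQUERADE
```

A hardware router such as the TL-R483 exposes the same idea through its "virtual server" / port-forwarding page rather than iptables.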

Here is an example wiring diagram:

2. Server Requirements

Among the servers used to build the environment, a server acting as a control node needs at least 3 network interfaces (NICs), and a server acting as a compute node needs at least 2. The recommended configurations are as follows:

- Master node (master): ordinary desktop; 4-core CPU, 4 GB RAM, 500 GB disk.

- Control node (controller): R730; 32-core CPU, >= 32 GB RAM, 3 x 278 GB disks.

- Compute node (compute): R730; 32-core CPU, >= 64 GB RAM, 3.3 TB disk.

These figures can be adjusted to actual needs.

3. Create a new OpenStack environment

3.1 Create a New Cloud Environment

1. Create a new OpenStack environment with a custom name.

 

2. Click New OpenStack Environment.

 

3. Enter the name and click Next.

 

4. Keep the default settings and click Next.

 

5. Ceph is used for storage: select Ceph and click Next.

 

6. Keep the default settings and click Next.

 

7. Keep the default settings and click Next.

 

8. Finally, click Create.

3.2 Configure the Cloud Environment

Once the environment has been created, open it.

Click Settings -> Compute. Note: check KVM, configure as shown in the figure below, and then save the settings.

 

Click Settings -> Storage and change the Ceph object replication factor to 2 or 3 as needed.
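After deployment, the replication factor can be confirmed or changed from the controller's command line. This is a sketch assuming the standard Ceph CLI; `rbd` is used as an example pool name, and the actual pool names depend on the deployment.

```shell
# List the pools Ceph created for this environment
ceph osd pool ls

# Check the replication factor (size) of a pool, e.g. "rbd"
ceph osd pool get rbd size

# If needed, change it to match the value chosen in Fuel (2 or 3)
ceph osd pool set rbd size 3
```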

 

 

Click Networks -> Other to open the page below.

 

Change the first entry in the NTP server list to 10.1.211.29 (the master node's IP), remove the other two entries with the minus button on the right, and then click Save Settings.
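Once the nodes are deployed, NTP synchronization against the master can be spot-checked from any node. This is a sketch assuming the classic ntp tools are installed on the node:

```shell
# List the NTP peers this node is using. The master (10.1.211.29)
# should appear in the list, and an asterisk (*) marks the peer
# currently selected as the sync source.
ntpq -p
```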

After saving, go to the Dashboard tab.

 

Add nodes

Click Add Nodes.

 

Check the controller and Ceph OSD roles. Then select the first node in the list below and click Apply Changes at the top right.

 

Click Add Nodes at the top right to add the second node.

 

Check the compute and Ceph OSD roles. Then select the remaining node below and click Apply Changes at the top right.

Configure the interfaces

After assigning roles, open each node and configure its interfaces.

Click the node, then click the settings button highlighted in the red box in the figure below to open the node's settings.

 

Then click Configure Interfaces. Because the controller node must reach the public network, its interfaces are laid out as follows: the PXE network gets NIC eno1 to itself, eno2 is shared by the management, storage, and private (i.e. user) networks, and eno3 is dedicated to the public network. As shown below:

 

Control Node Interface Configuration

Drag and drop the logical networks between the physical ports until they match the layout above, then click Apply.

Then click the settings button of the second node to open its interface settings, as shown below.

 

Click Configure Interfaces.

 

Compute Node Interface Configuration

As with the first node, drag and drop the logical networks between the physical ports until they match the layout above.

Configure the disks

In general, apart from the space occupied by the operating system, allocate only about 100 GB to Virtual Storage and assign all remaining space to Ceph.

 

Verify the network

Then click Apply, open the Networks tab, click Connectivity Check, and click Verify Networks.

 

After about five minutes, a successful verification looks like this:

 

If verification fails, check the network configuration of each node and verify again.
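Verification failures are most often caused by wiring mistakes or switch ports that were not configured as trunks. Two quick checks that can be run on a node (a sketch; the interface names are the ones used earlier in this guide):

```shell
# Confirm the link is up on each NIC used by the deployment
ip link show eno1
ip link show eno2

# Watch for VLAN-tagged frames on the shared NIC. If none appear,
# the switch port carrying it is probably not configured as a trunk.
tcpdump -e -n -i eno2 vlan -c 10
```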

Deploy

Go to the Dashboard tab and click Deploy Changes to start the deployment.

 

Deployment takes roughly one to two hours to complete.

 

Click Horizon to open the OpenStack management interface; in this example the address is http://172.16.0.3 . To reach it from the office network, configure IP mapping on the router, mapping this address to an IP on the office network, and then access it through the mapped IP.

 

The default username and password are both admin.

At this point, the OpenStack private cloud has been deployed successfully.
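As a final smoke test, the deployment can be exercised from the controller's command line. This is a sketch assuming the admin credentials file that MOS places on controllers (commonly /root/openrc; adjust the path if yours differs):

```shell
# Load admin credentials (MOS default location; an assumption here)
source /root/openrc

# List the registered OpenStack services; all core services
# (keystone, nova, neutron, glance, cinder, ...) should appear
openstack service list

# Check that compute services on all nodes report state "up"
nova service-list
```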
