OpenStack Rocky (R) Installation Notes

all-in-one
System version: Ubuntu 18.04.2 LTS (Bionic Beaver)
OpenStack version: Rocky (R release)
URL: https://docs.openstack.org/install-guide/openstack-services.html

------------------------------------------------------------------------------------
Basic environment preparation
1. Security
Password convention: component name + 123, for example nova123
2. Hosts file
Add the same entry to the /etc/hosts file on every node, format: ip controller
3. NTP time synchronization
chrony can be used as the NTP service; check the sources with chronyc sources, or synchronize manually with ntpdate ntp1.aliyun.com.
Common NTP servers:
NTP fast time service domain: cn.ntp.org.cn
ntp1.aliyun.com
Command: apt -y install chrony
In /etc/chrony/chrony.conf, comment out the pool lines and add: server cn.ntp.org.cn iburst
# systemctl start chrony && systemctl enable chrony
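As a quick check that time synchronization is working (using the chronyc command mentioned above; not an extra required step):
# chronyc sources -v ## the entry marked '^*' is the server currently being synced to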
4. Package source preparation
Use a domestic mirror such as 163 for faster downloads.
Commands:
# apt install software-properties-common
# add-apt-repository cloud-archive:rocky ## only usable on Ubuntu 18.04
# apt update && apt -y dist-upgrade ## reboot if the kernel was upgraded
# apt install -y python-openstackclient ## provides the openstack command-line client (python-openstackclient)
5. Database
MariaDB is used as the relational database.
Commands:
# apt -y install mariadb-server python-pymysql
Create and edit the configuration file /etc/mysql/mariadb.conf.d/99-openstack.cnf:
[mysqld]
bind-address = 192.168.137.134 ## controller node IP address

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# systemctl start mariadb && systemctl enable mariadb
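Optionally, confirm MariaDB is now listening on the controller address (a quick check, assuming the bind-address above):
# ss -tlnp | grep 3306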
6. Message queue
RabbitMQ is the message queue. It is mainly used to make applications asynchronous and decoupled, and it also provides message buffering and message distribution. RabbitMQ uses AMQP, a binary protocol, and listens on port 5672 by default.
# apt -y install rabbitmq-server
# rabbitmqctl add_user openstack rabbit123 ## to change the password later: rabbitmqctl change_password openstack rabbit123
# rabbitmqctl set_permissions openstack ".*" ".*" ".*" ## set_permissions [-p <vhost>] <user> <conf> <write> <read>; allows the openstack user to configure, write and read
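Optionally, a quick sanity check of the user and permissions just created (not part of the original steps):
# rabbitmqctl list_users
# rabbitmqctl list_permissions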

7. Memcached
Memcached is a high-performance distributed in-memory object caching system for dynamic web applications, used to reduce database load. It caches data and objects in memory to cut down the number of reads against the database, speeding up dynamic, database-driven sites.
Commands:
# apt -y install memcached python-memcache
In /etc/memcached.conf, change -l 127.0.0.1 to -l 192.168.137.134 (the management/controller node IP address)
# systemctl start memcached && systemctl enable memcached
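To confirm the bind address change took effect (a quick check, assuming the controller IP above):
# ss -tlnp | grep 11211 ## memcached should be listening on 192.168.137.134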
8. etcd
The name "etcd" comes from two ideas: the unix "/etc" directory and "d" for distributed. The "/etc" directory stores configuration data for a single system, while etcd stores configuration information for large-scale distributed systems; hence a distributed "/etc" is "etcd". etcd stores metadata in a consistent and fault-tolerant way. Distributed systems use etcd as a consistent key-value store for configuration management, service discovery and coordinating distributed work. Common patterns built on etcd include distributed leader election, distributed locks and monitoring machine liveness.
Commands:
# apt -y install etcd
Modify the configuration file /etc/default/etcd:
ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://192.168.137.134:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.137.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.137.134:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.137.134:2379"
# systemctl start etcd && systemctl enable etcd
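A minimal read/write check (a sketch; assumes the packaged etcdctl client with the v3 API and the client URL configured above):
# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.137.134:2379 put foo bar
# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.137.134:2379 get foo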

--------------------------------------------------------------------------------
OpenStack component installation and configuration

Keystone
1. Keystone (the OpenStack Identity Service) is the part of the OpenStack framework responsible for authentication, the token service, rules and service catalog functions; it implements the OpenStack Identity API. Keystone acts like a service bus, or a registry, for the whole OpenStack framework: other services register their Endpoints (service access URLs) with Keystone, and whenever services call each other they must first authenticate with Keystone and obtain the target service's Endpoint in order to locate it. Core concepts: User, Credentials, Authentication, Token, Tenant, Service, Endpoint, Role.
https://www.cnblogs.com/yuki-lau/archive/2013/01/04/2843918.html
2. Add the database
# mysql
> create database keystone; ## create the keystone database
> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'keystone123'; ## grant the keystone user access to the keystone database
> grant all privileges on keystone.* to 'keystone'@'%' identified by 'keystone123';
> flush privileges;
> delete from mysql.user where host='%' and user='keystone'; ## delete the corresponding user entry if needed
> select * from mysql.user where user='keystone'; ## check
3. Installation and configuration
# apt install -y keystone apache2 libapache2-mod-wsgi
Modify the configuration file /etc/keystone/keystone.conf:
[database]
connection = mysql+pymysql://keystone:keystone123@controller/keystone
[token]
provider = fernet
# su -s /bin/sh -c "keystone-manage db_sync" keystone ## populates the identity service database (44 keystone tables); no output
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone ## initialize the Fernet key repositories
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# keystone-manage bootstrap --bootstrap-password admin123 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Modify the configuration file /etc/apache2/apache2.conf:
ServerName controller ## this line is not present by default
# systemctl start apache2 && systemctl enable apache2 ## start the service and enable it at boot; the same applies to every service below
Set the environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Create a domain, projects, a user and a role:
# openstack domain create --description "An Example Domain" example ## has output; create a domain
# openstack project create --domain default --description "Service Project" service ## has output; create a project
# openstack project create --domain default --description "Demo Project" myproject ## has output; create a second project
# openstack user create --domain default --password-prompt myuser ## has output; create a user, password myuser123
# openstack role create myrole ## has output; create a role
# openstack role add --project myproject --user myuser myrole ## no output; add the role to the user
Verification:
# unset OS_AUTH_URL OS_PASSWORD ## remove the auth URL and password environment variables
# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue ## request a token as admin
# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name myproject --os-username myuser token issue ## request a token as myuser
Create the environment variable scripts:
Create admin-openrc.sh and demo-openrc.sh, and add execute permission with chmod +x admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin ## demo-openrc.sh uses myproject
export OS_USERNAME=admin ## demo-openrc.sh uses myuser
export OS_PASSWORD=admin123 ## demo-openrc.sh uses myuser123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# . admin-openrc.sh ## source the environment variables
# openstack token issue ## generate a token
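For reference, a full demo-openrc.sh assembled from the comments above would look like this (a sketch with the demo values substituted):
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2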
--------------------------------------------------------

Glance
1. Glance is the OpenStack image service. It provides lookup, registration and retrieval services for virtual machine images. Glance itself does not implement image storage; it is only a proxy, acting as the link between the image storage backends and the other OpenStack components.
2. Add the database
# mysql
> create database glance;
> grant all privileges on glance.* to 'glance'@'localhost' identified by 'glance123';
> grant all privileges on glance.* to 'glance'@'%' identified by 'glance123';
> flush privileges;
3. Add the user, service and endpoints
# openstack user create --domain default --password-prompt glance ## password glance123
# openstack role add --project service --user glance admin ## no output
# openstack service create --name glance --description "OpenStack Image" image ## create the image service
# openstack endpoint create --region RegionOne image public http://controller:9292 ## create the public endpoint
# openstack endpoint create --region RegionOne image internal http://controller:9292 ## internal
# openstack endpoint create --region RegionOne image admin http://controller:9292 ## admin
4. Installation and configuration
# apt install glance -y
Modify the configuration file /etc/glance/glance-api.conf:
[database]
connection = mysql+pymysql://glance:glance123@controller/glance ## add this line under [database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123 ## only this needs changing: the password of the glance user
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Modify the configuration file /etc/glance/glance-registry.conf:
[database]
connection = mysql+pymysql://glance:glance123@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123
[paste_deploy]
flavor = keystone
# su -s /bin/sh -c "glance-manage db_sync" glance ## has output; populates 15 tables
# systemctl restart glance-api glance-registry && systemctl enable glance-api glance-registry ## restart so the new configuration takes effect, and enable the services at boot
5. Verify
# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img ## download the cirros test image, about 13M
# openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public ## create the image
# openstack image list ## check that the image exists
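Optionally, show the image details and confirm its status is active (not an original step):
# openstack image show cirros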

--------------------------------------------------------

Nova-Controller
1. Control node (reference: http://www.cnblogs.com/horizonli/p/5172216.html)
The control node hosts network control, scheduling, the service APIs, storage volume management, database management, identity management, image management and the Dashboard-related services; to support these services, the node also needs SQL, MQ and NTP services installed.
2. Database configuration
# mysql
> create database nova_api;
> create database nova;
> create database nova_cell0;
> create database placement;
> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova123'; ## grant in the same way for the nova, nova_cell0 and placement databases
> grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova123';
3. Create the compute service credentials
# openstack user create --domain default --password-prompt nova ## password nova123
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

# openstack user create --domain default --password-prompt placement
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778

4. Nova installation and configuration
# apt install -y nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api
Configuration file /etc/nova/nova.conf:
[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api

[database]
connection = mysql+pymysql://nova:nova123@controller/nova

[placement_database]
connection = mysql+pymysql://placement:placement123@controller/placement

[DEFAULT]
transport_url = rabbit://openstack:rabbit123@controller
my_ip = 192.168.137.134
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123 ## the password set earlier for the placement user

# su -s /bin/sh -c "nova-manage api_db sync" nova ## no output; populates the nova_api and placement databases, 32 tables
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova ## no output; registers the cell0 database
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova ## outputs a uuid
# su -s /bin/sh -c "nova-manage db sync" nova ## 110 tables
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova ## verify cell0 and cell1
# systemctl restart nova-api nova-consoleauth nova-scheduler nova-conductor nova-novncproxy

------------------------
Nova-Compute (compute node)
1. The compute node runs the virtual machine instances; by default the virtualization engine is KVM. The compute node also needs the network agent installed so that instances can connect to the virtual networks through it.
2. Installation and configuration
# apt install nova-compute -y
Modify the configuration file /etc/nova/nova.conf; because compute and control run on the same machine here, only one extra line needs to be added under [vnc]:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html ## the extra line compared with the control node's [vnc] section
# egrep -c '(vmx|svm)' /proc/cpuinfo ## check virtualization support; if the result is 1 or greater, the machine supports hardware acceleration
If the result is 0, modify the configuration file /etc/nova/nova-compute.conf:
[libvirt]
virt_type = qemu
# systemctl restart nova-compute
3. Verify
# openstack compute service list --service nova-compute ## view compute node information
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova ## discover compute nodes
To discover hosts automatically, modify the controller node configuration file /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300

Verify that the installation so far has no problems:
# openstack compute service list ## view the services
# openstack catalog list ## view the API endpoints
# openstack image list ## view the images
# nova-status upgrade check ## check that the cells and placement API are working


------------------------------
Neutron networking (provider network)
1. Control node
# mysql
> CREATE DATABASE neutron;
> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron123';
> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
# openstack user create --domain default --password-prompt neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
Install the packages:
# apt -y install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Configuration file /etc/neutron/neutron.conf:
[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:rabbit123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Modify the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
Modify the configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# sysctl -a | grep net.bridge.bridge-nf-call-ip
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
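If either value is missing or 0, the bridge netfilter module is probably not loaded; a sketch of how to enable it (assumes the br_netfilter kernel module is available on this kernel):
# modprobe br_netfilter
# sysctl -w net.bridge.bridge-nf-call-iptables=1
# sysctl -w net.bridge.bridge-nf-call-ip6tables=1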
Modify the configuration file /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Modify the configuration file /etc/neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = metadata123 ## set the shared secret
Modify the configuration file /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata123
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron ## populates the neutron database, 167 tables; has output
# systemctl restart nova-api
# systemctl restart neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

2. Compute node
# apt install -y neutron-linuxbridge-agent
# systemctl restart nova-compute neutron-linuxbridge-agent


3. Verify
# openstack extension list --network ## list the loaded extensions, to verify that the neutron-server process started successfully
# openstack network agent list ## list the network agents

--------------------------------------------------------
Horizon-Dashboard
1. Horizon is the web control panel for managing and controlling OpenStack services; it can manage instances and images, create key pairs, attach volumes to instances, and operate Swift containers. In addition, users can access an instance directly from the control panel via a terminal (console) or VNC.

2. Installation and configuration
# apt -y install openstack-dashboard
Modify the configuration file /etc/openstack-dashboard/local_settings.py:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
Configuration file /etc/apache2/conf-available/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}
# systemctl restart apache2
Access the dashboard at http://controller/horizon
---------------------------------------------
cinder-controller
1. Provides block storage services for OpenStack.
2. Add the database
# mysql
> create database cinder;
> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'cinder123';
> grant all privileges on cinder.* to 'cinder'@'%' identified by 'cinder123';
# openstack user create --domain default --password-prompt cinder
# openstack role add --project service --user cinder admin
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s


3. Installation and configuration
# apt install cinder-api cinder-scheduler -y
Modify the configuration file /etc/cinder/cinder.conf:
[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:rabbit123@controller
auth_strategy = keystone
my_ip = 192.168.137.134
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder123
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
# su -s /bin/sh -c "cinder-manage db sync" cinder ## populates the cinder database, 35 tables
Modify the configuration file /etc/nova/nova.conf:
[cinder]
os_region_name = RegionOne
# systemctl restart nova-api cinder-scheduler apache2

----------------------------
cinder-storage (storage node)
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
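Optionally, confirm the volume group was created before editing the configuration (a quick check):
# vgs cinder-volumes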
Modify the configuration file /etc/cinder/cinder.conf:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[DEFAULT]
enabled_backends = lvm
glance_api_servers = http://controller:9292
# systemctl restart tgt cinder-scheduler cinder-volume
-----------------
cinder-backup (optional)
# apt install cinder-backup -y
Modify the configuration file /etc/cinder/cinder.conf:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL ## find SWIFT_URL with: openstack catalog show object-store
Verify:
# openstack volume service list

--------------------------------------------------------
Create an instance
1. Create the network
# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
# openstack subnet create --network provider --allocation-pool start=203.0.113.101,end=203.0.113.250 --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider
# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
# openstack security group rule create --proto icmp default
# openstack security group rule create --proto tcp --dst-port 22 default

# openstack flavor list
# openstack image list
# openstack network list
# openstack security group list
# openstack server create --flavor m1.nano --image cirros --nic net-id=02b2e397-ebe0-4656-9a43-12bcfcc7b243 --security-group default provider-instance
# openstack server list
# openstack console url show provider-instance
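Once the console URL is shown, a simple status check (a sketch; uses the server name created above):
# openstack server show provider-instance ## the status field should be ACTIVE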

Origin www.cnblogs.com/guoguodelu/p/10929289.html