Installing OpenStack (Stein) with OpenStack-Ansible

Goal

Deploy OpenStack across three machines; all the steps are below. The operating system is Ubuntu 18.04.

Official guide

https://docs.openstack.org/project-deploy-guide/openstack-ansible/stein/deploymenthost.html
Follow the official guide step by step.

Step 1: Prepare the deployment host

Since this is Ubuntu, you can type the commands exactly as given. Pay attention to step 5, "Configure NTP to synchronize with a suitable time source."
Time synchronization is required, but the official documentation is inconsistent here: step 4 asks you to install ntp, while other official docs ask for chrony, and the two cannot coexist. I followed the newer wording and used NTP on the deployment host. The official guide does not explain how to configure NTP, so after some exploration the steps are as follows:

  1. Edit /etc/ntp.conf.

  2. Allow the other nodes to query time from this host.
    Note: the directive restrict default kod nomodify notrap nopeer noquery denies essentially every operation to hosts by default.
    The controller node must allow the compute nodes to query it, so remove noquery and nopeer from that restrict line (see the sketch after this list).

  3. On Ubuntu 18.04 the ntp service is started automatically, so all that is needed is systemctl restart ntp.service.

  4. Then check with ntpq -p; the output looks like the following:
    remote           refid      st t  when poll reach   delay   offset  jitter
    ==============================================================================
    0.ubuntu.pool.n  .POOL.     16 p     -   64     0    0.000    0.000   0.000
    1.ubuntu.pool.n  .POOL.     16 p     -   64     0    0.000    0.000   0.000
    2.ubuntu.pool.n  .POOL.     16 p     -   64     0    0.000    0.000   0.000
    3.ubuntu.pool.n  .POOL.     16 p     -   64     0    0.000    0.000   0.000
    ntp.ubuntu.com   .POOL.     16 p     -   64     0    0.000    0.000   0.000
    ... (remaining lines omitted)
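For reference, here is a minimal sketch of the relevant /etc/ntp.conf lines, assuming the default Ubuntu pool servers and a 192.168.0.0/24 management subnet; adjust both to your environment:

# /etc/ntp.conf (excerpt) -- minimal sketch, adjust to your environment

# Upstream time sources (Ubuntu 18.04 defaults)
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool ntp.ubuntu.com

# Default policy with noquery and nopeer removed, so the other nodes
# are allowed to query this server
restrict -4 default kod nomodify notrap
restrict -6 default kod nomodify notrap
restrict 127.0.0.1
restrict ::1

# Explicit entry for the management subnet (assumption: 192.168.0.0/24)
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap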

Then configure passwordless SSH login; ssh-copy-id is all you need (see the sketch below).
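For completeness, a minimal sketch of setting up passwordless login from the deployment host to a target host (the address 192.168.0.2 is just an example):

# On the deployment host: generate a key pair if one does not exist yet
ssh-keygen -t rsa -b 4096

# Copy the public key to each target host (example IP)
ssh-copy-id root@192.168.0.2

# Verify that login no longer asks for a password
ssh root@192.168.0.2 hostname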

Take special care with the deployment host's network configuration. The official guide recommends that it sit on the same network as br-mgmt; this is covered in detail later when configuring the target hosts. For now it is enough that the host can reach the Internet, and that you note its current subnet, for example mine is 192.168.0.0/24.

Finally, download the relevant files and bootstrap. Here I hit a problem: the official guide says to check out the 19.0.0 branch, but I checked the repository and there is no such tag, ╮(╯▽╰)╭.
So I had to pick 19.0.0.0rc1 instead. This is a release candidate (rc), which is usable for now. A sketch of the clone and checkout follows.
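A minimal sketch of this download/bootstrap step, assuming the repository location used by the deploy guide (/opt/openstack-ansible):

# Clone openstack-ansible into the location the deploy guide expects
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible

# The 19.0.0 tag did not exist yet, so check out the release candidate instead
git checkout 19.0.0.0rc1

# Bootstrap Ansible and the OpenStack-Ansible roles
scripts/bootstrap-ansible.sh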

Step 2: Prepare the target hosts

Continue with the guide: https://docs.openstack.org/project-deploy-guide/openstack-ansible/stein/targethosts.html
However, step 5, "Install additional software packages," asks you to install chrony, which conflicts with the NTP choice above. The official docs win here: the target hosts use chrony to synchronize time from the deployment host configured earlier.
Edit /etc/chrony/chrony.conf and pay attention to two things: first, make sure it synchronizes with the deployment host; second, allow hosts on the subnet to query time:
server 192.168.0.142 iburst
allow 192.168.0.0/24
Then restart chrony on the target host and verify synchronization with chronyc sources -v (see the sketch below).
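After editing chrony.conf, a quick sketch of restarting the service and checking that the target host actually syncs from the deployment host:

# Restart chrony so the new server/allow settings take effect
systemctl restart chrony

# Optionally step the clock immediately if it is far off
chronyc makestep

# List the sources; the deployment host (192.168.0.142) should be selected ('^*')
chronyc sources -v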

The network configuration that follows is the key part. Adjust your netplan configuration along these lines:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
      
  vlans:
    vlan10:
      accept-ra: no
      id: 10
      link: enp2s0
    vlan20:
      accept-ra: no
      id: 20
      link: enp2s0
    vlan30:
      accept-ra: no
      id: 30
      link: enp2s0
  bridges:
    br-mgmt:
      addresses: [192.168.0.6/24]
      interfaces: [enp2s0]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [223.5.5.5,223.6.6.6]
      routes:
        - to: 0.0.0.0/0
          via: 192.168.0.1
          metric: 100
      parameters:
        stp: false
        forward-delay: 0
        # wait-port: 0  # no matching option found for this parameter in netplan; see:
        # https://bugs.launchpad.net/netplan/+bug/1671544
    br-mgmt0:
      addresses: [192.168.0.61/24]
    br-vxlan:
      addresses: [192.168.4.6/24]
      interfaces: [vlan30]
      parameters:
        stp: false
        forward-delay: 0
    br-vlan:
      interfaces: [vlan10]
      parameters:
        stp: false
        forward-delay: 0
    br-storage:
      addresses: [192.168.8.6/24]
      interfaces: [vlan20]
      parameters:
        stp: false
        forward-delay: 0
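Once the netplan file is written (the exact file name under /etc/netplan/ depends on your system), a quick sketch of applying and checking it:

# Apply with rollback protection first, then make it permanent
netplan try
netplan apply

# Check that the bridges and VLAN interfaces are up with the expected addresses
ip -br addr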

Step 3: Configure the deployment

The main task is editing /etc/openstack_deploy/openstack_user_config.yml. The official guide only gives a Pike-era example; there is nothing specific to Stein.
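To get the configuration skeleton in place, the usual OSA layout is roughly as follows (a sketch based on the deploy guide's paths; verify the exact example file names in your checkout):

# Copy the example deployment configuration shipped with openstack-ansible
cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy

# Start from the provided example user config and edit it
cp /etc/openstack_deploy/openstack_user_config.yml.example \
   /etc/openstack_deploy/openstack_user_config.yml

With that in place, below is the configuration I used: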

---
cidr_networks:
  container: 192.168.0.0/24
  tunnel: 192.168.4.0/24
  storage: 192.168.8.0/24

used_ips:
  - "192.168.0.1,172.29.236.61"
  - "192.168.4.1,192.168.4.6"
  - "192.168.8.1,192.168.8.6"

global_overrides:
  # The internal and external VIP should be different IPs, however they
  # do not need to be on separate networks.
  external_lb_vip_address: 192.168.0.61
  internal_lb_vip_address: 192.168.0.6
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth12"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"
        type: "vlan"
        range: "101:200,301:400"
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute

###
### Infrastructure
###

# galera, memcache, rabbitmq, utility
shared-infra_hosts:
  infra1:
    ip: 192.168.0.6

# repository (apt cache, python packages, etc)
repo-infra_hosts:
  infra1:
    ip: 192.168.0.6

# load balancer
haproxy_hosts:
  infra1:
    ip: 192.168.0.6

###
### OpenStack
###

# keystone
identity_hosts:
  infra1:
    ip: 192.168.0.6

# cinder api services
storage-infra_hosts:
  infra1:
    ip: 192.168.0.6

# glance
image_hosts:
  infra1:
    ip: 192.168.0.6

# nova api, conductor, etc services
compute-infra_hosts:
  infra1:
    ip: 192.168.0.6

# heat
orchestration_hosts:
  infra1:
    ip: 192.168.0.6

# horizon
dashboard_hosts:
  infra1:
    ip: 192.168.0.6

# neutron server, agents (L3, etc)
network_hosts:
  infra1:
    ip: 192.168.0.6

# nova hypervisors
compute_hosts:
  compute1:
    ip: 192.168.0.2

# cinder storage host (LVM-backed)
storage_hosts:
  storage1:
    ip: 192.168.0.5
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        lvm:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: LVM_iSCSI
          iscsi_ip_address: "192.168.8.5"
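Note that the LVM backend above assumes a volume group named cinder-volumes already exists on the storage host. A minimal sketch of creating it (the device /dev/sdb is only an example; use whichever disk is dedicated to Cinder):

# On the storage host: turn the dedicated disk into an LVM physical volume
pvcreate --metadatasize 2048 /dev/sdb

# Create the volume group that matches volume_group above
vgcreate cinder-volumes /dev/sdb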


Reprinted from blog.csdn.net/u014377853/article/details/90402612