KubeSphere multi-node installation and cluster deployment

Table of contents

Foreword

1. Environment preparation

1. Prepare the virtual machines

2. Write the Vagrantfile

3. Start and configure the virtual machines

2. Deploy KubeSphere

1. Node allocation (single master node)

 2. Install the necessary tools (run on all three virtual machines)

3. Create a cluster








Foreword

KubeSphere is a distributed operating system for cloud-native applications, built on top of Kubernetes. It is completely open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations capabilities, and simplifies enterprise DevOps workflows. Its architecture allows third-party applications and cloud-native ecosystem components to be integrated in a plug-and-play manner.

The multi-server environment in this article runs on virtual machines, and the installation process follows the official documentation.







1. Environment preparation

1. Prepare the virtual machines

  • See the following article for where to download VirtualBox and Vagrant:

Still using VMware Workstation to build Linux virtual machines? Save yourself the trouble: VirtualBox + Vagrant make managing virtual machines easy

  • After installation, set the virtual machine IP address range; all machines here use addresses in the 192.168.56.x segment.

  • Select Management → Global Settings and choose a disk with plenty of free space to store the images; here I chose a folder on the E drive.

2. Write the Vagrantfile

A Vagrantfile describes the virtual machines to create. With this configuration file there is no need to build machines by hand: Vagrant creates them automatically from the file. The file below creates three virtual machines, named k8s-node1, k8s-node2 and k8s-node3.

Vagrant.configure("2") do |config|
	# Loop over the three nodes
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Box image for the VM
            node.vm.box = "centos/7"

            # Hostname of the VM
            node.vm.hostname="k8s-node#{i}"

            # IP of the VM: 102, 103, 104
            node.vm.network "private_network", ip: "192.168.56.#{101+i}", netmask: "255.255.255.0"

            # Shared folder between host and VM
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox-specific settings
            node.vm.provider "virtualbox" do |v|
                # Name of the VM
                v.name = "k8s-node#{i}"
                # Memory size (MB)
                v.memory = 4096
                # Number of CPUs
                v.cpus = 4
            end
        end
   end
end

Note: place this file in a directory whose path contains only ASCII characters and no spaces.

3. Start and configure the virtual machines

  • Open a cmd window in the folder containing the Vagrantfile, type vagrant up, and press Enter; Vagrant starts provisioning the virtual machines automatically.

  • Wait for provisioning to finish.

  • Enter each virtual machine and enable root password login. This makes it easier to connect later with client tools such as Xshell.

  In the same cmd window, run vagrant ssh <hostname>, for example:


vagrant ssh k8s-node1 

  After connecting, switch to the root user (the password is vagrant):

su root 

Edit the sshd_config file:

vi /etc/ssh/sshd_config

Change the following line to:
PasswordAuthentication yes

 Restart sshd:

service sshd restart

 Perform the above steps on all three virtual machines.
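Since the same edit has to be repeated on all three nodes, it can also be scripted with sed instead of vi. The sketch below runs against a temporary copy of the file so it is safe to try; on a real node you would point the same sed command at /etc/ssh/sshd_config and then restart sshd.

```shell
# Flip PasswordAuthentication to yes, whether or not the line is commented out.
# Demonstrated on a temporary file; on a node, target /etc/ssh/sshd_config.
conf=$(mktemp)
printf '%s\n' '#PasswordAuthentication no' 'PermitRootLogin yes' > "$conf"
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' "$conf"
grep PasswordAuthentication "$conf"
```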

  • Configure the network

Stop all the virtual machines, select the three nodes, then choose Management → Global Settings → Network and add a NAT network.

 Then change the network adapter type of each virtual machine to NAT Network, and regenerate its MAC address.

 Note: be sure to click the refresh button so that a new MAC address is generated.

Check the eth0 IP address on each of the three virtual machines; the three eth0 addresses should all be different.

//k8s-node1
[root@k8s-node1 vagrant]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:bd:06:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 358sec preferred_lft 358sec
    inet6 fe80::a00:27ff:febd:6c8/64 scope link 

//k8s-node2
[root@k8s-node2 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:66:07:0d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.4/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 435sec preferred_lft 435sec
    inet6 fe80::a00:27ff:fe66:70d/64 scope link 
       valid_lft forever preferred_lft forever
//k8s-node3
[root@k8s-node3 ~]# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:8e:d6:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.5/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 445sec preferred_lft 445sec
    inet6 fe80::a00:27ff:fe8e:d6e6/64 scope link 
       valid_lft forever preferred_lft forever
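If you only need the address itself, the relevant field can be pulled out of the ip addr output with awk. The sample line below is copied from the k8s-node1 output above; on a node you would pipe the real command through the same filter (ip -4 addr show eth0 | awk ...).

```shell
# Extract just the IPv4 address from an `ip addr` inet line (drops the /24).
line='    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0'
addr=$(echo "$line" | awk '/inet /{split($2, a, "/"); print a[1]}')
echo "$addr"
```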


  • Set up the Linux environment (run on all three virtual machines)

Turn off the firewall:

systemctl stop firewalld
systemctl disable firewalld

 Disable SELinux:


sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
 

Disable swap:

swapoff -a  # temporarily disable
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanently disable
free -g  # verify: swap must show 0

  • Add host records:
vi /etc/hosts
10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3

Note: the IP addresses must be your own eth0 addresses. 
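Appending these records can be wrapped in a small idempotent helper, so running the setup script twice does not create duplicate entries. This is only a sketch: the function name is mine, and it is demonstrated on a temporary file; on a node you would call it with /etc/hosts as root, using your own eth0 addresses.

```shell
# Append each record only if it is not already present in the file.
add_host_entries() {
    local file="$1"; shift
    local entry
    for entry in "$@"; do
        grep -qxF "$entry" "$file" 2>/dev/null || echo "$entry" >> "$file"
    done
}

demo=$(mktemp)
add_host_entries "$demo" "10.0.2.15 k8s-node1" "10.0.2.4 k8s-node2" "10.0.2.5 k8s-node3"
add_host_entries "$demo" "10.0.2.15 k8s-node1"   # second run adds nothing
cat "$demo"
```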

 Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

 Apply the settings:

sysctl --system

 Tip: after the steps above, back up (snapshot) the virtual machine images so that future problems can be fixed by restoring instead of reconfiguring everything.






2. Deploy KubeSphere






1. Node allocation (single master node)

node ip      hostname     role
10.0.2.15    k8s-node1    master, etcd
10.0.2.4     k8s-node2    node
10.0.2.5     k8s-node3    node

The node with the master role is the control-plane node; no workloads are deployed on it, it only manages the cluster.

The nodes with the node role are worker nodes; workloads are deployed on them.

Official architecture diagram:

 2. Install the necessary tools (run on all three virtual machines)

  • Type curl to check whether curl is installed.

If curl prints its usage message, it is already installed; otherwise install it:

yum install -y curl
  • Install socat:
yum install -y socat
  • Install conntrack:
yum install -y conntrack

 Switch to the root user.

Check the local time; the time zone is UTC by default:

timedatectl

 

  • Switch the time zone to Shanghai:
timedatectl set-timezone Asia/Shanghai

 Check the time zone again; it now shows CST.

  • Install a time synchronization tool:
yum install -y chrony

 View the time source list:

chronyc -n sources -v

 Leave the default time source list unchanged for now.

 View the local time synchronization status:

chronyc tracking

 

 Check the local time and make sure the clocks of the three virtual machines are consistent:

timedatectl status
  • Download the KubeSphere installation tool (run on the master node):
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -

 Note: the command above downloads KubeKey v1.1.1; you can change the version number in the command to download a different version.

Note: if the download fails, run export KKZONE=cn first and then re-run the command above.

  • Make kk executable:
chmod +x kk

3. Create a cluster (run on the master node)

  • Create a configuration file:
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]

Example: ./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.1.1 

  • Recommended Kubernetes versions for KubeSphere v3.1.1: v1.17.9, v1.18.8, v1.19.8 and v1.20.4. If you do not specify a Kubernetes version, KubeKey installs v1.19.8 by default. See the support matrix for more information on supported Kubernetes versions.

  • If you do not add the --with-kubesphere flag in this step, KubeSphere will not be deployed; it can then only be installed through the addons field in the configuration file, or by adding the flag again when you run ./kk create cluster later.

  • If you add --with-kubesphere without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

 After the command completes, you will see that a configuration template (config-sample.yaml by default) has been generated.

  • Edit the node-related fields in the template to match your own machines.
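For reference, the node-related part of the template looks roughly like the sketch below. The values shown are only illustrative for the three nodes in this article (root/vagrant as set up earlier); the exact schema is defined by the KubeKey version you downloaded, so edit the generated file rather than copying this verbatim.

```yaml
spec:
  hosts:
  - {name: k8s-node1, address: 10.0.2.15, internalAddress: 10.0.2.15, user: root, password: vagrant}
  - {name: k8s-node2, address: 10.0.2.4, internalAddress: 10.0.2.4, user: root, password: vagrant}
  - {name: k8s-node3, address: 10.0.2.5, internalAddress: 10.0.2.5, user: root, password: vagrant}
  roleGroups:
    etcd:
    - k8s-node1
    master:
    - k8s-node1
    worker:
    - k8s-node2
    - k8s-node3
```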

  • Install KubeSphere with a single command:
./kk create cluster -f config-sample.yaml

Note: if you used a different file name, replace config-sample.yaml above with your own file. 

 The entire installation process may take 10 to 20 minutes, depending on your computer and network environment.

  • Installation finished:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
  • Check that all components installed successfully; every status should be Running or Completed.
kubectl get pod --all-namespaces
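A quick way to see how many pods are still coming up is to filter the STATUS column with awk. The sample output below is fabricated for illustration (the pod names are not real); on the master node you would pipe the real kubectl get pod --all-namespaces output through the same filter.

```shell
# Count pods whose status is neither Running nor Completed (skipping the header).
sample='NAMESPACE           NAME               READY   STATUS              RESTARTS   AGE
kube-system         coredns-abc        1/1     Running             0          5m
kubesphere-system   ks-installer-xyz   0/1     ContainerCreating   0          2m'
pending=$(echo "$sample" | awk 'NR>1 && $4!="Running" && $4!="Completed" {n++} END {print n+0}')
echo "$pending"
```

When the count reaches 0, the installation is complete.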

 

 You can now access the KubeSphere web console at <NodeIP>:30880 with the default account and password (admin/P@88w0rd).

Note: depending on your environment, you may need to configure port-forwarding rules to reach the console. Also make sure port 30880 is open in your security group.

Log in



Origin blog.csdn.net/qq_31277409/article/details/120392952