Docker and K8s offline installation

Chapter 1: K8s installation

1.1 Configuring the basic environment

References:

https://www.jianshu.com/p/832bcd89bc07

https://www.cnblogs.com/ericnie/p/7749588.html

http://www.cnblogs.com/cuibobo/articles/8276291.html

1.1.1 Configure /etc/hosts

$ cat /etc/hosts

192.168.11.1 master

192.168.11.2 node
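The two entries can be appended in one step; a minimal sketch, assuming the IPs above match your environment (run it on every node):

cat >> /etc/hosts <<'EOF'
192.168.11.1 master
192.168.11.2 node
EOF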

1.1.2 Disabling the firewall

Check firewall status

$ firewall-cmd --state

Turn off the firewall

$ systemctl stop firewalld

$ systemctl disable firewalld

1.1.3 Disable swap

Run cat /proc/swaps to check that swap has really been disabled.

# Disable swap temporarily (the change does not survive a reboot)

swapoff -a

# Modify /etc/fstab to disable swap permanently

cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')

# Redhat

sed -i "s/\/dev\/mapper\/rhel-swap/\#\/dev\/mapper\/rhel-swap/g" /etc/fstab

# CentOS

sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab

After all the swap entries are commented out, remount everything in fstab:

mount -a

 

# View Swap

free -m

cat /proc/swaps

 

1.1.3.1 Taking a Red Hat system as an example

1) Determine the system name

$ uname  # system name

 

2) Determine the specific release

$ cat /etc/redhat-release  # specific release

 

3) Check that swap is disabled

$ cat /proc/swaps

If the output still lists swap devices, swap has not been disabled; once it is disabled, only the header line remains.

 

4) Steps to disable swap

$ swapoff -a  # disables swap immediately, but does not survive a reboot; run the following commands to make the change permanent

$ cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')

$ sed -i "s/\/dev\/mapper\/rhel-swap/\#\/dev\/mapper\/rhel-swap/g" /etc/fstab

$ mount -a

 

 

1.1.4 Disable SELinux

$ setenforce 0

$ cat /etc/selinux/config

SELINUX=disabled
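Note that setenforce 0 only lasts until the next reboot; the setting in /etc/selinux/config is what persists. A one-line sketch to switch it, assuming the file still contains the default SELINUX=enforcing:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config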

1.1.5 Create the /etc/sysctl.d/k8s.conf file with the following content

cat <<EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

EOF

Run the following commands for the changes to take effect:

$ modprobe br_netfilter

$ sysctl -p /etc/sysctl.d/k8s.conf
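modprobe likewise only lasts until a reboot. As a sketch (the modules-load.d path is the systemd convention, an assumption here rather than part of the original text), the module can be loaded automatically at boot and the settings verified:

# Load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Verify that the new values are active
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward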

1.2 Installing kubeadm, kubelet, and kubectl

1.2.1 Installation

$ yum install -y kubelet-1.11.0-0.x86_64.rpm kubeadm-1.11.0-0.x86_64.rpm kubectl-1.11.0-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.11.0-0.x86_64.rpm socat-1.7.1.3-1.el6.rf.x86_64.rpm
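If all of the RPMs, including their dependencies, have been copied into a single directory on the offline host, the whole set can be installed in one command; a sketch assuming the current directory holds them and that no repository should be contacted:

yum install -y --disablerepo='*' ./*.rpm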

    

1.2.2 Configuring kubelet

After installation completes, kubelet still needs to be configured. The configuration file generated when kubelet is installed from the yum repository sets the --cgroup-driver parameter to systemd, while Docker's cgroup driver is cgroupfs; the two must match for kubelet to work. We can check Docker's driver with docker info:

$ docker info |grep Cgroup
Cgroup Driver: cgroupfs

Edit the kubelet configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

There is also the question of the swap partition. As already mentioned in our earlier article on manually building a high-availability Kubernetes cluster, Kubernetes has required swap to be turned off since version 1.8; if it is not, kubelet will not start with the default configuration. This restriction can be lifted with the kubelet startup parameter --fail-swap-on=false, so we add the following setting to the configuration file above (before ExecStart):

 

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

Of course, it is best to keep swap turned off altogether, since that improves kubelet performance. After editing, reload the configuration files:

$ systemctl daemon-reload

 

 

1.2.3 Loading the K8s images

docker load --input etcd-amd64_3.2.18.tar

docker load --input k8s-dns-kube-dns-amd64_1.14.8.tar

docker load --input kube-controller-manager-amd64_v1.11.0.tar

docker load --input pause_3.1.tar

docker load --input flannel_v0.10.0-amd64.tar

docker load --input k8s-dns-sidecar-amd64_1.14.8.tar

docker load --input kube-proxy-amd64_v1.11.0.tar

docker load --input k8s-dns-dnsmasq-nanny-amd64_1.14.8.tar

docker load --input kube-apiserver-amd64_v1.11.0.tar

docker load --input kube-scheduler-amd64_v1.11.0.tar

docker load --input pause-amd64_3.1.tar

docker load --input coredns_1.1.3.tar
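Instead of loading each tarball by hand, a short loop handles them all; a sketch assuming the .tar files above sit in the current directory:

for t in ./*.tar; do docker load --input "$t"; done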

1.3 Initializing the Master node

kubeadm init --kubernetes-version=1.11.0 --apiserver-advertise-address 172.22.1.185 --pod-network-cidr=10.244.0.0/16

 

I0905 16:01:50.462186   21858 feature_gate.go:230] feature gates: &{map[]}

[init] using Kubernetes version: v1.11.0

[preflight] running pre-flight checks

I0905 16:01:50.481910   21858 kernel_validator.go:81] Validating kernel version

I0905 16:01:50.481992   21858 kernel_validator.go:96] Validating kernel config

        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03

[preflight/images] Pulling images required for setting up a Kubernetes cluster

[preflight/images] This might take a minute or two, depending on the speed of your internet connection

[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[preflight] Activating the kubelet service

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [docker185 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.22.1.185]

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated sa key and public key.

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] Generated etcd/ca certificate and key.

[certificates] Generated etcd/server certificate and key.

[certificates] etcd/server serving cert is signed for DNS names [docker185 localhost] and IPs [127.0.0.1 ::1]

[certificates] Generated etcd/peer certificate and key.

[certificates] etcd/peer serving cert is signed for DNS names [docker185 localhost] and IPs [172.22.1.185 127.0.0.1 ::1]

[certificates] Generated etcd/healthcheck-client certificate and key.

[certificates] Generated apiserver-etcd-client certificate and key.

[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"

[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"

[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"

[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"

[init] this might take a minute or longer if the control plane images have to be pulled

[apiclient] All control plane components are healthy after 41.002557 seconds

[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster

[markmaster] Marking the node docker185 as master by adding the label "node-role.kubernetes.io/master=''"

[markmaster] Marking the node docker185 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker185" as an annotation

[bootstraptoken] using token: qepaue.2rj1lsdt6jxr0q8z

[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes master has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node

as root:

 

  kubeadm join 172.22.1.185:6443 --token qepaue.2rj1lsdt6jxr0q8z --discovery-token-ca-cert-hash sha256:6c54033e82ac761333c37505df625c924524ce2adfd9afc1af6893ae5b1d70bb

kubeadm automatically checks whether the environment contains residue from a previous run of the command. If it does, the environment must be cleaned up before init is executed again; "kubeadm reset" performs this cleanup so initialization can be attempted afresh.
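A typical cleanup sequence before retrying init might look like the following sketch; the iptables and CNI cleanup steps are an assumption added here, not part of the original text:

kubeadm reset

# Clear iptables rules and CNI state that reset may leave behind (default paths assumed)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d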

If opening a terminal on a CentOS server is slow and displays 'abrt-cli status' timed out, entering

abrt-auto-reporting enabled

resolves it.

1.3.1 Configuring kubectl authentication information (run on the Master node)

# For a non-root user (e.g. ubuntu)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

# For root

export KUBECONFIG=/etc/kubernetes/admin.conf

 

The export can also be written directly into ~/.bash_profile:

 

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

 

1.3.2 Starting kubelet

sudo systemctl enable kubelet && sudo systemctl start kubelet

1.3.3 Restarting kubelet

systemctl daemon-reload

systemctl restart kubelet

1.3.4 Check that kubelet started successfully

sudo systemctl status kubelet.service

 

1.3.5 For more convenient use, enable kubectl command auto-completion:

 

echo "source <(kubectl completion bash)" >> ~/.bashrc

1.3.6 View the version

$ kubectl version

 

Problem

Running kubectl version reports that the server at port 8080 cannot be reached.

Solution:

Point kubectl at a valid kubeconfig, for example via an alias:

alias kubectl="kubectl --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"

(if a stale alias is the cause, remove it with unalias kubectl; see https://yq.aliyun.com/articles/149595)

Alternatively, serve the API on that port with a local proxy:

kubectl proxy --port=8080 &

 

1.4 Viewing the pods

1.4.1 Get all pods

export KUBECONFIG=/etc/kubernetes/admin.conf

[root@k8s-1 ~]# kubectl get pods --all-namespaces

 

1.5 Installing the pod network

For the Kubernetes cluster to work, a pod network must be installed; otherwise pods cannot communicate with each other. Kubernetes supports a variety of network solutions; here we use flannel.

kubectl create -f kube-flannel-rbac.yml

kubectl create -f kube-flannel.yml
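To watch the flannel and CoreDNS pods come up, the following check is enough; every pod should eventually reach the Running state:

kubectl get pods --all-namespaces -o wide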

 

 

View node status:

 [root@localhost images]# kubectl get nodes

NAME        STATUS    ROLES     AGE       VERSION

localhost   Ready     master    2h        v1.11.0

By default, Kubernetes does not schedule pods onto the Master node. If you want to use the master as a worker node as well, execute the following command:

kubectl taint node localhost node-role.kubernetes.io/master-

kubectl taint node docker185 node-role.kubernetes.io/master-

 

example:

[root@localhost images]# kubectl taint node localhost node-role.kubernetes.io/master-

node/localhost untainted

If you want to restore the master-only state, execute the following command:

kubectl taint node localhost  node-role.kubernetes.io/master="":NoSchedule
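Either way, the current taint state can be confirmed with a quick check (node name taken from the example above):

kubectl describe node localhost | grep -i Taint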

 

Chapter 2: K8s usage and maintenance

2.1 The kubectl command

kubectl controls the Kubernetes cluster manager.

Basic Commands (Beginner):

create: Create a resource from a file or from stdin.
expose: Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service.
run: Run a particular image on the cluster.
set: Set specific features on objects.

Basic Commands (Intermediate):

explain: Documentation of resources.
get: Display one or many resources.
edit: Edit a resource on the server.
delete: Delete resources by filenames, stdin, resources and names, or by resources and label selector.

Deploy Commands:

rollout: Manage the rollout of a resource.
scale: Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job.
autoscale: Auto-scale a Deployment, ReplicaSet, or ReplicationController.

Cluster Management Commands:

certificate: Modify certificate resources.
cluster-info: Display cluster information.
top: Display Resource (CPU/Memory/Storage) usage.
cordon: Mark node as unschedulable.
uncordon: Mark node as schedulable.
drain: Drain node in preparation for maintenance.
taint: Update the taints on one or more nodes.

Troubleshooting and Debugging Commands:

describe: Show details of a specific resource or group of resources.
logs: Print the logs for a container in a pod.
attach: Attach to a running container.
exec: Execute a command in a container.
port-forward: Forward one or more local ports to a pod.
proxy: Run a proxy to the Kubernetes API server.
cp: Copy files and directories to and from containers.
auth: Inspect authorization.

Advanced Commands:

apply: Apply a configuration to a resource by filename or stdin.
patch: Update field(s) of a resource using strategic merge patch.
replace: Replace a resource by filename or stdin.
wait: Experimental: Wait for one condition on one or many resources.
convert: Convert config files between different API versions.

Settings Commands:

label: Update the labels on a resource.
annotate: Update the annotations on a resource.
completion: Output shell completion code for the specified shell (bash or zsh).

Other Commands:

alpha: Commands for features in alpha.
api-resources: Print the supported API resources on the server.
api-versions: Print the supported API versions on the server, in the form of "group/version".
config: Modify kubeconfig files.
plugin: Runs a command-line plugin.
version: Print the client and server version information.

 

2.1.1 kubectl label

Update the labels on a resource.

Use a label to control where a pod is placed:

kubectl label nodes localhost nd=web

With the custom label nd in place, we can now pin the deployment to the localhost node. Edit nginx.yml:

[root@localhost yml]# vi nginx.yml

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: nginx-deployment001

spec:

  replicas: 5

  template:

    metadata:

      labels:

        app: web_server

    spec:

      containers:

      - name: nginx

        image: nginx:1.7.9

      nodeSelector:

        nd: web
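A sketch of deploying the manifest and verifying placement; with the nodeSelector above, all 5 replicas should land on localhost (shown in the NODE column):

kubectl apply -f nginx.yml

kubectl get pods -o wide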

 

To delete the label nd, execute the following command:

kubectl label nodes localhost nd-

The trailing "-" means delete.

2.1.2 kubectl delete

Usage:

  kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)]) [options]

Options:

--all=false: Delete all resources, including uninitialized ones, in the namespace of the specified resource types.

--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.

--field-selector='': Selector (field query) to filter on, supports '=', '==', and '!=' (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.

-f, --filename=[]: Filename, directory, or URL to files containing the resource to delete.

--force=false: Only used when grace-period=0. If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.

--grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).

--ignore-not-found=false: Treat "resource not found" as a successful delete. Defaults to "true" when --all is specified.

--include-uninitialized=false: If true, the kubectl command applies to uninitialized objects. If explicitly set to false, this flag overrides other flags that make the kubectl commands apply to uninitialized objects, e.g., "--all". Objects with empty metadata.initializers are regarded as initialized.

--now=false: If true, resources are signaled for immediate shutdown (same as --grace-period=1).

-o, --output='': Output mode. Use "-o name" for shorter output (resource/name).

-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.

-l, --selector='': Selector (label query) to filter on, not including uninitialized ones.

--timeout=0s: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object.

--wait=true: If true, wait for resources to be gone before returning. This waits for finalizers.

2.1.2.1 Deleting a pod

kubectl delete -f nginx.yml
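The same workload can also be deleted by resource type and name instead of by file; a sketch using the Deployment name from the manifest above (note that deleting only the pods, e.g. with -l app=web_server, would let the Deployment recreate them):

kubectl delete deployment nginx-deployment001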

 
