Original link: http://blog.geekidentity.com/k8s/kops/install-k8s-with-kops-in-china/

For some well-known reasons, AWS China does not offer a managed Kubernetes service, so we need to install k8s ourselves. Kubernetes officially provides a tool, kops, to help us quickly create a k8s cluster on AWS. Below is the detailed process of creating a cluster on AWS using kops.
Install kops (Binaries)
We recommend using a small, low-spec server as the k8s management machine and installing management tools such as kops on it.
Download the compiled binary from GitHub:
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
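After moving the binary into place, a quick sanity check confirms it is on the PATH. This is a minimal sketch, not part of the kops install instructions:

```shell
# Quick check (sketch): confirm the kops binary is on the PATH after installing
if command -v kops >/dev/null 2>&1; then
  kops version
else
  echo "kops not found on PATH"
fi
```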
Install other dependencies
kubectl
kubectl is a CLI tool for managing and operating Kubernetes clusters.
Download the latest stable release of kubectl from the official Kubernetes release site:
wget -O kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install the AWS CLI tools
awscli is written in Python. After installing Python and pip, just run the following command directly.
pip install awscli
Create an account
Before version 1.6.2, deploying a K8s cluster through kops required the use of AWS's Route53 to provide the DNS service function. But starting from version 1.6.2, kops supports the deployment of gossip-based clusters and no longer relies on Route53, which makes deployment easier.
Configure an AWS account and use this account to create a dedicated account for kops:
$ aws configure
AWS Access Key ID [None]: <your-accesskeyID>
AWS Secret Access Key [None]: <your-secretAccessKey>
Default region name [None]: cn-north-1
Default output format [None]: json
In order to deploy a cluster with kops, you also need to create a dedicated IAM user named kops and assign it the appropriate permissions:
$ aws iam create-group --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
$ aws iam create-user --user-name kops
$ aws iam add-user-to-group --user-name kops --group-name kops
Create an access key for the kops user:
$ aws iam create-access-key --user-name kops
The above command returns the AccessKeyID and SecretAccessKey of the kops user. We can then update the awscli configuration to use the newly created user's key:
$ aws configure
AWS Access Key ID [None]: <accesskeyID-of-kops-user>
AWS Secret Access Key [None]: <secretAccessKey-of-kops-user>
Default region name [None]: cn-north-1
Default output format [None]: json
You also need to export the kops user's key as environment variables on the command line:
$ export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
$ export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
$ export AWS_REGION=$(aws configure get region)
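Later kops commands silently fall back to other credentials if any of these variables is empty, so it can help to verify them first. A minimal POSIX-shell sketch (the helper function is an assumption of ours, not part of kops or awscli):

```shell
# Sanity-check helper (sketch): report which of the credential variables
# used by kops are still unset
check_env() {
  missing=0
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION; do
    eval "val=\${$v}"            # indirect lookup of the variable named in $v
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}
```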
Finally, generate the SSH key:
$ ssh-keygen
Configure S3
Note that in order for kops to create a gossip-based cluster, the cluster name must end with the suffix .k8s.local. For example, here we name the cluster cluster.k8s.local:
$ export NAME=cluster.k8s.local
Then create an S3 bucket to store the cluster's state. For example, here we name this bucket cluster.k8s.local-state-store:
$ aws s3api create-bucket --bucket ${NAME}-state-store --create-bucket-configuration LocationConstraint=$AWS_REGION
$ export KOPS_STATE_STORE=s3://cluster.k8s.local-state-store
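The naming scheme above can be sketched in pure shell (no AWS calls involved), showing how the bucket name and state-store URL are derived from the cluster name:

```shell
# Sketch: derive the S3 bucket name and KOPS_STATE_STORE URL from the
# cluster name used in this guide
NAME=cluster.k8s.local
BUCKET="${NAME}-state-store"
echo "$BUCKET"       # prints: cluster.k8s.local-state-store
echo "s3://$BUCKET"  # prints: s3://cluster.k8s.local-state-store
```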
Prepare a kops AMI
We have to build our own AMI, as there is no official kops AMI for the AWS China regions.
Create a cluster
The following command will create the cluster configuration file, but will not actually create the cluster:
Note: kops 1.8.1 does not support the China (Ningxia) region, only China (Beijing).
$ kops create cluster \
--name=${NAME} \
--image=ami-089b06f993df09d53 \
--zones=cn-north-1a \
--master-count=1 \
--master-size="t2.micro" \
--node-count=1 \
--node-size="t2.micro" \
--vpc=<your vpc id> \
--subnets=<stringSlice> \
--networking=calico \
--ssh-public-key="~/.ssh/id_rsa.pub"
For the network model, use calico, because it plans the pod network on its own. With the default kubenet networking, k8s modifies the AWS routing table, which means k8s needs its own routing table and its own subnets. If network planning has already been done in your production environment and you specify an existing subnet, the k8s network will not operate correctly.
Before creating the cluster, you can check and adjust the cluster's configuration file:
$ kops edit cluster ${NAME}
On AWS, we usually use our own key pair to connect to servers, which you can specify in the spec:
...
spec:
sshKeyName: <your ssh key name>
...
Because some websites are blocked, it is recommended to configure an egress proxy when building the cluster:
...
spec:
egressProxy:
httpProxy:
host: http-proxy
port: port
excludes: amazonaws.com.cn,amazonaws.cn,aliyun.cn,aliyuncs.com
...
You can also specify the Docker version:
...
spec:
docker:
logDriver: json-file
version: 17.03.2-ce
...
Once you have confirmed there are no problems, you can create the cluster with the following command:
$ kops update cluster ${NAME} --yes
After the cluster is created, it takes a while for the cluster to initialize. Once the cluster is up, you can verify its status:
$ kops validate cluster
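Since initialization can take several minutes, a retry loop saves re-running the command by hand. The helper below is a sketch of ours (the `wait_for` function and the 30-second interval are assumptions, not part of kops):

```shell
# Retry helper (sketch): re-run a command until it succeeds, e.g.
#   wait_for kops validate cluster
wait_for() {
  until "$@"; do
    echo "not ready yet, retrying in 30s..."
    sleep 30
  done
}
```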
We installed kubectl earlier, and you can also use kubectl to check the cluster status here:
$ kubectl get nodes
Destroy the cluster
Before destroying the cluster, you need to confirm which resources kops will delete:
$ kops delete cluster --name ${NAME}
If everything looks correct, you can actually delete the cluster:
$ kops delete cluster --name ${NAME} --yes