How to use kops in AWS China

Original link: http://blog.geekidentity.com/k8s/kops/aws-china-cn/

Getting started

Kops used to support only Google Cloud DNS and Amazon Route53 for configuring kubernetes clusters, but gossip support was added in 1.6.2, so clusters can now be configured without these DNS providers. Thanks to gossip, running a fully functional kubernetes cluster without Route53 in the AWS China region has been officially supported since 1.7. Currently only cn-north-1 (the Beijing region) is available, but new regions are coming soon.

Most of the following procedure is the same as the guide for using kops in AWS, so this article mainly describes the points that differ in China; the parts that are identical are omitted.

Install kops
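This step is the same as in the general AWS guide. For reference, a minimal sketch of installing the latest release on Linux; the download URL pattern and the use of the latest release are assumptions and may change:

# fetch the latest kops release binary from GitHub and put it on the PATH
KOPS_VERSION=$(curl -fsSL "https://api.github.com/repos/kubernetes/kops/releases/latest" | grep 'tag_name' | cut -d\" -f4)
curl -fsSL -o kops "https://github.com/kubernetes/kops/releases/download/${KOPS_VERSION}/kops-linux-amd64"
chmod +x kops && sudo mv kops /usr/local/bin/kops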

Install kubectl
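Likewise unchanged from the general guide; a minimal sketch, assuming the standard release download location:

# fetch the latest stable kubectl and put it on the PATH
KUBECTL_VERSION=$(curl -fsSL "https://dl.k8s.io/release/stable.txt")
curl -fsSL -o kubectl "https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl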

Setup your environment

AWS

When executing aws configure, remember to set the default region name correctly, for example cn-north-1.

AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

Then export the region so it can be reused in later commands.

export AWS_REGION=$(aws configure get region)

Configure DNS

As mentioned at the beginning, it's easy to create a gossip-based cluster: simply have the cluster name end in .k8s.local. We'll use this trick below, so the rest of the DNS configuration can be safely skipped.

Testing your DNS setup

This part can also be safely skipped due to gossip.

Cluster state storage

Since we are deploying the cluster in the AWS China region, we need to create a dedicated S3 bucket there.

aws s3api create-bucket --bucket prefix-example-com-state-store --create-bucket-configuration LocationConstraint=$AWS_REGION
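Optionally, versioning can be enabled on the state store bucket so that previous cluster state can be recovered if needed:

aws s3api put-bucket-versioning --bucket prefix-example-com-state-store --versioning-configuration Status=Enabled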

Create your first cluster

Make sure you have a VPC with normal internet access

First, we have to address slow and unstable connections to the internet outside of China, otherwise the following process won't work. One way is to set up a NAT instance that routes traffic through some reliable connection. Details will not be discussed here.
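For orientation only (since the details are out of scope here), the routing piece of such a setup might look roughly like this sketch; the route table and instance IDs are placeholders, and the NAT/proxy instance is assumed to already exist:

# send the private subnet's default route through the NAT/proxy instance
aws ec2 create-route \
  --route-table-id <private route table id> \
  --destination-cidr-block 0.0.0.0/0 \
  --instance-id <nat instance id>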

Prepare the kops AMI

We have to build our own AMI, as there is no official kops AMI for the AWS China region. There are two ways to achieve this.

ImageBuilder

First, launch an instance with fast and stable Internet access in a private subnet.

Because the instance is launched in a private subnet, make sure you can connect to it via a VPN or a bastion host using its private IP.

SUBNET_ID=<subnet id> # private subnet ID
SECURITY_GROUP_ID=<security group id>
KEY_NAME=<key pair name on aws>

AMI_ID=$(aws ec2 describe-images --filters Name=name,Values=debian-jessie-amd64-hvm-2016-02-20-ebs --query 'Images[*].ImageId' --output text)
INSTANCE_ID=$(aws ec2 run-instances --image-id $AMI_ID --instance-type m3.medium --key-name $KEY_NAME --security-group-ids $SECURITY_GROUP_ID --subnet-id $SUBNET_ID --no-associate-public-ip-address --query 'Instances[*].InstanceId' --output text)
aws ec2 create-tags --resources ${INSTANCE_ID} --tags Key=k8s.io/role/imagebuilder,Value=1

Now follow the ImageBuilder documentation in kube-deploy to build the image.

go get k8s.io/kube-deploy/imagebuilder
cd ${GOPATH}/src/k8s.io/kube-deploy/imagebuilder

sed -i '' "s|publicIP := aws.StringValue(instance.PublicIpAddress)|publicIP := aws.StringValue(instance.PrivateIpAddress)|" pkg/imagebuilder/aws.go
make

# If the keypair specified is not `$HOME/.ssh/id_rsa`, `aws.yaml` needs to be modified to add the full path to the private key.
echo 'SSHPrivateKey: "/absolute/path/to/the/private/key"' >> aws.yaml

${GOPATH}/bin/imagebuilder --config aws.yaml --v=8 --publish=false --replicate=false --up=false --down=false

Notice

imagebuilder may fail, complaining that the image cannot be found. However, the error log shows that the AMI has actually been registered; although the issue has been attributed to bootstrap-vz, it seems the newly created AMI is simply in a temporarily unavailable state (see kubernetes/kube-deploy#293).

Wait a minute or so and the AMI should be ready to use.
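One way to check is to poll the AMI state, or simply use the built-in waiter; <ami id> below is the ID reported in the log:

# wait until the newly registered AMI becomes available
aws ec2 wait image-available --image-ids <ami id>
# or inspect its state directly
aws ec2 describe-images --image-ids <ami id> --query 'Images[0].State' --output text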

Copy AMI from another region

Follow this comment to copy the kops image from another region, e.g. ap-southeast-1.

Get AMI ID

Either way we end up with an AMI, for example, k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-09.
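If the owner ID or image ID is not at hand, it can be looked up by name, which is handy for the <owner id/ami name> form used below; a sketch using the example name above:

AMI_NAME=k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-09
aws ec2 describe-images --owners self \
  --filters Name=name,Values=${AMI_NAME} \
  --query 'Images[0].[OwnerId,Name,ImageId]' --output text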

Prepare local environment

Set several environment variables.

export NAME=example.k8s.local
export KOPS_STATE_STORE=s3://prefix-example-com-state-store

Create a cluster configuration

We need to pay attention to which Availability Zones are available to us. There are only two Availability Zones in the AWS China (Beijing) Region, so it has the same issue as other regions with fewer than three AZs: there is no real HA across only two AZs. However, more master nodes can be added within an AZ to improve its reliability.

aws ec2 describe-availability-zones

Below is the create cluster command, which will create a fully internal cluster in an existing VPC. It only generates the cluster configuration; it does not start building the cluster (no servers are started). Make sure an SSH key pair exists before creating the cluster (see the sketch below).
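If no key pair exists yet, a minimal way to generate one; the path below is the default kops looks for unless --ssh-public-key is passed explicitly:

# generate an SSH key pair at the kops default location
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""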

VPC_ID=<vpc id>
VPC_NETWORK_CIDR=<vpc network cidr> # e.g. 172.30.0.0/16
AMI=<owner id/ami name> # e.g. 123456890/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-09-09

kops create cluster \
    --zones ${AWS_REGION}a \
    --vpc ${VPC_ID} \
    --network-cidr ${VPC_NETWORK_CIDR} \
    --image ${AMI} \
    --associate-public-ip=false \
    --api-loadbalancer-type internal \
    --topology private \
    --networking weave \
    ${NAME}

Customize the cluster configuration

Now that we have a cluster configuration file, we can adjust the subnet configuration to reuse the existing shared subnets by editing the cluster spec.

kops edit cluster $NAME

Then change the corresponding subnets to reference the existing subnet IDs and remove the CIDRs, e.g.

spec:
  subnets:
  - id: subnet-12345678
    name: cn-north-1a
    type: Private
    zone: cn-north-1a
  - id: subnet-87654321
    name: utility-cn-north-1a
    type: Utility
    zone: cn-north-1a

Another tweak we can employ here is to add a docker configuration that uses the official registry mirror in China. This will improve the stability and download speed of pulling images from Docker Hub.

spec:
  docker:
    logDriver: ""
    registryMirrors:
    - https://registry.docker-cn.com

Please be aware that this mirror may not be suitable for every situation. It can be replaced by any other registry mirror as long as it is compatible with the docker API.

Build the Cluster
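Unchanged from the general guide; for reference, a sketch (the --yes flag is what actually creates the AWS resources):

kops update cluster ${NAME} --yes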

Use the Cluster
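Also unchanged; once the cluster is up, a quick sanity check might look like:

kubectl get nodes --show-labels
kops validate cluster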

Delete the Cluster
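Also unchanged; kops previews the resources to be destroyed first, and --yes performs the actual deletion:

kops delete cluster ${NAME}
kops delete cluster ${NAME} --yes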

What's next?

Add more master nodes

One AZ

To achieve this, we can add more parameters to kops create cluster.

  --master-zones ${AWS_REGION}a --master-count 3 \
  --zones ${AWS_REGION}a --node-count 2 \

Two AZs

  --master-zones ${AWS_REGION}a,${AWS_REGION}b --master-count 3 \
  --zones ${AWS_REGION}a,${AWS_REGION}b --node-count 2 \

Note that when one of the AZs goes down, there is still a 50% chance that the cluster becomes unavailable: two of the three masters live in one AZ, and if that AZ fails, etcd loses quorum.

Offline mode

This is a naive, unfinished attempt to configure the cluster in a way that minimizes the internet access required, because even with some kind of proxy or VPN, downloads are still slow and always much more expensive than downloading from S3.

## Setup vars

KUBERNETES_VERSION=$(curl -fsSL --retry 5 "https://dl.k8s.io/release/stable.txt")
KOPS_VERSION=$(curl -fsSL --retry 5 "https://api.github.com/repos/kubernetes/kops/releases/latest" | grep 'tag_name' | cut -d\" -f4)
ASSET_BUCKET="some-asset-bucket"
ASSET_PREFIX=""

# Please note that the filename of the CNI asset may change with the kubernetes version
CNI_FILENAME=cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz


export KOPS_BASE_URL=https://s3.cn-north-1.amazonaws.com.cn/$ASSET_BUCKET/kops/$KOPS_VERSION/
export CNI_VERSION_URL=https://s3.cn-north-1.amazonaws.com.cn/$ASSET_BUCKET/kubernetes/network-plugins/$CNI_FILENAME

## Download assets

KUBERNETES_ASSETS=(
  network-plugins/$CNI_FILENAME
  release/$KUBERNETES_VERSION/bin/linux/amd64/kube-apiserver.tar
  release/$KUBERNETES_VERSION/bin/linux/amd64/kube-controller-manager.tar
  release/$KUBERNETES_VERSION/bin/linux/amd64/kube-proxy.tar
  release/$KUBERNETES_VERSION/bin/linux/amd64/kube-scheduler.tar
  release/$KUBERNETES_VERSION/bin/linux/amd64/kubectl
  release/$KUBERNETES_VERSION/bin/linux/amd64/kubelet
)
for asset in "${KUBERNETES_ASSETS[@]}"; do
  dir="kubernetes/$(dirname "$asset")"
  mkdir -p "$dir"
  url="https://storage.googleapis.com/kubernetes-release/$asset"
  wget -P "$dir" "$url"
  [ "${asset##*.}" != "gz" ] && wget -P "$dir" "$url.sha1"
  [ "${asset##*.}" == "tar" ] && wget -P "$dir" "${url%.tar}.docker_tag"
done

KOPS_ASSETS=(
  "images/protokube.tar.gz"
  "linux/amd64/nodeup"
  "linux/amd64/utils.tar.gz"
)
for asset in "${KOPS_ASSETS[@]}"; do
  kops_path="kops/$KOPS_VERSION/$asset"
  dir="$(dirname "$kops_path")"
  mkdir -p "$dir"
  url="https://kubeupv2.s3.amazonaws.com/kops/$KOPS_VERSION/$asset"
  wget -P "$dir" "$url"
  wget -P "$dir" "$url.sha1"
done

## Upload assets

## Get default S3 multipart_threshold

AWS_S3_DEFAULT_MULTIPART_THRESHOLD=$(aws configure get default.s3.multipart_threshold)

if [ ! -n "$AWS_S3_DEFAULT_MULTIPART_THRESHOLD" ]; then
  AWS_S3_DEFAULT_MULTIPART_THRESHOLD=8MB
fi

## Set multipart_threshold to 1024MB to avoid multipart uploads, whose ETags are not MD5 checksums

aws configure set default.s3.multipart_threshold 1024MB

aws s3api create-bucket --bucket $ASSET_BUCKET --create-bucket-configuration LocationConstraint=$AWS_REGION
for dir in "kubernetes" "kops"; do
  aws s3 sync --acl public-read "$dir" "s3://$ASSET_BUCKET/$ASSET_PREFIX$dir"
done

aws configure set default.s3.multipart_threshold $AWS_S3_DEFAULT_MULTIPART_THRESHOLD

Add these parameters to the command line when creating the cluster.

  --kubernetes-version https://s3.cn-north-1.amazonaws.com.cn/$ASSET_BUCKET/kubernetes/release/$KUBERNETES_VERSION
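Putting it together with the create-cluster command from earlier, a complete invocation might look roughly like this sketch; it assumes the KOPS_BASE_URL and CNI_VERSION_URL variables above are still exported and the placeholders from the earlier section are filled in:

kops create cluster \
    --zones ${AWS_REGION}a \
    --vpc ${VPC_ID} \
    --network-cidr ${VPC_NETWORK_CIDR} \
    --image ${AMI} \
    --associate-public-ip=false \
    --api-loadbalancer-type internal \
    --topology private \
    --networking weave \
    --kubernetes-version https://s3.cn-north-1.amazonaws.com.cn/$ASSET_BUCKET/kubernetes/release/$KUBERNETES_VERSION \
    ${NAME}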

Now most of the resources needed to set up a cluster via kops and kubernetes will be downloaded from the specified S3 bucket, with the exception of images such as pause-amd64 and the DNS-related images. These are hosted on gcr.io rather than Docker Hub, so unless you have a way around the network restrictions (e.g. a proxy or VPN), pulling them will still be a problem.

Assets API

This has not been tested, because at the time the author was provisioning a cluster in the AWS China region this method was still just a pull request. It is the official way to implement offline mode and should work better than the naive attempt above.
