Set up k8s development env (by quqi99)

Copyright notice: This article may be freely reproduced, provided the original source, author information, and this copyright notice are indicated via hyperlink. (Author: Zhang Hua, published: 2018-07-10)

Sign the CLA

Sign via Hellosign - https://github.com/kubernetes/community/blob/master/CLA.md
Then set your email for GitHub - https://github.com/settings/emails
git config --global user.email "[email protected]"

Run local k8s via source code

# Install some packages
sudo apt install -y gcc make socat git build-essential

# Install docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt-cache policy docker-ce
sudo apt install docker-ce
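
A quick way to confirm the daemon works is the stock hello-world image (optional):
sudo docker run --rm hello-world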

# Change default location of docker image
service docker stop
rsync -aXS /var/lib/docker/* /bak/.docker/
rm -rf /var/lib/docker/*
echo /bak/.docker/ /var/lib/docker none bind 0 0 >> /etc/fstab
mount -a
service docker start
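
An alternative to the bind mount, assuming a docker-ce release new enough to understand the data-root key (older daemons used the -g/--graph flag instead), is to point the daemon at the new directory directly - a minimal sketch:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "data-root": "/bak/.docker"
}
EOF
sudo service docker restart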

# Install etcd > 3.2.13
ETCD_VER=v3.2.18
DOWNLOAD_URL="https://github.com/coreos/etcd/releases/download"
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo /bin/cp -f etcd-${ETCD_VER}-linux-amd64/{etcd,etcdctl} /usr/bin
rm -rf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz etcd-${ETCD_VER}-linux-amd64
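
Sanity-check that the binaries landed on PATH and satisfy the version requirement:
etcd --version
etcdctl --version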

# Install golang > 1.10.2
wget https://dl.google.com/go/go1.10.3.linux-amd64.tar.gz
sudo rm -rf /usr/lib/go && sudo tar -C /usr/lib -xzf go1.10.3.linux-amd64.tar.gz
export GOROOT=/usr/lib/go
export GOPATH=/bak/golang
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH
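
Verify the toolchain, and persist the variables so new shells pick them up (assuming a bash login shell that reads ~/.profile):
go version   # should print go1.10.3
cat >> ~/.profile <<'EOF'
export GOROOT=/usr/lib/go
export GOPATH=/bak/golang
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH
EOF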

# Install and run kubernetes in local env - https://www.cnblogs.com/edisonxiang/p/6951787.html
mkdir -p $GOPATH/src/k8s.io
#go get -d k8s.io/kubernetes
git clone git@github.com:zhhuabj/kubernetes.git $GOPATH/src/k8s.io/kubernetes
#make GOGCFLAGS="-N -l"  #Debug it
sudo usermod -a -G docker ${USER}
sudo systemctl restart docker.service
sudo systemctl disable kubelet.service
sudo systemctl stop kubelet.service

# Note: repeated failures here were because 'true' must be lowercase - it is case-sensitive
KUBE_ENABLE_CLUSTER_DASHBOARD=true ./hack/local-up-cluster.sh
# Note: setting GO_OUT avoids recompiling
GO_OUT=/bak/golang/src/k8s.io/kubernetes/_output/bin KUBE_ENABLE_CLUSTER_DASHBOARD=true ./hack/local-up-cluster.sh

# Test local env
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh get pods --all-namespaces
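
As a further smoke test, schedule a throwaway workload (a hypothetical nginx example; kubectl run of this era creates a deployment behind the scenes):
cluster/kubectl.sh run nginx --image=nginx
cluster/kubectl.sh get pods -o wide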

GitHub process

https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md
k8s differs from OpenStack here: OpenStack uses Gerrit to review code, while k8s uses GitHub's PR mechanism.

A PR against kubernetes can be submitted with either the shared-repository model (https://gist.github.com/seshness/3943237) or the fork model (https://www.cnblogs.com/edisonxiang/p/6951787.html). We use the fork model:

# Click the 'Fork' button on https://github.com/kubernetes/kubernetes to create your own fork, giving us https://github.com/zhhuabj/kubernetes
mkdir -p $GOPATH/src/k8s.io
git clone git@github.com:zhhuabj/kubernetes.git $GOPATH/src/k8s.io/kubernetes
cd kubernetes
hack/local-up-cluster.sh

# set up upstream branch
git remote add upstream https://github.com/kubernetes/kubernetes.git
git remote set-url --push upstream no_push
git remote -v

# Update our branch
git fetch upstream
git checkout master
git rebase upstream/master
#git pull upstream master

# Add new branch myfeature
git checkout -b myfeature
git config --global user.email "[email protected]"
git config --global user.name "zhhuabj"
# Add or Modify files
...
git add .
git commit -a -F ./msg
git commit --amend -a -F ./message
git commit -m "update"
git push origin myfeature
git push origin :myfeature  #delete remote branch

# Fetch an unmerged PR into our repo
git fetch upstream pull/56136/head:BRANCHNAME
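
After fetching, check the branch out to build or test the PR locally:
git checkout BRANCHNAME
git log --oneline -n 3   # confirm the PR's commits are present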

# Squash multiple local commits into a single commit with interactive rebase
git log
git rebase -i HEAD~6   # gather the top six commits in the interactive editor
  In the editor, change 'pick' to 's' (short for squash) on each commit to be folded in.
  At least one 'pick' must remain; if every line is changed to 's' there is no commit
  left to squash into and git reports an error. See the example todo list below.
  Save and exit the editor (in nano: CTRL+X, Y, ENTER).
  Finally: git push origin BRANCHNAME
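
For illustration, a todo list that squashes five follow-up commits into the first one looks roughly like this (hashes and messages are made up):
pick 1fc6c95 add feature skeleton
s 2b4e21a fix typo in comment
s 9d0f3c1 address review feedback
s 7aa91e4 update unit tests
s 0c2d5b8 run gofmt
s 5e6f7a9 rename variable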

# Pull Request - at https://github.com/zhhuabj/kubernetes, on the newly pushed branch, click the 'Compare & pull request' button to create a pull request

# The new pull request then appears at https://github.com/kubernetes/kubernetes/pulls

Review process

https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md#the-testing-and-merge-workflow
The OpenStack community is more open: with Gerrit, even newcomers can review code and vote +1.
k8s uses GitHub PRs and is comparatively closed: newcomers cannot review code. A newcomer's role is 'contributor', who can work on issues (reply /assign on the issue) and submit code.
Only people holding the reviewer or approver role defined in each module's OWNERS file (https://github.com/kubernetes/community/blob/master/community-membership.md) can review code. A reviewer may give LGTM (looks good to me, +1) and an approver may give +2; one +1 plus one +2 lets the code merge, whereas OpenStack requires two +2s.

To create a PR for a specific issue, write fixes #issue_num in the PR description; the issue is then closed automatically once the PR merges. After the PR is created, the k8s bots do the following:
pick a reviewer from the relevant OWNERS list;
if the author is a kubernetes member, start CI (e.g. unit tests, e2e tests) against the PR; if not, a member has to trigger CI on the author's behalf;
once CI passes, ping the assigned reviewers to review the code.

Once a reviewer is satisfied, the PR needs the lgtm (looks good to me) label; the module's approver must also add the approve label. With both labels in place, the PR waits for merge. Merging is likewise done by a k8s bot; the PRs queued for merge can be seen at http://submit-queue.k8s.io/#/queue. Before merging, the bot automatically reruns CI to make sure the code is still good.
These three steps are roughly enough to land a typo fix on master; most of the work, such as assigning a reviewer, is done automatically by the k8s bots.
Bot commands include:

  • Jenkins verification: @k8s-bot verify test this
  • GCE E2E: @k8s-bot cvm gce e2e test this
  • Test all: @k8s-bot test this please, issue #IGNORE
  • CRI test: @k8s-bot cri test this
  • LGTM (only applies if you are one of the assignees): /lgtm
  • LGTM cancel: /lgtm cancel
    More commands: https://prow.k8s.io/command-help

How to run tests

https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md
make verify
make test
make test-integration
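
make test runs every unit test, which is slow; the testing guide above also documents scoping a run to a single package, e.g.:
make test WHAT=./pkg/kubelet GOFLAGS=-v
make test-integration WHAT=./test/integration/pods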

How to debug k8s

local-up-cluster.sh starts the k8s services via _output/local/bin/linux/amd64/hyperkube inside containers, which makes it awkward to debug the long-running k8s server processes with (dlv exec $OUTDIR/bin/kubelet $kubelet_flags). So first start each k8s service as a plain local process; debugging a k8s service then becomes as straightforward as debugging an OpenStack service.

Step 1: create a systemd unit file for each service:

$ cat /lib/systemd/system/kube-etcd.service
[Unit]
Description=Kube-etcd Service
After=network.target
[Service]
Type=notify
ExecStart=/usr/bin/etcd -name etcd -data-dir /var/lib/etcd \
          -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
          -advertise-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target

$ cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kube-apiserver Service
After=network.target
[Service]
Type=notify
ExecStart=/bak/golang/src/k8s.io/kubernetes/_output/bin/kube-apiserver \
            --admission-control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny \
            --apiserver-count=1 \
            --cors-allowed-origins=.* \
            --enable-garbage-collector=false \
            --etcd-servers=http://127.0.0.1:2379 \
            --insecure-bind-address=0.0.0.0 \
            --insecure-port=8080 \
            --log-dir=~/.kube/log/kube-apiserver \
            --logtostderr=false \
            --service-cluster-ip-range=10.0.0.0/16 \
            --v=5 \
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target

$ cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kube-controller-manager Service
After=network.target
[Service]
Type=simple
ExecStart=/bak/golang/src/k8s.io/kubernetes/_output/bin/kube-controller-manager \
          --enable-garbage-collector=false \
          --logtostderr=false \
          --log-dir=~/.kube/log/kube-controller-manager \
          --pod-eviction-timeout=5m0s \
          --master=http://0.0.0.0:8080 \
          --node-monitor-grace-period=40s \
          --terminated-pod-gc-threshold=12500 \
          --leader-elect=true \
          --v=4 \
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target

$ cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kube-scheduler Service
After=network.target
[Service]
Type=simple
ExecStart=/bak/golang/src/k8s.io/kubernetes/_output/bin/kube-scheduler \
            --log-dir=~/.k8s/log/kube-scheduler \
            --logtostderr=false \
            --master=http://0.0.0.0:8080 \
            --leader-elect=true \
            --v=5 \
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target

# prepare kubelet.kubeconfig and kube-proxy.kubeconfig
export KUBE_APISERVER="http://127.0.0.1:8080"
./_output/bin/kubectl config set-cluster myk8s --server=${KUBE_APISERVER} --kubeconfig=kubelet.kubeconfig
./_output/bin/kubectl config set-credentials kubelet --kubeconfig=kubelet.kubeconfig
./_output/bin/kubectl config set-context myk8s-context --cluster=myk8s --user=kubelet --kubeconfig=kubelet.kubeconfig
./_output/bin/kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

./_output/bin/kubectl config set-cluster myk8s --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
./_output/bin/kubectl config set-credentials kube-proxy --kubeconfig=kube-proxy.kubeconfig
./_output/bin/kubectl config set-context myk8s-context --cluster=myk8s --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
./_output/bin/kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
cp *.kubeconfig /home/hua/.kube/
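
It is worth eyeballing the generated files before wiring them into the systemd units:
./_output/bin/kubectl config view --kubeconfig=/home/hua/.kube/kubelet.kubeconfig
./_output/bin/kubectl config view --kubeconfig=/home/hua/.kube/kube-proxy.kubeconfig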

$ cat /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/bak/golang/src/k8s.io/kubernetes/_output/bin/kubelet \
          --address=127.0.0.1 --port=10250 --hostname-override=127.0.0.1 \
          --pod-infra-container-image=docker.io/kubernetes/pause \
          --fail-swap-on=false --cgroup-driver=cgroupfs \
          --kubeconfig=/home/hua/.kube/kubelet.kubeconfig \
          --runtime-cgroups=/systemd/system.slice \
          --kubelet-cgroups=/systemd/system.slice \
          --eviction-hard='nodefs.available<1%' \
          --logtostderr=false --log-dir=~/.kube/log/kubelet --v=4
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

$ cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-proxy Service
After=network.target
[Service]
Type=simple
ExecStart=/bak/golang/src/k8s.io/kubernetes/_output/bin/kube-proxy \
            --log-dir=~/.k8s/log/kube-proxy \
            --logtostderr=false \
            --master=http://0.0.0.0:8080 \
            --kubeconfig=/home/hua/.kube/kube-proxy.kubeconfig \
            --proxy-mode=userspace \
            --v=5
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target

Step 2: start the services:

sudo systemctl --system daemon-reload
sudo systemctl start kube-etcd.service
etcdctl -C http://localhost:4001 cluster-health
sudo systemctl start kube-apiserver.service
sudo systemctl start kube-controller-manager.service
sudo systemctl start kube-scheduler.service
sudo systemctl start kubelet.service

Step 3: verify that the installation is correct:

$ /bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl -s http://127.0.0.1:8080 get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  

/bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl config set-cluster myk8s --server=http://127.0.0.1:8080
/bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl config set-context myk8s-context --cluster=myk8s --namespace=default --user=client
/bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl config use-context myk8s-context
/bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl config set preferences.colors true
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: myk8s
contexts:
- context:
    cluster: myk8s
    namespace: default
    user: client
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences:
  colors: true
users: []

$ /bak/golang/src/k8s.io/kubernetes/_output/bin/kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}  
$ ./_output/bin/kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    11m       v1.12.0-alpha.0.1999+32dc6cc08aa034-dirty
$ ./_output/bin/kubectl get events

Step 4: to debug, say, the kubelet service, first stop it (sudo systemctl stop kubelet), then launch it under the debugger with (dlv exec $OUTDIR/bin/kubelet $kubelet_flags), as follows:

$ sudo /bak/golang/bin/dlv --headless -l 127.0.0.1:1234 exec /bak/golang/src/k8s.io/kubernetes/_output/bin/kubelet -- --fail-swap-on=False --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice  --v=4
API server listening at: 127.0.0.1:1234

$ sudo /bak/golang/bin/dlv connect 127.0.0.1:1234
Type 'help' for list of commands.
(dlv) b main.main
Breakpoint 1 set at 0x2d08348 for main.main() ./_output/local/go/src/k8s.io/kubernetes/cmd/kubelet/kubelet.go:36
(dlv) c
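
From there the usual delve commands apply, for example:
(dlv) bt                # backtrace of the current goroutine
(dlv) n                 # step over
(dlv) s                 # step into
(dlv) goroutines        # list all goroutines
(dlv) p someVariable    # print a variable (name is illustrative)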

Install the dashboard

The command (KUBE_ENABLE_CLUSTER_DASHBOARD=true ./hack/local-up-cluster.sh) installs the dashboard automatically.
Note: if it fails, check that 'true' is lowercase - the value is case-sensitive.

After a successful install, run (cluster/kubectl.sh cluster-info) to find its access URL:

https://localhost:6443//api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ 

This link is slightly broken - the double slash before api prevents the UI from loading. Change it to:

https://localhost:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/  

Alternatively, run (kubectl proxy --port=8001 --kubeconfig=/var/run/kubernetes/admin.kubeconfig --accept-hosts='^*$') and access:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/  

The '--accept-hosts' flag above is meant to allow access from machines other than the local one, but the dashboard only accepts plain HTTP from localhost and 127.0.0.1; every other address must use HTTPS. So to reach the dashboard from another machine, a NodePort service is the only option:

kubectl -n kube-system edit service kubernetes-dashboard   # change 'type: ClusterIP' to 'type: NodePort'
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.0.0.232   <none>        443:31050/TCP   1h
visit: https://192.168.99.216:31050/

At this point the dashboard still rejects access with:

"message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\": no RBAC policy matched",

This is because recent k8s versions enable RBAC by default (--authorization-mode=Node,RBAC) and assign unauthenticated users the default identity 'anonymous'.
The API server authenticates clients with certificates, so we first need to create one:

# extract client-certificate-data
grep 'client-certificate-data' /var/run/kubernetes/admin.kubeconfig | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
# extract client-key-data
grep 'client-key-data' /var/run/kubernetes/admin.kubeconfig | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
# generate p12
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Then import the p12 certificate into the browser. The token for the default login identity can be fetched like this:

cluster/kubectl.sh get secret -n kube-system | grep dashboard
cluster/kubectl.sh -n kube-system  get secret kubernetes-dashboard-token-kglhd -o jsonpath={.data.token}| base64 -d

The anonymous identity probably cannot see much, so we also create an admin user in the kube-system namespace and bind it to the cluster-admin role:

cat > /tmp/admin-user.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
EOF
cat > /tmp/admin-user-role-binding.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
EOF
cluster/kubectl.sh create -f /tmp/admin-user.yaml
cluster/kubectl.sh create -f /tmp/admin-user-role-binding.yaml
cluster/kubectl.sh -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')
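
The token printed by the describe command is what the dashboard login screen expects; to extract just the token field (resolving the secret name the same way as above):
cluster/kubectl.sh -n kube-system get secret $(cluster/kubectl.sh -n kube-system get secret | grep admin | awk '{print $1}') -o jsonpath={.data.token} | base64 -d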

Modify local-up-cluster.sh directly to start local processes instead of hyperkube

Alternatively, edit the script to drop the hyperkube wrapper, then run KUBE_ENABLE_CLUSTER_DASHBOARD=true ./hack/local-up-cluster.sh:

diff --git a/hack/local-up-cluster.sh b/hack/local-up-cluster.sh
index 3b688d3..95de0df 100755
--- a/hack/local-up-cluster.sh
+++ b/hack/local-up-cluster.sh
@@ -202,7 +202,8 @@ do
 done

 if [ "x$GO_OUT" == "x" ]; then
-    make -C "${KUBE_ROOT}" WHAT="cmd/kubectl cmd/hyperkube"
+    #make -C "${KUBE_ROOT}" WHAT="cmd/kubectl cmd/hyperkube"
+    make -C "${KUBE_ROOT}" GOGCFLAGS="-N -l" WHAT="cmd/kubelet cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/cloud-controller-manager cmd/kube-scheduler cmd/kubectl"
 else
     echo "skipped the build."
 fi
@@ -578,7 +579,7 @@ function start_apiserver {
     fi

     APISERVER_LOG=${LOG_DIR}/kube-apiserver.log
-    ${CONTROLPLANE_SUDO} "${GO_OUT}/hyperkube" apiserver ${swagger_arg} ${audit_arg} ${authorizer_arg} ${priv_arg} ${runtime_config} \
+    ${CONTROLPLANE_SUDO} "${GO_OUT}/kube-apiserver" ${swagger_arg} ${audit_arg} ${authorizer_arg} ${priv_arg} ${runtime_config} \
       ${cloud_config_arg} \
       ${advertise_address} \
       ${node_port_range} \
@@ -650,7 +651,7 @@ function start_controller_manager {
     fi

     CTLRMGR_LOG=${LOG_DIR}/kube-controller-manager.log
-    ${CONTROLPLANE_SUDO} "${GO_OUT}/hyperkube" controller-manager \
+    ${CONTROLPLANE_SUDO} "${GO_OUT}/kube-controller-manager" \
       --v=${LOG_LEVEL} \
       --vmodule="${LOG_SPEC}" \
       --service-account-private-key-file="${SERVICE_ACCOUNT_KEY}" \
@@ -685,7 +686,7 @@ function start_cloud_controller_manager {
     fi

     CLOUD_CTLRMGR_LOG=${LOG_DIR}/cloud-controller-manager.log
-    ${CONTROLPLANE_SUDO} ${EXTERNAL_CLOUD_PROVIDER_BINARY:-"${GO_OUT}/hyperkube" cloud-controller-manager} \
+    ${CONTROLPLANE_SUDO} ${EXTERNAL_CLOUD_PROVIDER_BINARY:-"${GO_OUT}/cloud-controller-manager"} \
       --v=${LOG_LEVEL} \
       --vmodule="${LOG_SPEC}" \
       ${node_cidr_args} \
@@ -791,7 +792,7 @@ function start_kubelet {
     )

     if [[ -z "${DOCKERIZE_KUBELET}" ]]; then
-      sudo -E "${GO_OUT}/hyperkube" kubelet "${all_kubelet_flags[@]}" >"${KUBELET_LOG}" 2>&1 &
+      sudo -E "${GO_OUT}/kubelet" "${all_kubelet_flags[@]}" >"${KUBELET_LOG}" 2>&1 &
       KUBELET_PID=$!
     else

@@ -889,14 +890,14 @@ EOF
       done
     fi >>/tmp/kube-proxy.yaml

-    sudo "${GO_OUT}/hyperkube" proxy \
+    sudo "${GO_OUT}/kube-proxy" \
       --v=${LOG_LEVEL} \
       --config=/tmp/kube-proxy.yaml \
       --master="https://${API_HOST}:${API_SECURE_PORT}" >"${PROXY_LOG}" 2>&1 &
     PROXY_PID=$!

     SCHEDULER_LOG=${LOG_DIR}/kube-scheduler.log
-    ${CONTROLPLANE_SUDO} "${GO_OUT}/hyperkube" scheduler \
+    ${CONTROLPLANE_SUDO} "${GO_OUT}/kube-scheduler" \
       --v=${LOG_LEVEL} \
       --kubeconfig "$CERT_DIR"/scheduler.kubeconfig \
       --feature-gates="${FEATURE_GATES}" \

How to read source code

http://dockone.io/article/895

Reference

[1] https://kubernetes.io/docs/imported/community/devel/
[2] https://github.com/kubernetes/community/tree/master/contributors/devel
[3] Bug - https://github.com/kubernetes/community/issues
[4] Submit code review - https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md
[5] Membership - https://github.com/kubernetes/community/blob/master/community-membership.md
[6] CONTRIBUTING - https://github.com/kubernetes/community/blob/master/CONTRIBUTING.md
[7] Code format - https://github.com/golang/go/wiki/CodeReviewComments
[8] Slack - https://kubernetes.slack.com/messages
[9] Mail-list - https://groups.google.com/forum/#!forum/kubernetes-dev
[10] SIG-list (Special Interest Groups) - https://github.com/kubernetes/community/blob/master/sig-list.md
[11] open-bug - https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md
[12] BP - https://github.com/kubernetes/community/tree/master/contributors/design-proposals
[13] https://github.com/kubernetes/community/blob/master/community-membership.md
[14] test - https://github.com/kubernetes/community/blob/master/contributors/devel/testing.md
[15] https://ress.infoq.com/minibooks/Kubernetes-handbook/zh/pdf/kubernetes.pdf
[16] https://kubernetes.io/docs/home/
[17] https://github.com/kelseyhightower/kubernetes-the-hard-way
[18] https://github.com/ubuntu/microk8s
