Getting started with client-go

Installation

go get k8s.io/client-[email protected]
go get k8s.io/apimachinery/pkg/apis/meta/v1

Note the version correspondence between client-go and Kubernetes (see the compatibility matrix in the client-go GitHub README).

If some packages are missing, run go mod tidy.

Note: most of the code samples below follow the older (pre-v0.18) client-go API, whose List/Get/Watch methods take no context.Context argument. With client-go v0.18 and later (including v0.20.4 above), pass a context as the first argument, e.g. List(context.TODO(), metav1.ListOptions{}).


RESTClient, DynamicClient, and ClientSet Demo

On Kubernetes, you usually need a client to access objects in the cluster. The three most commonly used clients are RESTClient, DynamicClient, and ClientSet. This section first introduces what each of these clients is and roughly how it is used.
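As a minimal sketch (assuming a kubeconfig at the default path; the program only constructs the clients and does not contact the cluster), all three clients are built from the same rest.Config:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// ClientSet: typed access to built-in resources (Pods, Services, ...).
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// RESTClient: the low-level REST client that the typed clients wrap.
	restClient := clientset.CoreV1().RESTClient()

	// DynamicClient: unstructured access, handy for custom resources.
	dynClient, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	fmt.Println("core API version:", restClient.APIVersion())
	_ = dynClient
}
```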



Basic operations

Connecting to the API Server

The first step for our Go client is to establish a connection to the API server. To do that, we use the clientcmd package from client-go, as shown below:

import (
...
    "k8s.io/client-go/tools/clientcmd"
)
func main() {
    kubeconfig := filepath.Join(
        os.Getenv("HOME"), ".kube", "config",
    )
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
...
}

client-go makes this a trivial task by providing utility functions to obtain your configuration from different contexts.

You can bootstrap the configuration from a kubeconfig file to connect to the API server; this is an ideal approach when your code runs outside the cluster: clientcmd.BuildConfigFromFlags("", configFile)

When your code runs inside the cluster, you can call the same function with two empty arguments; it will then use the cluster's own information to connect to the API server:

clientcmd.BuildConfigFromFlags("", "")

Alternatively, we can use the rest package to create a configuration bootstrapped from in-cluster information. (Every Pod in Kubernetes automatically mounts the cluster's default ServiceAccount as a volume, so that ServiceAccount's credentials are used.) For example:

import "k8s.io/client-go/rest"
...
rest.InClusterConfig()
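A common pattern (a sketch, not from the original text) is to try the in-cluster configuration first and fall back to a local kubeconfig when the program runs outside the cluster:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// loadConfig prefers the in-cluster configuration and falls back to
// the local kubeconfig when the program runs outside the cluster.
func loadConfig() (*rest.Config, error) {
	if config, err := rest.InClusterConfig(); err == nil {
		return config, nil
	}
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	return clientcmd.BuildConfigFromFlags("", kubeconfig)
}

func main() {
	config, err := loadConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", config.Host)
}
```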

Creating a clientset

We need to create a serialized client in order to fetch API objects. The Clientset type defined in the kubernetes package provides serialized clients for accessing the public API objects:

type Clientset struct {
    *authenticationv1beta1.AuthenticationV1beta1Client
    *authorizationv1.AuthorizationV1Client
...
    *corev1.CoreV1Client
}

Once we have the right connection configuration, we can use it to initialize a clientset:

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    ...
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
}

For our example, we use the v1 API objects. Next, we use the clientset's CoreV1() method to access the core API resources:

func main() {
    ...
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
    api := clientset.CoreV1()
}

Listing the cluster's PVCs

One of the most basic operations we can perform with the clientset is listing stored API objects. In our example, we fetch the list of PVCs in a namespace:

import (
...
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func main() {
    var ns, label, field string
    flag.StringVar(&ns, "namespace", "", "namespace")
    flag.StringVar(&label, "l", "", "Label selector")
    flag.StringVar(&field, "f", "", "Field selector")
...
    api := clientset.CoreV1()
    // setup list options
    listOptions := metav1.ListOptions{
        LabelSelector: label,
        FieldSelector: field,
    }
    pvcs, err := api.PersistentVolumeClaims(ns).List(listOptions)
    if err != nil {
        log.Fatal(err)
    }
    printPVCs(pvcs)
...
}

In the code above, we use ListOptions to specify label and field selectors (as well as the namespace) to narrow down the PVC list; the result comes back as a v1.PersistentVolumeClaimList. The code below shows how to iterate over and print the PVC list obtained from the API server:

func printPVCs(pvcs *v1.PersistentVolumeClaimList) {
    template := "%-32s%-8s%-8s\n"
    fmt.Printf(template, "NAME", "STATUS", "CAPACITY")
    for _, pvc := range pvcs.Items {
        quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]
        fmt.Printf(
            template,
            pvc.Name,
            string(pvc.Status.Phase),
            quant.String())
    }
}

Watching PVCs in the cluster

The Kubernetes Go client framework supports watching the cluster for lifecycle events of a given API object, including the ADDED, MODIFIED, and DELETED events triggered when an object is created, updated, or deleted. For our command-line tool, we will watch the total amount of storage claimed by PVCs in the cluster.

When the claimed PVC capacity in a namespace reaches a certain threshold (say, 200Gi), we will take some action. For simplicity we just print a notification to the screen, but a more sophisticated implementation could use the same approach to trigger an automated action.

Setting up the watch

Now let's create a watcher for the PersistentVolumeClaim resource via the Watch method. The watcher then provides access to event notifications through a Go channel returned by ResultChan:

func main() {
...
    api := clientset.CoreV1()
    listOptions := metav1.ListOptions{
        LabelSelector: label,
        FieldSelector: field,
    }
    watcher, err := api.PersistentVolumeClaims(ns).Watch(listOptions)
    if err != nil {
        log.Fatal(err)
    }
    ch := watcher.ResultChan()
...
}

Looping over events

Next we will process the resource events. Before handling events, we declare two variables of type resource.Quantity, maxClaimedQuant and totalClaimedQuant, to represent the claim threshold and the running total respectively.

import(
    "k8s.io/apimachinery/pkg/api/resource"
    ...
)
func main() {
    var maxClaims string
    flag.StringVar(&maxClaims, "max-claims", "200Gi", "Maximum total claims to watch")
    var totalClaimedQuant resource.Quantity
    maxClaimedQuant := resource.MustParse(maxClaims)
...
    ch := watcher.ResultChan()
    for event := range ch {
        pvc, ok := event.Object.(*v1.PersistentVolumeClaim)
        if !ok {
            log.Fatal("unexpected type")
        }
        ...
    }
}

In the for-range loop above, the watcher's channel is used to process incoming notifications from the server. Each event is assigned to the variable event, and event.Object is type-asserted to a PersistentVolumeClaim so that we can extract it.

Handling ADDED events

When a new PVC is created, event.Type is set to watch.Added. We then use the code below to get the capacity of the added claim (quant) and add it to the running total (totalClaimedQuant). Finally, we check whether the current total exceeds the configured maximum (maxClaimedQuant); if it does, we trigger an action.

import(
    "k8s.io/apimachinery/pkg/watch"
    ...
)
func main() {
...
    for event := range ch {
        pvc, ok := event.Object.(*v1.PersistentVolumeClaim)
        if !ok {
            log.Fatal("unexpected type")
        }
        quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]
        switch event.Type {
        case watch.Added:
            totalClaimedQuant.Add(quant)
            log.Printf("PVC %s added, claim size %s\n",
                pvc.Name, quant.String())
            if totalClaimedQuant.Cmp(maxClaimedQuant) == 1 {
                log.Printf(
                    "\nClaim overage reached: max %s at %s",
                    maxClaimedQuant.String(),
                    totalClaimedQuant.String())
                // trigger action
                log.Println("*** Taking action ***")
            }
        ...
        }
    }
}

Handling DELETED events

The code also reacts when a PVC is deleted: it runs the opposite logic, subtracting the deleted claim's capacity from the running total.

func main() {
...
    for event := range ch {
        ...
        switch event.Type {
        case watch.Deleted:
            quant := pvc.Spec.Resources.Requests[v1.ResourceStorage]
            totalClaimedQuant.Sub(quant)
            log.Printf("PVC %s removed, size %s\n",
                pvc.Name, quant.String())
            if totalClaimedQuant.Cmp(maxClaimedQuant) <= 0 {
                log.Printf("Claim usage normal: max %s at %s",
                    maxClaimedQuant.String(),
                    totalClaimedQuant.String(),
                )
                // trigger action
                log.Println("*** Taking action ***")
            }
        }
        ...
    }
}

Running the program

When the program runs against a live cluster, it first prints the list of PVCs, then starts watching the cluster for new PersistentVolumeClaim events:

$> ./pvcwatch
Using kubeconfig:  /Users/vladimir/.kube/config
--- PVCs ----
NAME                            STATUS  CAPACITY
my-redis-redis                  Bound   50Gi
my-redis2-redis                 Bound   100Gi
-----------------------------
Total capacity claimed: 150Gi
-----------------------------
--- PVC Watch (max claims 200Gi) ----
2018/02/13 21:55:03 PVC my-redis2-redis added, claim size 100Gi
2018/02/13 21:55:03
At 50.0% claim capcity (100Gi/200Gi)
2018/02/13 21:55:03 PVC my-redis-redis added, claim size 50Gi
2018/02/13 21:55:03
At 75.0% claim capcity (150Gi/200Gi)

Next, let's deploy an application to the cluster that claims 75Gi of storage (for example, deploying an influxdb instance via Helm):

helm install --name my-influx \
--set persistence.enabled=true,persistence.size=75Gi stable/influxdb

As you can see below, the tool immediately reports the new claim, along with a warning, because the running claim total now exceeds the configured threshold:

--- PVC Watch (max claims 200Gi) ----
...
2018/02/13 21:55:03
At 75.0% claim capcity (150Gi/200Gi)
2018/02/13 22:01:29 PVC my-influx-influxdb added, claim size 75Gi
2018/02/13 22:01:29
Claim overage reached: max 200Gi at 225Gi
2018/02/13 22:01:29 *** Taking action ***
2018/02/13 22:01:29
At 112.5% claim capcity (225Gi/200Gi)

Conversely, when a PVC is deleted from the cluster, the tool shows the corresponding message:

...
At 112.5% claim capcity (225Gi/200Gi)
2018/02/14 11:30:36 PVC my-redis2-redis removed, size 100Gi
2018/02/14 11:30:36 Claim usage normal: max 200Gi at 125Gi
2018/02/14 11:30:36 *** Taking action ***

Executing commands in Pods with client-go

Executing a command in a single Pod

package main

import (
	"flag"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh/terminal"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
	"k8s.io/client-go/util/homedir"
)

func main() {
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the request against the pod's "exec" subresource.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Name("myapp-d46f5678b-m98p2").
		Namespace("default").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "hello world"},
			Stdin:   true,
			Stdout:  true,
			Stderr:  true,
			TTY:     false,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	if !terminal.IsTerminal(0) || !terminal.IsTerminal(1) {
		fmt.Println("stdin/stdout should be a terminal")
	}

	// Put the local terminal into raw mode so keystrokes pass through as-is.
	oldState, err := terminal.MakeRaw(0)
	if err != nil {
		fmt.Println(err)
	}
	defer terminal.Restore(0, oldState)

	screen := struct {
		io.Reader
		io.Writer
	}{os.Stdin, os.Stdout}

	if err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  screen,
		Stdout: screen,
		Stderr: screen,
		Tty:    false,
	}); err != nil {
		fmt.Print(err)
	}
}
$ go run execpod.go 
hello world

Executing a command in multiple Pods

package main
import (
	"bufio"
	"fmt"
	"io"

	"github.com/spf13/pflag"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)


type App struct {
	Config    *restclient.Config
	Namespace string
	PodName   string
}

func NewApp(namespace string, podName string) *App {
	config := LoadKubernetesConfig()
	return &App{Config: config, Namespace: namespace, PodName: podName}
}


func LoadKubernetesConfig() *restclient.Config {
	kubeconfig := pflag.Lookup("kubefile").Value.String()
	// uses the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	return config
}

func ExecCommandInPodContainer(config *restclient.Config, namespace string, podName string, containerName string,
	command string) (string, error) {
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return "", err
	}

	reader, writer := io.Pipe()
	var cmdOutput string
	done := make(chan struct{})

	// Collect the command's output line by line from the pipe.
	go func() {
		scanner := bufio.NewScanner(reader)
		for scanner.Scan() {
			cmdOutput += fmt.Sprintln(scanner.Text())
		}
		close(done)
	}()

	cmd := []string{
		"bash",
		"-c",
		command,
	}

	req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName).
		Namespace(namespace).SubResource("exec")

	option := &v1.PodExecOptions{
		Command:   cmd,
		Container: containerName,
		Stdin:     false, // a one-shot command needs no stdin
		Stdout:    true,
		Stderr:    true,
		TTY:       false,
	}

	req.VersionedParams(
		option,
		scheme.ParameterCodec,
	)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	err = exec.Stream(remotecommand.StreamOptions{
		Stdout: writer,
		Stderr: writer,
		Tty:    false,
	})
	// Close the pipe so the scanner goroutine finishes before we read cmdOutput.
	writer.Close()
	<-done

	if err != nil {
		return "", err
	}

	return cmdOutput, nil
}

func (orch *App) GetClusterInfo() (string, error) {
	// check cluster info
	result, err := ExecCommandInPodContainer(orch.Config, orch.Namespace, orch.PodName, "nginx", "echo `date +%Y%m%d-%H:%M` hello world")
	if err != nil {
		fmt.Println("Error occurred: " + err.Error())
		return "", err
	}
	return result, nil
}

func main() {
	pflag.String("kubefile", "/root/.kube/config", "Kube file to load")
	pflag.String("namespace", "default", "App Namespace")
	pflag.Parse()
	appNamespace := pflag.Lookup("namespace").Value.String()
	config := LoadKubernetesConfig()
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		fmt.Println(err.Error())
	}
	appPodList, err := client.CoreV1().Pods(appNamespace).List(metav1.ListOptions{})
	if err != nil {
		fmt.Println(err.Error())
	}

	for _, pod := range appPodList.Items {
		app := NewApp(pod.ObjectMeta.Namespace, pod.ObjectMeta.Name)
		result, err := app.GetClusterInfo()
		if err != nil {
			fmt.Println(err.Error())
		}
		fmt.Println(result)
	}
}
$ go run test1.go
20201216-10:13 hello world

20201216-10:13 hello world

CRUD on custom resources (CRDs) with client-go

Example CRD

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1
      # Each version can be enabled/disabled by Served flag.
      served: true
      # One and only one version must be marked as the storage version.
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct

Create this CRD with kubectl, then create a few custom objects.
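For instance (a sketch; the object name and field values are illustrative, chosen to match the outputs shown later):

```yaml
# Save the CustomResourceDefinition above as crontab-crd.yaml, then:
#   kubectl apply -f crontab-crd.yaml
#
# A sample CronTab object (cron-1.yaml), applied with:
#   kubectl apply -f cron-1.yaml
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: cron-1
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image-1
```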


Listing resources

First, how to list the custom resources created earlier, similar to the effect of kubectl get crontab.stable.example.com.

In short, we operate on CRD resources through the methods of the Interface in k8s.io/client-go/dynamic. The key points are how to obtain a NamespaceableResourceInterface instance and how to convert the result into our own structs.

package main

import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func listCrontabs(client dynamic.Interface, namespace string) (*CrontabList, error) {
        list, err := client.Resource(gvr).Namespace(namespace).List(metav1.ListOptions{})
        if err != nil {
                return nil, err
        }
        data, err := list.MarshalJSON()
        if err != nil {
                return nil, err
        }
        var ctList CrontabList
        if err := json.Unmarshal(data, &ctList); err != nil {
                return nil, err
        }
        return &ctList, nil
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        list, err := listCrontabs(client, "default")
        if err != nil {
                panic(err)
        }
        for _, t := range list.Items {
                fmt.Printf("%s %s %s %s\n", t.Namespace, t.Name, t.Spec.CronSpec, t.Spec.Image)
        }
}

The code is fairly simple. One thing to note: the values of the gvr fields come from the CRD's YAML definition:

spec:
  # group name to use for REST API: /apis/<group>/<version>
  # -> value of the Group field
  group: stable.example.com
  # list of versions supported by this CustomResourceDefinition
  versions:
    - name: v1   # -> candidate value for the Version field
  # ...
names:
  # plural name to be used in the URL: /apis/<group>/<version>/<plural>
  # -> value of the Resource field
  plural: crontabs

Note: this CRD defines a namespaced resource. For a cluster-scoped resource, use the variant that does not specify a namespace:

client.Resource(gvr).List(metav1.ListOptions{})

Getting a resource

Getting a resource also goes through dynamic.Interface; the key is converting the result into the struct defined above. Key code (get.go):

package main

import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func getCrontab(client dynamic.Interface, namespace string, name string) (*Crontab, error) {
        utd, err := client.Resource(gvr).Namespace(namespace).Get(name, metav1.GetOptions{})
        if err != nil {
                return nil, err
        }
        data, err := utd.MarshalJSON()
        if err != nil {
                return nil, err
        }
        var ct Crontab
        if err := json.Unmarshal(data, &ct); err != nil {
                return nil, err
        }
        return &ct, nil
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        ct, err := getCrontab(client, "default", "cron-1")
        if err != nil {
                panic(err)
        }
        fmt.Printf("%s %s %s %s\n", ct.Namespace, ct.Name, ct.Spec.CronSpec, ct.Spec.Image)
}

Output:

$ go run main.go
default cron-1 * * * * */5 my-awesome-cron-image-1

Creating a resource

Creating a resource also goes through dynamic.Interface; here the main point is how to create a resource from YAML text.

Key code (create.go):

package main

import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/serializer/yaml"
)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func createCrontabWithYaml(client dynamic.Interface, namespace string, yamlData string) (*Crontab, error) {
        decoder := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
        obj := &unstructured.Unstructured{}
        if _, _, err := decoder.Decode([]byte(yamlData), nil, obj); err != nil {
                return nil, err
        }

        utd, err := client.Resource(gvr).Namespace(namespace).Create(obj, metav1.CreateOptions{})
        if err != nil {
                return nil, err
        }
        data, err := utd.MarshalJSON()
        if err != nil {
                return nil, err
        }
        var ct Crontab
        if err := json.Unmarshal(data, &ct); err != nil {
                return nil, err
        }
        return &ct, nil
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        createData := `
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: cron-4
spec:
  cronSpec: "* * * * */15"
  image: my-awesome-cron-image-4
`
        ct, err := createCrontabWithYaml(client, "default", createData)
        if err != nil {
                panic(err)
        }
        fmt.Printf("%s %s %s %s\n", ct.Namespace, ct.Name, ct.Spec.CronSpec, ct.Spec.Image)
}

Output:

$ go run main.go
default cron-4 * * * * */15 my-awesome-cron-image-4

$ kubectl get crontab.stable.example.com cron-4
NAME     AGE
cron-4   5m33s

Updating a resource

Updating a resource also goes through dynamic.Interface; here the main point is how to update a resource from YAML text.

Key code:

package main

import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/serializer/yaml"

)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func updateCrontabWithYaml(client dynamic.Interface, namespace string, yamlData string) (*Crontab, error) {
        decoder := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
        obj := &unstructured.Unstructured{}
        if _, _, err := decoder.Decode([]byte(yamlData), nil, obj); err != nil {
                return nil, err
        }

        utd, err := client.Resource(gvr).Namespace(namespace).Get(obj.GetName(), metav1.GetOptions{})
        if err != nil {
                return nil, err
        }
        // Updates must carry the current resourceVersion for optimistic concurrency.
        obj.SetResourceVersion(utd.GetResourceVersion())
        utd, err = client.Resource(gvr).Namespace(namespace).Update(obj, metav1.UpdateOptions{})
        if err != nil {
                return nil, err
        }

        data, err := utd.MarshalJSON()
        if err != nil {
                return nil, err
        }
        var ct Crontab
        if err := json.Unmarshal(data, &ct); err != nil {
                return nil, err
        }
        return &ct, nil
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        updateData := `
apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: cron-2
spec:
  cronSpec: "* * * * */15"
  image: my-awesome-cron-image-2-update
`
        ct, err := updateCrontabWithYaml(client, "default", updateData)
        if err != nil {
                panic(err)
        }
        fmt.Printf("%s %s %s %s\n", ct.Namespace, ct.Name, ct.Spec.CronSpec, ct.Spec.Image)
}

Output:

$ kubectl get crontab.stable.example.com cron-2 -o jsonpath='{.spec}'
map[cronSpec:* * * * */8 image:my-awesome-cron-image-2]

$ go run main.go
default cron-2 * * * * */15 my-awesome-cron-image-2-update

$ kubectl get crontab.stable.example.com cron-2 -o jsonpath='{.spec}'
map[cronSpec:* * * * */15 image:my-awesome-cron-image-2-update]

Patching a resource

Patching a resource works much like patching a Pod or other built-in objects. Key code:

package main

import (
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/apimachinery/pkg/types"
)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func patchCrontab(client dynamic.Interface, namespace, name string, pt types.PatchType, data []byte) error {
        _, err := client.Resource(gvr).Namespace(namespace).Patch(name, pt, data, metav1.PatchOptions{})
        return err
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        patchData := []byte(`{"spec": {"image": "my-awesome-cron-image-1-patch"}}`)
        if err := patchCrontab(client, "default", "cron-1", types.MergePatchType, patchData); err != nil {
                panic(err)
        }
}

Output:

$ kubectl get crontab.stable.example.com cron-1 -o jsonpath='{.spec.image}'
my-awesome-cron-image-1

$ go run main.go

$ kubectl get crontab.stable.example.com cron-1 -o jsonpath='{.spec.image}'
my-awesome-cron-image-1-patch

Deleting a resource

Deleting a resource is much simpler. Key code:

package main

import (
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
)

var gvr = schema.GroupVersionResource{
        Group:    "stable.example.com",
        Version:  "v1",
        Resource: "crontabs",
}

type CrontabSpec struct {
        CronSpec string `json:"cronSpec"`
        Image    string `json:"image"`
}

type Crontab struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec CrontabSpec `json:"spec,omitempty"`
}

type CrontabList struct {
        metav1.TypeMeta `json:",inline"`
        metav1.ListMeta `json:"metadata,omitempty"`

        Items []Crontab `json:"items"`
}

func deleteCrontab(client dynamic.Interface, namespace string, name string) error {
        return client.Resource(gvr).Namespace(namespace).Delete(name, nil)
}

func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
                panic(err)
        }

        client, err := dynamic.NewForConfig(config)
        if err != nil {
                panic(err)
        }
        if err := deleteCrontab(client, "default", "cron-3"); err != nil {
                panic(err)
        }
}

Result:

$ go run main.go
$ kubectl get crontab.stable.example.com
NAME     AGE
cron-1   4h5m
cron-2   4h5m
cron-4   17m

Printing resources with client-go

Get and print the numbers of Pods, PVs, PVCs, and namespaces

client.go

package main
import (
    "flag"
//    "context"
    "fmt"
    "os"
    "path/filepath"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)
func main() {
    var kubeconfig *string
    if home := homeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()
    // uses the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }
    // creates the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    pvs, err := clientset.CoreV1().PersistentVolumes().List(metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    pvcs, err := clientset.CoreV1().PersistentVolumeClaims("").List(metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    namespaces, err := clientset.CoreV1().Namespaces().List(metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
    fmt.Printf("There are %d pvs in the cluster\n", len(pvs.Items))
    fmt.Printf("There are %d pvcs in the cluster\n", len(pvcs.Items))
    fmt.Printf("There are %d namespaces in the cluster\n", len(namespaces.Items))

    fmt.Println("---------pods----------")
    for _, pod := range pods.Items {
        fmt.Printf("Name: %s, Status: %s, CreateTime: %s\n", pod.ObjectMeta.Name, pod.Status.Phase, pod.ObjectMeta.CreationTimestamp)
    }
    fmt.Println("---------pvs----------")
    for _, pv := range pvs.Items {
        fmt.Printf("Name: %s, Status: %s, CreateTime: %s\n", pv.ObjectMeta.Name, pv.Status.Phase, pv.ObjectMeta.CreationTimestamp)
    }
    fmt.Println("---------pvcs----------")
    for _, pvc := range pvcs.Items {
        fmt.Printf("Name: %s, Status: %s, CreateTime: %s\n", pvc.ObjectMeta.Name, pvc.Status.Phase, pvc.ObjectMeta.CreationTimestamp)
    }
    fmt.Println("---------namespaces----------")
    for _, namespace := range namespaces.Items {
        fmt.Printf("Name: %s, Status: %s, CreateTime: %s\n", namespace.ObjectMeta.Name, namespace.Status.Phase, namespace.ObjectMeta.CreationTimestamp)
    }
}
func homeDir() string {
    if h := os.Getenv("HOME"); h != "" {
        return h
    }
    return os.Getenv("USERPROFILE") // windows
}

Run it:

$ go run client.go
There are 4 pods in the cluster
There are 2 pvs in the cluster
There are 2 pvcs in the cluster
There are 6 namespaces in the cluster
---------pods----------
Name: backend, Status: Running, CreateTime: 2020-10-23 02:24:45 -0700 PDT
Name: database, Status: Running, CreateTime: 2020-10-23 02:24:45 -0700 PDT
Name: frontend, Status: Running, CreateTime: 2020-10-23 02:24:45 -0700 PDT
Name: backend, Status: Running, CreateTime: 2020-10-24 02:34:47 -0700 PDT
---------pvs----------
Name: pv, Status: Bound, CreateTime: 2020-09-28 19:19:46 -0700 PDT
Name: task-pv-volume, Status: Bound, CreateTime: 2020-11-27 04:34:38 -0800 PST
---------pvcs----------
Name: pvc, Status: Bound, CreateTime: 2020-09-28 19:23:51 -0700 PDT
Name: task-pv-claim, Status: Bound, CreateTime: 2020-11-28 06:27:54 -0800 PST
---------namespaces----------
Name: app-stack, Status: Active, CreateTime: 2020-09-28 19:00:18 -0700 PDT
Name: default, Status: Active, CreateTime: 2020-09-25 23:11:56 -0700 PDT
Name: kube-node-lease, Status: Active, CreateTime: 2020-09-25 23:11:55 -0700 PDT
Name: kube-public, Status: Active, CreateTime: 2020-09-25 23:11:55 -0700 PDT
Name: kube-system, Status: Active, CreateTime: 2020-09-25 23:11:54 -0700 PDT
Name: rq-demo, Status: Active, CreateTime: 2020-10-22 20:01:59 -0700 PDT
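Each List call returns a typed list object whose Items field is a plain Go slice, so everything after the API calls above is ordinary slice iteration. As a cluster-free sketch of that processing step, here is the same count-and-group pattern over plain structs (podInfo and countByPhase are illustrative helpers, not client-go types; the Phase values mirror pod.Status.Phase):

```go
package main

import "fmt"

// podInfo stands in for the fields this example reads from v1.Pod.
type podInfo struct {
	Name  string
	Phase string // mirrors pod.Status.Phase
}

// countByPhase groups pods by phase, the way the listing loops above
// iterate over pods.Items.
func countByPhase(pods []podInfo) map[string]int {
	counts := map[string]int{}
	for _, p := range pods {
		counts[p.Phase]++
	}
	return counts
}

func main() {
	pods := []podInfo{
		{"backend", "Running"},
		{"database", "Running"},
		{"job-1", "Succeeded"},
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods))
	for phase, n := range countByPhase(pods) {
		fmt.Printf("%s: %d\n", phase, n)
	}
}
```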

Printing pod details

package main

import (
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Out-of-cluster kubeconfig, default location $HOME/.kube/config
	var kubeconfig *string
	if home := homeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// Use the current context in kubeconfig; BuildConfigFromFlags accepts a URL or a path
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err.Error())
	}

	// Create a new clientset from the config
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	for {
		// clientset's CoreV1Interface exposes the PodsGetter method Pods(namespace string), which returns a PodInterface.
		// PodInterface has the methods that operate on Pod resources: Create, Update, Get, List, and so on.
		// Note: passing an empty namespace to Pods() lists pods across the whole cluster.
		pods, err := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
		if err != nil {
			panic(err.Error())
		}
		fmt.Printf("There are %d pods in the k8s cluster\n", len(pods.Items))

		// List the pods in a specific namespace
		namespace := "default"
		pods, err = clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("\nThere are %d pods in namespaces %s\n", len(pods.Items), namespace)
		for _, pod := range pods.Items {
			fmt.Printf("Name: %s, Status: %s, CreateTime: %s\n", pod.ObjectMeta.Name, pod.Status.Phase, pod.ObjectMeta.CreationTimestamp)
		}

		// Get the details of one pod by namespace and name, handling each error case separately
		podName := "backend"
		pod, err := clientset.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			fmt.Printf("Pod %s in namespace %s not found\n", podName, namespace)
		} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
			fmt.Printf("Error getting pod %s in namespace %s: %v\n",
				podName, namespace, statusError.ErrStatus.Message)
		} else if err != nil {
			panic(err.Error())
		} else {
			fmt.Printf("\nFound pod %s in namespace %s\n", podName, namespace)
			maps := map[string]interface{}{
				"Name":        pod.ObjectMeta.Name,
				"Namespaces":  pod.ObjectMeta.Namespace,
				"NodeName":    pod.Spec.NodeName,
				"Annotations": pod.ObjectMeta.Annotations,
				"Labels":      pod.ObjectMeta.Labels,
				"SelfLink":    pod.ObjectMeta.SelfLink,
				"Uid":         pod.ObjectMeta.UID,
				"Status":      pod.Status.Phase,
				"IP":          pod.Status.PodIP,
				"Image":       pod.Spec.Containers[0].Image,
			}
			prettyPrint(maps)
		}

		time.Sleep(10 * time.Second)
	}
}

// prettyPrint prints a map in aligned columns, padding every key to the
// length of the longest key.
func prettyPrint(maps map[string]interface{}) {
	lens := 0
	for k := range maps {
		if lens < len(k) {
			lens = len(k)
		}
	}
	for key, values := range maps {
		spaces := lens - len(key)
		v := ""
		for i := 0; i < spaces; i++ {
			v += " "
		}
		fmt.Printf("%s: %s%v\n", key, v, values)
	}
}

func homeDir() string {
	if h := os.Getenv("HOME"); h != "" {
		return h
	}
	return os.Getenv("USERPROFILE") // windows
}

Output:

root@master:~/k8s_dev/client_pod_details# go run client.go 
There are 21 pods in the k8s cluster

There are 6 pods in namespaces default
Name: backend, Status: Running, CreateTime: 2020-10-24 02:34:47 -0700 PDT
Name: delpersistentvolumeclaims-1606753860-xfkgx, Status: Succeeded, CreateTime: 2020-11-30 08:31:02 -0800 PST
Name: delpersistentvolumeclaims-1606753920-hftgz, Status: Succeeded, CreateTime: 2020-11-30 08:32:03 -0800 PST
Name: delpersistentvolumeclaims-1606753980-j8k76, Status: Succeeded, CreateTime: 2020-11-30 08:33:03 -0800 PST
Name: patch-demo-76b69ff5cd-htjmd, Status: Running, CreateTime: 2020-10-24 02:59:29 -0700 PDT
Name: patch-demo-76b69ff5cd-hwzd5, Status: Running, CreateTime: 2020-11-30 04:42:15 -0800 PST

Found pod backend in namespace default
Name:        backend
NodeName:    node2
IP:          192.168.104.34
Image:       nginx
Uid:         1c206880-592b-4ad1-89bb-26bd51e8ddf5
Status:      Running
Namespaces:  default
Annotations: map[cni.projectcalico.org/podIP:192.168.104.34/32]
Labels:      map[]
SelfLink:    /api/v1/namespaces/default/pods/backend
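The column alignment in the output above comes from prettyPrint, which pads each key to the longest key's length with a hand-written space loop. fmt's `%-*s` width flag produces the same alignment in a single call; a stdlib-only sketch (alignKeys is an illustrative helper, not part of the program above):

```go
package main

import "fmt"

// alignKeys formats key/value pairs in the order given, padding every key
// to the width of the longest one -- the same effect prettyPrint achieves
// with a manual space loop.
func alignKeys(keys []string, values map[string]string) []string {
	width := 0
	for _, k := range keys {
		if len(k) > width {
			width = len(k)
		}
	}
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		// %-*s left-justifies the key (plus colon) in a field of the given width.
		out = append(out, fmt.Sprintf("%-*s %s", width+1, k+":", values[k]))
	}
	return out
}

func main() {
	keys := []string{"Name", "Namespaces", "IP"}
	values := map[string]string{"Name": "backend", "Namespaces": "default", "IP": "192.168.104.34"}
	for _, line := range alignKeys(keys, values) {
		fmt.Println(line)
	}
}
```

Unlike ranging over the map directly, passing an explicit key slice also keeps the output order stable, which Go's randomized map iteration does not.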

Managing PVCs with client-go

Collect and delete PVCs

client.go

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/spf13/pflag"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog"
)

// GetPersistentVolumeClaims fetches a single PersistentVolumeClaim object from Kubernetes.
func GetPersistentVolumeClaims(name string) *v1.PersistentVolumeClaim {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	pvc, err := kubeClient.CoreV1().PersistentVolumeClaims("").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		klog.Errorf("Error getting PersistentVolumeClaims %v", err)
	}
	return pvc
}

// ListPersistentVolumeClaims lists all PersistentVolumeClaim objects in the cluster.
func ListPersistentVolumeClaims() *v1.PersistentVolumeClaimList {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	pvcs, err := kubeClient.CoreV1().PersistentVolumeClaims("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		klog.Errorf("Error getting PersistentVolumeClaims %v", err)
	}
	return pvcs
}

// DelPersistentVolumeClaims deletes a PersistentVolumeClaim object from Kubernetes
// when it matches the delete condition.
func DelPersistentVolumeClaims(pvc v1.PersistentVolumeClaim) {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	pvcname := pvc.GetName()
	namespace := pvc.GetNamespace()
	pvcstatusPhase := string(pvc.Status.Phase)
	pvcCreationTime := pvc.GetCreationTimestamp()
	age := int(time.Since(pvcCreationTime.Time).Seconds())

	// Delete condition for this demo: the PVC status is "Pending" and it is older
	// than 20 seconds. (A production rule might instead be status "Released" and
	// age greater than 72 hours.)
	if pvcstatusPhase == "Pending" {
		if age > 20 {
			err := kubeClient.CoreV1().PersistentVolumeClaims(namespace).Delete(context.TODO(), pvcname, metav1.DeleteOptions{})
			if err != nil {
				klog.Errorf("Delete pvc error: %v", err)
			}
		}
	}
}

func loadKubernetesClientsForDelPersistentVolumeClaims() (*restclient.Config, *clientset.Clientset) {
	klog.Infof("starting getting PersistentVolumeClaims")
	kubeconfig := pflag.Lookup("kubefile").Value.String()
	// uses the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	kubeClient, errClient := clientset.NewForConfig(config)
	if errClient != nil {
		klog.Errorf("Error received creating client %v", errClient)
	}
	return config, kubeClient
}

func main() {
	pflag.String("kubefile", "/Users/jamesjiang/.kube/config", "Kube file to load")
	pflag.String("timetoexec", "800", "Seconds to execute in a single period")
	pflag.String("option", "backupBinaryLog", "Function options")
	pflag.Parse()

	option := pflag.Lookup("option").Value.String()
	if option == "DelPersistentVolumeClaims" {
		fmt.Println("【DelpersistentVolumeClaims】")
		pvcs := ListPersistentVolumeClaims()
		for _, pvc := range pvcs.Items {
			DelPersistentVolumeClaims(pvc)
		}
	}
}

Test

$ kubectl get pvc
NAME            STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cron-pv-claim   Pending                                              manual         6m30s
pvc             Bound     pv               512M       RWX            shared         62d
task-pv-claim   Bound     task-pv-volume   10Gi       RWO            manual         40h
$ go build .
$ ./cronserver --kubefile /root/.kube/config --option DelPersistentVolumeClaims
$ kubectl get pvc
NAME            STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc             Bound     pv               512M       RWX            shared         62d
task-pv-claim   Bound     task-pv-volume   10Gi       RWO            manual         40h

Deleting PVCs filtered by label

(the same approach also works for pods and PVs)
package main

import (
	"context"
	"fmt"

	"github.com/spf13/pflag"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	clientset "k8s.io/client-go/kubernetes"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog"
)

// GetPersistentVolumeClaims fetches a single PersistentVolumeClaim object from Kubernetes.
func GetPersistentVolumeClaims(name string) *v1.PersistentVolumeClaim {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	pvc, err := kubeClient.CoreV1().PersistentVolumeClaims("").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		klog.Errorf("Error getting PersistentVolumeClaims %v", err)
	}
	return pvc
}

// ListPersistentVolumeClaims lists all PersistentVolumeClaim objects in the cluster.
func ListPersistentVolumeClaims() *v1.PersistentVolumeClaimList {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	pvcs, err := kubeClient.CoreV1().PersistentVolumeClaims("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		klog.Errorf("Error getting PersistentVolumeClaims %v", err)
	}
	return pvcs
}

// ListNamespaces lists all Namespace objects in the cluster.
func ListNamespaces() *v1.NamespaceList {
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	namespaces, err := kubeClient.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		klog.Errorf("Error getting Namespaces %v", err)
	}
	return namespaces
}

// DelLabelPersistentVolumeClaims deletes every PersistentVolumeClaim in the
// given namespace whose labels match the selector below.
func DelLabelPersistentVolumeClaims(namespace string) {
	var (
		PVC_WAIT_GC_VALUE = "PVC_WAIT_GC_VALUE"
		PVC_DELETE_TIME   = "295200s"
	)
	_, kubeClient := loadKubernetesClientsForDelPersistentVolumeClaims()
	labelPvc := labels.SelectorFromSet(labels.Set(map[string]string{"PVC_WAIT_KEY": PVC_WAIT_GC_VALUE, "PVC_DELETE_TIME": PVC_DELETE_TIME}))
	listPvcOptions := metav1.ListOptions{
		LabelSelector: labelPvc.String(),
	}
	PersistentVolumeClaims, err := kubeClient.CoreV1().PersistentVolumeClaims(namespace).List(context.TODO(), listPvcOptions)
	if err != nil {
		klog.Errorf("Error listing pvcs by label: %v", err)
		return
	}
	for _, pvc := range PersistentVolumeClaims.Items {
		fmt.Printf("Start to delete pvc %s in %s namespace! \n", pvc.ObjectMeta.Name, namespace)
	}
	err = kubeClient.CoreV1().PersistentVolumeClaims(namespace).DeleteCollection(context.TODO(), metav1.DeleteOptions{}, listPvcOptions)
	if err != nil {
		klog.Errorf("Delete pvc by label error: %v", err)
	}
}

func loadKubernetesClientsForDelPersistentVolumeClaims() (*restclient.Config, *clientset.Clientset) {
	klog.Infof("starting getting PersistentVolumeClaims")
	kubeconfig := pflag.Lookup("kubefile").Value.String()
	// uses the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err.Error())
	}
	kubeClient, errClient := clientset.NewForConfig(config)
	if errClient != nil {
		klog.Errorf("Error received creating client %v", errClient)
	}
	return config, kubeClient
}

func main() {
	pflag.String("kubefile", "/Users/jamesjiang/.kube/config", "Kube file to load")
	pflag.String("timetoexec", "800", "Seconds to execute in a single period")
	pflag.String("option", "backupBinaryLog", "Function options")
	pflag.Parse()

	option := pflag.Lookup("option").Value.String()
	if option == "DelLabelPersistentVolumeClaims" {
		fmt.Println("【DelpersistentVolumeClaims】")
		for _, namespace := range ListNamespaces().Items {
			DelLabelPersistentVolumeClaims(namespace.ObjectMeta.Name)
		}
	}
}
$ go build .

Test

Create a PVC with the labels PVC_WAIT_KEY: PVC_WAIT_GC_VALUE and PVC_DELETE_TIME: 295200s

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pv-claim
  labels:
    PVC_WAIT_KEY: PVC_WAIT_GC_VALUE
    PVC_DELETE_TIME: 295200s
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
$ ./cronserver --option DelLabelPersistentVolumeClaims --kubefile /root/.kube/config
【DelpersistentVolumeClaims】
I1130 22:59:55.328428   60651 client.go:76] starting getting PersistentVolumeClaims
I1130 22:59:55.403363   60651 client.go:76] starting getting PersistentVolumeClaims
Start to delete pvc backup-pv-claim in default namespace!  // the delete log for the matched pvc
I1130 22:59:55.512610   60651 client.go:76] starting getting PersistentVolumeClaims
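labels.SelectorFromSet compiles the label map into a selector string that travels to the API server as a query parameter, in sorted key=value form joined by commas. A stdlib-only sketch of that serialization (selectorString is an illustrative stand-in; the real selector machinery also validates and supports set-based operators):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// selectorString serializes a label map the way an equality-based selector is
// sent to the API server: sorted key=value pairs joined by commas.
func selectorString(set map[string]string) string {
	keys := make([]string, 0, len(set))
	for k := range set {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order, independent of map iteration
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, k+"="+set[k])
	}
	return strings.Join(pairs, ",")
}

func main() {
	sel := selectorString(map[string]string{
		"PVC_WAIT_KEY":    "PVC_WAIT_GC_VALUE",
		"PVC_DELETE_TIME": "295200s",
	})
	fmt.Println(sel) // PVC_DELETE_TIME=295200s,PVC_WAIT_KEY=PVC_WAIT_GC_VALUE
}
```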

Managing namespaces with client-go

Create, get, and delete a namespace

package main

import (
	"flag"
	"fmt"
	"path/filepath"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Out-of-cluster kubeconfig, default location $HOME/.kube/config
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// Use the current context in kubeconfig; BuildConfigFromFlags accepts a URL or a path
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}

	// Create a new clientset from the config
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// clientset's CoreV1Interface exposes the NamespacesGetter method Namespaces(), which returns a NamespaceInterface.
	// NamespaceInterface has the methods that operate on Namespace resources: Create, Update, Get, List, and so on.
	name := "client-go-test"
	namespacesClient := clientset.CoreV1().Namespaces()
	namespace := &apiv1.Namespace{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
		},
		Status: apiv1.NamespaceStatus{
			Phase: apiv1.NamespaceActive,
		},
	}

	// Create a new namespace
	fmt.Println("Creating Namespaces...")
	result, err := namespacesClient.Create(namespace)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Created Namespaces %s on %s\n", result.ObjectMeta.Name, result.ObjectMeta.CreationTimestamp)

	// Get the namespace by name
	fmt.Println("Getting Namespaces...")
	result, err = namespacesClient.Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Name: %s, Status: %s, selfLink: %s, uid: %s\n",
		result.ObjectMeta.Name, result.Status.Phase, result.ObjectMeta.SelfLink, result.ObjectMeta.UID)

	// Delete the namespace by name
	fmt.Println("Deleting Namespaces...")
	deletePolicy := metav1.DeletePropagationForeground
	if err := namespacesClient.Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &deletePolicy,
	}); err != nil {
		panic(err)
	}
	fmt.Printf("Deleted Namespaces %s\n", name)
}

Run it:

root@master:~/k8s_dev/namespace# go run client.go
Creating Namespaces...
Created Namespaces client-go-test on 2020-12-01 02:18:18 -0800 PST
Getting Namespaces...
Name: client-go-test, Status: Active, selfLink: /api/v1/namespaces/client-go-test, uid: d6b24e5a-1d17-40b9-848e-5599f6fddcd6
Deleting Namespaces...
Deleted Namespaces client-go-test

Common APIs

deployment

//declare a deployment object
var deployment *v1beta1.Deployment
//build the deployment object
//create a deployment
deployment, err := clientset.AppsV1beta1().Deployments(<namespace>).Create(<deployment>)
//update a deployment
deployment, err := clientset.AppsV1beta1().Deployments(<namespace>).Update(<deployment>)
//delete a deployment
err := clientset.AppsV1beta1().Deployments(<namespace>).Delete(<deployment.Name>, &meta_v1.DeleteOptions{})
//get a deployment
deployment, err := clientset.AppsV1beta1().Deployments(<namespace>).Get(<deployment.Name>, meta_v1.GetOptions{})
//list deployments
deploymentList, err := clientset.AppsV1beta1().Deployments(<namespace>).List(meta_v1.ListOptions{})
//watch deployments
watchInterface, err := clientset.AppsV1beta1().Deployments(<namespace>).Watch(meta_v1.ListOptions{})

service

//declare a service object
var service *v1.Service
//build the service object
//create a service
service, err := clientset.CoreV1().Services(<namespace>).Create(<service>)
//update a service
service, err := clientset.CoreV1().Services(<namespace>).Update(<service>)
//delete a service
err := clientset.CoreV1().Services(<namespace>).Delete(<service.Name>, &meta_v1.DeleteOptions{})
//get a service
service, err := clientset.CoreV1().Services(<namespace>).Get(<service.Name>, meta_v1.GetOptions{})
//list services
serviceList, err := clientset.CoreV1().Services(<namespace>).List(meta_v1.ListOptions{})
//watch services
watchInterface, err := clientset.CoreV1().Services(<namespace>).Watch(meta_v1.ListOptions{})

pod

//declare a pod object
var pod *v1.Pod
//create a pod
pod, err := clientset.CoreV1().Pods(<namespace>).Create(<pod>)
//update a pod
pod, err := clientset.CoreV1().Pods(<namespace>).Update(<pod>)
//delete a pod
err := clientset.CoreV1().Pods(<namespace>).Delete(<pod.Name>, &meta_v1.DeleteOptions{})
//get a pod
pod, err := clientset.CoreV1().Pods(<namespace>).Get(<pod.Name>, meta_v1.GetOptions{})
//list pods
podList, err := clientset.CoreV1().Pods(<namespace>).List(meta_v1.ListOptions{})
//watch pods
watchInterface, err := clientset.CoreV1().Pods(<namespace>).Watch(meta_v1.ListOptions{})


Reposted from blog.csdn.net/qq_43762191/article/details/124925928