Qi Yan is waiting for you in the **Cloud Native Treasure Box** official account to discuss application migration, GitOps, secondary development, solutions, CNCF ecology, and life with you.
The kubectl-ai project is a kubectl plugin for generating and applying Kubernetes manifests using OpenAI GPT.
Install kubectl-ai
Brew way
Add the tap and install:
brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai
Krew way
Add the kubectl-ai krew index and install:
kubectl krew index add kubectl-ai https://github.com/sozercan/kubectl-ai
kubectl krew install kubectl-ai/kubectl-ai
The execution process is as follows:
# Add the kubectl-ai krew index
$ kubectl krew index add kubectl-ai https://github.com/sozercan/kubectl-ai
WARNING: You have added a new index from "https://github.com/sozercan/kubectl-ai"
The plugins in this index are not audited for security by the Krew maintainers.
Install them at your own risk.
# Install kubectl-ai via krew
$ kubectl krew install kubectl-ai/kubectl-ai
Updated the local copy of plugin index.
Updated the local copy of plugin index "kubectl-ai".
Installing plugin: kubectl-ai
Installed plugin: kubectl-ai
\
 | Use this plugin:
 | 	kubectl kubectl-ai
 | Caveats:
 | \
 |  | This plugin requires an OpenAI key.
 | /
/
Binary mode
- Download the binaries from GitHub releases.
- If you want to use it as a kubectl plugin, copy the kubectl-ai binary to a folder in your PATH. If you don't need to use it as a kubectl plugin, you can also use the kubectl-ai binary independently.
Configure kubectl-ai
Prerequisites
kubectl-ai requires an OpenAI API key (or an API key and endpoint for the Azure OpenAI Service) and a valid Kubernetes configuration.
For OpenAI and Azure OpenAI, you can use the following environment variables:
export OPENAI_API_KEY=<your OpenAI key>
export OPENAI_DEPLOYMENT_NAME=<your OpenAI deployment/model name. defaults to "gpt-3.5-turbo">
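The deployment-name fallback described above can be sketched in shell. This is an illustration of the documented default, not kubectl-ai's actual source code:

```shell
# Resolve the model name the way the docs describe: use
# OPENAI_DEPLOYMENT_NAME if set, otherwise fall back to gpt-3.5-turbo.
unset OPENAI_DEPLOYMENT_NAME          # simulate "not configured"
MODEL="${OPENAI_DEPLOYMENT_NAME:-gpt-3.5-turbo}"
echo "model: $MODEL"
```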
The following OpenAI models are supported:
- code-davinci-002
- text-davinci-003
- gpt-3.5-turbo-0301 (Azure deployment name gpt-35-turbo-0301)
- gpt-3.5-turbo
- gpt-35-turbo-0301
- gpt-4-0314
- gpt-4-32k-0314
For the Azure OpenAI service, you can use the following environment variables:
export AZURE_OPENAI_ENDPOINT=<your Azure OpenAI endpoint, like "https://my-aoi-endpoint.openai.azure.com">
If the AZURE_OPENAI_ENDPOINT variable is set, the Azure OpenAI Service will be used; otherwise, the OpenAI API is used.
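The selection rule can be sketched in shell as follows. This is an illustration of the documented behavior, not kubectl-ai's actual source:

```shell
# If AZURE_OPENAI_ENDPOINT is set, Azure OpenAI is used; otherwise
# the plugin falls back to the public OpenAI API.
unset AZURE_OPENAI_ENDPOINT           # simulate "not configured"
if [ -n "${AZURE_OPENAI_ENDPOINT:-}" ]; then
  BACKEND="azure-openai"
else
  BACKEND="openai"
fi
echo "backend: $BACKEND"
```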
Flags and environment variables
- You can set the --require-confirmation flag or the REQUIRE_CONFIRMATION environment variable to prompt the user for confirmation before applying the manifest. Default is true.
- You can set the --temperature flag or the TEMPERATURE environment variable to a value between 0 and 1. Higher temperatures produce more creative results; lower temperatures produce more deterministic results. Default is 0.
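How these two settings resolve can be sketched as follows. This is a hypothetical illustration of the flag/env-var fallback described above, not the plugin's actual code:

```shell
# Each setting falls back to its environment variable, then to a default:
# require-confirmation defaults to true, temperature defaults to 0.
unset REQUIRE_CONFIRMATION TEMPERATURE   # simulate no configuration
REQUIRE_CONFIRMATION="${REQUIRE_CONFIRMATION:-true}"
TEMPERATURE="${TEMPERATURE:-0}"
echo "require-confirmation=$REQUIRE_CONFIRMATION temperature=$TEMPERATURE"
```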
Using kubectl-ai
Create a manifest with specified values
Deployment is a Kubernetes resource type used to manage the number of pod replicas and their upgrades. A Deployment controls the replica count by creating a ReplicaSet, and provides rolling updates to achieve zero-downtime upgrades.
First, we issue the create an nginx deployment with 3 replicas command:
$ kubectl kubectl-ai "create an nginx deployment with 3 replicas ,and create an servie"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
At this point we see the final prompt Would you like to apply this?, which asks whether to apply this manifest. There are three options:
- Reprompt: Re-prompt to modify the manifest
- Apply: Apply the manifest directly
- Don't Apply: Do not apply the manifest
With the up and down arrow keys we can switch between the options, and confirm a choice with the Enter key.
Re-prompt to modify the manifest
...
$ Reprompt: update to 5 replicas and port 6080 and Service type is NodePort
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
ports:
- port: 6080
targetPort: 80
selector:
app: nginx
sessionAffinity: None
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
The prompts for successful execution are as follows:
✔ Apply
If the execution fails, a prompt like the following may appear. The author had previously applied an nginx-deployment resource with the same name, which conflicts with the manifest generated by kubectl-ai, hence the message below.
✔ Apply
Error: Apply failed with 2 conflicts: conflicts with "kubectl-client-side-apply" using apps/v1:
- .spec.replicas
- .spec.template.spec.containers[name="nginx"].image
Verify resource status
kubectl get all -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-deployment-5d59d67564-9hxsx 1/1 Running 0 6m14s 10.244.39.38 master-1 <none> <none>
pod/nginx-deployment-5d59d67564-k7jr2 1/1 Running 0 6m14s 10.244.39.59 master-1 <none> <none>
pod/nginx-deployment-5d59d67564-n4pw4 1/1 Running 0 6m14s 10.244.39.35 master-1 <none> <none>
pod/nginx-deployment-5d59d67564-s8z8v 1/1 Running 0 6m14s 10.244.39.23 master-1 <none> <none>
pod/nginx-deployment-5d59d67564-zw6nw 1/1 Running 0 6m14s 10.244.39.51 master-1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 290d <none>
service/nginx-service NodePort 10.102.192.38 <none> 6080:30562/TCP 6m13s app=nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nginx-deployment 5/5 5 5 6m14s nginx nginx:1.7.9 app=nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nginx-deployment-5d59d67564 5 5 5 6m14s nginx nginx:1.7.9 app=nginx,pod-template-hash=5d59d67564
Verify that the nginx service is working
$ curl 10.102.192.38:6080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The output contains Thank you for using nginx., indicating that the service is working normally.
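To reach the service from outside the cluster, you would curl a node IP on the auto-assigned NodePort (30562 in the kubectl get output above). That port can be pulled out of the PORT(S) column; the snippet below parses a hard-coded sample of that output for illustration, but in a live cluster you would pipe kubectl get service nginx-service instead:

```shell
# Extract the NodePort from a "PORT(S)" value like 6080:30562/TCP.
line='service/nginx-service   NodePort   10.102.192.38   <none>   6080:30562/TCP   6m13s'
nodeport=$(echo "$line" | grep -o '[0-9]*:[0-9]*/TCP' | cut -d: -f2 | cut -d/ -f1)
echo "NodePort: $nodeport"
# Then, from outside the cluster: curl <node-ip>:$nodeport
```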
Create multiple objects at once
$ kubectl ai "create a foo namespace then create nginx pod in that namespace"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Namespace
metadata:
name: foo
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: foo
spec:
containers:
- name: nginx
image: nginx:latest
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
The --require-confirmation flag
$ kubectl ai "create a service with type LoadBalancer with selector as 'app:nginx'" --require-confirmation=false
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- port: 80
targetPort: 80
type: LoadBalancer
Note that the plugin does not yet know the current state of the cluster, so it always generates a complete manifest from scratch.