Rasa Course, Rasa Training, Rasa Interview, Rasa Hands-On Series: Local Kubernetes Deploy


Introduction to kind

kind is a tool for running local Kubernetes clusters using Docker containers as "nodes".
kind was designed primarily for testing Kubernetes itself, but it can also be used for local development.

  • kind supports multi-node (including HA) clusters; a minimal multi-node config is sketched right after this list
  • kind supports building Kubernetes release builds from source
  • supports make/bash or docker as the build method
  • kind supports Linux, macOS, and Windows
  • kind is a CNCF-certified Kubernetes installer
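
A minimal multi-node config might look like the following (a sketch for illustration; the file name kind-config.yaml and the number of worker nodes are arbitrary choices):

# kind-config.yaml: one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

The cluster would then be created with kind create cluster --config kind-config.yaml.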


Installing kind

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind

Run the command: curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64

[root@localhost my_setup_libs]# ls
[root@localhost my_setup_libs]# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.12.0/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    97  100    97    0     0    174      0 --:--:-- --:--:-- --:--:--   174
100   655  100   655    0     0    481      0  0:00:01  0:00:01 --:--:--  1100
 85 6511k   85 5541k    0     0  26669      0  0:04:10  0:03:32  0:00:38  7946
 96 6511k   96 6276k    0     0  22493      0  0:04:56  0:04:45  0:00:11  7815
100 6511k  100 6511k    0     0  21446      0  0:05:10  0:05:10 --:--:--  7948
[root@localhost my_setup_libs]# 
[root@localhost my_setup_libs]# 
[root@localhost my_setup_libs]# ls
kind

Make the binary executable and move it into your PATH

[root@localhost my_setup_libs]# chmod +x ./kind
[root@localhost my_setup_libs]# ls
kind
[root@localhost my_setup_libs]# mv ./kind /usr/bin/kind

Testing kind

[root@localhost ~]# kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output
      --version           version for kind

Use "kind [command] --help" for more information about a command.

Create a cluster. By default the cluster is named kind. Use the --name flag to give the cluster a different context name. If you want the create cluster command to block until the control plane reaches the Ready state, use the --wait flag and specify a timeout. With --wait you must include a time unit, e.g. --wait 30s to wait 30 seconds, --wait 5m to wait 5 minutes, and so on.
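
For example, to create a cluster named rasa-demo (an arbitrary name used here only for illustration) and block for up to five minutes while the control plane becomes ready:

kind create cluster --name rasa-demo --wait 5m

The resulting kubectl context would be named kind-rasa-demo.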

[root@localhost ~]# kind create cluster
Creating cluster "kind" ...
⢀⡱ Ensuring node image (kindest/node:v1.23.4)
⢄⡱ Ensuring node image (kindest/node:v1.23.4) 

If "Ensuring node image (kindest/node:v1.23.4)" appears to hang, you can pull the image manually with docker pull kindest/node:v1.23.4.
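
A sketch of that workaround (kind reuses a node image that is already present locally; the --image flag simply makes the choice explicit):

docker pull kindest/node:v1.23.4
kind create cluster --image kindest/node:v1.23.4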


[root@localhost ~]#  docker pull kindest/node:v1.23.4
v1.23.4: Pulling from kindest/node
d90aaaa042df: Pull complete 
83c6aa287edc: Downloading [=====>                                             ]  61.11MB/510.7MB
83c6aa287edc: Downloading [>                                                  ]  2.681MB/510.7MB

83c6aa287edc: Downloading [============================================>      ]  452.8MB/510.7MB

The kindest/node image finished downloading

[root@localhost ~]#  docker pull kindest/node:v1.23.4
v1.23.4: Pulling from kindest/node


Digest: sha256:0e34f0d0fd448aa2f2819cfd74e99fe5793a6e4938b328f657c8e3f81ee0dfb9
Status: Downloaded newer image for kindest/node:v1.23.4
docker.io/kindest/node:v1.23.4

Creating a cluster with kind

[root@localhost ~]# kind create cluster
Creating cluster "kind" ...
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

[root@localhost ~]# 
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                                       NAMES
80a59804462a   kindest/node:v1.23.4   "/usr/local/bin/entr…"   3 minutes ago   Up 3 minutes   127.0.0.1:37056->6443/tcp                   kind-control-plane
674158b6b7b1   redis                  "docker-entrypoint.s…"   29 hours ago    Up 7 hours     0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   my_rasa_redis
22e4bb6c4929   nginx                  "/docker-entrypoint.…"   3 weeks ago     Up 7 hours     0.0.0.0:88->80/tcp, :::88->80/tcp           mynginx
9ab047feff74   rasa/duckling          "duckling-example-ex…"   3 weeks ago     Up 7 hours     0.0.0.0:8000->8000/tcp, :::8000->8000/tcp   my_rasa_duckling
[root@localhost ~]# 

Installing kubectl

[root@localhost ~]# yum install -y kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
base                                                                                                                                                        | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                                                            | 3.5 kB  00:00:00     
extras                                                                                                                                                      | 2.9 kB  00:00:00     
updates                                                                                                                                                     | 2.9 kB  00:00:00     
No package kubectl available.
Error: Nothing to do
[root@localhost ~]# 

kubectl is not available from the default repositories, so configure kubernetes.repo and then install kubectl

[root@localhost ~]# 
[root@localhost ~]# cat > /etc/yum.repos.d/kubernetes.repo << END
> [kubernetes]
> name = kubernetes
> baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> gpgcheck = 1
> gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
>          https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> enabled = 1
> END
[root@localhost ~]# 
[root@localhost ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck = 1
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
         https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled = 1
[root@localhost ~]# 

Install kubectl


[root@localhost ~]# yum -y install kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.23.6-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================================================
 Package                                   Arch                                     Version                                     Repository                                    Size
===================================================================================================================================================================================
Installing:
 kubectl                                   x86_64                                   1.23.6-0                                    kubernetes                                   9.5 M

Transaction Summary
===================================================================================================================================================================================
Install  1 Package

Total download size: 9.5 M
Installed size: 44 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/kubernetes/packages/868c4a6ee448d1e8488938812a19a991b5132c81de511cd737d93493b98451cc-kubectl-1.23.6-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for 868c4a6ee448d1e8488938812a19a991b5132c81de511cd737d93493b98451cc-kubectl-1.23.6-0.x86_64.rpm is not installed
868c4a6ee448d1e8488938812a19a991b5132c81de511cd737d93493b98451cc-kubectl-1.23.6-0.x86_64.rpm                                                                | 9.5 MB  00:00:00     
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <[email protected]>"
 Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x0B5FC9E2:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: 9c5a 47a0 dedd 6927 121f 9095 daff b062 0b5f c9e2
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x307EA071:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
 Fingerprint: 7f92 e05b 3109 3bef 5a3c 2d38 feea 9169 307e a071
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0x836F4BEB:
 Userid     : "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <[email protected]>"
 Fingerprint: 59fe 0256 8272 69dc 8157 8f92 8b57 c5c2 836f 4beb
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubectl-1.23.6-0.x86_64                                                                                                                                         1/1 
  Verifying  : kubectl-1.23.6-0.x86_64                                                                                                                                         1/1 

Installed:
  kubectl.x86_64 0:1.23.6-0                                                                                                                                                        

Complete!
[root@localhost ~]# 

Testing kubectl

Create the demo namespace with kubectl

[root@localhost ~]# kubectl create namespace demo
namespace/demo created
[root@localhost ~]# 

Deploy a test application

[root@localhost ~]# kubectl --namespace demo create deployment webstuff --image=nginx --replicas 3
deployment.apps/webstuff created
[root@localhost ~]# 
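
Instead of repeatedly listing the pods, you can optionally block until the rollout completes:

kubectl --namespace demo rollout status deployment/webstuff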

List the pods

[root@localhost ~]# kubectl --namespace demo get pods
NAME                        READY   STATUS              RESTARTS   AGE
webstuff-577ddfbb74-bt5cd   0/1     ContainerCreating   0          5m33s
webstuff-577ddfbb74-ftpvx   0/1     ContainerCreating   0          5m33s
webstuff-577ddfbb74-x4rgv   0/1     ContainerCreating   0          5m33s

Describe the pod

[root@localhost ~]# kubectl --namespace demo  describe pod webstuff-577ddfbb74-bt5cd  
Name:           webstuff-577ddfbb74-bt5cd
Namespace:      demo
Priority:       0
Node:           kind-control-plane/172.18.0.2
Start Time:     Tue, 26 Apr 2022 19:59:41 +0800
Labels:         app=webstuff
                pod-template-hash=577ddfbb74
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/webstuff-577ddfbb74
Containers:
  nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-shpv6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-shpv6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m34s  default-scheduler  Successfully assigned demo/webstuff-577ddfbb74-bt5cd to kind-control-plane
  Normal  Pulling    6m33s  kubelet            Pulling image "nginx"
[root@localhost ~]# 

The pods are now running

[root@localhost ~]# kubectl --namespace demo get pods
NAME                        READY   STATUS    RESTARTS   AGE
webstuff-577ddfbb74-bt5cd   1/1     Running   0          9m12s
webstuff-577ddfbb74-ftpvx   1/1     Running   0          9m12s
webstuff-577ddfbb74-x4rgv   1/1     Running   0          9m12s

Scale up to 5 pods

[root@localhost ~]# kubectl --namespace demo scale deployment webstuff  --replicas 5
deployment.apps/webstuff scaled
[root@localhost ~]# 
[root@localhost ~]# 

List the pods

[root@localhost ~]# kubectl --namespace demo get pods
NAME                        READY   STATUS    RESTARTS   AGE
webstuff-577ddfbb74-bt5cd   1/1     Running   0          24m
webstuff-577ddfbb74-cbhf7   1/1     Running   0          99s
webstuff-577ddfbb74-ftpvx   1/1     Running   0          24m
webstuff-577ddfbb74-spnp4   1/1     Running   0          99s
webstuff-577ddfbb74-x4rgv   1/1     Running   0          24m
[root@localhost ~]# 

Scale down to 1 pod

[root@localhost ~]# kubectl --namespace demo scale deployment webstuff  --replicas 1
deployment.apps/webstuff scaled
[root@localhost ~]#  kubectl --namespace demo get pods
NAME                        READY   STATUS    RESTARTS   AGE
webstuff-577ddfbb74-ftpvx   1/1     Running   0          25m
[root@localhost ~]# 
[root@localhost ~]# 

View the pod logs


[root@localhost ~]# kubectl --namespace demo logs webstuff-577ddfbb74-ftpvx 
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/04/26 12:08:17 [notice] 1#1: using the "epoll" event method
2022/04/26 12:08:17 [notice] 1#1: nginx/1.21.6
2022/04/26 12:08:17 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/04/26 12:08:17 [notice] 1#1: OS: Linux 3.10.0-1160.el7.x86_64
2022/04/26 12:08:17 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/04/26 12:08:17 [notice] 1#1: start worker processes
2022/04/26 12:08:17 [notice] 1#1: start worker process 37
2022/04/26 12:08:17 [notice] 1#1: start worker process 38
2022/04/26 12:08:17 [notice] 1#1: start worker process 39
2022/04/26 12:08:17 [notice] 1#1: start worker process 40
[root@localhost ~]# 
[root@localhost ~]# 

The pod is not found in the default namespace

[root@localhost ~]# kubectl   logs webstuff-577ddfbb74-ftpvx 
Error from server (NotFound): pods "webstuff-577ddfbb74-ftpvx" not found
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# kubectl   get pods
No resources found in default namespace.
[root@localhost ~]# 

The pod is visible in the demo namespace

[root@localhost ~]# kubectl --namespace demo get pods
NAME                        READY   STATUS    RESTARTS   AGE
webstuff-577ddfbb74-ftpvx   1/1     Running   0          30m

List the namespaces

[root@localhost ~]# kubectl get namespace 
NAME                 STATUS   AGE
default              Active   58m
demo                 Active   36m
kube-node-lease      Active   58m
kube-public          Active   58m
kube-system          Active   58m
local-path-storage   Active   58m
[root@localhost ~]# 

Delete the demo namespace (this also deletes every resource inside it)

[root@localhost ~]# kubectl delete namespace demo
namespace "demo" deleted

Creating the rasa namespace with kubectl

Run kubectl create namespace rasa to create the rasa namespace

[root@localhost ~]# kubectl create namespace rasa
namespace/rasa created
[root@localhost ~]# kubectl get namespace 
NAME                 STATUS   AGE
default              Active   60m
kube-node-lease      Active   60m
kube-public          Active   60m
kube-system          Active   60m
local-path-storage   Active   60m
rasa                 Active   80s

Create the manifest.yaml configuration file

[root@localhost ~]# 
[root@localhost ~]# mkdir my_rasa_kube
[root@localhost ~]# cd my_rasa_kube/
[root@localhost my_rasa_kube]# pwd
/root/my_rasa_kube
[root@localhost my_rasa_kube]# cat manifest.yaml
---
apiVersion: apps/v1              # API version of this resource format
kind: Deployment                 # type of resource being created, here a Deployment
metadata:                        # metadata for this resource
  name: rasa-custom-model        # name is a required metadata field
  labels:
    app: rasa                    # label matched by the selectors below
spec:                            # specification of this Deployment
  replicas: 2
  selector:
    matchLabels:
      app: rasa
  template:
    metadata:
      labels:
        app: rasa
    spec:
      containers:
      - name: rasa-demo
        image: koaning/rasa-demo
        ports:
          - containerPort: 8080
        command: ["rasa", "run", "--enable-api", "--port", "8080", "--debug"] 

---
apiVersion: v1
kind: Service
metadata:
  name: rasa-web
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: rasa
  type: LoadBalancer
[root@localhost my_rasa_kube]# 
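
Before applying, the manifest can optionally be validated without changing the cluster (a client-side dry run, supported by the kubectl 1.23 installed above):

kubectl --namespace rasa apply --dry-run=client -f manifest.yaml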

Run docker pull koaning/rasa-demo to download the rasa-demo image

[root@localhost my_rasa_kube]# docker pull koaning/rasa-demo
Using default tag: latest
latest: Pulling from koaning/rasa-demo
b4d181a07f80: Pull complete 
de8ecf497b75: Pull complete 
707b80804672: Pull complete 
283715715396: Pull complete 
8353afd48f6b: Pull complete 
55ba2b40f728: Pull complete 
c114e941f230: Pull complete 
428f693b35fb: Pull complete 
e5fc23a5f626: Pull complete 
Digest: sha256:8100a1b3d9899baf226269ad0dbf60a60084c38f3b3f32a9f5af9fa08f14901a
Status: Downloaded newer image for koaning/rasa-demo:latest
docker.io/koaning/rasa-demo:latest
[root@localhost my_rasa_kube]# 
[root@localhost my_rasa_kube]# 

Apply the configuration file

[root@localhost my_rasa_kube]# kubectl  --namespace rasa apply -f manifest.yaml
deployment.apps/rasa-custom-model unchanged
service/rasa-web unchanged
[root@localhost my_rasa_kube]# kubectl --namespace rasa get pods
NAME                                 READY   STATUS    RESTARTS   AGE
rasa-custom-model-5c8c85789f-7bt48   1/1     Running   0          10h
rasa-custom-model-5c8c85789f-mhpct   1/1     Running   0          10h

Change replicas: 2 to replicas: 3 in manifest.yaml and re-apply

[root@localhost my_rasa_kube]# kubectl  --namespace rasa apply -f manifest.yaml
deployment.apps/rasa-custom-model configured
service/rasa-web unchanged
[root@localhost my_rasa_kube]# 

List the pods

[root@localhost my_rasa_kube]# kubectl --namespace rasa get pods
NAME                                 READY   STATUS    RESTARTS   AGE
rasa-custom-model-5c8c85789f-7bt48   1/1     Running   0          10h
rasa-custom-model-5c8c85789f-jqlw8   1/1     Running   0          99s
rasa-custom-model-5c8c85789f-mhpct   1/1     Running   0          10h
[root@localhost my_rasa_kube]# 

Change replicas back to 2 and re-apply

[root@localhost my_rasa_kube]# kubectl  --namespace rasa apply -f manifest.yaml
deployment.apps/rasa-custom-model configured
service/rasa-web unchanged
[root@localhost my_rasa_kube]# kubectl --namespace rasa get pods
NAME                                 READY   STATUS    RESTARTS   AGE
rasa-custom-model-5c8c85789f-7bt48   1/1     Running   0          10h
rasa-custom-model-5c8c85789f-mhpct   1/1     Running   0          10h
[root@localhost my_rasa_kube]# 
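
Editing the manifest and re-applying keeps the YAML as the single source of truth. For a quick, temporary change you could instead scale the Deployment directly; note that the next apply of manifest.yaml would reset the replica count:

kubectl --namespace rasa scale deployment rasa-custom-model --replicas 3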

View the logs

[root@localhost my_rasa_kube]# kubectl  --namespace rasa logs rasa-custom-model-5c8c85789f-mhpct |more
2022-04-26 17:36:15 DEBUG    rasa.telemetry  - Could not read telemetry settings from configuration file: Configuration 'metrics' key not found.
2022-04-26 17:36:15 WARNING  rasa.utils.common  - Failed to write global config. Error: [Errno 13] Permission denied: '/.config'. Skipping.
2022-04-26 17:36:15 DEBUG    rasa.cli.utils  - Parameter 'credentials' not set. Using default location 'credentials.yml' instead.
2022-04-26 17:36:21 DEBUG    matplotlib  - (private) matplotlib data path: /usr/local/lib/python3.7/site-packages/matplotlib/mpl-data
2022-04-26 17:36:21 DEBUG    matplotlib  - matplotlib data path: /usr/local/lib/python3.7/site-packages/matplotlib/mpl-data
2022-04-26 17:36:21 WARNING  matplotlib  - Matplotlib created a temporary config/cache directory at /tmp/matplotlib-wtat_mwx because the default path (/.config/matplotlib) is not 
a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to bett
er support multiprocessing.
2022-04-26 17:36:21 DEBUG    matplotlib  - CONFIGDIR=/tmp/matplotlib-wtat_mwx
2022-04-26 17:36:21 DEBUG    matplotlib  - matplotlib version 3.3.4
2022-04-26 17:36:21 DEBUG    matplotlib  - interactive is False
2022-04-26 17:36:21 DEBUG    matplotlib  - platform is linux
2022-04-26 17:36:21 DEBUG    matplotlib  - loaded modules: ['sys', 'builtins', '_frozen_importlib', '_imp', '_thread', '_warnings', '_weakref', 'zipimport', '_frozen_importlib_ext
ernal', '_io', 'marshal', 'posix', 'encodings', 'codecs', '_codecs', 'encodings.aliases', 'encodings.utf_8', '_signal', '__main__', 'encodings.latin_1', 'io', 'abc', '_abc', '_boo
tlocale', '_locale', 'site', 'os', 'stat', '_stat', '_collections_abc', 'posixpath', 'genericpath', 'os.path', '_sitebuiltins', 'types', 'importlib', 'importlib._bootstrap', 'impo
rtlib._bootstrap_external', 'warnings', 'importlib.util', 'importlib.abc', 'importlib.machinery', 'contextlib', 'collections', 'operator', '_operator', 'keyword', 'heapq', '_heapq
', 'itertools', 'reprlib', '_collections', 'functools', '_functools', 'google', 'mpl_toolkits', 'ruamel', 're', 'enum', 'sre_compile', '_sre', 'sre_parse', 'sre_constants', 'copyr
eg', 'rasa', 'logging', 'time', 'traceback', 'linecache', 'tokenize', 'token', 'weakref', '_weakrefset', 'collections.abc', 'string', '_string', 'threading', 'atexit', 'rasa.versi
on', 'rasa.api', 'rasa.shared', 'rasa.shared.constants', 'typing', 'typing.io', 'typing.re', 'rasa.__main__', 'argparse', 'gettext', 'locale', 'platform', 'subprocess', 'signal', 
'errno', '_posixsubprocess', 'select', 'selectors', 'math', 'rasa_sdk', 'rasa_sdk.version', 'rasa_sdk.cli', 'rasa_sdk.interfaces', 'copy', 'rasa_sdk.events', 'datetime', '_datetim
e', 'rasa_sdk.forms', 'rasa_sdk.utils', 'asyncio', 'asyncio.base_events', 'concurrent', 'concurrent.futures', 'concurrent.futures._base', 'socket', '_socket', 'ssl', '_ssl', 'base
64', 'struct', '_struct', 'binascii', 'asyncio.constants', 'asyncio.coroutines', 'inspect', 'dis', 'opcode', '_opcode', 'asyncio.base_futures', 'asyncio.format_helpers', 'asyncio.
log', 'asyncio.events', 'contextvars', '_contextvars', 'asyncio.base_tasks', '_asyncio', 'asyncio.futures', 'asyncio.protocols', 'asyncio.sslproto', 'asyncio.transports', 'asyncio
.tasks', 'asyncio.locks', 'asyncio.runners', 'asyncio.queues', 'asyncio.streams', 'asyncio.subprocess', 'asyncio.unix_events', 'asyncio.base_subprocess', 'asyncio.selector_events'
, 'rasa_sdk.constants', 'rasa.constants', 'rasa.telemetry', 'hashlib', '_hashlib', '_blake2', '_sha3', 'json', 'json.decoder', 'json.scanner', '_json', 'json.encoder', 'multiproce
ssing', 'multiprocessing.context', 'multiprocessing.process', 'multiprocessing.reduction', 'pickle', '_compat_pickle', '_pickle', 'array', '__mp_main__', 'pathlib', 'fnmatch', 'nt
path', 'urllib', 'urllib.parse', 'textwrap', 'uuid', '_uuid', 'async_generator', 'async_generator._version', 'async_generator._impl', 'async_generator._util', 'requests', 'urllib3
', '__future__', 'urllib3.exceptions', 'urllib3.packages', 'urllib3.packages.ssl_match_hostname', 'urllib3.packages.six', 'urllib3.packages.six.moves', 'http', 'http.client', 'ema
il', 'email.parser', 'email.feedparser', 'email.errors', 'email._policybase', 'email.header', 'email.quoprimime', 'email.base64mime', 'email.charset', 'email.encoders', 'quopri', 
'email.utils', 'random', 'bisect', '_bisect', '_random', 'email._parseaddr', 'calendar', 'email.message', 'uu', 'email._encoded_words', 'email.iterators', 'urllib3.packages.six.mo
ves.http_client', 'urllib3._version', 'urllib3.connectionpool', 'urllib3.connection', 'urllib3.util', 'urllib3.util.connection', 'urllib3.contrib', 'urllib3.contrib._appengine_env
iron', 'urllib3.util.wait', 'urllib3.util.request', 'urllib3.util.response', 'urllib3.util.retry', 'urllib3.util.ssl_', 'hmac', 'urllib3.util.url', 'urllib3.util.ssltransport', 'u
rllib3.

... (log truncated)
sklearn.model_selection._split', 'sklearn.utils.multiclass', 'sklearn.model_selection._validation', 'sklearn.utils.metaestimators', 'sklearn.metrics', 'sklearn.metrics._ranking', 'sklearn.utils.extmath', 'sklearn.utils._logistic_sigmoid', 'sklearn.utils.sparsefuncs_fast', '_cython_0_29_23', 'sklearn.utils.sparsefuncs', 'sklearn.preprocessing', 'sklearn.preprocessing._function_transformer', 'sklearn.preprocessing._data', 'sklearn.preprocessing._csr_polynomial_expansion', 'sklearn.preprocessing._encoders', 'sklearn.utils._encode', 'sklearn.preprocessing._label', 'sklearn.preprocessing._discretization', 'sklearn.metrics._base', 'sklearn.metrics._classification', 'sklearn.metrics.cluster', 'sklearn.metrics.cluster._supervised', 'sklearn.metrics.cluster._expected_mutual_info_fast', 'sklearn.metrics.cluster._unsupervised', 'sklearn.metrics.pairwise', 'sklearn.utils._mask', 'sklearn.metrics._pairwise_fast', 'sklearn.metrics.cluster._bicluster', 'sklearn.metrics._regression', 'sklearn._loss', 'sklearn._loss.glm_distribution', 'sklearn.utils.stats', 'sklearn.metrics._scorer', 'sklearn.metrics._plot', 'sklearn.metrics._plot.det_curve', 'sklearn.metrics._plot.base', 'sklearn.metrics._plot.roc_curve', 'sklearn.metrics._plot.precision_recall_curve', 'sklearn.metrics._plot.confusion_matrix', 'sklearn.model_selection._search', 'sklearn.utils.random', 'sklearn.utils._random', 'rasa.nlu.components', 'rasa.shared.nlu.training_data.features', 'rasa.utils.tensorflow.model_data_utils', 'rasa.core.featurizers.tracker_featurizers', 'rasa.shared.core.generator', 'rasa.core.policies.ensemble', 'rasa.core.training.training', 'rasa.core.policies.fallback', 'rasa.core.policies.memoization', 'rasa.core.policies.rule_policy', 'rasa.core.test', 'rasa.nlu.test', 'rasa.utils.plotting', 'matplotlib', 'matplotlib.cbook', 'matplotlib.cbook.deprecation', 'matplotlib.rcsetup', 'matplotlib.animation', 'matplotlib._animation_data', 'matplotlib.fontconfig_pattern', 'matplotlib.colors', 'matplotlib.docstring', 'matplotlib._color_data', 'cycler', 'matplotlib._version', 'matplotlib.ft2font', 'kiwisolver']
2022-04-26 17:36:22 DEBUG    rasa.core.utils  - Available web server routes: 
/conversations/<conversation_id:path>/messages     POST                           add_message
/conversations/<conversation_id:path>/tracker/events POST                           append_events
/webhooks/rasa                                     GET                            custom_webhook_RasaChatInput.health
/webhooks/rasa/webhook                             POST                           custom_webhook_RasaChatInput.receive
/webhooks/rest                                     GET                            custom_webhook_RestInput.health
/webhooks/rest/webhook                             POST                           custom_webhook_RestInput.receive
/model/test/intents                                POST                           evaluate_intents
/model/test/stories                                POST                           evaluate_stories
/conversations/<conversation_id:path>/execute      POST                           execute_action
/domain                                            GET                            get_domain
/                                                  GET                            hello
/model                                             PUT                            load_model
/model/parse                                       POST                           parse
/conversations/<conversation_id:path>/predict      POST                           predict
/conversations/<conversation_id:path>/tracker/events PUT                            replace_events
/conversations/<conversation_id:path>/story        GET                            retrieve_story
/conversations/<conversation_id:path>/tracker      GET                            retrieve_tracker
/status                                            GET                            status
/model/predict                                     POST                           tracker_predict
/model/train                                       POST                           train
/conversations/<conversation_id:path>/trigger_intent POST                           trigger_intent
/model                                             DELETE                         unload_model
/version                                           GET                            version
2022-04-26 17:36:22 INFO     root  - Starting Rasa server on http://localhost:8080
2022-04-26 17:36:22 DEBUG    rasa.core.utils  - Using the default number of Sanic workers (1).
2022-04-26 17:36:22 INFO     root  - Enabling coroutine debugging. Loop id 93866314814560.
2022-04-26 17:36:22 INFO     rasa.model  - Loading model models/nlu-20210803-140007.tar.gz...
2022-04-26 17:36:22 DEBUG    rasa.model  - Extracted model to '/tmp/tmpz_td3rhc'.
2022-04-26 17:36:23 DEBUG    rasa.utils.tensorflow.models  - Loading the model from /tmp/tmpz_td3rhc/nlu/component_5_DIETClassifier.tf_model with finetune_mode=False...
2022-04-26 17:36:23 DEBUG    rasa.nlu.classifiers.diet_classifier  - Following metrics will be logged during training: 
2022-04-26 17:36:23 DEBUG    rasa.nlu.classifiers.diet_classifier  -   t_loss (total loss)
2022-04-26 17:36:23 DEBUG    rasa.nlu.classifiers.diet_classifier  -   i_acc (intent acc)
2022-04-26 17:36:23 DEBUG    rasa.nlu.classifiers.diet_classifier  -   i_loss (intent loss)
2022-04-26 17:36:59 DEBUG    rasa.utils.tensorflow.models  - Finished loading the model.
2022-04-26 17:36:59 DEBUG    rasa.nlu.classifiers.diet_classifier  - Failed to load model for 'ResponseSelector'. Maybe you did not provide enough training data and no model was trained or the path '/tmp/tmpz_td3rhc/nlu' doesn't exist?
2022-04-26 17:36:59 DEBUG    rasa.core.tracker_store  - Connected to InMemoryTrackerStore.
2022-04-26 17:36:59 DEBUG    rasa.core.lock_store  - Connected to lock store 'InMemoryLockStore'.
2022-04-26 17:36:59 DEBUG    rasa.model  - Extracted model to '/tmp/tmp9g24u6kz'.
2022-04-26 17:36:59 DEBUG    rasa.core.nlg.generator  - Instantiated NLG to 'TemplatedNaturalLanguageGenerator'.
2022-04-26 17:36:59 INFO     root  - Rasa server is up and running.
[root@localhost my_rasa_kube]# 



Check the Service

[root@localhost my_rasa_kube]# kubectl  --namespace rasa get svc
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
rasa-web   LoadBalancer   10.96.235.34   <pending>     8080:31921/TCP   10h
[root@localhost my_rasa_kube]# 

Start port-forwarding. kind does not provision external load balancers, so the EXTERNAL-IP of the LoadBalancer Service stays <pending>; kubectl port-forward gives us local access instead.

[root@localhost my_rasa_kube]# kubectl  --namespace rasa port-forward svc/rasa-web 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Testing the Rasa server from a client. The port-forward command above keeps running in the foreground, so run the following curl requests from a second terminal.

[root@localhost my_rasa_kube]# curl --request POST \
>   --url http://localhost:8080/model/parse \
>   --header 'Content-Type:application/json' \
>   -- data '{
>       "text":"hello"
> }'
{"version":"2.8.1","status":"failure","message":"No text message defined in request_body. Add text message to request body in order to obtain the intent and extracted entities.","reason":"BadRequest","details":{},"help":null,"code":400}curl: (6) Could not resolve host: data
curl: (3) URL using bad/illegal format or missing URL
[root@localhost my_rasa_kube]# 

The test failed: the --data flag was typed with an extra space ("-- data"), so curl treated "data" as a separate URL and the request reached Rasa without a body. Fix the flag and test again.

[root@localhost my_rasa_kube]# curl --request POST \
>   --url http://localhost:8080/model/parse \
>   --header 'Content-Type:application/json' \
>   --data '{
>       "text": "hello"
> }'

The test succeeds

[root@localhost my_rasa_kube]# 
[root@localhost my_rasa_kube]# curl --request POST \
>   --url http://localhost:8080/model/parse \
>   --header 'Content-Type:application/json' \
>   --data '{
>       "text": "hello"
> }'
{"text":"hello","intent":{"id":-5324341420008758790,"name":"greet","confidence":0.9999986886978149},"entities":[],"intent_ranking":[{"id":-5324341420008758790,"name":"greet","confidence":0.9999986886978149},{"id":-5333118589931628653,"name":"affirm","confidence":4.7248207124539476e-7},{"id":579721226075200419,"name":"bot_challenge","confidence":3.622365909450309e-7},{"id":6099099648896425355,"name":"mood_great","confidence":1.7313301725607744e-7},{"id":5439447353542522403,"name":"goodbye","confidence":1.131188653857862e-7},{"id":-5193994886173468481,"name":"mood_unhappy","confidence":6.645655048487242e-8},{"id":8430625568176418711,"name":"deny","confidence":5.379793321935722e-8}],"response_selector":{"all_retrieval_intents":[],"default":{"response":{"id":null,"responses":null,"response_templates":null,"confidence":0.0,"intent_response_key":null,"utter_action":"utter_None","template_name":"utter_None"},"ranking":[]}}}[root@localhost my_rasa_kube]# 

List the pods in the rasa namespace; the two pods are load-balanced behind the rasa-web Service

[root@localhost my_rasa_kube]# kubectl  --namespace rasa get pods
NAME                                 READY   STATUS    RESTARTS   AGE
rasa-custom-model-5c8c85789f-7bt48   1/1     Running   0          10h
rasa-custom-model-5c8c85789f-mhpct   1/1     Running   0          10h
[root@localhost my_rasa_kube]# 
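
To see which pods actually sit behind the Service, you can optionally list its endpoints; the two pod IPs should appear on port 8080:

kubectl --namespace rasa get endpoints rasa-web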

Pod rasa-custom-model-5c8c85789f-7bt48 started the Rasa server and is running.

[root@localhost my_rasa_kube]# kubectl  --namespace rasa logs rasa-custom-model-5c8c85789f-7bt48 |more
2022-04-26 17:36:19 DEBUG    rasa.telemetry  - Could not read telemetry settings from configuration file: Configuration 'metrics' key not found.
2022-04-26 17:36:19 WARNING  rasa.utils.common  - Failed to write global config. Error: [Errno 13] Permission denied: '/.config'. Skipping.
2022-04-26 17:36:19 DEBUG    rasa.cli.utils  - Parameter 'credentials' not set. Using default location 'credentials.yml' instead.
... (log truncated)
 2022-04-26 17:37:02 DEBUG    rasa.core.tracker_store  - Connected to InMemoryTrackerStore.
2022-04-26 17:37:02 DEBUG    rasa.core.lock_store  - Connected to lock store 'InMemoryLockStore'.
2022-04-26 17:37:03 DEBUG    rasa.model  - Extracted model to '/tmp/tmp8h9u5mk2'.
2022-04-26 17:37:03 DEBUG    rasa.core.nlg.generator  - Instantiated NLG to 'TemplatedNaturalLanguageGenerator'.
2022-04-26 17:37:03 INFO     root  - Rasa server is up and running.
[root@localhost my_rasa_kube]# 

Pod rasa-custom-model-5c8c85789f-mhpct handled the Rasa client request (note the "Received user message" line in its log).

[root@localhost my_rasa_kube]# kubectl  --namespace rasa logs rasa-custom-model-5c8c85789f-mhpct|more
2022-04-26 17:36:15 DEBUG    rasa.telemetry  - Could not read telemetry settings from configuration file: Configuration 'metrics' key not found.
2022-04-26 17:36:15 WARNING  rasa.utils.common  - Failed to write global config. Error: [Errno 13] Permission denied: '/.config'. Skipping.
2022-04-26 17:36:15 DEBUG    rasa.cli.utils  - Parameter 'credentials' not set. Using default location 'credentials.yml' instead.
... (log truncated)
2022-04-26 17:36:59 INFO     root  - Rasa server is up and running.
2022-04-26 23:52:01 ERROR    rasa.server  - No text message defined in request_body. Add text message to request body in order to obtain the intent and extracted entities.
2022-04-26 23:55:21 DEBUG    rasa.nlu.classifiers.diet_classifier  - There is no trained model for 'ResponseSelector': The component is either not trained or didn't receive enough
 training data.
2022-04-26 23:55:21 DEBUG    rasa.nlu.selectors.response_selector  - Adding following selector key to message property: default
2022-04-26 23:55:21 DEBUG    rasa.core.processor  - Received user message 'hello' with intent '{'id': -5324341420008758790, 'name': 'greet', 'confidence': 0.9999986886978149}' and
 entities '[]'

Appendix: VMware VM disk expansion log

[root@localhost ~]# docker pull kindest/node:v1.23.4
v1.23.4: Pulling from kindest/node
d90aaaa042df: Pull complete 
83c6aa287edc: Extracting  433.9MB
failed to register layer: Error processing tar file(exit status 1): write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/11/fs/usr/local/bin/etcdctl-3.5.1: no space left on device
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   13M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G   16G  1.6G  91% /
/dev/sda1               1014M  185M  830M  19% /boot
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/be39cdda8d424d12df732c3ca9af5f44cda50302a0ad586a09c6e54f7a6ed60f/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/a9340445008f6a6fdf0e231c67fa5807c4703c8d695a9e545667498dc3f08820/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/3fd896ec36855f746b3ceda17cf03ce03c432877d3cebda56a9f3cad7d424b56/merged
tmpfs                    781M   36K  781M   1% /run/user/0
/dev/sr0                 4.4G  4.4G     0 100% /run/media/root/CentOS 7 x86_64
[root@localhost ~]# 

Reference: https://blog.csdn.net/mm52013/article/details/108122164

[root@localhost ~]# 
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   13M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G   16G  1.6G  91% /
/dev/sda1               1014M  185M  830M  19% /boot
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/be39cdda8d424d12df732c3ca9af5f44cda50302a0ad586a09c6e54f7a6ed60f/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/a9340445008f6a6fdf0e231c67fa5807c4703c8d695a9e545667498dc3f08820/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/3fd896ec36855f746b3ceda17cf03ce03c432877d3cebda56a9f3cad7d424b56/merged
tmpfs                    781M   24K  781M   1% /run/user/0
/dev/sr0                 4.4G  4.4G     0 100% /run/media/root/CentOS 7 x86_64
[root@localhost ~]# 
[root@localhost ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ae57c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/mapper/centos-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost ~]#  fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): p
Partition number (3,4, default 3): 
First sector (41943040-104857599, default 41943040): 
Using default value 41943040
Last sector, +sectors or +size{K,M,G} (41943040-104857599, default 104857599): 
Using default value 104857599
Partition 3 of type Linux and of size 30 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@localhost ~]# 
[root@localhost ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ae57c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM
/dev/sda3        41943040   104857599    31457280   83  Linux

Disk /dev/mapper/centos-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# partprobe /dev/sda
[root@localhost ~]# 
[root@localhost ~]# pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created.
[root@localhost ~]# 
[root@localhost ~]# vgscan
  Reading volume groups from cache.
  Found volume group "centos" using metadata type lvm2
[root@localhost ~]# 
[root@localhost ~]# vgextend centos /dev/sda3
  Volume group "centos" successfully extended
[root@localhost ~]# 
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   13M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G   16G  1.6G  91% /
/dev/sda1               1014M  185M  830M  19% /boot
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/be39cdda8d424d12df732c3ca9af5f44cda50302a0ad586a09c6e54f7a6ed60f/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/a9340445008f6a6fdf0e231c67fa5807c4703c8d695a9e545667498dc3f08820/merged
overlay                   17G   16G  1.6G  91% /var/lib/docker/overlay2/3fd896ec36855f746b3ceda17cf03ce03c432877d3cebda56a9f3cad7d424b56/merged
tmpfs                    781M   28K  781M   1% /run/user/0
/dev/sr0                 4.4G  4.4G     0 100% /run/media/root/CentOS 7 x86_64
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# lvextend -L +2G /dev/mapper/centos-root
  Size of logical volume centos/root changed from <17.00 GiB (4351 extents) to <19.00 GiB (4863 extents).
  Logical volume centos/root successfully resized.
[root@localhost ~]# xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=512    agcount=4, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=4455424, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 4455424 to 4979712
[root@localhost ~]# 
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   13M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   19G   16G  3.6G  82% /
/dev/sda1               1014M  185M  830M  19% /boot
overlay                   19G   16G  3.6G  82% /var/lib/docker/overlay2/be39cdda8d424d12df732c3ca9af5f44cda50302a0ad586a09c6e54f7a6ed60f/merged
overlay                   19G   16G  3.6G  82% /var/lib/docker/overlay2/a9340445008f6a6fdf0e231c67fa5807c4703c8d695a9e545667498dc3f08820/merged
overlay                   19G   16G  3.6G  82% /var/lib/docker/overlay2/3fd896ec36855f746b3ceda17cf03ce03c432877d3cebda56a9f3cad7d424b56/merged
tmpfs                    781M   28K  781M   1% /run/user/0
/dev/sr0                 4.4G  4.4G     0 100% /run/media/root/CentOS 7 x86_64
[root@localhost ~]# lvextend -L +6G /dev/sr0 
  "/dev/sr0": Invalid path for Logical Volume.
  Run `lvextend --help' for more information.
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# ll /dev/cdrom
lrwxrwxrwx. 1 root root 3 Apr 26 12:25 /dev/cdrom -> sr0
[root@localhost ~]# ll /dev/sr0
brw-rw----+ 1 root cdrom 11, 0 Apr 26 12:25 /dev/sr0
[root@localhost ~]# lvextend -L +28G /dev/mapper/centos-root
  Insufficient free space: 7168 extents needed, but only 7167 available
[root@localhost ~]# lvextend -L +27G /dev/mapper/centos-root
  Size of logical volume centos/root changed from <19.00 GiB (4863 extents) to <46.00 GiB (11775 extents).
  Logical volume centos/root successfully resized.
[root@localhost ~]# fs_growfs /dev/mapper/centos-root
bash: fs_growfs: command not found...
[root@localhost ~]# xfs_growfs /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=512    agcount=5, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=4979712, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 4979712 to 12057600
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G   13M  3.8G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root   46G   16G   31G  34% /
/dev/sda1               1014M  185M  830M  19% /boot
overlay                   46G   16G   31G  34% /var/lib/docker/overlay2/be39cdda8d424d12df732c3ca9af5f44cda50302a0ad586a09c6e54f7a6ed60f/merged
overlay                   46G   16G   31G  34% /var/lib/docker/overlay2/a9340445008f6a6fdf0e231c67fa5807c4703c8d695a9e545667498dc3f08820/merged
overlay                   46G   16G   31G  34% /var/lib/docker/overlay2/3fd896ec36855f746b3ceda17cf03ce03c432877d3cebda56a9f3cad7d424b56/merged
tmpfs                    781M   28K  781M   1% /run/user/0
/dev/sr0                 4.4G  4.4G     0 100% /run/media/root/CentOS 7 x86_64
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# 
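
For reference, the expansion above boils down to the following sequence (a sketch for a CentOS 7 guest with an LVM-backed XFS root on /dev/sda; device names, the volume group name, and the sizes depend on your system):

fdisk /dev/sda                                  # create a new partition (here /dev/sda3) from the added disk space
partprobe /dev/sda                              # have the kernel re-read the partition table without a reboot
pvcreate /dev/sda3                              # initialize the new partition as an LVM physical volume
vgextend centos /dev/sda3                       # add it to the existing volume group
lvextend -l +100%FREE /dev/mapper/centos-root   # grow the root logical volume (or -L +<size>G as in the log)
xfs_growfs /dev/mapper/centos-root              # grow the XFS filesystem to fill the enlarged volume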

Rasa 3.x series blog posts


Reposted from blog.csdn.net/duan_zhihua/article/details/124419553