[Note]
gpmall is an open-source e-commerce platform based on Spring Boot and Dubbo. Mr. Xu fixed a number of bugs that affected deployment in a k8s cluster and re-published the project on Gitee; it is recommended to download and study his version. More introductory material and source-code download links for gpmall: gpmall
The deployment process below draws on Mr. Xu's Youdao Cloud notes, with optimization and refinement.
Since the internal private cloud restricts access to the external network, all images used during this deployment come from the internal Harbor image registry. The details of deploying into the Kubernetes cluster on the internal Huawei private cloud are recorded below.
For the manual deployment process of high-performance kubernetes clusters, please refer to: Deployment of high-performance kubernetes clusters
For the automatic deployment process of high-performance kubernetes clusters, please refer to: ansible automatic deployment of k8s
1 project compilation
1.1 Compilation environment preparation
gpmall separates the front end from the back end. The front end requires a Node.js environment; for an installation tutorial see: Node installation.
The back-end code requires an IDEA and Maven environment; for the setup process see: IDEA installation, Maven installation
1.2 Compile the module
gpmall is built from microservices. Each module must be compiled separately, and the compilation order matters. Compile the modules in the following order:
- gpmall-parent
- gpmall-commons
- user-service
- shopping-service
- order-service
- pay-service
- market-service
- comment-service
- search-service
Taking the first module as an example, here is how to compile a module with IDEA; the process for the other modules is the same.
1. Open the module
2. Select the module to compile
3. After opening it, double-click install under Lifecycle in the Maven panel on the right to compile
If there is a root project in some modules, just compile the root project.
4. After compilation completes, a jar package appears in the repository directory and can be used for the subsequent deployment. The exact directory of the generated jar can be found in the install log output
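If you prefer the command line to IDEA, the same jars can be built with Maven directly. A minimal sketch, assuming all modules are checked out side by side under the current directory and compiled in the order above:
# build every module in dependency order; -DskipTests speeds the build up
for m in gpmall-parent gpmall-commons user-service shopping-service \
         order-service pay-service market-service comment-service search-service; do
  (cd "$m" && mvn clean install -DskipTests) || break
done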
Compile each module sequentially according to the above compilation order, and finally get the following jar package for subsequent deployment:
(1)user-provider-0.0.1-SNAPSHOT.jar
(2)gpmall-user-0.0.1-SNAPSHOT.jar
(3)shopping-provider-0.0.1-SNAPSHOT.jar
(4)order-provider-0.0.1-SNAPSHOT.jar
(5)comment-provider-0.0.1-SNAPSHOT.jar
(6)search-provider-1.0-SNAPSHOT.jar
(7)gpmall-shopping-0.0.1-SNAPSHOT.jar
The seven jar packages above need to be uploaded to a Linux host so that Docker images can be built from them. Here the ansible-controller node is used as an example.
Transfer the 7 jar packages to the ansible control host with a tool such as xftp or sftp, and store them in 7 corresponding directories, as shown in the figure below.
[zhangsan@controller ~]$ ls -ld /data/zhangsan/gpmall/gpmall-jar/*
drwxr-xr-x 2 root root 4096 3月 24 17:48 /data/zhangsan/gpmall/gpmall-jar/comment-provider
drwxr-xr-x 2 root root 4096 3月 24 17:48 /data/zhangsan/gpmall/gpmall-jar/gpmall-shopping
drwxr-xr-x 2 root root 4096 3月 24 17:48 /data/zhangsan/gpmall/gpmall-jar/gpmall-user
drwxr-xr-x 2 root root 4096 3月 24 17:48 /data/zhangsan/gpmall/gpmall-jar/order-provider
drwxr-xr-x 2 root root 4096 3月 24 17:49 /data/zhangsan/gpmall/gpmall-jar/search-provider
drwxr-xr-x 2 root root 4096 3月 24 17:49 /data/zhangsan/gpmall/gpmall-jar/shopping-provider
drwxr-xr-x 2 root root 4096 3月 24 17:49 /data/zhangsan/gpmall/gpmall-jar/user-provider
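For reference, a sketch of preparing these directories on the controller host; the jar locations under each module's Maven target directory are assumptions:
# create the seven directories in one go
sudo mkdir -p /data/zhangsan/gpmall/gpmall-jar/{user-provider,gpmall-user,shopping-provider,gpmall-shopping,order-provider,comment-provider,search-provider}
# then copy each jar from the build machine, e.g.:
# scp user-provider/target/user-provider-0.0.1-SNAPSHOT.jar zhangsan@controller:/data/zhangsan/gpmall/gpmall-jar/user-provider/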
1.3 Make a Docker image
Although Kubernetes dropped its built-in Docker support (the dockershim) in version 1.24, images built with Docker are standard OCI images and can still be used in a k8s cluster, so a Docker image is still built for each of the 7 jar packages compiled above.
1.3.1 Environment preparation
1. Install docker
A Docker image can be built on any Linux host; here an openEuler host is used. The examples below are run on the ansible-controller node, as the shell prompts show.
Since openEuler does not install Docker by default, execute the following command to install it.
# install docker
[zhangsan@controller ~]$ sudo dnf -y install docker
2. Configure the docker service
By default, Docker pulls images from the official registry (Docker Hub). The registry configuration can be changed by editing the daemon configuration file; here the private registry is added as an insecure registry, as shown below.
# create and edit the /etc/docker/daemon.json configuration file
[zhangsan@controller ~]$ sudo vim /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.18.18:9999"]
}
3. Restart the docker service
# restart the docker service
[zhangsan@controller ~]$ sudo systemctl restart docker.service
1.3.2 Write the Dockerfile
A Dockerfile must be created in each jar package's directory. The following takes the first jar package (user-provider-0.0.1-SNAPSHOT.jar) as an example: create and edit a Dockerfile in the directory where the jar resides, with the content below (other jar packages only need the jar name changed, as noted in the comments). Note that WORKDIR is a Dockerfile instruction, not a shell variable, so the path is written out literally in the ADD and CMD instructions; the exec form of CMD does not expand variables:
[zhangsan@controller user-provider]$ sudo vim Dockerfile
FROM 192.168.18.18:9999/common/java:openjdk-8u111-alpine
# remember to change zhangsan below to your own directory
WORKDIR /data/zhangsan/gpmall
# remember to substitute the jar package of the module being built
ADD user-provider-0.0.1-SNAPSHOT.jar /data/zhangsan/gpmall
ENTRYPOINT ["java","-jar"]
# remember to substitute the jar package of the module being built
CMD ["/data/zhangsan/gpmall/user-provider-0.0.1-SNAPSHOT.jar"]
1.3.3 Build the image with the Dockerfile
To avoid having to change image names in the yaml files during the later k8s deployment, it is recommended to use the module name directly as the image name, with the tag uniformly set to latest. The image name for each module is listed in the following table:
| module name | docker image name | remark |
| --- | --- | --- |
| user-provider | user-provider:latest | |
| gpmall-user | gpmall-user:latest | |
| shopping-provider | shopping-provider:latest | |
| gpmall-shopping | gpmall-shopping:latest | |
| order-provider | order-provider:latest | |
| comment-provider | comment-provider:latest | |
| search-provider | search-provider:latest | |
| gpmall-front | gpmall-front:latest | front end; its image is built separately in section 1.4 |
Execute the following command to build the image:
### The command format for building a Docker image is (the trailing dot must not be omitted):
### docker build -t image_name:tag .
[zhangsan@controller user-provider]$ sudo docker build -t user-provider:latest .
Sending build context to Docker daemon 62.39MB
Step 1/5 : FROM 192.168.18.18:9999/common/java:openjdk-8u111-alpine
openjdk-8u111-alpine: Pulling from common/java
53478ce18e19: Pull complete
d1c225ed7c34: Pull complete
887f300163b6: Pull complete
Digest: sha256:f0506aad95c0e03473c0d22aaede25402584ecdab818f0aeee8ddc317f7145ed
Status: Downloaded newer image for 192.168.18.18:9999/common/java:openjdk-8u111-alpine
---> 3fd9dd82815c
Step 2/5 : WORKDIR /data/zhangsan/gpmall
---> Running in bb5239c3d849
Removing intermediate container bb5239c3d849
---> e791422cdb40
Step 3/5 : ADD user-provider-0.0.1-SNAPSHOT.jar /data/zhangsan/gpmall
---> 61ece5f0c8fe
Step 4/5 : ENTRYPOINT ["java","-jar"]
---> Running in 8e1a6a0d6f30
Removing intermediate container 8e1a6a0d6f30
---> beac96264c93
Step 5/5 : CMD ["/data/zhangsan/gpmall/user-provider-0.0.1-SNAPSHOT.jar"]
---> Running in a5993541334a
Removing intermediate container a5993541334a
---> 502d57ed4303
Successfully built 502d57ed4303
Successfully tagged user-provider:latest
1.3.4 View image
[zhangsan@controller user-provider]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
user-provider latest 502d57ed4303 24 seconds ago 207MB
192.168.18.18:9999/common/java openjdk-8u111-alpine 3fd9dd82815c 6 years ago 145MB
Similarly, build corresponding images for other jar packages one by one.
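Since each directory is named after its module and contains exactly one jar plus a Dockerfile, the remaining builds can be scripted; a sketch under those assumptions:
cd /data/zhangsan/gpmall/gpmall-jar
for d in user-provider gpmall-user shopping-provider gpmall-shopping \
         order-provider comment-provider search-provider; do
  (cd "$d" && sudo docker build -t "$d:latest" .)
done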
1.4 Create a front-end image
1.4.1 Dependency installation
The front-end code is located in the gpmall-front folder, so execute the [npm install] command in that directory, as shown in the figure below.
1.4.2 Package release
Execute the [npm run build] command to package the front end; when it finishes, a dist folder is generated in the directory, as shown in the figure below.
Use a tool such as xftp or sftp to copy this folder to the host where the Docker images were built earlier (Docker is already configured there), as shown below.
[zhangsan@controller ~]$ ls /data/zhangsan/gpmall/frontend/
dist
1.4.3 Configuring Web Services
The gpmall project develops the front end and the back end separately, and the front end must be deployed independently. Nginx is chosen as the web server here: the dist folder generated above is added to the nginx image and a reverse proxy is configured. The nginx configuration file nginx.conf therefore needs to be prepared first; its content is shown below. Save the file into the directory where dist resides:
[zhangsan@controller ~]$ cd /data/zhangsan/gpmall/frontend/
[zhangsan@controller frontend]$ sudo vim nginx.conf
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    client_max_body_size 20m;
    server {
        listen 9999;
        server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
        # Important: all requests to /user must be forwarded to the corresponding
        # service port inside the cluster, otherwise the front end cannot display data
        location /user {
            proxy_pass http://gpmall-user-svc:8082;
            proxy_redirect off;
            proxy_cookie_path / /user;
        }
        # Important: all requests to /shopping must be forwarded to the corresponding
        # service port inside the cluster, otherwise the front end cannot display data
        location /shopping {
            proxy_pass http://gpmall-shopping-svc:8081;
            proxy_redirect off;
            proxy_cookie_path / /shopping;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
1.4.4 Write the Dockerfile
In the directory where dist resides, create a Dockerfile for building the front-end image. The contents of the file are as follows:
[zhangsan@controller frontend]$ sudo vim Dockerfile
FROM 192.168.18.18:9999/common/nginx:latest
COPY dist/ /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/nginx.conf
Finally, confirm that the dist directory, nginx.conf configuration file and Dockerfile are in the same directory.
1.4.5 Build front-end image
Use the Dockerfile to build the image for the front end.
[zhangsan@controller frontend]$ sudo docker build -t gpmall-front:latest .
Sending build context to Docker daemon 11.01MB
Step 1/3 : FROM 192.168.18.18:9999/common/nginx:latest
latest: Pulling from common/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
Status: Downloaded newer image for 192.168.18.18:9999/common/nginx:latest
---> 605c77e624dd
Step 2/3 : COPY dist/ /usr/share/nginx/html/
---> 1b2bfaf186a0
Step 3/3 : COPY nginx.conf /etc/nginx/nginx.conf
---> a504c7bbf947
Successfully built a504c7bbf947
Successfully tagged gpmall-front:latest
1.4.6 View the front-end image
# list images
[zhangsan@controller frontend]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gpmall-front latest a504c7bbf947 2 minutes ago 152MB
192.168.18.18:9999/common/nginx latest 605c77e624dd 14 months ago 141MB
1.5 Upload the images to the private image registry
To ease later project deployment, the images built above need to be uploaded to an image registry. Because the internal network is restricted, all images are uploaded to the private registry here. For the process of setting up a private registry, see: Building a Harbor image registry
The address of the internally deployed Harbor private registry used here is: http://192.168.18.18:9999/
After the private registry is up, you need to create a member account, such as admin.
Uploading a Docker image to a registry usually requires the following three operations:
1.5.1 Log in to the registry
Log in to the private registry with the account created above; the command is as follows:
[zhangsan@controller ~]$ sudo docker login 192.168.18.18:9999
Username: admin
Password:
1.5.2 Tag the image
The syntax for tagging an image is:
docker tag source_image:source_tag registry_address/project_name/new_image_name:new_tag
For example, tag the front-end image:
sudo docker tag gpmall-front:latest 192.168.18.18:9999/gpmall/gpmall-front:latest
1.5.3 Push the image
The syntax for pushing an image is:
docker push registry_address/project_name/new_image_name:new_tag
For example, push the front-end image to the gpmall project in the private registry:
[zhangsan@controller frontend]$ sudo docker push 192.168.18.18:9999/gpmall/gpmall-front:latest
The push refers to repository [192.168.18.18:9999/gpmall/gpmall-front]
2e5e73a63813: Pushed
75176abf2ccb: Pushed
d874fd2bc83b: Mounted from common/nginx
32ce5f6a5106: Mounted from common/nginx
f1db227348d0: Mounted from common/nginx
b8d6e692a25e: Mounted from common/nginx
e379e8aedd4d: Mounted from common/nginx
2edcec3590a4: Mounted from common/nginx
latest: digest: sha256:fc5389e7c056d95c5d269932c191312034b1c34b7486325853edf5478bf8f1b8 size: 1988
In the same way, tag and push all the previously built images to the private registry for later use. The pushed images are shown in the figure below.
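These pushes can be scripted; a sketch, with the image names taken from the table in section 1.3.3 (gpmall-front was already tagged and pushed above):
for img in user-provider gpmall-user shopping-provider gpmall-shopping \
           order-provider comment-provider search-provider; do
  sudo docker tag "$img:latest" "192.168.18.18:9999/gpmall/$img:latest"
  sudo docker push "192.168.18.18:9999/gpmall/$img:latest"
done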
1.5.4 Delete the local images
After all images have been pushed successfully, the local docker images can be cleaned up. Example commands:
# first stop all running containers
[zhangsan@controller ~]$ sudo docker stop $(sudo docker ps -a -q)
# remove all containers
[zhangsan@controller ~]$ sudo docker rm $(sudo docker ps -a -q)
# remove all images
[zhangsan@controller ~]$ sudo docker rmi $(sudo docker images -a -q) --force
2 Deploy the operating environment
The following operations can be completed on any k8s host; here they are performed on the k8s-master01 node.
2.1 Create a namespace
Creating a namespace is not strictly necessary, but to keep this deployment separate from other environments and ease later management, it is best to create one. Here you are required to create a namespace named after the pinyin of your own name, for example zhangsan:
sudo kubectl create namespace zhangsan
To omit the namespace option in later kubectl commands, you can switch the default namespace of the current context to your own:
sudo kubectl config set-context --current --namespace=zhangsan
# to switch back, change zhangsan above to default
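To confirm the switch took effect, you can print the namespace recorded in the current context:
# an empty result means the default namespace is still in use
sudo kubectl config view --minify --output 'jsonpath={..namespace}'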
2.2 Configure NFS service
Since NFS sharing is needed during the deployment, the NFS service must be configured first.
For convenient file storage, it is recommended to carve a separate partition out of the current disk on k8s-master01, format it, and mount it permanently at /data.
1. Create an NFS shared directory
[zhangsan@k8s-master01 ~]$ sudo mkdir -p /data/zhangsan/gpmall/nfsdata
2. Configure nfs service
openEuler installs nfs-utils by default; you can run the install command to install it or confirm it is present, then modify the main NFS configuration file to give any host (*) rw, sync and no_root_squash permissions.
[zhangsan@k8s-master01 ~]$ sudo dnf -y install nfs-utils
[zhangsan@k8s-master01 ~]$ sudo vim /etc/exports
/data/zhangsan/gpmall/nfsdata *(rw,sync,no_root_squash)
3. Restart the service and set it to start automatically at boot
[zhangsan@k8s-master01 ~]$ sudo systemctl restart rpcbind.service
[zhangsan@k8s-master01 ~]$ sudo systemctl restart nfs-server.service
[zhangsan@k8s-master01 ~]$ sudo systemctl enable nfs-server.service
[zhangsan@k8s-master01 ~]$ sudo systemctl enable rpcbind.service
4. Verification
Execute [showmount -e NFS_server_IP] on any internal Linux host. As shown below, if the shared directory of the NFS server is visible, the NFS service is configured correctly.
[root@k8s-master03 ~]# showmount -e 192.168.218.100
Export list for 192.168.218.100:
/data/zhangsan/gpmall/nfsdata *
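Optionally, the export can also be mounted from a test host to confirm read/write access; a sketch, assuming nfs-utils is installed on the test host:
sudo mount -t nfs 192.168.218.100:/data/zhangsan/gpmall/nfsdata /mnt
sudo touch /mnt/.rw-test && sudo rm /mnt/.rw-test
sudo umount /mnt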
2.3 Deploy middleware
gpmall uses the middleware Elasticsearch, ZooKeeper, Kafka, MySQL, RabbitMQ, and Redis, so these basic services need to be deployed in advance.
2.3.1 Deploy Elasticsearch
To ease file management, a dedicated directory can be created here to hold yaml files; all yaml files in the subsequent deployment are stored in this directory.
# create a directory dedicated to yaml files
[zhangsan@k8s-master01 ~]$ sudo mkdir -p /data/zhangsan/gpmall/yaml
# switch to the yaml directory
[zhangsan@k8s-master01 ~]$ cd /data/zhangsan/gpmall/yaml
1. Create es persistent volume pv
Elasticsearch needs to persist data, so a persistent volume (pv) must be created in k8s. This requires creating a directory es under the NFS shared directory to back the pv, and opening up its permissions.
[zhangsan@k8s-master01 yaml]$ sudo mkdir -p /data/zhangsan/gpmall/nfsdata/es
# open permissions
[zhangsan@k8s-master01 yaml]$ sudo chmod 777 /data/zhangsan/gpmall/nfsdata/es
(1) Write the yaml file that creates the pv
# write the es-pv.yaml file
[zhangsan@k8s-master01 yaml]$ sudo vim es-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /data/zhangsan/gpmall/nfsdata/es
    server: 192.168.218.100 # the IP of the host where the directory above resides
(2) Create es-pv object
# create
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f es-pv.yaml
persistentvolume/es-pv created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
es-pv 1Gi RWO Retain Bound zhangsan/es-pvc nfs 10h
2. Create pvc
After creating es-pv, a pvc must also be created so that pods can obtain storage resources from the specified pv.
(1) Write the yaml file that creates the pvc, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim es-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pvc
  namespace: zhangsan
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
(2) Create es-pvc
# create
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f es-pvc.yaml
persistentvolumeclaim/es-pvc created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
es-pvc Bound es-pv 1Gi RWO nfs 10h
3. Create the es Service
(1) Write the yaml file that creates the es service, taking care to modify the namespace and nodePort.
By default, nodePort ranges from 30000 to 32767.
Here you are required to set nodePort to 3XY, where X and Y are both two-digit numbers: X is your class ID and Y is the last two digits of your student number. For example, 31888 below is the value for student No. 88 in class 18.
[zhangsan@k8s-master01 yaml]$ sudo vim es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: es-svc
  namespace: zhangsan
spec:
  type: NodePort
  ports:
  - name: kibana
    port: 5601
    targetPort: 5601
    nodePort: 31888
  - name: rest
    port: 9200
    targetPort: 9200
  - name: inter
    port: 9300
    targetPort: 9300
  selector:
    app: es
(2) Create es-service
# create
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f es-service.yaml
service/es-svc created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-svc NodePort 10.108.214.56 <none> 5601:31888/TCP,9200:31532/TCP,9300:31548/TCP 10h
4. Create a deployment that deploys the Elasticsearch service
(1) Write the deployment yaml file, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim es-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es
  namespace: zhangsan
spec:
  selector:
    matchLabels:
      app: es
  template:
    metadata:
      labels:
        app: es
    spec:
      containers:
      - image: 192.168.18.18:9999/common/elasticsearch:6.6.1
        name: es
        env:
        - name: cluster.name
          value: elasticsearch
        - name: bootstrap.memory_lock
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
          name: rest
        - containerPort: 9300
          name: inter-node
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
      - image: 192.168.18.18:9999/common/kibana:6.6.1
        name: kibana
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://es-svc:9200
        ports:
        - containerPort: 5601
          name: kibana
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-pvc
(2) Create a deployment
# create
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f es-deployment.yaml
deployment.apps/es created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
es 1/1 1 1 10h
Note: If the READY column shows 0/1, something is wrong. Execute [sudo kubectl get pod] to check pod status. If a pod is abnormal, execute [sudo kubectl describe pod pod_name] to view the pod's Events, or execute [sudo kubectl logs -f pod/pod_name] to view its log messages.
If the es pod reports an error, the log will show the following error messages:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2023-06-05T07:43:42,415][INFO ][o.e.n.Node ] [C1xLTO-] stopping ...
[2023-06-05T07:43:42,428][INFO ][o.e.n.Node ] [C1xLTO-] stopped
[2023-06-05T07:43:42,429][INFO ][o.e.n.Node ] [C1xLTO-] closing ...
[2023-06-05T07:43:42,446][INFO ][o.e.n.Node ] [C1xLTO-] closed
The solution is as follows:
On all k8s nodes, modify the /etc/sysctl.conf file and append vm.max_map_count=262144 at the end, then restart each k8s node.
vim /etc/sysctl.conf
……original file content omitted……
vm.max_map_count=262144
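If a full reboot is inconvenient, the same kernel setting can be applied immediately on each node (keep the /etc/sysctl.conf entry for persistence):
# apply the new value at once
sudo sysctl -w vm.max_map_count=262144
# or reload all settings from /etc/sysctl.conf
sudo sysctl -p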
2.3.2 Deploy zookeeper
1. Create zookeeper service
(1) Write the yaml file that creates the zookeeper service object, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim zk-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  namespace: zhangsan
spec:
  ports:
  - name: zkport
    port: 2181
    targetPort: 2181
  selector:
    app: zk
(2) Create and view zookeeper services
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f zk-service.yaml
service/zk-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-svc NodePort 10.108.214.56 <none> 5601:31888/TCP,9200:31532/TCP,9300:31548/TCP 10h
zk-svc ClusterIP 10.107.4.169 <none> 2181/TCP 11s
2. Deploy zookeeper service
(1) Write the yaml file that deploys the zookeeper service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim zk-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zk
  name: zk
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - image: 192.168.18.18:9999/common/zookeeper:latest
        imagePullPolicy: IfNotPresent
        name: zk
        ports:
        - containerPort: 2181
(2) Create and view the deployment of zookeeper
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f zk-deployment.yaml
deployment.apps/zk created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
es 1/1 1 1 116s
zk 1/1 1 1 4s
2.3.3 Deploy kafka
1. Create kafka service
(1) Write the yaml file that creates the kafka service, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim kafka-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  namespace: zhangsan
spec:
  ports:
  - name: kafkaport
    port: 9092
    targetPort: 9092
  selector:
    app: kafka
(2) Create and view kafka service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f kafka-service.yaml
service/kafka-svc created
# view the services
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-svc NodePort 10.96.114.80 <none> 5601:31888/TCP,9200:32530/TCP,9300:32421/TCP 8m39s
kafka-svc ClusterIP 10.108.28.89 <none> 9092/TCP 9s
zk-svc ClusterIP 10.107.4.169 <none> 2181/TCP 3h27m
2. Deploy kafka service
(1) Write the yaml file that deploys the kafka service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim kafka-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: zhangsan
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - image: 192.168.18.18:9999/common/kafka:latest
        name: kafka
        env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-svc
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zk-svc:2181
        ports:
        - containerPort: 9092
(2) Create and view the deployment of kafka service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f kafka-deployment.yaml
deployment.apps/kafka created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment | grep kafka
kafka 1/1 1 1 30s
2.3.4 Deploy MySQL
MySQL also needs to persist data, so it likewise needs a pv resource, along with a storage directory with opened permissions.
# create the directory
[zhangsan@k8s-master01 yaml]$ sudo mkdir /data/zhangsan/gpmall/nfsdata/mysql
[zhangsan@k8s-master01 yaml]$ sudo chmod 777 /data/zhangsan/gpmall/nfsdata/mysql/
1. Create a MySQL persistent volume pv
(1) Write the yaml file that creates the pv
# create the yaml file
[zhangsan@k8s-master01 yaml]$ sudo vim mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /data/zhangsan/gpmall/nfsdata/mysql
    server: 192.168.218.100 # the IP address of the host where the directory above resides
(2) Create and view mysql-pv
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f mysql-pv.yaml
persistentvolume/mysql-pv created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pv/mysql-pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv 1Gi RWO Retain Available nfs 32s
2. Create pvc
(1) Write the yaml file that creates the pvc, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: zhangsan
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
(2) Create and view mysql-pvc
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pvc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pvc/mysql-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pvc Bound mysql-pv 1Gi RWO nfs 17s
3. Create MySQL service
(1) Write the yaml file that creates the mysql service, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace: zhangsan
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
  selector:
    app: mysql
(2) Create and view MySQL service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f mysql-svc.yaml
service/mysql-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc/mysql-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-svc NodePort 10.96.70.204 <none> 3306:30306/TCP 24s
4. Deploy MySQL service
(1) Write the deployment yaml file for the mysql service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim mysql-development.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: zhangsan
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: 192.168.18.18:9999/common/mysql:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
(2) Create and view the deployment of mysql service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f mysql-development.yaml
deployment.apps/mysql created
# view
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment/mysql
NAME READY UP-TO-DATE AVAILABLE AGE
mysql 1/1 1 1 39s
2.3.5 Deploy Rabbitmq
1. Create Rabbitmq service
(1) Prepare the yaml file, taking care to modify the namespace and nodePort.
The nodePort must be 3X(Y+1), where X is the two-digit class ID and Y is the last two digits of the student ID. In the example below, 31889 is the nodePort value for student No. 88 in class 18.
[zhangsan@k8s-master01 yaml]$ sudo vim rabbitmq-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-svc
  namespace: zhangsan
spec:
  type: NodePort
  ports:
  - name: mangerport
    port: 15672
    targetPort: 15672
    nodePort: 31889
  - name: rabbitmqport
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq
(2) Create and view Rabbitmq service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f rabbitmq-svc.yaml
service/rabbitmq-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc/rabbitmq-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-svc NodePort 10.102.83.214 <none> 15672:31889/TCP,5672:32764/TCP 4s
2. Deploy Rabbitmq service
(1) Write the yaml file, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim rabbitmq-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rabbitmq
  name: rabbitmq
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - image: 192.168.18.18:9999/common/rabbitmq:management
        imagePullPolicy: IfNotPresent
        name: rabbitmq
        ports:
        - containerPort: 5672
          name: rabbitmqport
        - containerPort: 15672
          name: managementport
(2) Create and view the deployment of Rabbitmq service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f rabbitmq-deployment.yaml
deployment.apps/rabbitmq created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment/rabbitmq
NAME READY UP-TO-DATE AVAILABLE AGE
rabbitmq 1/1 1 1 24s
3. Workaround: create the queue manually
The gpmall code uses a queue in RabbitMQ, but the code appears to have a bug and the queue is not created in RabbitMQ automatically. You need to create the queue manually inside the rabbitmq container. The procedure is as follows:
(1) View and enter the pod
# view the rabbitmq pod name
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pod | grep rabbitmq
rabbitmq-77f54bdd4f-xndb4 1/1 Running 0 6m1s
# enter the pod
[zhangsan@k8s-master01 yaml]$ sudo kubectl exec -it rabbitmq-77f54bdd4f-xndb4 -- /bin/bash
root@rabbitmq-77f54bdd4f-xndb4:/#
(2) Declare the queue inside the pod
root@rabbitmq-77f54bdd4f-xndb4:/# rabbitmqadmin declare queue name=delay_queue auto_delete=false durable=false --username=guest --password=guest
queue declared
(3) Check whether the queue exists
root@rabbitmq-77f54bdd4f-xndb4:/# rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
delay_queue 0
(4) exit the pod
root@rabbitmq-77f54bdd4f-xndb4:/# exit
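The same queue can also be declared without an interactive shell, straight from the k8s node; a sketch reusing the pod name from above:
sudo kubectl exec rabbitmq-77f54bdd4f-xndb4 -- rabbitmqadmin declare queue name=delay_queue auto_delete=false durable=false --username=guest --password=guest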
2.3.6 Deploy redis
1. Create redis service
(1) Write the yaml file that creates the redis service, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim redis-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  namespace: zhangsan
spec:
  ports:
  - name: redisport
    port: 6379
    targetPort: 6379
  selector:
    app: redis
(2) Create and view redis service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f redis-svc.yaml
service/redis-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc redis-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-svc ClusterIP 10.108.200.204 <none> 6379/TCP 14s
2. Deploy redis service
(1) Write the yaml file that deploys the redis service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: 192.168.18.18:9999/common/redis:latest
        imagePullPolicy: IfNotPresent
        name: redis
        ports:
        - containerPort: 6379
(2) Deploy and view the deployment of redis service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f redis-deployment.yaml
deployment.apps/redis created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment/redis
NAME READY UP-TO-DATE AVAILABLE AGE
redis 1/1 1 1 13s
3 Deploy system modules
The system modules have some dependencies between them, so deploying them in the following order is recommended.
3.1 Deploying User Modules
3.1.1 Create the user-provider service
1. Write the yaml file that creates the user-provider service, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim user-provider-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-provider-svc
  namespace: zhangsan
spec:
  ports:
  - name: port
    port: 80
    targetPort: 80
  selector:
    app: user-provider
2. Create and view user-provider service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f user-provider-svc.yaml
service/user-provider-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc/user-provider-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
user-provider-svc ClusterIP 10.109.120.197 <none> 80/TCP 19s
3.1.2 Deploy the user-provider service
1. Write the yaml file that deploys the user-provider service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim user-provider-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: user-provider
  name: user-provider
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-provider
  template:
    metadata:
      labels:
        app: user-provider
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/user-provider:latest
        imagePullPolicy: IfNotPresent
        name: user-provider
2. Deploy and view the deployment of the user-provider service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f user-provider-deployment.yaml
deployment.apps/user-provider created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment/user-provider
NAME READY UP-TO-DATE AVAILABLE AGE
user-provider 1/1 1 1 21s
Note: If READY shows 0/1, the deployment has a problem. Execute [sudo kubectl get pod] to check pod status; if the pod is running abnormally, execute [sudo kubectl logs -f pod_name] to view its log. If the problem turns out to be in a Docker image built earlier, execute [sudo crictl images] on each k8s node to list the images and [sudo crictl rmi image_ID] to delete the faulty one, then rebuild and re-upload the Docker image and redeploy.
3.2 Deploy gpmall-user service
3.2.1 Create gpmall-user service
1. Write the yaml file that creates the gpmall-user service, taking care to modify the namespace.
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-user-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: gpmall-user-svc
  namespace: zhangsan
spec:
  ports:
  - name: port
    port: 8082
    targetPort: 8082
  selector:
    app: gpmall-user
2. Create and view gpmall-user service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-user-svc.yaml
service/gpmall-user-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc gpmall-user-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gpmall-user-svc ClusterIP 10.107.12.83 <none> 8082/TCP 17s
3.2.2 Deploy the gpmall-user service
1. Write the yaml file that deploys the gpmall-user service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-user-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gpmall-user
  name: gpmall-user
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpmall-user
  template:
    metadata:
      labels:
        app: gpmall-user
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/gpmall-user:latest
        imagePullPolicy: IfNotPresent
        name: gpmall-user
        ports:
        - containerPort: 8082
2. Deploy and view the deployment of the gpmall-user service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-user-deployment.yaml
deployment.apps/gpmall-user created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment gpmall-user
NAME READY UP-TO-DATE AVAILABLE AGE
gpmall-user 1/1 1 1 22s
3.3 Deploy the search module
3.3.1 Write the yaml file for deploying the search-provider module
Take care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim search-provider-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: search-provider
  name: search-provider
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: search-provider
  template:
    metadata:
      labels:
        app: search-provider
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/search-provider:latest
        imagePullPolicy: IfNotPresent
        name: search-provider
3.3.2 Deploy and view the deployment of the search-provider module
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f search-provider-deployment.yaml
deployment.apps/search-provider created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment search-provider
NAME READY UP-TO-DATE AVAILABLE AGE
search-provider 1/1 1 1 27s
3.4 Deploy the order module
3.4.1 Write the yaml file for deploying the order-provider module
Take care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim order-provider-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: order-provider
  name: order-provider
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-provider
  template:
    metadata:
      labels:
        app: order-provider
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/order-provider:latest
        imagePullPolicy: IfNotPresent
        name: order-provider
3.4.2 Deploy and view the deployment of the order-provider module
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f order-provider-deployment.yaml
deployment.apps/order-provider created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment order-provider
NAME READY UP-TO-DATE AVAILABLE AGE
order-provider 1/1 1 1 97s
3.5 Deploy the shopping module
3.5.1 Deploy the shopping-provider module
1. Write the yaml file of the shopping-provider module, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim shopping-provider-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: shopping-provider
  name: shopping-provider
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shopping-provider
  template:
    metadata:
      labels:
        app: shopping-provider
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/shopping-provider:latest
        imagePullPolicy: IfNotPresent
        name: shopping-provider
2. Deploy and view the deployment of the shopping-provider module
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f shopping-provider-deployment.yaml
deployment.apps/shopping-provider created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment shopping-provider
NAME READY UP-TO-DATE AVAILABLE AGE
shopping-provider 1/1 1 1 14s
3.5.2 Create shopping service
1. Write the yaml file that creates the gpmall-shopping service, taking care to modify the namespace. (Note: this file is easily copied from user-provider-svc.yaml; the name, port and selector must all be changed to the gpmall-shopping values so that they match the 8081 port shown in the output below.)
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-shopping-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: gpmall-shopping-svc
  namespace: zhangsan
spec:
  ports:
  - name: port
    port: 8081
    targetPort: 8081
  selector:
    app: gpmall-shopping
2. Create and view gpmall-shopping service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-shopping-svc.yaml
service/gpmall-shopping-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc gpmall-shopping-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gpmall-shopping-svc ClusterIP 10.105.229.200 <none> 8081/TCP 7m10s
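You can additionally confirm that the service selects a running pod by checking its endpoints; a non-empty ENDPOINTS column means the selector matches:
sudo kubectl get endpoints gpmall-shopping-svc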
3.5.3 Deploy the gpmall-shopping service
1. Write the yaml file that deploys the gpmall-shopping service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-shopping-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: gpmall-shopping
  name: gpmall-shopping
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpmall-shopping
  template:
    metadata:
      labels:
        app: gpmall-shopping
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/gpmall-shopping:latest
        imagePullPolicy: IfNotPresent
        name: gpmall-shopping
        ports:
        - containerPort: 8081
2. Deploy and view the deployment of gpmall-shopping service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-shopping-deployment.yaml
deployment.apps/gpmall-shopping created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment gpmall-shopping
NAME READY UP-TO-DATE AVAILABLE AGE
gpmall-shopping 1/1 1 1 18s
3.6 Deploy the comment module
3.6.1 Write the yaml file for deploying the comment-provider module
Take care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim comment-provider-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: comment-provider
  name: comment-provider
  namespace: zhangsan
spec:
  replicas: 1
  selector:
    matchLabels:
      app: comment-provider
  template:
    metadata:
      labels:
        app: comment-provider
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/comment-provider:latest
        imagePullPolicy: IfNotPresent
        name: comment-provider
3.6.2 Deploy and view the deployment of the comment-provider module
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f comment-provider-deployment.yaml
deployment.apps/comment-provider created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment comment-provider
NAME READY UP-TO-DATE AVAILABLE AGE
comment-provider 1/1 1 1 65s
3.7 Deploy the front-end module
3.7.1 Create a front-end service
1. Write the yaml file that creates the gpmall-frontend service, taking care to modify the namespace.
Here the nodePort must be 3X(Y+2), where X is the two-digit class ID and Y is the last two digits of the student number. For example, 31890 below is the value set for student No. 88 of class 18.
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-frontend-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: gpmall-frontend-svc
  namespace: zhangsan
spec:
  type: NodePort
  ports:
  - port: 9999
    targetPort: 9999
    nodePort: 31890
  selector:
    app: gpmall-frontend
2. Create and view gpmall-frontend service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-frontend-svc.yaml
service/gpmall-frontend-svc created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc gpmall-frontend-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gpmall-frontend-svc NodePort 10.99.154.113 <none> 9999:31890/TCP 15s
3.7.2 Deploying the gpmall-frontend service
1. Write the yaml file that deploys the gpmall-frontend service, taking care to modify the namespace and image address.
[zhangsan@k8s-master01 yaml]$ sudo vim gpmall-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpmall-frontend
  namespace: zhangsan
spec:
  selector:
    matchLabels:
      app: gpmall-frontend
  template:
    metadata:
      labels:
        app: gpmall-frontend
    spec:
      containers:
      - image: 192.168.18.18:9999/gpmall/gpmall-front:latest
        imagePullPolicy: IfNotPresent
        name: gpmall-frontend
        ports:
        - containerPort: 9999
2. Deploy and view the deployment of the gpmall-frontend service
[zhangsan@k8s-master01 yaml]$ sudo kubectl create -f gpmall-frontend-deployment.yaml
deployment.apps/gpmall-frontend created
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment gpmall-frontend
NAME READY UP-TO-DATE AVAILABLE AGE
gpmall-frontend 1/1 1 1 17s
3.8 Confirmation Status
3.8.1 Confirm the status of all pods
The STATUS of all pods is required to be Running.
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
comment-provider-59cb4fd467-84fbh 1/1 Running 0 15h 10.0.3.205 k8s-master01 <none> <none>
es-bb896c98-6ggf4 2/2 Running 0 23h 10.0.0.103 k8s-node02 <none> <none>
gpmall-frontend-6486fb87f6-7gsxn 1/1 Running 0 15h 10.0.0.221 k8s-node02 <none> <none>
gpmall-shopping-fc7d766b4-dzlgb 1/1 Running 0 15h 10.0.1.135 k8s-master02 <none> <none>
gpmall-user-6ddcf889bb-5w58x 1/1 Running 0 20h 10.0.2.23 k8s-master03 <none> <none>
kafka-7c6cdc8647-rx5tb 1/1 Running 0 22h 10.0.1.236 k8s-master02 <none> <none>
mysql-8976b8bb4-2sfkq 1/1 Running 0 14h 10.0.3.131 k8s-master01 <none> <none>
order-provider-74bbcd6dd4-f8k87 1/1 Running 0 16h 10.0.4.41 k8s-node01 <none> <none>
rabbitmq-77f54bdd4f-xndb4 1/1 Running 0 21h 10.0.4.1 k8s-node01 <none> <none>
redis-bc8ff7957-2xn8z 1/1 Running 0 20h 10.0.2.15 k8s-master03 <none> <none>
search-provider-f549c8d9d-ng4dv 1/1 Running 0 15h 10.0.3.115 k8s-master01 <none> <none>
shopping-provider-75b7cd5d6-6767x 1/1 Running 0 17h 10.0.1.55 k8s-master02 <none> <none>
user-provider-7f6d7f8b85-hj5m5 1/1 Running 0 20h 10.0.4.115 k8s-node01 <none> <none>
zk-84bfd67c77-llk5w 1/1 Running 0 24h 10.0.1.18 k8s-master02 <none> <none>
3.8.2 Status of all services
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
es-svc NodePort 10.96.114.80 <none> 5601:31888/TCP,9200:32530/TCP,9300:32421/TCP 7h22m
gpmall-frontend-svc NodePort 10.99.154.113 <none> 9999:31890/TCP 4m23s
gpmall-shopping-svc ClusterIP 10.98.89.99 <none> 8081/TCP 77m
gpmall-user-svc ClusterIP 10.107.12.83 <none> 8082/TCP 4h48m
kafka-svc ClusterIP 10.108.28.89 <none> 9092/TCP 7h13m
mysql-svc NodePort 10.98.41.1 <none> 3306:30306/TCP 5h58m
rabbitmq-svc NodePort 10.102.83.214 <none> 15672:31889/TCP,5672:32764/TCP 5h37m
redis-svc ClusterIP 10.108.200.204 <none> 6379/TCP 5h17m
user-provider-svc ClusterIP 10.109.120.197 <none> 80/TCP 5h6m
zk-svc ClusterIP 10.107.4.169 <none> 2181/TCP 10h
3.8.3 Status of all deployments
Requires AVAILABLE to be 1 for all deployments.
[zhangsan@k8s-master01 yaml]$ sudo kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
comment-provider 1/1 1 1 19m
es 1/1 1 1 7h22m
gpmall-frontend 1/1 1 1 2m49s
gpmall-shopping 1/1 1 1 14m
gpmall-user 1/1 1 1 4h45m
kafka 1/1 1 1 7h8m
mysql 1/1 1 1 5h58m
order-provider 1/1 1 1 49m
rabbitmq 1/1 1 1 5h32m
redis 1/1 1 1 5h16m
search-provider 1/1 1 1 16m
shopping-provider 1/1 1 1 84m
user-provider 1/1 1 1 4h55m
zk 1/1 1 1 8h
4 Test visit
At this point, gpmall is basically deployed. You can find the node on which gpmall-frontend runs with the [kubectl get pod -o wide] command, and then access the application through a node IP address and the corresponding port number, in the form IP_address:3X(Y+2). For the k8s cluster, the IP address here is usually the leader node or the VIP that provides external service.
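Before opening a browser you can quickly verify reachability from the command line; a minimal sketch, assuming the node IP 192.168.218.100 and the example port 31890:
# expect an HTTP 200 response from the nginx front end
curl -I http://192.168.218.100:31890/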
4.1 Connect to the database
When deploying for the first time, the opened page cannot display product information, so you need to connect to the database. The specific operations are as follows:
4.1.1 Create a MySQL connection
Use Navicat to create a new MySQL connection, as shown in the figure below.
In the window that opens, fill in the host IP address or host name; this is usually the IP address of the host where the mysql pod runs, which can be found with the [kubectl get pod -o wide] command. If the k8s cluster has multiple master nodes, fill in the leader node's IP address or the VIP that provides external service.
The port number defaults to 30306 and can also be checked with the [kubectl get svc] command. As shown below, port 3306 inside the pod is mapped to 30306 outside.
[zhangsan@k8s-master01 yaml]$ sudo kubectl get svc | grep mysql
mysql-svc NodePort 10.98.41.1 <none> 3306:30306/TCP 21h
The default account/password is root/root.
4.1.2 Opening a connection
Double-click the newly created MySQL connection on the left to open it. If the 2003 error shown in the figure appears (2003 - Can't connect to MySQL server on '10.200.7.99' (10038)), the host IP address entered earlier may be wrong; edit the host address in the connection's "Connection Properties" and try to open the connection again.
If the 1130 error shown in the figure appears (1130 - Host '10.0.1.232' is not allowed to connect to this MySQL server), MySQL probably only allows access from localhost.
This can be solved with the following operations.
(1) View the pod name of mysql
[zhangsan@k8s-master01 yaml]$ sudo kubectl get pod | grep mysql
mysql-8976b8bb4-2sfkq 1/1 Running 0 14h
(2) Enter inside the mysql pod container
[zhangsan@k8s-master01 yaml]$ sudo kubectl exec -it mysql-8976b8bb4-2sfkq -- /bin/bash
root@mysql-8976b8bb4-2sfkq:/#
(3) Log in to mysql inside the container, the default account/password is root/root
root@mysql-8976b8bb4-2sfkq:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10232
Server version: 8.0.27 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
(4) Execute the following 3 statements in the mysql container
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> update user set host = '%' where user = 'root';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
(5) If the 1251 error appears (1251 - Client does not support authentication protocol requested by server; consider upgrading MySQL client), continue to execute the following commands inside the mysql container.
mysql> alter user 'root'@'%' identified with mysql_native_password by 'root';
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
mysql> exit;
Bye
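For reference, the statements in steps (4) and (5) can also be run non-interactively from the k8s node; a sketch, assuming the pod name from above:
# run the host-permission fix in one shot
sudo kubectl exec -i mysql-8976b8bb4-2sfkq -- mysql -uroot -proot \
  -e "update mysql.user set host='%' where user='root'; flush privileges; alter user 'root'@'%' identified with mysql_native_password by 'root'; flush privileges;"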
4.1.3 Create a new database
After successfully opening the connection, you can create a new database. Right-click the database connection and select [New Database], as shown in the figure below.
In the window that opens, fill in the database name, and set the character set and collation as shown in the image below.
4.1.4 Import database table
The author of the gpmall project provides a data-table script: the gpmall.sql file in the db_script directory of the source code. This file can be imported with Navicat as follows:
Double-click the newly created database, then right-click, and select [Run SQL File...], as shown in the figure below.
In the opened window, click the button behind [File] and select the gpmall.sql file in the source code db_script directory.
Click the [Start] button to start the import, and the interface after success is shown in the figure below.
After successful import, right-click the table on the left and select [Refresh] to see the database table.
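If Navicat is not available, the same import can be done from the command line; a sketch, assuming the mysql pod name from above, that the database was named gpmall, and that gpmall.sql is in the current directory:
sudo kubectl exec -i mysql-8976b8bb4-2sfkq -- mysql -uroot -proot gpmall < gpmall.sql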
4.2 Test Access
Under normal circumstances, refresh the page to see the page shown below.
The default test account is test/test, which can be logged in to experience.
If the products still cannot be displayed, execute the following commands in sequence to redeploy shopping-provider and gpmall-shopping.
# first delete the original deployments
sudo kubectl delete -f shopping-provider-deployment.yaml
sudo kubectl delete -f gpmall-shopping-deployment.yaml
# redeploy
sudo kubectl create -f shopping-provider-deployment.yaml
sudo kubectl create -f gpmall-shopping-deployment.yaml
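On kubectl 1.15 and later, an equivalent restart can be done without deleting and recreating the deployments:
sudo kubectl rollout restart deployment/shopping-provider deployment/gpmall-shopping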