1. Environment preparation
1.1 Environment description
This article deploys Nexus on a VMware virtual machine running CentOS 8, on top of a k8s cluster built with kubeadm. The k8s node information is as follows:
server | IP address |
master | 192.168.31.80 |
node1 | 192.168.31.8 |
node2 | 192.168.31.9 |
If you want to know how to build a k8s cluster, see my article "kubeadm deploys a k8s cluster".
1.2 Installation Instructions
As enterprise-level applications and their requirements grow, developers increasingly need a reliable, scalable, and manageable repository to store and share artifacts. Nexus is a popular repository manager: an open-source, Java-based piece of software for managing and distributing artifacts. Nexus 3 is the new version of Nexus, offering many new features and improvements that make it an even more powerful and flexible repository manager. This article describes in detail how to deploy a Nexus 3 private repository on k8s.
2. Install NFS
The main function of the NFS storage is to provide stable back-end storage: when the Nexus Pod fails, restarts, or migrates, the original data can still be retrieved.
2.1 Install NFS
I chose to create the NFS storage on the master node. First, execute the following command to install NFS:
yum -y install nfs-utils rpcbind
2.2 Create the NFS shared folder
mkdir -p /var/nfs/nexus
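The directory must also be exported before NFS clients can mount it. A minimal sketch follows; the wide-open `*(rw,sync,no_root_squash)` client specification is an assumption suitable for a lab setup, and should be restricted to your node subnet in production:

```shell
# Export the shared directory to NFS clients (run as root on the master).
echo "/var/nfs/nexus *(rw,sync,no_root_squash)" >> /etc/exports
# Re-read /etc/exports and print the active export table.
exportfs -arv
```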
2.3 Start and enable the NFS services
systemctl start nfs-server
systemctl enable nfs-server
systemctl start rpcbind
systemctl enable rpcbind
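Before moving on to k8s, it is worth confirming that the export is visible from the other machines. Note that every worker node also needs nfs-utils installed, because the kubelet is what actually performs the NFS mount. The server IP below is the master address from section 1.1:

```shell
# List the exports offered by the NFS server; /var/nfs/nexus should appear.
showmount -e 192.168.31.80
```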
2.4 Create the nfs-client ServiceAccount and RBAC authorization
# Create the namespace
kubectl create ns nexus
cat > nexus-nfs-client-sa.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client
  namespace: nexus
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client
    namespace: nexus
roleRef:
  kind: ClusterRole
  name: nfs-client-runner
  apiGroup: rbac.authorization.k8s.io
EOF
2.5 Execute the creation command
kubectl apply -f nexus-nfs-client-sa.yaml
2.6 Check whether the resources were created successfully (note that ClusterRole and ClusterRoleBinding are cluster-scoped, so they take no namespace)
kubectl get serviceaccount -n nexus -o wide
kubectl get clusterrole nfs-client-runner -o wide
kubectl get clusterrolebinding run-nfs-provisioner -o wide
2.7 Create nfs client
cat > nexus-nfs-client.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client
  labels:
    app: nfs-client
  # replace with the namespace where the provisioner is deployed
  namespace: nexus
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client
  template:
    metadata:
      labels:
        app: nfs-client
    spec:
      serviceAccountName: nfs-client
      containers:
        - name: nfs-client
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # this name must match the provisioner named in the StorageClass
              value: my-nexus-nfs
            - name: ENABLE_LEADER_ELECTION # enable leader election for HA; optional when replicas is 1
              value: "True"
            - name: NFS_SERVER
              value: 192.168.31.80 # change to your own IP (the machine where NFS is deployed)
            - name: NFS_PATH
              value: /var/nfs/nexus # change to your own NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.31.80 # change to your own IP (the machine where NFS is deployed)
            path: /var/nfs/nexus # change to your own NFS shared directory
EOF
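One caveat worth knowing: on Kubernetes v1.20 and newer, the legacy nfs-client-provisioner image above fails to provision volumes (its logs show "selfLink was empty" errors) because the metadata.selfLink API field was removed. If you hit this, a commonly used drop-in replacement is the maintained nfs-subdir-external-provisioner image; the env vars (PROVISIONER_NAME, NFS_SERVER, NFS_PATH) keep the same meaning, so swapping only the image line is usually enough:

```yaml
# Drop-in replacement image for clusters >= v1.20 (version tag is an example).
image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
```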
2.8 Create the StorageClass
cat > nexus-store-class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexus-nfs-storage
provisioner: my-nexus-nfs
EOF
2.9 Check whether the nfs client and StorageClass were created successfully (StorageClass is cluster-scoped)
kubectl get storageclass -o wide
kubectl get pod -n nexus -o wide
3. Create a PVC
The manifest below is a PersistentVolumeClaim; the nfs-client provisioner creates the backing PV automatically.
3.1 Create the PVC yaml
cat > nexus-pv.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-pvc # customizable
  namespace: nexus # customizable; keep it consistent with the namespace used throughout this article
  labels:
    pvc: nexus-pvc # customizable
spec:
  storageClassName: nexus-nfs-storage # name of the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
EOF
3.2 Execute the creation command
kubectl apply -f nexus-pv.yaml
3.3 Check whether the PVC is bound and the PV was provisioned
kubectl get pvc -n nexus
kubectl get pv
4. Deploy Nexus
4.1 Create the service
cat > nexus-service.yaml << EOF
kind: Service
apiVersion: v1
metadata:
  name: nexus3
  namespace: nexus
  labels:
    app: nexus3
spec:
  type: NodePort
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 30520 # externally exposed port, customizable (must lie in the default NodePort range 30000-32767)
  selector:
    app: nexus3
EOF
4.2 Execute the creation command
kubectl apply -f nexus-service.yaml
4.3 Create a deployment
cat > nexus-deployment.yaml << EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nexus3 # customizable
  labels:
    app: nexus3 # customizable
  namespace: nexus # customizable; keep it consistent with the namespace used throughout this article
spec:
  replicas: 1 # number of replicas
  selector:
    matchLabels:
      app: nexus3
  template:
    metadata:
      labels:
        app: nexus3
    spec:
      containers:
        - name: nexus3
          image: sonatype/nexus3
          ports:
            - name: nexus3-8081
              containerPort: 8081 # container port
              protocol: TCP
          resources:
            limits:
              memory: 6G
              cpu: 1000m
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: data
              mountPath: /nexus-data # mount the data directory out of the container
      restartPolicy: Always
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nexus-pvc # name of the PVC
            readOnly: false
EOF
4.4 Execute the command to create a deployment
kubectl apply -f nexus-deployment.yaml
4.5 Check whether the service and deployment were created successfully
kubectl get service -n nexus -o wide
kubectl get pod -n nexus -o wide
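Nexus can take a few minutes to initialize on first start. Two optional checks while you wait, using the deployment and namespace names defined above:

```shell
# Block until the Deployment reports all replicas ready (5-minute timeout).
kubectl rollout status deployment/nexus3 -n nexus --timeout=300s
# Tail the Nexus logs; a "Started Sonatype Nexus" line indicates it is ready.
kubectl logs -f deployment/nexus3 -n nexus
```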
5. Login test
5.1 Test external access to Nexus
Open http://<node-IP>:30520 in a browser (for example http://192.168.31.80:30520, using the NodePort defined in section 4.1). You should see the "Welcome - Nexus Repository Manager" page.
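If you prefer the command line, you can probe the NodePort first (IP and port taken from the service above):

```shell
# An HTTP 200 response indicates Nexus is up and reachable from outside.
curl -I http://192.168.31.80:30520/
```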
5.2 Get the default login password
Enter the Nexus container; the initial password is stored in the file /nexus-data/admin.password. Print it to the screen with cat /nexus-data/admin.password. The default account name is admin.
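From outside the container, the same password can be read in one step, addressing the pod through the Deployment created in section 4.3:

```shell
# Print the generated admin password without opening an interactive shell.
kubectl exec -n nexus deploy/nexus3 -- cat /nexus-data/admin.password
```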
5.3 Change the password
After changing the password, log in again. With that, the deployment of Nexus on k8s is complete!
If you think this article is helpful to you, please like + bookmark + follow!