Getting Started with DevOps: GitLab, Jenkins, Harbor, CI/CD, Automation, and Operations Development
1. Introduction to DevOps
Software development traditionally starts with two teams:
- The development team designs and builds the system from the ground up, and iterates on it continuously.
- The operations team deploys the development team's code after testing, and wants the system to run stably and securely.
So two teams with different goals must cooperate to deliver one piece of software.
After the development team finalizes the plan and completes the coding, the result is handed over to the operations team.
The operations team then feeds back the bugs that need fixing and the work that needs redoing.
At this point the development team often has to wait for that feedback, which prolongs the whole software delivery cycle.
One workaround is for the development team to move on to the next project while waiting for Ops feedback on the previous code.
But this means a complete project needs an even longer cycle before the final code is ready.
In today's Internet industry, agile development is widely favored because it speeds up iteration; yet communication friction between the development and operations teams makes getting a new version online expensive in time, which defeats the original purpose of agile development.
So what if the development team and the operations team were merged into one team working together on the same software? That practice is called DevOps.
DevOps is literally an abbreviation of Development & Operations.
Although the name mentions only development and operations, in practice the QA testing team is involved as well.
As can be seen around the Internet, the symbol of DevOps resembles an infinity loop,
which reflects that DevOps is a continuous process of improving efficiency and continuous work.
The DevOps approach lets companies respond to updates and market changes faster: development delivers quickly, and deployments are more stable.
The core idea is to streamline the process between the Dev and Ops teams so that the overall software development process becomes faster.
The overall software development process includes:
- PLAN: the development team draws up a development plan based on the customer's goals.
- CODE: coding proceeds according to the PLAN; the different versions of the code need to be stored in a repository.
- BUILD: after coding is complete, the code needs to be built and run.
- TEST: after the project builds successfully, the code must be tested for bugs and errors.
- DEPLOY: once the code has passed manual and automated testing, it is deemed ready for deployment and handed to the operations team.
- OPERATE: the operations team deploys the code to the production environment.
- MONITOR: after the project goes live, the product must be continuously monitored.
- INTEGRATE: the feedback gathered during monitoring is sent back to the PLAN stage. This iterative loop is the core of DevOps: continuous integration and continuous deployment.
To keep this overall process efficient, there are well-established tools for each stage, as shown in the figure below:
(Figure: software development process and the tools involved)
Finally, a definition of DevOps can be given: DevOps emphasizes completing software life cycle management through automated tool chains and efficient collaboration and communication between teams, in order to deliver more stable software faster and more frequently.
2. Code stage tools
In the code stage, we need to store different versions of the code in a repository. Common version control tools are SVN and Git. Here we use Git as the version control tool and GitLab as the remote repository.
2.1 Git installation
https://git-scm.com/ (a straightforward next-next-finish installer)
2.2 GitLab installation
Prepare a separate server and install GitLab with Docker.

- Search for GitLab images:

```shell
docker search gitlab
```

- Pull the GitLab Community Edition image:

```shell
docker pull gitlab/gitlab-ce
```

- Prepare the docker-compose.yml file:

```yaml
version: '3.1'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    container_name: gitlab
    restart: always
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.11.11:8929'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    ports:
      - '8929:8929'
      - '2224:2224'
    volumes:
      - './config:/etc/gitlab'
      - './logs:/var/log/gitlab'
      - './data:/var/opt/gitlab'
```

- Start the container (this takes a while):

```shell
docker-compose up -d
```

- Visit the GitLab home page.

- View the initial password of the root user:

```shell
docker exec -it gitlab cat /etc/gitlab/initial_root_password
```

- Log in as root; the password must be changed after the first login.

Once done, GitLab can be used much like Gitee or GitHub.
3. Build stage tools
There are generally two options for building Java projects: Maven and Gradle.
Here we choose Maven as the build tool for the project.
The Maven installation itself will not be covered in detail, but make sure a Maven repository mirror and the JDK compiler version are configured.
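For reference, a minimal `pom.xml` that such a pipeline could build might look like the sketch below; the coordinates are placeholders, not taken from any project in the original text.

```xml
<!-- Minimal sketch of a buildable Maven project; com.example/demo are placeholder coordinates -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
</project>
```

Building it with `mvn clean package -DskipTests` produces the jar under `target/` that the later deployment stages pick up.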
4. Operate stage tools
Docker will be used during deployment. For now only Docker needs to be installed; Kubernetes will be installed later.
4.1 Docker installation
- Prepare the test environment and the production environment.

- Install the dependencies Docker requires:

```shell
yum -y install yum-utils device-mapper-persistent-data lvm2
```

- Point the Docker download mirror at Alibaba Cloud:

```shell
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```

- Install the Docker service:

```shell
yum -y install docker-ce
```

- After a successful installation, start Docker and enable it at boot:

```shell
# Start the Docker service
systemctl start docker
# Enable start on boot
systemctl enable docker
```

- Verify the installation:

```shell
docker version
```
4.2 Docker-Compose installation
- Download Docker Compose: https://github.com/docker/compose

- Move the downloaded docker-compose-Linux-x86_64 file onto the Linux host: …

- Make the file executable and move it onto the $PATH:

```shell
# Set execute permission
chmod a+x docker-compose-Linux-x86_64
# Move it to /usr/bin and rename it docker-compose
mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
```

- Verify the installation:

```shell
docker-compose version
```
5. Integrate stage tools
There are many tools for continuous integration and continuous deployment; among them, Jenkins is an open-source continuous integration platform.
Jenkins handles tasks such as building the project and publishing the written code to the test and production environments.
Jenkins needs a large number of plugins to do its work and has a relatively high installation cost. Below, Jenkins is set up with Docker.
5.1 Introduction to Jenkins
Jenkins is an open-source software project: a continuous integration tool developed in Java.
Jenkins is widely used; most Internet companies combine Jenkins with GitLab, Docker, and K8s as the core tools for implementing DevOps.
Jenkins's greatest strength is its plugins: the official plugin library automates all kinds of repetitive chores in the CI/CD process.
Jenkins's main job is to pull buildable project code from GitLab, build it, and then release it to the test or production environment according to the process.
Typically, once the code on GitLab has been tested extensively, a release version is decided on and then published to the production environment.
CI/CD can be understood as:
- CI: Jenkins pulls the code, builds it, and produces an image for testers to test.
  - Continuous integration: code is continuously merged into the trunk, then built and tested automatically.
- CD: Jenkins pulls the tagged release-version code, builds it, and produces an image for operations staff to deploy.
  - Continuous delivery: continuously integrated code is kept ready for manual deployment.
  - Continuous deployment: continuously delivered code is deployed automatically, anytime, anywhere.
(Figure: CI vs. CD)
5.2 Jenkins installation
- Pull the Jenkins image:

```shell
docker pull jenkins/jenkins
```

- Write docker-compose.yml:

```yaml
version: "3.1"
services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - ./data/:/var/jenkins_home/
```

- The first startup fails because the data-volume directory lacks write permission, so grant it:

```shell
chmod -R a+w data/
```

- After restarting the Jenkins container: Jenkins needs to download a lot of content, and the default update site is slow, so point it at a faster mirror. Edit hudson.model.UpdateCenter.xml in the data volume; the default is:

```xml
<?xml version='1.1' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>https://updates.jenkins.io/update-center.json</url>
  </site>
</sites>
```

Replace the URL with http://mirror.esuni.jp/jenkins/updates/update-center.json (the Tsinghua mirror https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json also works):

```xml
<?xml version='1.1' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>http://mirror.esuni.jp/jenkins/updates/update-center.json</url>
  </site>
</sites>
```

- Restart the Jenkins container again and open Jenkins in the browser (this takes a while).

- View the password needed to log in to Jenkins, then log in and download plugins:

```shell
docker exec -it jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```

- Select the plugins to install.

- After the download completes, fill in the setup information to reach the home page (some plugins may fail to download).
5.3 Jenkins entry configuration
Since Jenkins needs to pull code from Git, build it locally, and even publish custom images directly to a Docker registry, there is a fair amount of configuration to do.
5.3.1 Build tasks
Prepare a project in the GitLab repository, then configure it in Jenkins to implement a basic DevOps flow for that project.

- Build a Maven project and publish it to GitLab (Gitee and GitHub also work).
- In Jenkins, click the left navigation to create a new task.
- Choose a freestyle build task.
5.3.1 Configure the source code pull address
Jenkins stores the source code held on Git on the disk of the Jenkins service itself.

- Configure the URL the task pulls source code from.
- Build immediately: click Build Now in the task.
- View the build log via the task bar (③ above). The log shows that the source code has been pulled to the Jenkins host; the third line of the log tells you where it was placed.
- View the source code under /var/jenkins_home/workspace/test inside the Jenkins container.
5.3.2 Configure Maven to build code
After the code is pulled to the Jenkins host, it must be built there. That requires a Maven environment, and Maven requires Java, so the JDK and Maven are installed into Jenkins next and wired into the Jenkins service.

- Prepare the JDK and Maven archives and map them into the Jenkins container through the data volume.
- Unpack the archives and configure Maven's settings.xml:

```xml
<!-- Aliyun mirror -->
<mirror>
  <id>alimaven</id>
  <name>aliyun maven</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
  <mirrorOf>central</mirrorOf>
</mirror>
<!-- JDK 1.8 compiler profile -->
<profile>
  <id>jdk-1.8</id>
  <activation>
    <activeByDefault>true</activeByDefault>
    <jdk>1.8</jdk>
  </activation>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <maven.compiler.compilerVersion>1.8</maven.compiler.compilerVersion>
  </properties>
</profile>
```

- Configure JDK & Maven in Jenkins and save.
- Configure the Jenkins task to build the code.
- Build immediately and check the jar package under target/.
5.3.3 Configuring Publish & Remote Operation
After the jar package is built, it can be published to the test or production environment as appropriate. This uses the previously downloaded Publish Over SSH plugin.

- Configure Publish Over SSH connections to the test and production environments.
- Configure the task's post-build action to publish the jar package to the target server.
- Build the task immediately and check the result on the target server.
6. Introduction to CI and CD
Use Jenkins to pull GitLab's Spring Boot code, build it, and publish it to the test environment: continuous integration.
Use Jenkins to pull the Spring Boot code of a specified release version from GitLab, build it, and release it to the production environment: continuous delivery/deployment.
6.1 Continuous Integration
To let the program be pushed to the test environment and run on the Docker service automatically, Docker configuration and script files must be added, so the program can run as it is integrated into the trunk.

- Add a Dockerfile (builds the custom image).
- Add a docker-compose.yml file (loads the custom image and starts the container).
- Append script commands to the Jenkins post-build action (publish, then execute the script after the build).
- Right after pushing to GitLab, the project is built by Jenkins and delivered to the target server.
- Verify the program on the target server and test the interface.
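The original shows these three files only as screenshots. As a rough sketch (the jar name, paths, and port are assumptions, not taken from the original project), they might look like:

```dockerfile
# Dockerfile: package the built jar into an image (jar name assumed)
FROM openjdk:8-jdk-alpine
COPY demo-1.0.0.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

```yaml
# docker-compose.yml: run the custom image (service name and port assumed)
version: "3.1"
services:
  demo:
    build: .
    image: demo:latest
    container_name: demo
    ports:
      - 8080:8080
```

```shell
# Post-build command on the target server: rebuild the image and restart the container
cd /usr/local/demo
docker-compose down
docker-compose up -d --build
```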
6.2 Continuous Delivery and Deployment
The program code can finally be delivered after multiple integration rounds. The overall flow of continuous delivery is similar to continuous integration, except that a specific release version must be selected.
- Download the Git Parameter plugin.
- Set up a parameterized build for the project (build on Git tags).
- Add a tag version to the project.
- When the task is built, build with a shell step that pulls the code of the specified tag version (switch to the specified tag, then build the project).
- Build the task with the chosen parameter and publish it to the target server.
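The shell build step mentioned above might look roughly like the following sketch, where `${tag}` is the value supplied by the Git Parameter plugin and `mvn` is assumed to be on the PATH (the later sections use the full path /var/jenkins_home/maven/bin/mvn):

```shell
# Check out the tag selected by the Git Parameter, then build it
git checkout ${tag}
mvn clean package -DskipTests
```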
7. Integrating SonarQube
7.1 Introduction to SonarQube
SonarQube is an open-source code analysis platform that supports more than 25 languages, including Java, Python, PHP, JavaScript, and CSS, and can detect problems such as duplicated code, code vulnerabilities, coding-standard violations, and security flaws.
SonarQube can be integrated with many tools for code scanning, such as Maven, Gradle, Git, and Jenkins; the detection results are pushed back to SonarQube and displayed in its UI.
(Figure: the SonarQube UI)
7.2 Setting up the SonarQube environment
7.2.1 Installing SonarQube
SonarQube dropped MySQL support in version 7.9 and recommends PostgreSQL for commercial environments, so installing SonarQube requires PostgreSQL.
The long-term support version, 8.9, is installed here.

- Pull the images:

```shell
docker pull postgres
docker pull sonarqube:8.9.3-community
```

- Write docker-compose.yml:

```yaml
version: "3.1"
services:
  db:
    image: postgres
    container_name: db
    ports:
      - 5432:5432
    networks:
      - sonarnet
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
  sonarqube:
    image: sonarqube:8.9.3-community
    container_name: sonarqube
    depends_on:
      - db
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
networks:
  sonarnet:
    driver: bridge
```

- Start the containers:

```shell
docker-compose up -d
```

- Set vm.max_map_count in sysctl.conf (SonarQube's embedded Elasticsearch requires it to be at least 262144), then reload:

```shell
sysctl -p
```

- Restarting takes some time; check the container logs, and startup has succeeded once the expected lines appear.
- Visit the SonarQube home page and log in.
- The password must be reset once.
- SonarQube home page.
7.2.2 Installing the Chinese language plugin
Install the plugin from the marketplace.
After a successful installation SonarQube must be restarted; if the installation fails, just click install again.
After installation succeeds, a restart button appears; click it.
Check the effect after the restart.
7.3 Basic SonarQube usage
SonarQube can be used in several ways: integrated with Maven, or via sonar-scanner; either way, the results are then viewed in SonarQube.
7.3.1 Code analysis with Maven

- Modify Maven's settings.xml to add the SonarQube profile:

```xml
<profile>
  <id>sonar</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  <properties>
    <sonar.login>admin</sonar.login>
    <sonar.password>123456789</sonar.password>
    <sonar.host.url>http://192.168.11.11:9000</sonar.host.url>
  </properties>
</profile>
```

- In the project directory, run: mvn sonar:sonar
- View the analysis results in the SonarQube UI.
7.3.2 Code analysis with sonar-scanner

- Download sonar-scanner: https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/
  The 4.6.x Linux version is sufficient.
- Unpack it and configure the SonarQube server information:
  - Since it ships as a zip archive, install unzip first:

```shell
yum -y install unzip
```

  - Unpack the archive:

```shell
unzip sonar-scanner-cli/sonar-scanner-cli-4.6.0.2311-linux.zip
```

  - Configure the SonarQube server address in conf/sonar-scanner.properties.
- Run the analysis command:

```shell
# Run in the project directory
~/sonar-scanner/bin/sonar-scanner -Dsonar.sources=./ -Dsonar.projectname=demo -Dsonar.projectKey=java -Dsonar.java.binaries=target/
```

- View the analysis results in the SonarQube UI.
7.4 Integrating SonarQube with Jenkins
For Jenkins to run SonarQube code scans, the integration plugin must be downloaded first.
7.4.1 Installing the Jenkins plugin
Download the SonarQube plugin.
7.4.2 Configuring SonarQube in Jenkins

- Enable SonarQube authentication.
- Obtain a SonarQube token.
- Configure the SonarQube information in Jenkins.

7.4.3 Configuring sonar-scanner

- Add sonar-scanner to the Jenkins data volume and set up the global configuration.
- Configure sonar-scanner for the task.

7.4.4 Building the task
8. Integrating Harbor
8.1 Introduction to Harbor
When deploying projects earlier, we mainly had Jenkins push the jar package to a given server and then had a script make the target server deploy that jar. With many projects, every target server must turn the jar into a custom image and start it through Docker; that is a lot of repeated work and slows down deployment.
Instead, Harbor can serve as a private Docker image registry: Jenkins packages each project, builds a Docker image, and publishes it to Harbor; the target servers only need to be notified, and they pull the image from Harbor and deploy it locally.
Docker officially provides the Registry image repository, but Registry's features are rather bare. Harbor, from VMware, is an image registry offering access control, distributed replication, and strong security scanning and auditing.
8.2 Installing Harbor
Harbor is installed natively here.

- Download the Harbor installer: https://github.com/goharbor/harbor/releases/download/v2.3.4/harbor-offline-installer-v2.3.4.tgz
- Copy it to the Linux host and unpack it:

```shell
tar -zxvf harbor-offline-installer-v2.3.4.tgz -C /usr/local/
```

- Modify the Harbor configuration file:
  - First make a copy of the harbor.yml template:

```shell
cp harbor.yml.tmpl harbor.yml
```

  - Edit the harbor.yml configuration file.
- Start Harbor:

```shell
./install.sh
```

- Log in to Harbor and check the home page.
8.3 Using Harbor
As an image registry, Harbor's main interactions are pushing images to it and pulling specified images from it.
Before transferring images, Harbor's permission management can be used: set the project to private and assign different roles to different users, which makes image management easier.
8.3.1 Adding users and creating a project

- Create a user.
- Create a project (set it to private).
- Add the user to the project.
- Switch to the test user.
8.3.2 Publishing an image to Harbor

- Rename the image.
  Naming requirement: harbor-address/project-name/image-name:tag
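As a concrete sketch of that naming rule (the host, project, image name, and tag below are example values), composing the full image reference from its parts looks like:

```shell
# Compose the Harbor image reference: address/project/image:tag
harbor_url=192.168.11.11:80
project=repository
name=demo
tag=v1.0.0
imageName=$harbor_url/$project/$name:$tag
echo $imageName   # 192.168.11.11:80/repository/demo:v1.0.0
```

This is the name `docker tag` applies before `docker push`.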
- Modify daemon.json to trust the registry, then restart Docker.
- Log in to the registry:

```shell
docker login -u <username> -p <password> <harbor-address>
```

- Push the image to Harbor.
8.3.3 Pulling images from Harbor
This works the same as the traditional way, except /etc/docker/daemon.json must be configured first:

```json
{
  "registry-mirrors": ["https://pee6w651.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.11.11:80"]
}
```

Then pull the image.
8.3.4 Letting the Jenkins container use the host's Docker
Building images and publishing them to Harbor both require the docker command. Rather than installing Docker inside the Jenkins container, the recommended approach is to reuse the host's Docker directly.
To let the Jenkins container use the host's Docker:

- Set permissions on the host's docker.sock:

```shell
sudo chown root:root /var/run/docker.sock
sudo chmod o+rw /var/run/docker.sock
```

- Add the data volumes:

```yaml
version: "3.1"
services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - ./data/:/var/jenkins_home/
      - /usr/bin/docker:/usr/bin/docker
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/docker/daemon.json:/etc/docker/daemon.json
```

8.3.5 Adding the build step
Build the custom image.
8.3.6 Writing the deployment script
Deployment relies on the Publish Over SSH plugin to make the target server execute commands. To run the pull-image and start commands in one go, a script file is recommended.
Add the script file to the target server, then have Publish Over SSH make the target server execute it.

- Write the script file and add it to the target server:

```shell
harbor_url=$1
harbor_project_name=$2
project_name=$3
tag=$4
port=$5

imageName=$harbor_url/$harbor_project_name/$project_name:$tag

# Stop and remove any running container for this project
containerId=`docker ps -a | grep ${project_name} | awk '{print $1}'`
if [ "$containerId" != "" ] ; then
    docker stop $containerId
    docker rm $containerId
    echo "Delete Container Success"
fi

# Remove any old local image for this project
imageId=`docker images | grep ${project_name} | awk '{print $3}'`
if [ "$imageId" != "" ] ; then
    docker rmi -f $imageId
    echo "Delete Image Success"
fi

docker login -u DevOps -p P@ssw0rd $harbor_url
docker pull $imageName
docker run -d -p $port:$port --name $project_name $imageName
echo "Start Container Success"
echo $project_name
```
Make it executable:

```shell
chmod a+x deploy.sh
```
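When the plugin invokes the script, the call might look like this (the argument values are examples matching the parameter order defined at the top of deploy.sh):

```shell
# harbor-address  project  image-name  tag  port
/usr/bin/deploy.sh 192.168.11.11:80 repository demo v1.0.0 8080
```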
8.3.7 Configuring the post-build action
Execute the script file.
9. Jenkins pipelines
9.1 Introduction to Jenkins pipeline tasks
With the freestyle projects used so far, every step has to be configured in a different way; the overall flow is invisible during the build, the time spent in each stage cannot be determined, and problems are hard to pinpoint.
Jenkins Pipeline makes the whole release flow of a project visible, with explicit stages, so problems can be located quickly. The project's entire lifecycle can be managed in a single Jenkinsfile, and the Jenkinsfile can be kept and maintained inside the project itself.
So a Pipeline is easier to operate than a freestyle or other project type.
9.2 Jenkins pipeline tasks
9.2.1 Creating a Jenkins pipeline task

- Create the task (Pipeline type).
- Generate a Groovy script (the Hello World sample).
- Run a build and inspect the stage view.
9.2.2 Groovy scripts

- Basic Groovy pipeline syntax:

```groovy
// All script commands live inside pipeline {}
pipeline {
    // Which node the task runs on (Jenkins supports distributed builds)
    agent any
    // Global environment: name = value pairs
    environment {
        host = '192.168.11.11'
    }
    // The collection of all stages
    stages {
        // A single stage
        stage('任务1') {
            // The concrete steps of this stage
            steps {
                echo 'do something'
            }
        }
        // A single stage
        stage('任务2') {
            steps {
                echo 'do something'
            }
        }
        // ...
    }
}
```

- A test example:

```groovy
pipeline {
    agent any
    // The collection of all stages
    stages {
        stage('拉取Git代码') {
            steps { echo '拉取Git代码' }
        }
        stage('检测代码质量') {
            steps { echo '检测代码质量' }
        }
        stage('构建代码') {
            steps { echo '构建代码' }
        }
        stage('制作自定义镜像并发布Harbor') {
            steps { echo '制作自定义镜像并发布Harbor' }
        }
        stage('基于Harbor部署工程') {
            steps { echo '基于Harbor部署工程' }
        }
    }
}
```

- Configure the Groovy script and check the result.

Ps: for specific steps, Jenkins provides ample hints and can generate the commands automatically.
---|
9.2.3 The Jenkinsfile approach
With the Jenkinsfile approach, the script is written into a Jenkinsfile inside the project; each build automatically pulls the project and uses its Jenkinsfile to build it.

- Configure the pipeline.
- Prepare the Jenkinsfile.
- Test the effect.
9.3 Implementing the pipeline task
9.3.1 Parameterized build
Add a parameterized build so that different project versions can be selected (a Git parameter build).
9.3.2 Pulling the Git code
Generate the checkout script with the pipeline syntax generator,
then change */master to the tag parameter ${tag}:
pipeline {
agent any
stages {
stage('拉取Git代码') {
steps {
checkout([$class: 'GitSCM', branches: [[name: '${tag}']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/test.git']]])
}
}
}
}
9.3.3 Building the code
Run the Maven build command from a shell step:
pipeline {
agent any
stages {
stage('拉取Git代码') {
steps {
checkout([$class: 'GitSCM', branches: [[name: '${tag}']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/test.git']]])
}
}
stage('构建代码') {
steps {
sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
}
}
    }
}
9.3.4 Code quality analysis
Run the sonar-scanner command from a shell step:
pipeline {
agent any
stages {
stage('拉取Git代码') {
steps {
checkout([$class: 'GitSCM', branches: [[name: '${tag}']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/test.git']]])
}
}
stage('构建代码') {
steps {
sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
}
}
stage('检测代码质量') {
steps {
sh '/var/jenkins_home/sonar-scanner/bin/sonar-scanner -Dsonar.sources=./ -Dsonar.projectname=${JOB_NAME} -Dsonar.projectKey=${JOB_NAME} -Dsonar.java.binaries=target/ -Dsonar.login=31388be45653876c1f51ec02f0d478e2d9d0e1fa'
}
}
}
}
9.3.5 Building the custom image and publishing it

- Generate the custom-image script:

```groovy
pipeline {
    agent any
    environment {
        harborHost = '192.168.11.11:80'
        harborRepo = 'repository'
        harborUser = 'DevOps'
        harborPasswd = 'P@ssw0rd'
    }
    // The collection of all stages
    stages {
        stage('拉取Git代码') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${tag}']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/test.git']]])
            }
        }
        stage('构建代码') {
            steps {
                sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
            }
        }
        stage('检测代码质量') {
            steps {
                sh '/var/jenkins_home/sonar-scanner/bin/sonar-scanner -Dsonar.sources=./ -Dsonar.projectname=${JOB_NAME} -Dsonar.projectKey=${JOB_NAME} -Dsonar.java.binaries=target/ -Dsonar.login=31388be45653876c1f51ec02f0d478e2d9d0e1fa'
            }
        }
        stage('制作自定义镜像并发布Harbor') {
            steps {
                sh '''cp ./target/*.jar ./docker/
cd ./docker
docker build -t ${JOB_NAME}:${tag} ./'''
                sh '''docker login -u ${harborUser} -p ${harborPasswd} ${harborHost}
docker tag ${JOB_NAME}:${tag} ${harborHost}/${harborRepo}/${JOB_NAME}:${tag}
docker push ${harborHost}/${harborRepo}/${JOB_NAME}:${tag}'''
            }
        }
    }
}
```
- Generate the Publish Over SSH script:

```groovy
pipeline {
    agent any
    environment {
        harborHost = '192.168.11.11:80'
        harborRepo = 'repository'
        harborUser = 'DevOps'
        harborPasswd = 'P@ssw0rd'
    }
    // The collection of all stages
    stages {
        stage('拉取Git代码') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${tag}']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/test.git']]])
            }
        }
        stage('构建代码') {
            steps {
                sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
            }
        }
        stage('检测代码质量') {
            steps {
                sh '/var/jenkins_home/sonar-scanner/bin/sonar-scanner -Dsonar.sources=./ -Dsonar.projectname=${JOB_NAME} -Dsonar.projectKey=${JOB_NAME} -Dsonar.java.binaries=target/ -Dsonar.login=7d66af4b39cfe4f52ac0a915d4c9d5c513207098'
            }
        }
        stage('制作自定义镜像并发布Harbor') {
            steps {
                sh '''cp ./target/*.jar ./docker/
cd ./docker
docker build -t ${JOB_NAME}:${tag} ./'''
                sh '''docker login -u ${harborUser} -p ${harborPasswd} ${harborHost}
docker tag ${JOB_NAME}:${tag} ${harborHost}/${harborRepo}/${JOB_NAME}:${tag}
docker push ${harborHost}/${harborRepo}/${JOB_NAME}:${tag}'''
            }
        }
        stage('目标服务器拉取镜像并运行') {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: 'testEnvironment', transfers: [sshTransfer(cleanRemote: false, excludes: '', execCommand: "/usr/bin/deploy.sh $harborHost $harborRepo $JOB_NAME $tag $port ", execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: '', remoteDirectorySDF: false, removePrefix: '', sourceFiles: '')], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: false)])
            }
        }
    }
}
```
9.4 Integrating DingTalk with the pipeline
After a deployment succeeds, a DingTalk robot can promptly notify the group of the final result.

- Install the plugin.
- Create a group in DingTalk and add a robot, finally obtaining the Webhook URL:
https://oapi.dingtalk.com/robot/send?access_token=kej4ehkj34gjhg34jh5bh5jb34hj53b4
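As a sketch of what the robot receives, a notification can also be sent manually by POSTing JSON to that Webhook (the token is the example from the text; the message content is an assumed illustration):

```shell
# Send a plain-text message to the DingTalk robot webhook
curl -s -H 'Content-Type: application/json' \
  -d '{"msgtype": "text", "text": {"content": "Build finished: demo v1.0.0"}}' \
  'https://oapi.dingtalk.com/robot/send?access_token=kej4ehkj34gjhg34jh5bh5jb34hj53b4'
```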
- Add the DingTalk notification in the system configuration.
- Append the pipeline configuration to the task:

```groovy
pipeline {
    agent any
    environment {
        sonarLogin = '2bab7bf7d5af25e2c2ca2f178af2c3c55c64d5d8'
        harborUser = 'admin'
        harborPassword = 'Harbor12345'
        harborHost = '192.168.11.12:8888'
        harborRepo = 'repository'
    }
    stages {
        stage('拉取Git代码') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '$tag']], extensions: [], userRemoteConfigs: [[url: 'http://49.233.115.171:8929/root/lsx.git']]])
            }
        }
        stage('Maven构建代码') {
            steps {
                sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
            }
        }
        stage('SonarQube检测代码') {
            steps {
                sh '/var/jenkins_home/sonar-scanner/bin/sonar-scanner -Dsonar.sources=./ -Dsonar.projectname=${JOB_NAME} -Dsonar.projectKey=${JOB_NAME} -Dsonar.java.binaries=target/ -Dsonar.login=${sonarLogin}'
            }
        }
        stage('制作自定义镜像') {
            steps {
                sh '''cd docker
mv ../target/*.jar ./
docker build -t ${JOB_NAME}:$tag .
'''
            }
        }
        stage('推送自定义镜像') {
            steps {
                sh '''docker login -u ${harborUser} -p ${harborPassword} ${harborHost}
docker tag ${JOB_NAME}:$tag ${harborHost}/${harborRepo}/${JOB_NAME}:$tag
docker push ${harborHost}/${harborRepo}/${JOB_NAME}:$tag'''
            }
        }
        stage('通知目标服务器') {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: 'centos-docker', transfers: [sshTransfer(cleanRemote: false, excludes: '', execCommand: "/usr/bin/deploy.sh $harborHost $harborRepo $JOB_NAME $tag $port", execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: '', remoteDirectorySDF: false, removePrefix: '', sourceFiles: '')], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: false)])
            }
        }
    }
    post {
        success {
            dingtalk (
                robot: 'Jenkins-DingDing',
                type: 'MARKDOWN',
                title: "success: ${JOB_NAME}",
                text: ["- 成功构建:${JOB_NAME}项目!\n- 版本:${tag}\n- 持续时间:${currentBuild.durationString}\n- 任务:#${JOB_NAME}"]
            )
        }
        failure {
            dingtalk (
                robot: 'Jenkins-DingDing',
                type: 'MARKDOWN',
                title: "fail: ${JOB_NAME}",
                text: ["- 失败构建:${JOB_NAME}项目!\n- 版本:${tag}\n- 持续时间:${currentBuild.durationString}\n- 任务:#${JOB_NAME}"]
            )
        }
    }
}
```
- Check the effect: the DingTalk notification arrives in the group.
10. Kubernetes orchestration
10.1 Introduction to Kubernetes
Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating, and maintenance.
A core feature of Kubernetes is autonomously managing containers so that they run according to the user's desired state: an administrator can load a microservice and let the scheduler find a suitable place for it. Kubernetes also invests in tooling and usability so users can deploy their own applications conveniently.
Kubernetes mainly helps us with:
- Service discovery and load balancing
  Kubernetes can expose containers via DNS names or their own IP addresses; if traffic into a container is heavy, Kubernetes can load-balance and distribute the network traffic to keep the deployment stable.
- Storage orchestration
  Kubernetes lets you automatically mount a storage system of your choice, such as local storage, similar to Docker data volumes.
- Automated rollouts and rollbacks
  You describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate, automatically creating new containers as needed and removing existing ones to give the new containers their resources.
- Automatic bin packing
  Kubernetes lets you set resources for each container, such as CPU and memory.
- Self-healing
  Kubernetes restarts failed containers, replaces containers, kills containers that do not respond to user-defined health checks, and keeps containers healthy.
- Secret and configuration management
  Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
10.2 Kubernetes architecture
A Kubernetes setup needs at least two nodes: a Master responsible for management, and a worker (Slave) set up on the work servers and responsible for running assigned workloads.
(Figure: Kubernetes architecture)
The basic functions of the components in the figure:
- API Server: the communication core of K8s; handles internal interactions and receives and dispatches commands.
- controller-manager: a core K8s component; mainly does resource scheduling, allocating resources according to cluster state.
- etcd: a key-value database that stores the cluster's state information.
- scheduler: responsible for scheduling workloads onto each worker node.
- cloud-controller-manager: responsible for integrating with other cloud-provider services.
- kubelet: manages the containers inside Pods.
- kube-proxy: handles requests from other workers or clients.
- Pod: can be understood as a running container (the smallest deployable unit).
10.3 Installing Kubernetes
K8s is installed here following the approach provided by https://kuboard.cn/, with a single Master node.
- CentOS 7.8 is required: https://vault.centos.org/7.8.2003/isos/x86_64/CentOS-7-x86_64-Minimal-2003.iso
- At least two servers with 2 cores and 4 GB of RAM
Installation procedure:
Once the servers are ready, begin the installation.
- Reset the hostname; localhost is not allowed:

```shell
# Change the hostname; it must not contain underscores, dots, or uppercase letters, and must not be "master"
hostnamectl set-hostname your-new-host-name
# Check the result
hostnamectl status
# Add a hosts entry for the hostname
echo "127.0.0.1 $(hostname)" >> /etc/hosts
```

- The two servers must be able to reach each other.
- Install the software:

```shell
# Aliyun docker hub mirror
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
curl -sSL https://kuboard.cn/install-script/v1.19.x/install_kubelet.sh | sh -s 1.19.5
```
First initialize the Master node.
About the environment variables used during initialization:
- APISERVER_NAME must not be the master's hostname
- APISERVER_NAME may contain only lowercase letters, digits, and dots; no hyphens
- The POD_SUBNET range must not overlap the network the master/worker nodes sit on. The value is a CIDR; if CIDR is unfamiliar, just run export POD_SUBNET=10.100.0.0/16 unchanged.

- Set the IP, domain name, and subnet, then run the initialization:

```shell
# Run only on the master node
# Replace x.x.x.x with the master node's actual IP (use the internal IP)
# export only affects the current shell session; if you continue in a new shell, re-run these exports
export MASTER_IP=192.168.11.32
# Replace apiserver.demo with the dnsName you want
export APISERVER_NAME=apiserver.demo
# The subnet for Kubernetes pods; it is created by Kubernetes and need not exist on your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
curl -sSL https://kuboard.cn/install-script/v1.19.x/init_master.sh | sh -s 1.19.5
```

- Check the Master's startup status:

```shell
# Run only on the master node
# Wait 3-10 minutes until all pods are Running
watch kubectl get pod -n kube-system -o wide
# Check the master initialization result
kubectl get nodes -o wide
```
Ps: if a node shows NotReady, run the following (a bug in the latest versions; 1.19 usually does not hit it):
docker pull quay.io/coreos/flannel:v0.10.0-amd64
mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Install the network plugin (Calico):
export POD_SUBNET=10.100.0.0/16
kubectl apply -f https://kuboard.cn/install-script/v1.22.x/calico-operator.yaml
wget https://kuboard.cn/install-script/v1.22.x/calico-custom-resources.yaml
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico-custom-resources.yaml
kubectl apply -f calico-custom-resources.yaml
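The sed line above rewrites Calico's default Pod CIDR (192.168.0.0/16) to the POD_SUBNET chosen earlier. A throwaway demonstration of that substitution, using a stand-in file rather than the real calico-custom-resources.yaml:

```shell
# Stand-in file mimicking the cidr line in calico-custom-resources.yaml
cat > /tmp/calico-demo.yaml <<'EOF'
      cidr: 192.168.0.0/16
EOF
export POD_SUBNET=10.100.0.0/16
# Same substitution as in the installation step; '#' is used as the sed delimiter
# so the slashes in the CIDR need no escaping
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" /tmp/calico-demo.yaml
cat /tmp/calico-demo.yaml
```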
Initialize the worker nodes

Get the join command parameters; run on the Master node:
# Run on the master node only
kubeadm token create --print-join-command
Then initialize on the worker nodes:
# Run on the worker nodes only
# Replace x.x.x.x with the master node's internal IP
export MASTER_IP=192.168.11.32
# Replace apiserver.demo with the APISERVER_NAME used when initializing the master node
export APISERVER_NAME=apiserver.demo
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
# Replace with the output of the kubeadm token create command on the master node
kubeadm join apiserver.demo:6443 --token vwfilu.3nhndohc5gn1jv9k --discovery-token-ca-cert-hash sha256:22ff15cabfe87ab48a7db39b3bbf986fee92ec92eb8efc7fe9b0abe2175ff0c2
Check the final state

Run on the master node:
# Run on the master node only
kubectl get nodes -o wide
Ps: If a node shows NotReady, apply the same flannel fix shown earlier (a bug in recent versions; 1.19 is usually unaffected).
The output looks like this:

[root@k8smaster ~]# kubectl get nodes

When all nodes show Ready, the cluster is set up successfully.
Install Kuboard to manage the K8s cluster

Install Kuboard:
kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
# Alternatively, use the command below; the only difference is that it distributes the Kuboard images from Huawei Cloud's registry instead of Docker Hub
# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3-swr.yaml
Watch the startup progress:
watch kubectl get pods -n kuboard
Then open http://your-node-ip-address:30080 in a browser.
Log in with the initial credentials:

- Username: admin
- Password: Kuboard123
10.4 Kubernetes Operations

First, let's look at what is involved when Kubernetes runs our resources.

Resources can be created in two ways:

- with kubectl commands
- with yaml files
10.4.1 Namespace
Namespace: isolates the resources running in Kubernetes (the network, however, remains shared), similar to Docker containers; multiple resources can be placed into one Namespace, and Namespaces can keep resources for different environments apart. By default, Kubernetes provides the default namespace; if no namespace is specified when creating a resource, default is used.
Command approach:

# List all namespaces
kubectl get ns
# Create a namespace
kubectl create ns <namespace-name>
# Delete a namespace, along with every resource inside it
kubectl delete ns <namespace-name>
yaml approach (set the namespace when creating resources):

apiVersion: v1
kind: Namespace
metadata:
  name: test
10.4.2 Pod
Pod: a group of containers run by Kubernetes. The Pod is the smallest unit in Kubernetes, but in Docker terms a Pod may run several Docker containers.
Command approach:

# List all running pods
kubectl get pods -A
# List pods in a given namespace (default: default)
kubectl get pod [-n <namespace>]
# Create a pod
kubectl run <pod-name> --image=<image-name>
# Show detailed information about a pod
kubectl describe pod <pod-name>
# Delete a pod (default namespace: default)
kubectl delete pod <pod-name> [-n <namespace>]
# Tail a pod's logs
kubectl logs -f <pod-name>
# Open a shell inside the pod's container
kubectl exec -it <pod-name> -- bash
# Show the IP Kubernetes assigned to the pod; the pod can be reached directly via this IP and the container port
kubectl get pod -o wide
yaml approach (recommended):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: <pod-name>
  name: <pod-name>
  namespace: <namespace>
spec:
  containers:
  - image: <image-name>
    name: <container-name>

# Start the pod:  kubectl apply -f <file>.yaml
# Delete the pod: kubectl delete -f <file>.yaml
Running multiple containers in one Pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: <pod-name>
  name: <pod-name>
  namespace: <namespace>
spec:
  containers:
  - image: <image-name>
    name: <container-name>
  - image: <image-name>
    name: <container-name>
  # ...
After starting it, the Pod is visible in Kuboard.
10.4.3 Deployment
A Deployment can manage and orchestrate Pods during deployment.
Command approach:

# Start containers via a Deployment
kubectl create deployment <deployment-name> --image=<image-name>
# Pods started by a Deployment are recreated automatically when deleted, giving failover behavior
# To remove them, delete the Deployment itself
# List current Deployments
kubectl get deployment
# Delete a Deployment
kubectl delete deployment <deployment-name>
# Start containers via a Deployment with a given number of replicas
kubectl create deployment <deployment-name> --image=<image-name> --replicas <count>
yaml approach:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Apply the yaml with kubectl as usual.
Scaling

# Scale with the scale command
kubectl scale deploy/<deployment-name> --replicas <count>
# Or edit the yaml in place
kubectl edit deploy <deployment-name>
The same change can also be made from Kuboard's graphical UI.
Canary-style (gray) release

When a new version is deployed, the Deployment only takes an old Pod offline after a new Pod has started successfully:

kubectl set image deployment/<deployment-name> <container-name>=<image>:<version>
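The rollout behavior described above (start a new Pod before taking an old one offline) can be tuned explicitly in the Deployment spec. A minimal illustrative fragment; the field values are examples, not taken from the original text:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired count during the update
      maxUnavailable: 0  # never take an old Pod offline before its replacement is Ready
```

With maxUnavailable: 0, capacity never drops below the desired replica count during an update, at the cost of briefly running one extra Pod.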
10.4.4 Service
A Service exposes a group of Pods behind one endpoint, so clients can reach the Pods through the Service, with load balancing across them.

ClusterIP:

A ClusterIP Service is for access between Pods inside the cluster.
Command approach:

# Create a Service that maps a port to the container port of every pod in a Deployment
kubectl expose deployment <deployment-name> --port=<service-port> --target-port=<container-port>
Then run kubectl get service to see the IP assigned to the Service; the Service is reachable at that IP. From inside a container in the cluster, the Service (which kubectl expose names after the Deployment) can also be reached via the DNS name <deployment-name>.<namespace>.svc.
NodePort

A ClusterIP Service is only reachable from inside the cluster, but a gateway usually needs to be exposed externally, so a NodePort Service is used to expose the Pods outside the cluster.
Command approach:

# Create a NodePort Service that maps a port to the container port of every pod in a Deployment
kubectl expose deployment <deployment-name> --port=<service-port> --target-port=<container-port> --type=NodePort
Check the Service with kubectl get service. A Service can also be defined in a yaml file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 8888
    protocol: TCP
    targetPort: 80
Applying this file with kubectl apply also creates the Service.

End-to-end test: deploy with a Deployment and expose it through a Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-deployment
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-service
  name: nginx-service
spec:
  selector:
    app: nginx-deployment
  ports:
  - port: 8888
    protocol: TCP
    targetPort: 80
  type: NodePort
The exposed NodePort information can now be seen with kubectl get service.
10.4.5 Ingress
Kubernetes recommends an Ingress as the single entry point in front of all Services. This avoids having to track many IPs or domain names across services; after all, IPs can change, and with many services, keeping track of domains is inconvenient.

Under the hood, an Ingress controller is essentially an Nginx; it can be installed with one click in Kuboard.
Since the default replica count is 1 and the whole cluster has only 2 nodes, a single running replica shown in Kuboard means the installation succeeded.
An Ingress can route incoming requests to different Services. The yaml approach is recommended:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: ingress
  rules:
  - host: nginx.mashibing.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8888
A problem at startup: the Ingress controller installed from Kuboard ships with an admission (validating) webhook configuration, which must be deleted before the Ingress can be applied. Find the validating webhook for this ingress and delete it:
# List the validating webhook configurations
kubectl get -A ValidatingWebhookConfiguration
# Delete the one for the ingress
kubectl delete ValidatingWebhookConfiguration ingress-nginx-admission-my-ingress-controller
Configure the local hosts file to point the Ingress host name (nginx.mashibing.com) at a node IP. After that, the Nginx exposed through the Service can be reached via the Ingress.
10.5 Integrating Jenkins with Kubernetes

10.5.1 Prepare the deployment yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: pipeline
  labels:
    app: pipeline
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pipeline
  template:
    metadata:
      labels:
        app: pipeline
    spec:
      containers:
      - name: pipeline
        image: 192.168.11.102:80/repo/pipeline:v4.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  labels:
    app: pipeline
  name: pipeline
spec:
  selector:
    app: pipeline
  ports:
  - port: 8081
    targetPort: 8080
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: pipeline
spec:
  ingressClassName: ingress
  rules:
  - host: mashibing.pipeline.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pipeline
            port:
              number: 8081
10.5.2 Harbor private registry configuration

When starting the pipeline service from the Kubernetes yml file, Kubernetes cannot pull the image. The Harbor registry information must be configured on the Linux hosts where Kubernetes runs, so that Kubernetes can pull images from Harbor.
- Configure the Harbor registry address on both the Master and the Workers.
- In Kuboard, create the credential (Secret) for the private registry, then test authentication with the command Kuboard provides.
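In Kuboard this is done through the UI; under the hood it corresponds to a docker-registry Secret referenced from the Pod template. A minimal sketch, assuming a secret named harbor-secret (the name is illustrative, not from the original setup):

```yaml
# Create the secret first (illustrative name harbor-secret):
#   kubectl create secret docker-registry harbor-secret -n test \
#     --docker-server=192.168.11.102:80 \
#     --docker-username=admin --docker-password=Harbor12345
# Then reference it from the Deployment's Pod template:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: harbor-secret   # assumed secret name
      containers:
      - name: pipeline
        image: 192.168.11.102:80/repo/pipeline:v4.0.0
```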
10.5.3 Test the result

Run kubectl to start the services from the yml file; then, using the information reported after deployment and the Ingress settings, access the application directly.
10.5.4 Jenkins remote invocation
- Add pipeline.yml to the GitLab repository.
- Configure a target server in Jenkins so that the yml file can be transferred to the K8s master.
- Modify the Jenkinsfile to update the pipeline script, and test the result.
- Set up passwordless login from Jenkins to k8s-master: copy the Jenkins public key into ~/.ssh/authorized_keys on k8s-master so that the remote connection requires no password.
- Add the remote kubectl commands to the Jenkinsfile.
- Run the pipeline and check the result: whenever the yml file changes, k8s reloads the resources.
10.6 GitLab-based WebHooks
The goal here is to automate CI: after a developer pushes code to the Git repository, Jenkins automatically builds the project from the latest commit, then packages and deploys it. This differs from the CD flow above: CD deploys from a chosen version, while here every push integrates the latest commit into the trunk for testing.
10.6.1 WebHooks Notifications
Enable the build trigger in Jenkins.

Set up the GitLab webhook.

Disable GitLab authentication for Jenkins.

Test the webhook from GitLab again.
10.6.2 Modify configuration
Modify the Jenkinsfile to run continuous integration from the latest commit, and remove all previous references to ${tag}:
// All script commands live inside the pipeline block
pipeline{
    // Which agent/node the job runs on
    agent any
    // Global variables for use below
    environment {
        harborUser = 'admin'
        harborPasswd = 'Harbor12345'
        harborAddress = '192.168.11.102:80'
        harborRepo = 'repo'
    }
    stages {
        stage('Check out code from the Git repository') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], extensions: [], userRemoteConfigs: [[url: 'http://192.168.11.101:8929/root/mytest.git']]])
            }
        }
        stage('Build the project with Maven') {
            steps {
                sh '/var/jenkins_home/maven/bin/mvn clean package -DskipTests'
            }
        }
        stage('Code quality check with SonarQube') {
            steps {
                sh '/var/jenkins_home/sonar-scanner/bin/sonar-scanner -Dsonar.source=./ -Dsonar.projectname=${JOB_NAME} -Dsonar.projectKey=${JOB_NAME} -Dsonar.java.binaries=./target/ -Dsonar.login=40306ae8ea69a4792df2ceb4d9d25fe8a6ab1701'
            }
        }
        stage('Build a custom image with Docker') {
            steps {
                sh '''mv ./target/*.jar ./docker/
docker build -t ${JOB_NAME}:latest ./docker/'''
            }
        }
        stage('Push the custom image to Harbor') {
            steps {
                sh '''docker login -u ${harborUser} -p ${harborPasswd} ${harborAddress}
docker tag ${JOB_NAME}:latest ${harborAddress}/${harborRepo}/${JOB_NAME}:latest
docker push ${harborAddress}/${harborRepo}/${JOB_NAME}:latest '''
            }
        }
        stage('Transfer the yml file to k8s-master') {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: 'k8s', transfers: [sshTransfer(cleanRemote: false, excludes: '', execCommand: '', execTimeout: 120000, flatten: false, makeEmptyDirs: false, noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: '', remoteDirectorySDF: false, removePrefix: '', sourceFiles: 'pipeline.yml')], usePromotionTimestamp: false, useWorkspaceInPromotion: false, verbose: false)])
            }
        }
        stage('Run kubectl on k8s-master remotely') {
            steps {
                sh '''ssh [email protected] kubectl apply -f /usr/local/k8s/pipeline.yml
ssh [email protected] kubectl rollout restart deployment pipeline -n test'''
            }
        }
    }
    post {
        success {
            dingtalk(
                robot: 'Jenkins-DingDing',
                type: 'MARKDOWN',
                title: "success: ${JOB_NAME}",
                text: ["- Build succeeded: ${JOB_NAME}! \n- Version: latest \n- Duration: ${currentBuild.durationString}" ]
            )
        }
        failure {
            dingtalk(
                robot: 'Jenkins-DingDing',
                type: 'MARKDOWN',
                title: "failure: ${JOB_NAME}",
                text: ["- Build failed: ${JOB_NAME}! \n- Version: latest \n- Duration: ${currentBuild.durationString}" ]
            )
        }
    }
}
Modify pipeline.yml to change the image tag:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: pipeline
  labels:
    app: pipeline
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pipeline
  template:
    metadata:
      labels:
        app: pipeline
    spec:
      containers:
      - name: pipeline
        image: 192.168.11.102:80/repo/pipeline:latest # changed: use the latest tag
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
# ... rest of the file unchanged ...
10.6.3 Rolling update
Because pipeline.yml no longer changes between builds, kubectl apply alone does not trigger a reload, so the containers in the Pods would keep running the old image. kubectl rollout restart is used to force a rolling restart, which re-pulls the :latest image since imagePullPolicy is Always; progress can be watched with kubectl rollout status.