Using Jenkins with Gogs and SonarQube for project code testing, deployment, and rollback, with keepalived + haproxy scheduling requests to backend Tomcat servers

0 Environment Description

Primary Tomcat: 192.168.0.112
Backup Tomcat: 192.168.0.183

haproxy+keepalived-1:192.168.0.156
haproxy+keepalived-2:192.168.0.157

git: not yet deployed
sonar-scanner: not yet deployed

Software:
jdk-8u144-linux-x64.tar.gz
apache-tomcat-8.5.43.tar.gz
haproxy-1.5.18-8.el7.x86_64.rpm
keepalived-1.3.5-8.el7_6.5.x86_64.rpm

Part 1: Configure the Java environment on both Tomcat backend servers

1. Prepare jdk8 archive

[root@bogon src]# pwd
/usr/local/src
[root@bogon src]# ls
jdk-8u144-linux-x64.tar.gz

2. Extract the JDK archive to the target directory

[root@bogon src]# tar -zxv -f jdk-8u144-linux-x64.tar.gz -C /usr/local/
[root@bogon src]# cd /usr/local/
[root@bogon local]# ls
bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  sbin  share  src

3. Configure the Java environment variables and apply them

[root@bogon local]# cd /etc/profile.d/
[root@bogon profile.d]# vim java.sh
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/:$JRE_HOME/lib
export TOMCAT_HOME=/usr/local/apache-tomcat-8.5.43
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$TOMCAT_HOME/bin
[root@bogon profile.d]# source java.sh 

4. Test java environment

[root@bogon profile.d]# echo ${JAVA_HOME}
/usr/local/jdk1.8.0_144
[root@bogon profile.d]# echo ${CLASSPATH}
/usr/local/jdk1.8.0_144/lib/:/usr/local/jdk1.8.0_144/jre/lib
[root@bogon profile.d]# echo ${PATH}
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/jdk1.8.0_144/bin:/usr/local/jdk1.8.0_144/jre/bin:/usr/local/apache-tomcat-8.5.43/bin
[root@bogon profile.d]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Part 2: Install and configure the Tomcat service on each server

1. Prepare tomcat binary archive

[root@bogon src]# pwd
/usr/local/src
[root@bogon src]# ls
apache-tomcat-8.5.43.tar.gz  jdk-8u144-linux-x64.tar.gz

2. Extract the Tomcat archive to the target directory

[root@bogon src]# tar -zxv -f apache-tomcat-8.5.43.tar.gz -C /usr/local/
[root@bogon src]# cd /usr/local/
[root@bogon local]# ls
apache-tomcat-8.5.43  bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  sbin  share  src

3. Start tomcat service

[root@bogon apache-tomcat-8.5.43]# /usr/local/apache-tomcat-8.5.43/bin/startup.sh 
Using CATALINA_BASE:   /usr/local/apache-tomcat-8.5.43
Using CATALINA_HOME:   /usr/local/apache-tomcat-8.5.43
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-8.5.43/temp
Using JRE_HOME:        /usr/local/jdk1.8.0_144/jre
Using CLASSPATH:       /usr/local/apache-tomcat-8.5.43/bin/bootstrap.jar:/usr/local/apache-tomcat-8.5.43/bin/tomcat-juli.jar
Tomcat started.

4. Check the listening ports

[root@bogon apache-tomcat-8.5.43]# ss -tlnp
State       Recv-Q Send-Q                   Local Address:Port                                  Peer Address:Port              
LISTEN      0      128                                  *:22                                               *:*                   users:(("sshd",pid=965,fd=3))
LISTEN      0      100                          127.0.0.1:25                                               *:*                   users:(("master",pid=1048,fd=13))
LISTEN      0      1                     ::ffff:127.0.0.1:8005                                            :::*                   users:(("java",pid=1349,fd=70))
LISTEN      0      100                                 :::8009                                            :::*                   users:(("java",pid=1349,fd=55))
LISTEN      0      100                                 :::8080                                            :::*                   users:(("java",pid=1349,fd=50))
LISTEN      0      128                                 :::22                                              :::*                   users:(("sshd",pid=965,fd=4))
LISTEN      0      100                                ::1:25                                              :::*                   users:(("master",pid=1048,fd=14))

5. Browser access test for the primary Tomcat service

Enter http://192.168.0.112:8080/ in a browser.
To make the two nodes easy to tell apart, add the following line to /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp, just above the closing </body> tag:

<h2>主</h2>

6. Browser access test for the backup Tomcat service

Enter http://192.168.0.183:8080/ in a browser.
To tell this node apart, add the following line to /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp, just above the closing </body> tag:

<h2>备</h2>
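Editing index.jsp by hand on every node is error-prone; the marker line can also be injected with sed. A minimal sketch, assuming the file contains a single closing </body> tag (demonstrated on a throwaway file; on the real nodes point FILE at /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp, and use 主 on the primary node, 备 on the backup node):

```shell
# Insert a marker line just above the closing </body> tag.
FILE=$(mktemp)
printf '<html><body>\n<h1>demo page</h1>\n</body></html>\n' > "$FILE"

# GNU sed accepts \n in the replacement to emit a newline.
sed -i 's|</body>|<h2>主</h2>\n</body>|' "$FILE"

cat "$FILE"
rm -f "$FILE"
```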

Part 3: Configure keepalived + haproxy on both scheduler nodes for high availability

1. Install the keepalived high-availability service (on both nodes)

yum -y install keepalived

2. Edit the keepalived configuration file (/etc/keepalived/keepalived.conf)

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id haproxy # the router_id must be unique per host; set it to e.g. "haproxy-1" on the backup node
   vrrp_skip_check_adv_addr
   # vrrp_strict # leave vrrp_strict disabled, otherwise only multicast works and unicast mode is unsupported
   vrrp_iptables # do not auto-add firewall rules, which could make this host unreachable
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER  # primary node; set this to "BACKUP" on the backup node
    interface ens33 # NIC the instance binds to
    virtual_router_id 51 # VRRP instance id; master and backup of the same instance must share this value, and it must not collide with another instance on the same segment
    priority 100 # priority; the backup node's priority must be lower than 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        #192.168.200.16
        #192.168.200.17
        #192.168.200.18
        192.168.0.220 dev ens33 label ens33:0 # bind the VIP to the local ens33 NIC as ens33:0; the backup node needs the same line
    }
    unicast_src_ip 192.168.0.156 # unicast source address: this host's own IP; on the backup node use 192.168.0.157
    unicast_peer {
        192.168.0.157 # unicast peer address: the other node's IP; on the backup node use 192.168.0.156
    }
}
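The instance above only fails over when keepalived itself stops; it does not notice a dead haproxy process. keepalived can track haproxy with a vrrp_script so the VIP moves when the proxy goes down. A hedged sketch (the script command, interval, and weight are illustrative, not from the original setup):

```
# Illustrative addition to /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"   # exit code 0 while haproxy is running
    interval 2                             # check every 2 seconds
    weight -30                             # drop priority by 30 while the check fails
}
```

With this defined, add `track_script { chk_haproxy }` inside vrrp_instance VI_1 on both nodes; when haproxy dies on the master, its effective priority falls below the backup's and the VIP fails over.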

3. Start the keepalived service on each node

# Primary keepalived:
[root@bogon keepalived]# systemctl start keepalived.service

[root@bogon keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-08-13 15:51:22 CST; 6min ago
  Process: 1452 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1453 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1453 /usr/sbin/keepalived -D
           ├─1454 /usr/sbin/keepalived -D
           └─1455 /usr/sbin/keepalived -D

Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Adding sorry server [192.168.200.200]:1358 to VS [10.10.10.2]:1358
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Removing alive servers from the pool for VS [10.10.10.2]:1358
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Error reading data from remote SMTP server [192.168.200.1]:25.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Timeout connecting server [192.168.201.100]:443.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Check on service [192.168.201.100]:443 failed after 3 retry.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Removing service [192.168.201.100]:443 from VS [192.168.200.100]:443
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Lost quorum 1-0=1 > 0 for VS [192.168.200.100]:443
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Error reading data from remote SMTP server [192.168.200.1]:25.

[root@bogon keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:fb:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.156/24 brd 192.168.0.255 scope global dynamic ens33
       valid_lft 5299sec preferred_lft 5299sec
    inet 192.168.0.220/0 scope global ens33:0 # the bound virtual VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feae:fb8c/64 scope link 
       valid_lft forever preferred_lft forever
# Backup keepalived:
[root@bogon keepalived]# systemctl start keepalived.service

[root@bogon keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-08-13 16:14:20 CST; 8min ago
  Process: 1386 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1387 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1387 /usr/sbin/keepalived -D
           ├─1388 /usr/sbin/keepalived -D
           └─1389 /usr/sbin/keepalived -D

Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Adding sorry server [192.168.200.200]:1358 to VS [10.10.10.2]:1358
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Removing alive servers from the pool for VS [10.10.10.2]:1358
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Error reading data from remote SMTP server [192.168.200.1]:25.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Timeout connecting server [192.168.201.100]:443.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Check on service [192.168.201.100]:443 failed after 3 retry.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Removing service [192.168.201.100]:443 from VS [192.168.200.100]:443
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Lost quorum 1-0=1 > 0 for VS [192.168.200.100]:443
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Error reading data from remote SMTP server [192.168.200.1]:25.

[root@bogon keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c5:6b:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.157/24 brd 192.168.0.255 scope global dynamic ens33
       valid_lft 6058sec preferred_lft 6058sec
    inet 192.168.0.220/0 scope global ens33:0 # the virtual VIP also appears here, which differs from what the docs describe and needs further study (both nodes holding the VIP usually means the VRRP advertisements are not getting through, e.g. a virtual_router_id mismatch or blocked unicast traffic)
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec5:6b34/64 scope link 
       valid_lft forever preferred_lft forever

4. Configure kernel parameters on both scheduler nodes

[root@bogon keepalived]# vim /etc/sysctl.conf 
net.ipv4.ip_nonlocal_bind = 1   # allow binding non-local IPs, so haproxy can bind the VIP even when this node does not hold it
net.ipv4.ip_forward = 1  # enable IP forwarding

[root@bogon keepalived]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

5. Install and configure haproxy on both nodes

[root@bogon ~]# yum -y install haproxy
[root@bogon ~]# cd /etc/haproxy
[root@bogon haproxy]# cp haproxy.cfg haproxy.cfg.bak
[root@bogon haproxy]# vim haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     100000
    user        haproxy
    group       haproxy
    daemon

    # turn on the stats unix socket; level admin is required for the
    # enable/disable server commands issued over this socket later
    stats socket /var/lib/haproxy/stats level admin

    #nbproc 2  # number of worker processes to start
    # cpu-map 1 0  # pin process 1 to CPU core 0
    # cpu-map 2 1  # pin process 2 to CPU core 1



#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
#defaults
#    mode                    http
#    log                     global
#    option                  httplog
#    option                  dontlognull
#    option http-server-close
#    option forwardfor       except 127.0.0.0/8
#    option                  redispatch
#    retries                 3
#    timeout http-request    10s
#    timeout queue           1m
#    timeout connect         10s
#    timeout client          1m
#    timeout server          1m
#    timeout http-keep-alive 10s
#    timeout check           10s
#    maxconn                 100000

defaults     # default settings for the frontend, backend, and listen sections
    option http-keep-alive
    option  forwardfor  # pass the real client IP to the backend (X-Forwarded-For)
    maxconn 100000
    mode http
    timeout connect 300000ms
    timeout client  300000ms
    timeout server  300000ms


#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
#frontend  main *:5000
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
#    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
#backend app
#    balance     roundrobin
#    server  app1 127.0.0.1:5001 check
#    server  app2 127.0.0.1:5002 check
#    server  app3 127.0.0.1:5003 check
#    server  app4 127.0.0.1:5004 check

listen stats   # enable the status page
    mode http   # HTTP protocol
    bind 0.0.0.0:8000   # port the status page binds to
    stats enable   # enable the status page
    log global    # global logging
    stats uri     /haproxy-status   # status page path
    stats auth    admin:123456   # status page username and password

listen  web_port      # the proxied service
    bind 192.168.0.220:80  # bind the virtual VIP and port; requests to the VIP are dispatched to the backends
    mode http    # HTTP protocol
    balance roundrobin  # scheduling algorithm: round robin
    log global   # global logging
    server 192.168.0.112  192.168.0.112:8080  check inter 3000 fall 2 rise 5     # backend server
    server 192.168.0.183  192.168.0.183:8080  check inter 3000 fall 2 rise 5     # backend server

[root@bogon haproxy]# systemctl start haproxy.service
[root@bogon haproxy]# systemctl status haproxy.service
# If the service does not start and /var/log/messages contains an error like:
#   haproxy-systemd-wrapper: [ALERT] 224/170040 (15627) : Starting proxy stats: cannot bind socket
# run the following command and then restart the service:
[root@bogon haproxy]# setsebool -P haproxy_connect_any=1

6. Check the haproxy status page

Browse to http://192.168.0.156:8000/haproxy-status or http://192.168.0.157:8000/haproxy-status.
The account is admin and the password is 123456.

7. Access the scheduler in a browser; requests are successfully dispatched to the backend services

Browse to http://192.168.0.220. The first request happens to land on the backup node (the 备 page added above); because round-robin scheduling is used, a forced refresh then shows the primary (主) page.

Part 4: Create the script Jenkins executes, and use Jenkins parameter options for automated code testing, deployment, and rollback

Note: set up the Jenkins, Git (Gogs/GitLab), SonarQube, and related services in advance, and install the sonar-scanner for Jenkins.

1. Create the working directory for the Jenkins job

mkdir -pv /data/jenkins/worker

2. Script location on the Jenkins server

# pwd
/data/jenkins

3. Edit the script on the Jenkins server

Note: the parameters inside need to be adapted to your environment; services not yet deployed must be deployed first.

# vim project.sh
#!/bin/bash

# Jenkins parameter options
time=`date +%Y-%m-%d_%H-%M-%S`
# e.g. 2019-08-14_00-36-41
method=$1
group=$2
branch=$3

# Backend Tomcat server IP address groups
function ip_value(){
    if [[ "${group}" == "group1" ]];then
        ip_list="192.168.0.112"
        /usr/bin/echo ${ip_list}
    elif [[ "${group}" == "group2" ]];then
        ip_list="192.168.0.183"
        /usr/bin/echo ${ip_list}
    elif [[ "${group}" == "group3" ]];then
        ip_list="192.168.0.112 192.168.0.183"
        /usr/bin/echo ${ip_list}
    fi
}

# First pull the code from Git onto the Jenkins server
function code_deploy(){
    # cd is a shell builtin; calling /usr/bin/cd would run in a subshell
    # and leave the working directory unchanged
    cd /data/jenkins/worker || exit 1
    /usr/bin/rm -rf ./*
    /usr/bin/git clone -b ${branch} [email protected]:3000/sandu/web-page.git
}

# Code test: check code quality with sonar-scanner
function code_test(){
    cd /data/jenkins/worker/web-page || exit 1
    /usr/bin/cat > sonar-project.properties <<eof
sonar.projectKey=one123456
sonar.projectName=code-test
sonar.projectVersion=1.0
sonar.sources=./
sonar.language=python
sonar.sourceEncoding=UTF-8
eof
    /data/scanner/sonar-scanner/bin/sonar-scanner
}

# Package and compress the code
function code_compress(){
    cd /data/jenkins/worker/ || exit 1
    /usr/bin/rm -f web-page/sonar-project.properties
    /usr/bin/tar -czv -f code.tar.gz web-page
}

# Detach the backend servers from the scheduler
function haproxy_down(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        # single quotes keep the piped command intact inside the outer double quotes
        /usr/bin/ssh [email protected] "echo 'disable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
        /usr/bin/ssh [email protected] "echo 'disable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
    done
}

# Take the backend service offline
function backend_stop(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/ssh root@$ip "/usr/local/apache-tomcat-8.5.43/bin/shutdown.sh"
        # Back up the current backend code
        /usr/bin/ssh root@${ip} "tar -zcv -f /usr/local/apache-tomcat-8.5.43/back_code/${time}-backcode.tar.gz /usr/local/apache-tomcat-8.5.43/webapps"
    done
}

# Deploy the code to the backend site
function scp_backend(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/scp /data/jenkins/worker/code.tar.gz root@${ip}:/usr/local/apache-tomcat-8.5.43/web_code/${time}-code.tar.gz
        /usr/bin/ssh root@${ip} "tar -xv -f /usr/local/apache-tomcat-8.5.43/web_code/${time}-code.tar.gz -C /usr/local/apache-tomcat-8.5.43/webapps"
    done
}

# Start the backend service
function backend_start(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/ssh root@$ip "/usr/local/apache-tomcat-8.5.43/bin/startup.sh"
        /usr/bin/sleep 6
    done
}

# Test access to the backend service
function backend_test(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        status_code=`curl -I -s -m 6 -o /dev/null -w %{http_code} http://${ip}:8080`
        if [ ${status_code} -eq 200 ];then
            /usr/bin/echo "Access test passed; backend code deployed successfully"
            # Re-attach every backend that passes the test to the scheduler
            /usr/bin/ssh [email protected] "echo 'enable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
            /usr/bin/ssh [email protected] "echo 'enable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
        else
            /usr/bin/echo "Access test failed; redeploy the code to this backend"
        fi
    done
}

# Roll the code back
function code_rollback(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        # ${time} is this run's timestamp, so restore the newest existing backup;
        # the archive stores paths relative to /, hence -C /
        /usr/bin/ssh root@${ip} 'last_backup=$(ls -t /usr/local/apache-tomcat-8.5.43/back_code/*-backcode.tar.gz | head -1); tar -zxv -f "${last_backup}" -C /'
    done
    /usr/bin/echo "Tomcat code rolled back to the previous version; run the access test next"
}

# Main menu dispatch
main(){
    case $1 in
        "deploy")
            ip_value;
            code_deploy;
            code_test;
            code_compress;
            haproxy_down;
            backend_stop;
            scp_backend;
            backend_start;
            backend_test;
        ;;
        "rollback")
            ip_value;
            haproxy_down;
            backend_stop;
            code_rollback;
            backend_start;
            backend_test;
        ;;
    esac
}
main $1 $2 $3
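The group parameter is the only thing selecting which backends are touched, so the mapping is worth sanity-checking before wiring it into Jenkins. A standalone restatement of the ip_value logic above (same mapping, written as a case statement for brevity):

```shell
#!/bin/bash
# Reproduces the group -> IP list mapping from project.sh for a quick check.
ip_value() {
    case "$1" in
        group1) echo "192.168.0.112" ;;
        group2) echo "192.168.0.183" ;;
        group3) echo "192.168.0.112 192.168.0.183" ;;
        *)      echo "unknown group: $1" >&2; return 1 ;;
    esac
}

ip_value group1   # → 192.168.0.112
ip_value group3   # → 192.168.0.112 192.168.0.183
```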

4. Create the code and backup storage directories on the backend servers

Primary Tomcat: mkdir -p /usr/local/apache-tomcat-8.5.43/{web_code,back_code}
Backup Tomcat: mkdir -p /usr/local/apache-tomcat-8.5.43/{web_code,back_code}
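One subtlety in the backup and rollback steps: GNU tar strips the leading / from member names when archiving absolute paths, so a backup of /usr/local/.../webapps restores relative to whatever -C points at. Extracting with -C / puts the files back in place, while -C .../webapps would nest a whole usr/local tree inside webapps. A quick demonstration on throwaway paths:

```shell
# Show that tar stores absolute paths without the leading slash.
ROOT=$(mktemp -d)                      # stands in for the real server tree
mkdir -p "$ROOT/webapps"
echo "v1" > "$ROOT/webapps/index.html"

ARCHIVE=$(mktemp)
tar -czf "$ARCHIVE" "$ROOT/webapps" 2>/dev/null   # warns about stripping '/'
tar -tzf "$ARCHIVE" | head -1                     # member name has no leading /

RESTORE=$(mktemp -d)                   # stands in for / during the restore
tar -xzf "$ARCHIVE" -C "$RESTORE"
find "$RESTORE" -name index.html       # re-created under the full original path

rm -rf "$ROOT" "$RESTORE" "$ARCHIVE"
```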

5. Set up passwordless SSH logins from the Jenkins server

The Jenkins server needs key-based passwordless logins to the two Tomcat servers and to the two keepalived/haproxy servers:

ssh-copy-id 192.168.0.112
ssh-copy-id 192.168.0.183
ssh-copy-id 192.168.0.156
ssh-copy-id 192.168.0.157
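ssh-copy-id can only copy a key that already exists, so generate one first if the Jenkins server has none. A sketch (the temporary directory is only for demonstration; in practice omit -f so the key lands at the default ~/.ssh/id_rsa):

```shell
# Generate a passphrase-less RSA key pair; ssh-copy-id then pushes the
# public half into each target host's authorized_keys.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q

ls "$KEYDIR"   # the directory now holds id_rsa and id_rsa.pub
rm -rf "$KEYDIR"
```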

Part 5: Clone the code from the Git server, modify it, and push it back

1) Clone the develop branch of the code

root@ubuntu1804:~# git clone -b develop http://192.168.1.30/jie/web-page.git
Cloning into 'web-page'...
Username for 'http://192.168.1.30': jie
Password for 'http://[email protected]': 
remote: Enumerating objects: 39, done.
remote: Counting objects: 100% (39/39), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 39 (delta 4), reused 27 (delta 4)
Unpacking objects: 100% (39/39), done.

2) Check the files contained in the clone

root@ubuntu1804:~# ls web-page/
index.html  Math.php

3) Modify the page file

root@ubuntu1804:~/web-page# cat index.html 
<h1>welcome to tomcat page</h1>
<h3>simple-version v1</h3>

4) Push the v1 version of the code to the Git repository

root@ubuntu1804:~/web-page# git add ./*
root@ubuntu1804:~/web-page# git commit -m 'v1'
[develop d0dd713] v1
 1 file changed, 2 insertions(+), 2 deletions(-)

root@ubuntu1804:~/web-page# git push
Username for 'http://192.168.1.30': jie
Password for 'http://[email protected]': 
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 316 bytes | 316.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: 
remote: To create a merge request for develop, visit:
remote:   http://192.168.1.30/jie/web-page/merge_requests/new?merge_request%5Bsource_branch%5D=develop
remote: 
To http://192.168.1.30/jie/web-page.git
     c10f5bf..d0dd713  develop -> develop

Part 6: Configure the Jenkins project and the parameterized build options

1. Create a project named code-test

2. On the project's Configure page, add the option parameters; each parameter value must match the corresponding string checked in the script

Under General, enable the parameterized build and add choice parameters:

  1. method
  • deploy   # deploy the code
  • rollback # roll the code back
  2. group
  • group1
  • group2
  • group3
  3. branch
  • master   # main branch
  • develop  # development branch
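Jenkins passes the selected choices to the script as plain strings, so a value that matches no branch in the script is silently ignored. A hypothetical pre-flight check (not part of the original project.sh) that could reject bad parameters up front:

```shell
#!/bin/bash
# Hypothetical validation of the three Jenkins parameters; anything
# outside the configured choices is rejected before deployment starts.
validate_params() {
    case "$1" in deploy|rollback)        ;; *) echo "bad method: $1" >&2; return 1 ;; esac
    case "$2" in group1|group2|group3)   ;; *) echo "bad group: $2"  >&2; return 1 ;; esac
    case "$3" in master|develop)         ;; *) echo "bad branch: $3" >&2; return 1 ;; esac
}

validate_params deploy group1 develop && echo "params ok"
validate_params destroy group1 develop || echo "rejected"
```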

3. Configure the Jenkins shell build step to run the script for code testing, deployment, and rollback

Build → Execute shell

cd /data/jenkins
bash project.sh $method $group $branch

4. Save the configuration, then deploy the first backend group (the primary Tomcat)

5. Check the console output to confirm the build

6. Browse directly to the primary Tomcat service to verify the deployment succeeded

7. Deploy the second backend group (the backup Tomcat) and verify from the console output that the deployment succeeded

8. Check the deployed code files on each backend server to confirm the code reached the backends

9. Browse directly to the backup Tomcat service to verify the deployment succeeded

10. Finally, browse via the haproxy scheduler; requests are successfully dispatched to the backend Tomcat services

11. Check the code test results (SonarQube)

12. Rollback test


Origin www.cnblogs.com/sanduzxcvbnm/p/11353704.html