Actual Combat | Pinpoint Full Link Monitoring Construction


Introduction to Pinpoint

Pinpoint is an APM (Application Performance Management) tool for large-scale distributed systems, written in Java. Tools of this kind are also called call-chain or distributed tracing systems. When the front end sends a request to the back end, the back-end service may call multiple services, each of which may call further services, before the results are aggregated and returned to the page. If something goes wrong in one of those links, it is hard for engineers to pinpoint exactly which service call caused the problem. Tools such as Pinpoint trace the complete call chain of every request and collect performance data for each service along the chain, so that engineers can locate problems quickly. Github address: https://github.com/naver/pinpoint

The architecture diagram is as follows (the picture comes from the official website):

Architecture description:

  • Pinpoint-Collector: collects the performance data reported by the agents

  • Pinpoint-Agent: a probe attached to the application you are running

  • Pinpoint-Web: displays the collected data as web pages

  • HBase Storage: the collected data is stored in HBase

Pinpoint Setup

HBase stores its data directly on HDFS here, so the overall deployment plan is as follows:
(Deployment plan: 10.2.42.61 HBase Master / Hadoop NameNode, 10.2.42.62 and 10.2.42.63 HBase RegionServers / DataNodes, 10.2.42.59 Pinpoint-Collector, 10.2.42.60 Pinpoint-Web.)

Software version:
(Software versions: JDK 1.8.0_131, ZooKeeper 3.4.10, Hadoop 2.8.3, HBase 1.2.6, Tomcat 8.0.47, Pinpoint 1.7.1.)

Install JDK

Unzip the JDK to the /opt directory and configure the environment variables

tar xf jdk-8u131-linux-x64.tar.gz -C /opt

vim /etc/profile

export JAVA_HOME=/opt/jdk1.8.0_131

export PATH=$JAVA_HOME/bin:$PATH

Load environment variables

source /etc/profile
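As a quick check that the JDK unpacked above is the one now on the PATH:

java -version

which java    # should point to /opt/jdk1.8.0_131/bin/java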

Configure passwordless SSH

Configure mutual trust between the 10.2.42.61, 10.2.42.62, and 10.2.42.63 nodes; this can be done on all three nodes at the same time.

ssh-keygen

ssh-copy-id 10.2.42.61

ssh-copy-id 10.2.42.62

ssh-copy-id 10.2.42.63

If ssh-copy-id is not available, install it with the following command

yum -y install openssh-clients
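Once the keys are distributed, passwordless login can be verified from any of the three nodes; the following loop should print each hostname without asking for a password:

for h in 10.2.42.61 10.2.42.62 10.2.42.63; do ssh $h hostname; done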

Configure Hosts mapping

All five hosts need to be configured with the following hosts mapping.
vim /etc/hosts

10.2.42.61    DCA-APP-COM-pinpoint-HBaseMaster

10.2.42.62    DCA-APP-COM-pinpoint-HBaseSlave01

10.2.42.63    DCA-APP-COM-pinpoint-HBaseSlave02
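A quick way to confirm that the names resolve on each host (getent is assumed to be available):

getent hosts DCA-APP-COM-pinpoint-HBaseMaster DCA-APP-COM-pinpoint-HBaseSlave01 DCA-APP-COM-pinpoint-HBaseSlave02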

Install zookeeper cluster

Unzip the installation package to the /opt directory; this can be done on all three nodes at the same time.

tar xf zookeeper-3.4.10.tar.gz -C /opt/

cd /opt/zookeeper-3.4.10/conf

cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/data/zookeeper/data

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

server.1=10.2.42.61:12888:13888

server.2=10.2.42.62:12888:13888

server.3=10.2.42.63:12888:13888

Create data directory

mkdir /data/zookeeper/data -p

Set the server ID (myid) on 10.2.42.61

echo 1 > /data/zookeeper/data/myid

Set the server ID (myid) on 10.2.42.62

echo 2 > /data/zookeeper/data/myid

Set the server ID (myid) on 10.2.42.63

echo 3 > /data/zookeeper/data/myid

Start the service on all three nodes

/opt/zookeeper-3.4.10/bin/zkServer.sh start

View cluster status

[root@DCA-APP-COM-pinpoint-HBaseMaster data]# /opt/zookeeper-3.4.10/bin/zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg

Mode: follower
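Besides zkServer.sh status, the ensemble can be checked with the ZooKeeper client or the built-in four-letter commands (a quick sanity check; nc is assumed to be installed):

/opt/zookeeper-3.4.10/bin/zkCli.sh -server 10.2.42.61:2181 ls /

echo stat | nc 10.2.42.61 2181 | grep Mode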

Install Hadoop cluster

Unzip the installation file to the /opt directory. Note: unless stated otherwise, the following operations are performed on all three machines at the same time.

tar xf hadoop-2.8.3.tar.gz -C /opt/

Enter the Hadoop configuration file directory and edit the configuration files

cd /opt/hadoop-2.8.3/etc/hadoop

Configure hadoop-env.sh to specify the Java runtime environment.
vim hadoop-env.sh

#export JAVA_HOME=${JAVA_HOME}     # this is the default, so strictly speaking this step could be skipped

export JAVA_HOME=/opt/jdk1.8.0_131

Configure core-site.xml, which specifies the default HDFS address (fs.defaultFS) and the temporary data directory.
vim core-site.xml

<configuration>

  <property>

    <name>fs.defaultFS</name>

    <value>hdfs://10.2.42.61:9000</value>

  </property>

  <property>

  <name>io.file.buffer.size</name>

  <value>131072</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/data/hadoop/tmp</value>

  </property>

</configuration>

Configure hdfs-site.xml
vim hdfs-site.xml

<configuration>

<property>

  <name>dfs.namenode.secondary.http-address</name>

  <value>10.2.42.61:50090</value>

  </property>

  <property>

    <name>dfs.replication</name>

    <value>2</value>

  </property>

  <!-- Directory where the NameNode stores its data; create it yourself -->

  <property>

    <name>dfs.namenode.name.dir</name>

    <value>file:/data/hadoop/dfs/name</value>

  </property>

  <!-- Directory where the DataNode stores its data; create it yourself -->

  <property>

    <name>dfs.datanode.data.dir</name>

    <value>file:/data/hadoop/dfs/data</value>

  </property>

</configuration>

Configure mapred-site.xml, which sets the MapReduce framework and the JobHistory addresses used to view the status of completed jobs.
vim mapred-site.xml

<configuration>

  <property>

    <name>mapreduce.framework.name</name>

      <value>yarn</value>

  </property>

  <property>

    <name>mapreduce.jobhistory.address</name>

      <value>0.0.0.0:10020</value>

  </property>

  <property>

    <name>mapreduce.jobhistory.webapp.address</name>

      <value>0.0.0.0:19888</value>

  </property>

</configuration>

Configure yarn-site.xml; the DataNode nodes do not need to modify this configuration file.
vim yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

<property>

  <name>yarn.nodemanager.aux-services</name>

  <value>mapreduce_shuffle</value>

</property>

<property>

  <name>yarn.resourcemanager.address</name>

  <value>10.2.42.61:8032</value>

</property>

<property>

  <name>yarn.resourcemanager.scheduler.address</name>

  <value>10.2.42.61:8030</value> 

</property>

<property>

  <name>yarn.resourcemanager.resource-tracker.address</name>

  <value>10.2.42.61:8031</value> 

</property>

<property>

  <name>yarn.resourcemanager.admin.address</name>

  <value>10.2.42.61:8033</value> 

</property>

<property>

  <name>yarn.resourcemanager.webapp.address</name>

  <value>10.2.42.61:8088</value> 

</property>

</configuration>

Configure the slaves file so that the NameNode knows which DataNodes to manage.
vim slaves

10.2.42.62

10.2.42.63

Create data directory

mkdir /data/hadoop/tmp -p

mkdir /data/hadoop/dfs/name -p

mkdir /data/hadoop/dfs/data -p

Format the NameNode. Since the NameNode manages the HDFS file system, it must be formatted before first use.

/opt/hadoop-2.8.3/bin/hdfs namenode -format

A message containing "successfully formatted" in the output indicates that the format succeeded.

Start the cluster

/opt/hadoop-2.8.3/sbin/start-all.sh

Start the JobHistory server so that the status of completed MapReduce jobs can be inspected:

/opt/hadoop-2.8.3/sbin/mr-jobhistory-daemon.sh start historyserver

The cluster can be checked via the following URLs:

http://10.2.42.61:50070  # Hadoop cluster overview (NameNode web UI)

http://10.2.42.61:50090  # SecondaryNameNode status

http://10.2.42.61:8088   # ResourceManager status

http://10.2.42.61:19888  # JobHistory server (MapReduce job history)
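The same can be verified from the command line; a quick sanity check using the paths above (the master should show NameNode, SecondaryNameNode and ResourceManager in jps, the slaves DataNode and NodeManager):

jps

/opt/hadoop-2.8.3/bin/hdfs dfsadmin -report    # should list 10.2.42.62 and 10.2.42.63 as live DataNodes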


Configure HBase cluster

Note: unless stated otherwise, the following operations are performed on all three nodes at the same time.

Unzip the installation package to the /opt directory

tar xf hbase-1.2.6-bin.tar.gz -C /opt/

Copy the HDFS configuration file so that the HBase and HDFS configurations stay consistent

cp /opt/hadoop-2.8.3/etc/hadoop/hdfs-site.xml /opt/hbase-1.2.6/conf/

Configure the HBase configuration file
vim hbase-site.xml

<configuration>

  <property>

    <name>hbase.zookeeper.property.clientPort</name>

    <value>2181</value>

  </property>

  <property>

    <name>hbase.zookeeper.quorum</name>

    <value>10.2.42.61,10.2.42.62,10.2.42.63</value>

    <description>The directory shared by RegionServers.</description>

  </property>

  <property>

    <name>hbase.zookeeper.property.dataDir</name>

    <value>/data/zookeeper/data</value>

    <description>

    Note: this is the ZooKeeper data directory and must match the dataDir configured in zoo.cfg.

    Property from ZooKeeper config zoo.cfg.

    The directory where the snapshot is stored.

    </description>

  </property>

  <property>

    <name>hbase.rootdir</name>

    <value>hdfs://10.2.42.61:9000/hbase</value>

    <description>The directory shared by RegionServers.

                 The official documentation stresses that this directory must not be created in advance; HBase creates it itself, and a pre-existing directory triggers a migration that causes errors.

                 As for the port, some deployments use 8020 and others 9000; check the NameNode address configured for your cluster. In this setup it is hdfs://10.2.42.61:9000, as set by fs.defaultFS in core-site.xml.

    </description>

  </property>

  <property>

    <name>hbase.cluster.distributed</name>

    <value>true</value>

    <description>Distributed cluster setting: set to true here; for a single-node deployment set it to false.

      The mode the cluster will be in. Possible values are

      false: standalone and pseudo-distributed setups with managed ZooKeeper

      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)

    </description>

  </property>

</configuration>

Configure the regionservers file
vim regionservers

10.2.42.62

10.2.42.63

Configure hbase-env.sh. Since we run our own ZooKeeper cluster, HBase must not manage ZooKeeper itself, so add the following line.

export HBASE_MANAGES_ZK=false
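Optionally, JAVA_HOME can also be set explicitly in hbase-env.sh; this is not strictly required here because /etc/profile already exports it:

export JAVA_HOME=/opt/jdk1.8.0_131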

Start the cluster

/opt/hbase-1.2.6/bin/start-hbase.sh


View cluster status
1. View via URL: http://10.2.42.61:16010/master-status
2. View via command line

/opt/hbase-1.2.6/bin/hbase shell

hbase(main):002:0> status

1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load

If the error ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing is reported:
1. Stop HBase first: /opt/hbase-1.2.6/bin/stop-hbase.sh
2. Start the regionservers: /opt/hbase-1.2.6/bin/hbase-daemon.sh start regionserver
3. Start the master: /opt/hbase-1.2.6/bin/hbase-daemon.sh start master

Initialize Pinpoint's HBase tables. The hbase-create.hbase script needs to be downloaded first.
The address is: https://github.com/naver/pinpoint/tree/master/hbase/scripts

/opt/hbase-1.2.6/bin/hbase shell /root/install/hbase-create.hbase
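Once the script has run, the Pinpoint tables should exist in HBase; a quick check from the shell (the 1.7.x script creates tables such as AgentInfo and ApplicationTraceIndex):

echo "list" | /opt/hbase-1.2.6/bin/hbase shell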


Configure Pinpoint-Collector

Unzip the war package into Tomcat's webapps/ROOT directory

unzip pinpoint-collector-1.7.1.war -d /home/tomcat/apache-tomcat-8.0.47/webapps/ROOT

The configuration files are under /home/tomcat/apache-tomcat-8.0.47/webapps/ROOT/WEB-INF/classes.
Modify the configuration file hbase.properties

hbase.client.host=10.2.42.61,10.2.42.62,10.2.42.63

hbase.client.port=2181

......

Modify the configuration file pinpoint-collector.properties

cluster.enable=true

cluster.zookeeper.address=10.2.42.61,10.2.42.62,10.2.42.63

......

flink.cluster.zookeeper.address=10.2.42.61,10.2.42.62,10.2.42.63

flink.cluster.zookeeper.sessiontimeout=3000

Start tomcat

/home/tomcat/apache-tomcat-8.0.47/bin/startup.sh
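After startup, the collector should be listening on its receiver ports; with the 1.7.x defaults these are 9994/TCP plus 9995 and 9996/UDP (adjust the check if they were changed in pinpoint-collector.properties):

ss -lntu | grep -E ':999[4-6]'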

Configure Pinpoint-Web

Unzip the corresponding war package into Tomcat's webapps/ROOT directory

unzip pinpoint-web-1.7.1.war -d /home/tomcat/apache-tomcat-8.0.47/webapps/ROOT

The configuration files are under /home/tomcat/apache-tomcat-8.0.47/webapps/ROOT/WEB-INF/classes.
vim hbase.properties

hbase.client.host=10.2.42.61,10.2.42.62,10.2.42.63

hbase.client.port=2181

......

vim pinpoint-web.properties

cluster.enable=true

cluster.web.tcp.port=9997

cluster.zookeeper.address=10.2.42.61,10.2.42.62,10.2.42.63

cluster.zookeeper.sessiontimeout=30000

cluster.zookeeper.retry.interval=60000

.......

Start tomcat

/home/tomcat/apache-tomcat-8.0.47/bin/startup.sh

Visit URL: http://10.2.42.60:8080/#/main
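If the page does not come up, the web module's Tomcat log is the first place to look (standard Tomcat log location, assuming the install path used above):

tail -f /home/tomcat/apache-tomcat-8.0.47/logs/catalina.out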

Configure the probe

Copy pinpoint-agent-1.7.1.tar.gz to the application server and unzip it to the tomcat directory

tar xf pinpoint-agent-1.7.1.tar.gz -C /home/tomcat

Modify the configuration file:
vim /home/tomcat/ppagent/pinpoint.config

# the IP of the Pinpoint-Collector server

profiler.collector.ip=10.2.42.59

Configure Tomcat's catalina.sh startup script by adding the following lines to it

CATALINA_OPTS="$CATALINA_OPTS -javaagent:$AGENT_PATH/pinpoint-bootstrap-$VERSION.jar"

CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=$AGENT_ID"

CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=$APPLICATION_NAME"

If the application is a jar package started directly with java, add the following parameters

java -javaagent:/home/tomcat/tmp/ppagent/pinpoint-bootstrap-1.7.1.jar -Dpinpoint.agentId=jss-spring-boot-app11201 -Dpinpoint.applicationName=jss-spring-boot-app -jar jssSpringBootDemo-0.0.1-SNAPSHOT.jar

Restart Tomcat after the configuration is done; the application and its call chains should then appear in the Pinpoint web UI.

Origin blog.51cto.com/15080014/2654775