Pinpoint (1): Basic Concepts and Installation/Deployment

Pinpoint is an APM (Application Performance Management) system developed in South Korea. It is a platform for analyzing large-scale distributed systems and provides solutions for handling large volumes of trace data.

1. Features

  • Distributed transaction tracing: traces messages across a distributed application
  • Automatic detection of the application topology
  • Horizontal scaling to support large-scale server clusters
  • Code-level visibility for locating points of failure and bottlenecks
  • Bytecode instrumentation: adds tracing without any code changes

2. Advantages

  • Non-invasive: uses bytecode instrumentation, so no application code changes are required
  • Low resource consumption: minimal performance impact (resource usage increases by roughly 3%)

3. Architecture Design

3.1 Module Division

  • HBase: used mainly for storing the trace data
  • Pinpoint Collector: deployed in a web container; receives data from the agents
  • Pinpoint Web: deployed in a web container; the analysis UI
  • Pinpoint Agent: attached to the Java application that is to be profiled

Workflow: first, the agent attached to the instrumented application sends the collected call data to the Collector; the Collector processes and analyzes the data and stores it in HBase; finally, Pinpoint Web reads the analyzed call data and presents it in a well-designed UI.

3.2 Data Structure

  • Span: the basic unit of RPC tracing; it represents the work handled when an RPC arrives and carries the trace data. Data that is not a child Span is flagged as a SpanEvent; each Span contains a TraceId
  • Trace: a series of Spans, composed of the associated RPCs (Spans). Spans in the same trace share the same TransactionId, and a Trace is ordered into a hierarchical tree structure by SpanIds and ParentSpanIds
  • TraceId: a set of keys consisting of TransactionId, SpanId, and ParentSpanId. TransactionId identifies the message; SpanId and ParentSpanId express the RPC parent-child relationship
      - TransactionId: the id of the message sent and received across the distributed system for a single transaction; it must be globally unique across the entire server group
      - SpanId: the id of the job handled when an RPC message is received; it is generated when the RPC arrives at a node
      - ParentSpanId: the SpanId of the parent span that issued the RPC; if a node is the starting point of the transaction, it has no parent span
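As an illustration (the values below are invented; in Pinpoint a TransactionId is composed of the agentId, the agent start time, and a sequence number), a call from a front-end node to a back-end node might produce spans like:

```
Span A (transaction starts at FrontWeb):
    TransactionId = front-agent^1478500000000^17
    SpanId        = 10
    ParentSpanId  = -1    (no parent: this node started the transaction)

Span B (RPC handled at BackendApi):
    TransactionId = front-agent^1478500000000^17    (same transaction)
    SpanId        = 20
    ParentSpanId  = 10    (points back to Span A)
```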

4. Installation and Deployment

4.1 Dependencies

  1. jdk8 --- Java runtime environment
  2. hbase-1.0 --- database for storing the monitoring data
  3. tomcat8.0 --- web server
  4. pinpoint-collector.war --- the Pinpoint collector
  5. pinpoint-web.war --- the Pinpoint web UI
  6. pp-collector.init --- init script for quickly starting/stopping pp-col; optional
  7. pp-web.init --- init script for quickly starting/stopping pp-web; optional

4.2 Install JDK

Skip ~

4.3 Installing HBase

The trace data collected by Pinpoint is stored mainly in HBase, which allows it to hold large volumes of data for more detailed analysis.

4.3.1 Modify the HBase configuration

vi hbase-site.xml

# Edit the end of the file as follows. Here we have HBase store its data locally;
# in production it is recommended to store the data in HDFS.

<property>
    <name>hbase.rootdir</name>
    <value>file:///data/hbase</value>
</property>
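For production, as noted above, HDFS-backed storage is recommended; a hedged sketch of the corresponding property (the namenode host and port are placeholders for your cluster):

```xml
<!-- Production variant: keep HBase data in HDFS rather than the local
     filesystem. "namenode-host:8020" is a placeholder. -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode-host:8020/hbase</value>
</property>
```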

4.3.2 Start HBase

cd /data/service/hbase/bin
./start-hbase.sh

# Check whether HBase started successfully; if it did, you will see an "HMaster" process
[root@localhost bin]# jps
12075 Jps
11784 HMaster

4.3.3 Initialize the Pinpoint HBase schema

# Run the HBase initialization script provided by Pinpoint; this takes a moment.
./hbase shell /home/pp_res/hbase-create.hbase

# When it finishes, enter the HBase shell
./hbase shell

# After entering you can see the HBase version and some related information
2016-11-15 01:55:44,861 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using built
in-java classes where applicableHBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016
 
hbase(main):001:0>

# Type "status 'detailed'" to check whether the tables created by the initialization exist
hbase(main):001:0> status 'detailed'
version 1.0.3
0 regionsInTransition
master coprocessors: []
1 live servers
    localhost:50887 1478538574709
        requestsPerSecond=0.0, numberOfOnlineRegions=498, usedHeapMB=24, maxHeapMB=237, numberOfStores=626, numberOfStorefiles=0, storefileUncom
pressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=7714, writeRequestsCount=996, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[MultiRowMutationEndpoint]        "AgentEvent,,1478539104778.aa1b3b14d0b48d83cbf4705b75cb35b7."
            numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0,
readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
...

Visit http://192.168.245.134:16010/master-status to check whether initialization succeeded.

(Screenshot: HBase Web UI)

4.4 Install pinpoint-collector

4.4.1 Deploy the war package

# Extract Tomcat and move it, renamed, to the target location
cd /home/pp_res/
tar -zxvf apache-tomcat-8.0.36.tar.gz
mv apache-tomcat-8.0.36/ /data/service/pp-col
 

# Modify the Tomcat configuration for pp-col, mainly the ports, to avoid conflicts with
# pp-web's Tomcat. I prefixed each default port with a 1; the sed commands below do the replacement.
# [Note] The last command exposes Tomcat on the machine's own IP by replacing localhost with it.
# My NIC is the default eth0; if yours is not, adjust accordingly, or simply open server.xml
# with "vi" and change localhost by hand.
cd /data/service/pp-col/conf/
sed -i 's/port="8005"/port="18005"/g' server.xml
sed -i 's/port="8080"/port="18080"/g' server.xml
sed -i 's/port="8443"/port="18443"/g' server.xml
sed -i 's/port="8009"/port="18009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="18443"/g' server.xml
sed -i "s/localhost/`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | awk -F: '{print $2}'`/g" server.xml
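The ifconfig parsing in the last command assumes the older output format ("inet addr:...") and an eth0 interface; on newer distributions ifconfig may be missing or print "inet " instead. A hedged alternative for detecting the host IP before doing the same localhost substitution:

```shell
# Detect the host's primary IPv4 address without relying on the old
# ifconfig output format; falls back to 127.0.0.1 if detection fails.
IP=$(hostname -I 2>/dev/null | awk '{print $1}')
[ -z "$IP" ] && IP=$(ip -4 route get 1 2>/dev/null | awk '{print $7; exit}')
[ -z "$IP" ] && IP=127.0.0.1
echo "Detected IP: $IP"

# Then, inside /data/service/pp-col/conf/, apply the same substitution:
# sed -i "s/localhost/$IP/g" server.xml
```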
 

# Deploy the pinpoint-collector.war package
# [Note: if the unzip command is missing, run "yum install unzip"]
cd /home/pp_res/
rm -rf /data/service/pp-col/webapps/*
unzip pinpoint-collector-1.5.2.war -d /data/service/pp-col/webapps/ROOT
 

# Start Tomcat
cd /data/service/pp-col/bin/
./startup.sh
 

# Tail the log to check whether startup succeeded
tail -f ../logs/catalina.out

4.4.2 Configure quick start

# To configure quick start, edit the paths in pp-collector.init (pp-collector.init is available
# on the network drive). You can open it with "vi" and change the paths, at roughly lines 18, 24,
# and 27. For convenience I used sed replacements instead; if your paths differ from mine,
# change them to your own.
cd /home/pp_res
sed -i "s/JAVA_HOME=\/usr\/java\/default\//JAVA_HOME=\/usr\/java\/jdk17\//g" pp-collector.init
sed -i "s/CATALINA_HOME=\/data\/service\/pinpoint-collector\//CATALINA_HOME=\/data\/service\/pp-col\//g" pp-collector.init
sed -i "s/CATALINA_BASE=\/data\/service\/pinpoint-collector\//CATALINA_BASE=\/data\/service\/pp-col\//g" pp-collector.init
 

# Give the file execute permission and move it into "init.d"; afterwards you can quickly
# restart the service with "restart".
chmod 711 pp-collector.init
mv pp-collector.init /etc/init.d/pp-col
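On distributions that use systemd rather than SysV init, an equivalent unit file could serve the same purpose as pp-collector.init. This is an untested sketch; adjust JAVA_HOME and the paths to your own layout:

```ini
# /etc/systemd/system/pp-col.service (sketch)
[Unit]
Description=Pinpoint Collector (Tomcat)
After=network.target

[Service]
Type=forking
Environment=JAVA_HOME=/usr/java/jdk17/
Environment=CATALINA_HOME=/data/service/pp-col/
Environment=CATALINA_BASE=/data/service/pp-col/
ExecStart=/data/service/pp-col/bin/startup.sh
ExecStop=/data/service/pp-col/bin/shutdown.sh

[Install]
WantedBy=multi-user.target
```

After installing the unit, "systemctl daemon-reload" followed by "systemctl restart pp-col" restarts the service.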
 
 
# Test a restart
[root@localhost pp_res]# /etc/init.d/pp-col restart
Stoping Tomcat
Using CATALINA_BASE:   /data/service/pp-col/
Using CATALINA_HOME:   /data/service/pp-col/
Using CATALINA_TMPDIR: /data/service/pp-col//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-col//bin/bootstrap.jar:/data/service/pp-col//bin/tomcat-juli.jar
 
waiting for processes to exitStarting tomcat
Using CATALINA_BASE:   /data/service/pp-col/
Using CATALINA_HOME:   /data/service/pp-col/
Using CATALINA_TMPDIR: /data/service/pp-col//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-col//bin/bootstrap.jar:/data/service/pp-col//bin/tomcat-juli.jar
Tomcat started.
Tomcat is running with pid: 22824

4.5 Install pinpoint-web

4.5.1 Deploy the war package

# Extract Tomcat and move it, renamed, to the target location
cd /home/pp_res/
tar -zxvf apache-tomcat-8.0.36.tar.gz
mv apache-tomcat-8.0.36/ /data/service/pp-web
 

# Modify the Tomcat configuration for pp-web, mainly the ports, to avoid conflicts with
# pp-col's Tomcat. I prefixed each default port with a 2; the sed commands below do the replacement.
# [Note] The last command exposes Tomcat on the machine's own IP by replacing localhost with it.
# My NIC is the default eth0; if yours is not, adjust accordingly, or simply open server.xml
# with "vi" and change localhost by hand.
cd /data/service/pp-web/conf/
sed -i 's/port="8005"/port="28005"/g' server.xml
sed -i 's/port="8080"/port="28080"/g' server.xml
sed -i 's/port="8443"/port="28443"/g' server.xml
sed -i 's/port="8009"/port="28009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="28443"/g' server.xml
sed -i "s/localhost/`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | awk -F: '{print $2}'`/g" server.xml
 

# Deploy the pinpoint-web.war package
# [Note: if the unzip command is missing, run "yum install unzip"]
cd /home/pp_res/
rm -rf /data/service/pp-web/webapps/*
unzip pinpoint-web-1.5.2.war -d /data/service/pp-web/webapps/ROOT
 

# Check whether the war package extracted successfully
[root@localhost conf]# ll /data/service/pp-web/webapps/ROOT/WEB-INF/classes/
total 88
-rw-rw-r--. 1 root root 2164 Apr  7  2016 applicationContext-cache.xml # These *.xml files will be used later for tuning.
-rw-rw-r--. 1 root root 3649 Apr  7  2016 applicationContext-dao-config.xml
-rw-rw-r--. 1 root root 1490 Apr  7  2016 applicationContext-datasource.xml
-rw-rw-r--. 1 root root 6680 Apr  7  2016 applicationContext-hbase.xml
-rw-rw-r--. 1 root root 1610 Apr  7  2016 applicationContext-websocket.xml
-rw-rw-r--. 1 root root 6576 Apr  7  2016 applicationContext-web.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 batch
-rw-rw-r--. 1 root root  106 Apr  7  2016 batch.properties
drwxrwxr-x. 3 root root 4096 Apr  7  2016 com
-rw-rw-r--. 1 root root  682 Apr  7  2016 ehcache.xml
-rw-rw-r--. 1 root root 1001 Apr  7  2016 hbase.properties # Configures which data source pp-web reads the collected data from; here we only specify HBase's zookeeper address.
-rw-rw-r--. 1 root root  153 Apr  7  2016 jdbc.properties # Connection and authentication settings for pp-web's own MySQL database.
-rw-rw-r--. 1 root root 3338 Apr  7  2016 log4j.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 mapper
-rw-rw-r--. 1 root root 1420 Apr  7  2016 mybatis-config.xml
drwxrwxr-x. 3 root root 4096 Apr  7  2016 org
-rw-rw-r--. 1 root root  630 Apr  7  2016 pinpoint-web.properties # Cluster configuration for pp-web, if you need a pp-web cluster.
-rw-rw-r--. 1 root root  141 Apr  7  2016 project.properties
-rw-rw-r--. 1 root root 3872 Apr  7  2016 servlet-context.xml
drwxrwxr-x. 2 root root 4096 Apr  7  2016 sql # sql directory: pp-web stores some of its own data in MySQL; the table schema must be initialized from here.


# Start Tomcat
cd /data/service/pp-web/bin/
./startup.sh
 

# Tail the log to check whether Tomcat started successfully
tail -f ../logs/catalina.out
 

# When the following line appears in the log, startup has succeeded
org.apache.catalina.startup.Catalina.start Server startup in 79531 ms


# Now visit "http://192.168.245.136:28080" in a browser and the main page should appear
# If it is unreachable, stop the firewall
[root@localhost conf]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]

Enter "http://192.168.245.136:28080" in a browser and the main page will appear.

(Screenshot: Pinpoint Web UI)

4.5.2 Configure quick start

# Edit "pp-web.init" following the same steps as above
cd /home/pp_res
sed -i "s/JAVA_HOME=\/usr\/java\/default\//JAVA_HOME=\/usr\/java\/jdk17\//g" pp-web.init
sed -i "s/CATALINA_HOME=\/data\/service\/pinpoint-web\//CATALINA_HOME=\/data\/service\/pp-web\//g" pp-web.init
sed -i "s/CATALINA_BASE=\/data\/service\/pinpoint-web\//CATALINA_BASE=\/data\/service\/pp-web\//g" pp-web.init
 

# Give the file execute permission and move it into "init.d"; afterwards you can quickly
# restart the service with "restart".
chmod 711 pp-web.init
mv pp-web.init /etc/init.d/pp-web
 
 
# Test a restart
[root@localhost pp_res]# /etc/init.d/pp-web restart
Stoping Tomcat
Using CATALINA_BASE:   /data/service/pp-web/
Using CATALINA_HOME:   /data/service/pp-web/
Using CATALINA_TMPDIR: /data/service/pp-web//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-web//bin/bootstrap.jar:/data/service/pp-web//bin/tomcat-juli.jar
 
waiting for processes to exitStarting tomcat
Using CATALINA_BASE:   /data/service/pp-web/
Using CATALINA_HOME:   /data/service/pp-web/
Using CATALINA_TMPDIR: /data/service/pp-web//temp
Using JRE_HOME:        /usr/java/jdk17/
Using CLASSPATH:       /data/service/pp-web//bin/bootstrap.jar:/data/service/pp-web//bin/tomcat-juli.jar
Tomcat started.
Tomcat is running with pid: 22703

4.6 Deploy pinpoint-agent to collect monitoring data

Integrating pinpoint-agent into an application system is very simple.

4.6.1 Configure pinpoint-agent

# Extract pp-agent
cd /home/pp_test
tar -zxvf pinpoint-agent-1.5.2.tar.gz
mv pinpoint-agent-1.5.2 /data/pp-agent
 

# Edit the configuration file
cd /data/pp-agent/
vi pinpoint.config
 

# Mainly change the IP: just point it at the host where pp-col is installed. Once pp-col
# starts it automatically opens ports 9994, 9995, and 9996, so nothing more is needed here.
# If you need different ports, change them in pp-col's configuration file
# ("pp-col/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties").
profiler.collector.ip=192.168.245.136
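The same edit can be scripted instead of done in vi. A small sketch, rehearsed here on a scratch copy; on the real host, run the same sed against /data/pp-agent/pinpoint.config (192.168.245.136 is the pp-col host used throughout this guide):

```shell
# Rehearse the edit on a scratch copy of pinpoint.config, then verify it.
cd "$(mktemp -d)"
printf 'profiler.collector.ip=127.0.0.1\n' > pinpoint.config
sed -i 's/^profiler.collector.ip=.*/profiler.collector.ip=192.168.245.136/' pinpoint.config
grep '^profiler.collector.ip=' pinpoint.config
# prints: profiler.collector.ip=192.168.245.136
```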

4.6.2 Start Service

Parameter explanation:

  • -Dpinpoint.agentId: uniquely identifies this agent instance
  • -Dpinpoint.applicationName: the application name under which the agent instance is grouped

eureka

java -javaagent:/usr/local/src/pinpoint/soft/eureka/pinpoint-agent-1.7.3/pinpoint-bootstrap-1.7.3.jar -Dpinpoint.agentId=eureka-server -Dpinpoint.applicationName=eureka-server -jar spring-cloud-eureka-server-simple-0.0.1-SNAPSHOT.jar

provider

java -javaagent:/usr/local/src/pinpoint/soft/eureka/pinpoint-agent-1.7.3/pinpoint-bootstrap-1.7.3.jar -Dpinpoint.agentId=provider -Dpinpoint.applicationName=provider -jar spring-cloud-apm-skywalking-provider-0.0.1-SNAPSHOT.jar

consumer

java -javaagent:/usr/local/src/pinpoint/soft/eureka/pinpoint-agent-1.7.3/pinpoint-bootstrap-1.7.3.jar -Dpinpoint.agentId=consumer -Dpinpoint.applicationName=consumer -jar spring-cloud-apm-skywlaking-consumer-0.0.1-SNAPSHOT.jar

zuul

java -javaagent:/usr/local/src/pinpoint/soft/eureka/pinpoint-agent-1.7.3/pinpoint-bootstrap-1.7.3.jar -Dpinpoint.agentId=zuul -Dpinpoint.applicationName=zuul -jar spring-cloud-apm-skywalking-zuul-0.0.1-SNAPSHOT.jar -Xms256m -Xmx256m
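The four launch commands above differ only in the agentId/applicationName and the jar. A small helper function could reduce the duplication (a sketch; AGENT_JAR matches the path used in the commands above, and it reuses the same value for both -D properties, as those commands do):

```shell
# Helper: launch a jar with the Pinpoint agent attached, using the same
# value for -Dpinpoint.agentId and -Dpinpoint.applicationName.
AGENT_JAR=/usr/local/src/pinpoint/soft/eureka/pinpoint-agent-1.7.3/pinpoint-bootstrap-1.7.3.jar

run_with_pinpoint() {
    name=$1; jar=$2; shift 2
    java -javaagent:"$AGENT_JAR" \
         -Dpinpoint.agentId="$name" \
         -Dpinpoint.applicationName="$name" \
         -jar "$jar" "$@"
}

# Example:
# run_with_pinpoint zuul spring-cloud-apm-skywalking-zuul-0.0.1-SNAPSHOT.jar -Xms256m -Xmx256m
```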

After a successful start, visit Pinpoint: http://192.168.67.136:28080/#/main

Switch to the zuul tab:

(The data for the first call must be fetched from eureka; the default timeout is one second.) Red indicates failed calls, and the numbers represent call counts.

Inspector: here you can view detailed information about a service's calls. Click to view:

In the Inspector, the Timeline tab shows the time range of the requests, and the Information tab shows details about the current node at startup, including the application name, agentId, and start time. Heap Usage shows heap usage, JVM/System Cpu Usage shows CPU usage, Active Thread shows thread usage, Response Time shows response times, and Data Source shows database usage.

Origin blog.csdn.net/fedorafrog/article/details/104169816