Installing the application monitoring tool pinpoint

  • pinpoint 

       pinpoint homepage: https://github.com/naver/pinpoint. The latest version of pinpoint at the time of writing is 1.6.0.

       Installation environment and component versions: centos6.5 + jdk1.8 + hadoop2.6.5 + hbase1.0.3 + tomcat7.0

 

  • hadoop installation

        Pinpoint stores its data in hbase, and hbase's distributed file storage depends on hadoop. If hbase is to store data on hdfs, hadoop must be installed first; if hbase stores data on an ordinary file system, hadoop is not needed. This installation example uses hbase on the ordinary file system, so the hadoop steps below only record the installation process, in case hdfs is later used in place of ordinary file storage. Readers who keep hbase data on the ordinary file system can skip the hadoop installation.

        Installing hadoop requires JDK 1.7 or later.
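
        A quick way to confirm that the JDK on the machine meets this requirement:

java -version
# should report version 1.7 or higher; this example environment uses jdk1.8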

        Hadoop can be installed and run in three modes: stand-alone, pseudo-distributed and distributed. This example uses pseudo-distributed mode, which can be regarded as a cluster with only one node: that node is both master and slave, acting at once as namenode and datanode, and as jobtracker and tasktracker.

         Since hadoop uses SSH to start the daemons listed in its slaves file, SSH must be installed; in pseudo-distributed mode the only slave is localhost itself. It is best to enable password-free SSH login, because otherwise you will repeatedly be prompted for the SSH password when formatting the namenode with hdfs namenode -format and starting the daemons.

        SSH password-free login

        ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa generates the key pair: -t specifies the key type (here dsa), -P supplies the passphrase (empty here), and -f specifies the output file. The command creates two files in the .ssh folder, id_dsa and id_dsa.pub, which are the SSH private key and public key.

        cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys appends the public key to authorized_keys, the public key file SSH uses for authentication.

        At this point password-free SSH login to the local machine is configured. Execute ssh localhost to test that you can log in without being prompted for a password; the whole sequence is consolidated below.
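
        For reference, the complete sequence can be run as follows. The chmod on authorized_keys is an extra step not in the original notes: sshd ignores key files whose permissions are too open, so it is a common safeguard.

# generate a passphrase-less dsa key pair: ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# append the public key to the file sshd uses for authentication
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# tighten permissions; sshd refuses authorized_keys files that are group/world writable
chmod 600 ~/.ssh/authorized_keys
# verify: this should now log in without prompting for a password
ssh localhost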

        

         hadoop configuration. The hadoop configuration files are in the etc/hadoop folder under the hadoop extraction directory.

         Modify hadoop-env.sh. Specify JAVA_HOME and configure it as: export JAVA_HOME=${JAVA_HOME}
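
         Note that ${JAVA_HOME} only resolves if the variable is exported in the non-interactive shell that hadoop's scripts run under; a common alternative is to hard-code the JDK path. The path below is a placeholder, not taken from the original notes; adjust it to the actual JDK location.

# in etc/hadoop/hadoop-env.sh; /usr/java/jdk1.8.0 is a placeholder path
export JAVA_HOME=/usr/java/jdk1.8.0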

         Modify core-site.xml. Configure the hdfs address and port number (fs.default.name is the legacy name of fs.defaultFS and still works on hadoop 2.x).

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

           Modify hdfs-site.xml. Configure the replication factor and set it to 1, since a single node cannot hold multiple replicas.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

          Modify mapred-site.xml (if etc/hadoop only contains mapred-site.xml.template, copy it to mapred-site.xml first). Configure the address and port number of the jobtracker.

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>

           So far, the simple configuration for pseudo-distributed hadoop is complete.

 

           Execute bin/hdfs namenode -format to format the file system, then start hadoop with sbin/start-all.sh.

           After the startup is complete, visit http://localhost:50070 to see the HDFS web interface, which indicates that the hadoop installation succeeded.
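
           As an additional sanity check (a sketch; the daemon names below are those of a hadoop 2.x pseudo-distributed setup started with start-all.sh):

jps
# expected output, with differing pids:
#   NameNode
#   DataNode
#   SecondaryNameNode
#   ResourceManager
#   NodeManager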

 

  • hbase installation

          The installation of hbase is likewise divided into stand-alone, pseudo-distributed and distributed modes. This example uses a stand-alone installation, fetched as sketched below.
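
          For completeness, a sketch of fetching and extracting hbase 1.0.3; the mirror URL pattern is an assumption, so check the apache archive for the exact path.

# download and extract the hbase 1.0.3 binary release (URL pattern may differ)
wget https://archive.apache.org/dist/hbase/hbase-1.0.3/hbase-1.0.3-bin.tar.gz
tar -xzf hbase-1.0.3-bin.tar.gz
cd hbase-1.0.3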

          Modify the hbase-site.xml configuration to specify the directory where hbase data is stored. Use the "file://" protocol to indicate that hbase data is stored on an ordinary file system.

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>file:///var/pinpointer/data/hbase</value>
    </property>
</configuration>

          In stand-alone installation mode, hbase will also start zookeeper. The default port number of zookeeper is 2181. Be careful not to cause port conflicts.
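
           One way to check for such a conflict before starting hbase (a hypothetical check, not part of the original steps):

# if this prints anything, another process is already listening on the zookeeper port
netstat -tlnp | grep 2181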

           Run bin/start-hbase.sh to start hbase. After startup completes, hbase's main process, HMaster, is visible via the jps command. Note: once hbase is up, you need to execute the hbase-create.hbase script that ships with pinpoint to create pinpoint's hbase tables.
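
           The pinpoint documentation runs that script through the hbase shell. A minimal sketch, assuming the script is fetched from the pinpoint repository (the in-repo path may differ between versions):

# fetch pinpoint's table-creation script (path inside the repo is version-dependent)
wget https://raw.githubusercontent.com/naver/pinpoint/master/hbase/scripts/hbase-create.hbase
# run it through the hbase shell to create pinpoint's tables
bin/hbase shell hbase-create.hbase
# afterwards, 'list' in the hbase shell should show pinpoint tables such as AgentInfo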
 

  • pinpoint server installation

         collector configuration

        Modify hbase.properties, mainly the host and port through which the collector reaches hbase (2181 is the zookeeper port):

hbase.client.host=localhost
hbase.client.port=2181

       Modify pinpoint-collector.properties, mainly the listen ip addresses:

collector.tcpListenIp=0.0.0.0
collector.udpStatListenIp=0.0.0.0
collector.udpSpanListenIp=0.0.0.0
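
       The collector ships as a war file and is typically dropped into the tomcat listed in the environment above. A minimal sketch, assuming the 1.6.0 war name and $CATALINA_HOME pointing at that tomcat:

# deploy the collector war into tomcat and start it;
# the collector then listens on the tcp/udp ports configured above
cp pinpoint-collector-1.6.0.war $CATALINA_HOME/webapps/ROOT.war
$CATALINA_HOME/bin/startup.sh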

       pinpoint-web configuration

 

      Modify hbase.properties in the same way, pointing at the same hbase/zookeeper:

hbase.client.host=localhost
hbase.client.port=2181
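
       pinpoint-web is likewise deployed as a war. A sketch, assuming a second tomcat instance ($CATALINA_HOME2 is a placeholder) so that it does not collide with the collector's ports:

# deploy the web ui war into a separate tomcat instance and start it
cp pinpoint-web-1.6.0.war $CATALINA_HOME2/webapps/ROOT.war
$CATALINA_HOME2/bin/startup.sh
# the pinpoint web ui is then reachable on that tomcat's http port, e.g. http://localhost:8080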

 

  • pinpoint agent configuration

       Please refer to the official pinpoint documentation:

       https://github.com/naver/pinpoint/blob/master/doc/installation.md
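
       In brief, the agent is attached to the monitored jvm with -javaagent plus two pinpoint system properties. A minimal sketch for a tomcat application, assuming the agent was unpacked to /opt/pinpoint-agent; the agentId and applicationName values are examples:

# added to tomcat's bin/catalina.sh
AGENT_PATH=/opt/pinpoint-agent
CATALINA_OPTS="$CATALINA_OPTS -javaagent:$AGENT_PATH/pinpoint-bootstrap-1.6.0.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=app01"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=demo-app"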