Let me start with a story from the tail end of this Presto upgrade. The target machine already hosted a CDH cluster (the client's environment), running CentOS 6.x, and per Cloudera's official documentation the system JDK on such a machine must not be touched. The JDK environment variables turned out to be written in /etc/profile, so the plan was to prepend a new JDK to PATH in the user's own profile so it would take priority — the right approach, of course, but it failed at first because ":$PATH" was left off the end of the new assignment, which clobbered the rest of the PATH... bitter tears.
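For the record, the fix is to prepend the new JDK while keeping the old PATH value; a minimal sketch (the JDK install path is illustrative, not from the original setup):

```shell
# In the presto user's ~/.bashrc: prepend the new JDK so it wins the lookup.
# /usr/local/jdk1.8.0_221 is an assumed install location.
export JAVA_HOME=/usr/local/jdk1.8.0_221
export PATH="$JAVA_HOME/bin:$PATH"   # omitting ":$PATH" here clobbers everything else
```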
The packages, old-school style, on Baidu netdisk: https://pan.baidu.com/s/1U676CujPaD8kfjrkXgj0qQ
Extraction code: zvy0
First, environment preparation
1. Service role assignment

IP address | Role
---|---
172.16.180.12 | coordinator
172.16.180.17 | worker
172.16.180.3 | worker
2. Installation packages
Grab them from the Baidu netdisk link above.
Second, deployment
1. Install the JDK
Nothing special here, except one hard-won note: the official site states a minimum JDK version for presto-server-0.229 (a recent 64-bit Java 8), and the version already field-tested in this production environment (jdk-8u45-linux-x64.gz) did not work; switching to jdk-8u221-linux-x64.tar.gz fixed it. For a fresh deployment it is easiest to write the environment variables into /etc/profile; if a JDK is already present on the machine, write them into the corresponding user's .bashrc instead.
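The JDK setup above can be sketched as follows; the helper name and install path under /usr/local are my assumptions, and the archive name is the version that worked in the field test:

```shell
#!/bin/sh
# Append JAVA_HOME/PATH exports for an unpacked JDK to a profile file.
# $1 = JDK install dir, $2 = profile file (e.g. /etc/profile or ~/.bashrc)
add_jdk_env() {
  cat >> "$2" <<EOF
export JAVA_HOME=$1
export PATH=\$JAVA_HOME/bin:\$PATH
EOF
}

# Typical use on each node (paths illustrative):
#   tar -zxvf jdk-8u221-linux-x64.tar.gz -C /usr/local/
#   add_jdk_env /usr/local/jdk1.8.0_221 /etc/profile
#   . /etc/profile && java -version
```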
2. Install Presto
1) Create a presto user and upload presto-server-0.229.tar.gz to /home/presto/
# useradd presto
# chown -R presto:presto /home/presto
# su - presto
$ tar -zxvf presto-server-0.229.tar.gz
2) Modify the configuration files (on 172.16.180.12)
# su - presto
$ cd /home/presto/presto-server-0.229
$ mkdir etc
$ cd etc/
$ touch node.properties
$ touch jvm.config
$ touch config.properties
$ touch log.properties
$ mkdir catalog
$ cd catalog
$ touch hive.properties
$ mkdir hive   ## upload hive's core-site.xml and hdfs-site.xml into this directory
$ vi hive.properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://172.16.180.4:9083
hive.config.resources=/home/presto/presto-server-0.229/etc/catalog/hive/core-site.xml,/home/presto/presto-server-0.229/etc/catalog/hive/hdfs-site.xml
:wq to save and exit
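Since hive.config.resources points at absolute paths, a quick check that both Hadoop config files are in place on a node can save a failed startup later. A small helper (hypothetical, not from the original post):

```shell
#!/bin/sh
# Verify that the directory passed as $1 holds the two Hadoop config files
# referenced by hive.config.resources; report anything missing.
check_hive_configs() {
  for f in "$1/core-site.xml" "$1/hdfs-site.xml"; do
    if [ ! -f "$f" ]; then
      echo "missing: $f"
      return 1
    fi
  done
  echo "hive config files present"
}

# Example (the path matches the layout used above):
#   check_hive_configs /home/presto/presto-server-0.229/etc/catalog/hive
```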
$ cd ..
$ vi config.properties
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=19999
query.max-memory=8GB
query.max-memory-per-node=2GB
discovery-server.enabled=true
discovery.uri=http://172.16.180.12:19999
:wq to save and exit
$ vi jvm.config
-server
-Xmx8G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
:wq to save and exit
$ vi log.properties
com.facebook.presto=INFO   # log output level
:wq to save and exit
$ vi node.properties
node.environment=production
node.id=cgn_presto_coordinator_node1   # unique node identifier
node.data-dir=/home/presto/presto-server-0.229/data
:wq to save and exit
$ exit
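The hand edits above can also be scripted for repeatability; a sketch that writes the same file contents under a given base directory (the helper name is mine, the values are exactly the ones used above):

```shell
#!/bin/sh
# Write the coordinator's etc/ files, mirroring the manual edits above.
# $1 = presto install dir (e.g. /home/presto/presto-server-0.229)
gen_coordinator_etc() {
  base="$1"
  mkdir -p "$base/etc/catalog/hive"
  cat > "$base/etc/node.properties" <<'EOF'
node.environment=production
node.id=cgn_presto_coordinator_node1
node.data-dir=/home/presto/presto-server-0.229/data
EOF
  cat > "$base/etc/jvm.config" <<'EOF'
-server
-Xmx8G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
EOF
  cat > "$base/etc/config.properties" <<'EOF'
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=19999
query.max-memory=8GB
query.max-memory-per-node=2GB
discovery-server.enabled=true
discovery.uri=http://172.16.180.12:19999
EOF
  cat > "$base/etc/log.properties" <<'EOF'
com.facebook.presto=INFO
EOF
  cat > "$base/etc/catalog/hive.properties" <<'EOF'
connector.name=hive-hadoop2
hive.metastore.uri=thrift://172.16.180.4:9083
hive.config.resources=/home/presto/presto-server-0.229/etc/catalog/hive/core-site.xml,/home/presto/presto-server-0.229/etc/catalog/hive/hdfs-site.xml
EOF
}
```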
# scp -r /home/presto/presto-server-0.229 presto@172.16.180.17:/home/presto/
# scp -r /home/presto/presto-server-0.229 presto@172.16.180.3:/home/presto/
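The two copies can be wrapped in a loop; a hypothetical dry-run helper (the leading echo shows each command — drop it to actually copy, assuming SSH access for the presto user is set up):

```shell
#!/bin/sh
# Print the scp commands that would push the install dir to each worker.
distribute() {
  src=/home/presto/presto-server-0.229
  for host in 172.16.180.17 172.16.180.3; do
    echo scp -r "$src" "presto@${host}:/home/presto/"
  done
}
```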
Then modify config.properties and node.properties on 172.16.180.17 and 172.16.180.3. config.properties on both workers is as follows:
coordinator=false
http-server.http.port=19999
query.max-memory=8GB
query.max-memory-per-node=2GB
discovery.uri=http://172.16.180.12:19999
node.properties on 172.16.180.17:
node.environment=production
node.id=cgn_presto_coordinator_node2
node.data-dir=/home/presto/presto-server-0.229/data
node.properties on 172.16.180.3:
node.environment=production
node.id=cgn_presto_coordinator_node3
node.data-dir=/home/presto/presto-server-0.229/data
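The per-worker edits can be scripted the same way as the coordinator's; a sketch (helper name is mine, values are the ones listed above) that writes both files with a unique node.id:

```shell
#!/bin/sh
# Write a worker's config.properties and node.properties.
# $1 = presto install dir on the worker, $2 = unique node id
gen_worker_etc() {
  base="$1"; id="$2"
  mkdir -p "$base/etc"
  cat > "$base/etc/config.properties" <<'EOF'
coordinator=false
http-server.http.port=19999
query.max-memory=8GB
query.max-memory-per-node=2GB
discovery.uri=http://172.16.180.12:19999
EOF
  cat > "$base/etc/node.properties" <<EOF
node.environment=production
node.id=$id
node.data-dir=/home/presto/presto-server-0.229/data
EOF
}

# e.g. on 172.16.180.17:
#   gen_worker_etc /home/presto/presto-server-0.229 cgn_presto_coordinator_node2
```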
3) Start Presto on 172.16.180.12, 172.16.180.17, and 172.16.180.3
$ cd /home/presto/presto-server-0.229/bin
$ ./launcher start
If the launcher reports the daemon PID ("Started as <pid>"), the start succeeded.
Open the web UI in a browser: http://172.16.180.12:19999
4) Upload presto-cli-0.229-executable.jar to the /home/presto/presto-server-0.229/bin directory (this is the client used to connect to the Hive data warehouse)
$ cd /home/presto/presto-server-0.229/bin
$ mv presto-cli-0.229-executable.jar presto
$ chmod +x presto
$ vi hive.sh
./presto --server 172.16.180.12:19999 --catalog hive --schema test
:wq to save and exit
$ ./hive.sh
connection succeeded!
Third, troubleshooting
1) Error 1
Solution: add this parameter to config.properties: query.max-total-memory-per-node=2GB (2GB is a reference value; tune it to your workload).
2) If an error like the following appears at startup: java.lang invalid memory configuration — the sum of max total query memory per node (2147483648) and heap headroom (644245094) cannot be larger than the available heap memory (2147483648).
Solution: edit etc/jvm.config and increase the -Xmx value (here it was -Xmx2G) so that the sum fits inside the heap.
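The arithmetic behind that second error can be checked before starting: per-node total query memory plus the heap headroom must fit inside -Xmx, and in the failing configuration above the headroom (644245094) is 30% of the 2G heap. A sketch of the check (the helper and the 30% headroom figure are my reading of the error's numbers, not from the original post; all sizes in bytes):

```shell
#!/bin/sh
# Check: query.max-total-memory-per-node + heap headroom <= -Xmx.
# Headroom here is taken as 30% of the heap, matching the error's figures.
check_memory() {
  xmx=$1; max_total_per_node=$2
  headroom=$((xmx * 3 / 10))
  if [ $((max_total_per_node + headroom)) -gt "$xmx" ]; then
    echo "invalid: $max_total_per_node + $headroom > $xmx"
    return 1
  fi
  echo "ok"
}

# The failing case above: -Xmx2G with a 2GB per-node total -> invalid.
#   check_memory 2147483648 2147483648
# After raising -Xmx to 8G it fits:
#   check_memory 8589934592 2147483648
```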