Druid Cluster Configuration


1. Default Ports
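
The ports used by the services in this deployment (matching the configuration in section 3 below) are:

    coordinator node:   8081
    broker node:        8082
    historical node:    8083
    overlord node:      8090
    middleManager node: 8091
    ZooKeeper:          2181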



2. Cluster Configuration Approach

(1) The cluster has five nodes (master, slave1, slave2, slave3, slave4), with roles assigned as follows:

master node:  MySQL server, coordinator node, overlord node

slave1 node:  historical node, middleManager node

slave2 node:  historical node, middleManager node

slave3 node:  broker node

slave4 node:  realtime node (not configured; left empty)


(2) Deep storage (which holds cold data) is configured to use HDFS.


3. Cluster Configuration

(1) Configure the _common/common.runtime.properties file on every node (master, slave1, slave2, slave3, slave4):

    #
    # Licensed to Metamarkets Group Inc. (Metamarkets) under one
    # or more contributor license agreements. See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership. Metamarkets licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing,
    # software distributed under the License is distributed on an
    # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    # KIND, either express or implied. See the License for the
    # specific language governing permissions and limitations
    # under the License.
    #

    #
    # Extensions
    #
    # This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
    # based on your particular setup.
    druid.extensions.loadList=["mysql-metadata-storage", "druid-hdfs-storage"]

    # If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
    # and uncomment the line below to point to your directory.
    #druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies

    #
    # Logging
    #
    # Log all runtime properties on startup. Disable to avoid logging properties on startup:
    druid.startup.logging.logProperties=true

    #
    # Zookeeper
    #
    druid.zk.service.host=master:2181,slave1:2181,slave2:2181,slave3:2181,slave4:2181
    #druid.zk.service.host=master:2181
    druid.zk.paths.base=/druid

    #
    # Metadata storage
    #
    # For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
    #druid.metadata.storage.type=derby
    #druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
    #druid.metadata.storage.connector.host=metadata.store.ip
    #druid.metadata.storage.connector.port=1527

    # For MySQL:
    druid.metadata.storage.type=mysql
    druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid?characterEncoding=UTF-8
    druid.metadata.storage.connector.user=***
    druid.metadata.storage.connector.password=***

    # For PostgreSQL (make sure to additionally include the Postgres extension):
    #druid.metadata.storage.type=postgresql
    #druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
    #druid.metadata.storage.connector.user=...
    #druid.metadata.storage.connector.password=...

    #
    # Deep storage
    #
    # For local disk (only viable in a cluster if this is a network mount):
    #druid.storage.type=local
    #druid.storage.storageDirectory=var/druid/segments

    # For HDFS (make sure to include the HDFS extension and that your Hadoop config files are in the cp):
    druid.storage.type=hdfs
    druid.storage.storageDirectory=hdfs://master:9000/druid/segments

    # For S3:
    #druid.storage.type=s3
    #druid.storage.bucket=your-bucket
    #druid.storage.baseKey=druid/segments
    #druid.s3.accessKey=...
    #druid.s3.secretKey=...

    #
    # Indexing service logs
    #
    # For local disk (only viable in a cluster if this is a network mount):
    #druid.indexer.logs.type=file
    #druid.indexer.logs.directory=var/druid/indexing-logs

    # For HDFS (make sure to include the HDFS extension and that your Hadoop config files are in the cp):
    druid.indexer.logs.type=hdfs
    druid.indexer.logs.directory=hdfs://master:9000/druid/indexing-logs

    # For S3:
    #druid.indexer.logs.type=s3
    #druid.indexer.logs.s3Bucket=your-bucket
    #druid.indexer.logs.s3Prefix=druid/indexing-logs

    #
    # Service discovery
    #
    druid.selectors.indexing.serviceName=druid/overlord
    druid.selectors.coordinator.serviceName=druid/coordinator

    #
    # Monitoring
    #
    druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
    druid.emitter=logging
    druid.emitter.logging.logLevel=info
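
Before starting any services, the MySQL metadata database and the HDFS paths referenced above have to exist. A minimal sketch, assuming MySQL 5.x syntax on the master node; the user name and password are placeholders, since the real credentials are masked as *** in the config above:

    # on master: create the metadata database for Druid (placeholder credentials)
    mysql -u root -p -e "CREATE DATABASE druid DEFAULT CHARACTER SET utf8;"
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%' IDENTIFIED BY 'druid_password';"

    # create the HDFS directories used for deep storage and indexing logs
    hadoop fs -mkdir -p /druid/segments
    hadoop fs -mkdir -p /druid/indexing-logs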

(2) Configure the coordinator on the master node (in coordinator/runtime.properties):

    druid.service=druid/coordinator
    druid.port=8081
    druid.coordinator.startDelay=PT30S
    druid.coordinator.period=PT30S
    # Below appended by fenghui for testing
    druid.host=master



Configure the overlord on the master node (in overlord/runtime.properties):

    druid.service=druid/overlord
    druid.port=8090
    #druid.indexer.queue.startDelay=PT30S
    druid.indexer.runner.type=remote
    druid.indexer.storage.type=metadata
    # Below appended by fenghui for testing
    druid.host=master
    #druid.indexer.logs.type=hdfs
    #druid.indexer.logs.directory=/druid/indexing-logs

(3) Configure the historical node on slave1 (in historical/runtime.properties):

    druid.service=druid/historical
    druid.port=8083

    # HTTP server threads
    druid.server.http.numThreads=25

    # Processing threads and buffers
    druid.processing.buffer.sizeBytes=6870912
    druid.processing.numThreads=7

    # Segment storage
    druid.segmentCache.locations=[{"path": "var/druid/segment-cache", "maxSize": 130000000000}]
    druid.server.maxSize=130000000000

    druid.host=slave1
    druid.historical.cache.useCache=false
    druid.historical.cache.populateCache=false


Configure the middleManager on slave1 (in middleManager/runtime.properties):

    druid.service=druid/middleManager
    druid.port=8091

    # Number of tasks per middleManager
    druid.worker.capacity=3

    # Task launch parameters
    druid.indexer.runner.javaOpts=-server -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
    druid.indexer.task.baseTaskDir=var/druid/task

    # HTTP server threads
    druid.server.http.numThreads=25

    # Processing threads and buffers
    druid.processing.buffer.sizeBytes=65536
    druid.processing.numThreads=2

    # Hadoop indexing
    druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
    druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]

    # Below appended by fenghui for testing
    druid.host=slave1
    #druid.indexer.logs.type=hdfs
    #druid.indexer.logs.directory=/druid/indexing-logs

(4) Configure slave2 the same way as slave1 (changing druid.host to slave2).


(5) Configure the broker on the slave3 node (in broker/runtime.properties):

    druid.service=druid/broker
    druid.port=8082

    # HTTP server threads
    druid.broker.http.numConnections=5
    druid.server.http.numThreads=25

    # Processing threads and buffers
    druid.processing.buffer.sizeBytes=32768
    druid.processing.numThreads=2

    # Query cache
    druid.broker.cache.useCache=true
    druid.broker.cache.populateCache=true
    druid.cache.type=local
    druid.cache.sizeInBytes=2000000000

    druid.host=slave3


4. Cluster Startup Commands

On each node (master, slave1, slave2, slave3, slave4), start the services assigned to that node. For example, the broker is started with:

    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &
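
The other services are started the same way, by pointing the classpath at the corresponding conf/druid/<service> directory. A sketch, reusing the JVM flags from the command above (in practice the historical and middleManager JVMs usually get larger heaps than -Xmx256m):

    # on master
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/coordinator:lib/* io.druid.cli.Main server coordinator &
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/overlord:lib/* io.druid.cli.Main server overlord &

    # on slave1 and slave2
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/historical:lib/* io.druid.cli.Main server historical &
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/middleManager:lib/* io.druid.cli.Main server middleManager &

    # on slave3 (as shown above)
    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath conf/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker &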


5. Verifying the Cluster

(1) Open the web consoles:

http://master:8090/console.html (overlord console)



http://master:8081/#/ (coordinator console)




(2) Download the test files and extract them into the druid-0.9.1.1 directory on every node (master, slave1, slave2, slave3, slave4). Then, in the file directory, run bash submit_task.sh; the task status can be checked at http://master:8090/console.html.

    Then, also in the file directory, run bash query_task.sh.
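
For reference, a rough sketch of what these scripts boil down to, expressed as direct HTTP calls (the JSON file names here are hypothetical; the actual task spec and query body come from the downloaded test files):

    # submit an indexing task to the overlord on master (port 8090)
    curl -X POST -H 'Content-Type: application/json' \
         -d @index_task.json \
         http://master:8090/druid/indexer/v1/task

    # send a query to the broker on slave3 (port 8082)
    curl -X POST -H 'Content-Type: application/json' \
         -d @query.json \
         'http://slave3:8082/druid/v2/?pretty'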





References:

druid.io — local cluster setup / scaling out a cluster:

http://druid.io/docs/latest/tutorials/cluster.html
