Enabling Metrics

Druid nodes periodically emit metrics, and different metrics monitors can be included. Each node can override the default list of monitors.

|Property|Description|Default|
|--------|-----------|-------|
|druid.monitoring.emissionPeriod|How often metrics are emitted.|PT1m|
|druid.monitoring.monitors|Sets the list of Druid monitors used by a node. See below for names and more information. For example, you can specify monitors for a Broker with druid.monitoring.monitors=["io.druid.java.util.metrics.SysMonitor","io.druid.java.util.metrics.JvmMonitor"].|none (no monitors)|
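
As an illustration, a Historical node's runtime.properties might enable the JVM and Historical monitors and emit every thirty seconds. This is a minimal sketch; the thirty-second period and the choice of monitors are illustrative, not defaults.

```
# Emit metrics every 30 seconds instead of the default PT1m (illustrative choice)
druid.monitoring.emissionPeriod=PT30S
# Report JVM statistics and Historical segment statistics (see the monitor list below)
druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor","io.druid.server.metrics.HistoricalMetricsMonitor"]
```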

The following monitors are available:

|Name|Description|
|----|-----------|
|io.druid.client.cache.CacheMonitor|Emits metrics (to logs) about the segment results cache for Historical and Broker nodes. Reports typical cache statistics including hits, misses, rates, and size (bytes and number of entries), as well as timeouts and errors.|
|io.druid.java.util.metrics.SysMonitor|Uses the SIGAR library to report on various system activities and statuses.|
|io.druid.server.metrics.HistoricalMetricsMonitor|Reports statistics on Historical nodes.|
|io.druid.java.util.metrics.JvmMonitor|Reports various JVM-related statistics.|
|io.druid.java.util.metrics.JvmCpuMonitor|Reports statistics of CPU consumption by the JVM.|
|io.druid.java.util.metrics.CpuAcctDeltaMonitor|Reports consumed CPU as per the cpuacct cgroup.|
|io.druid.java.util.metrics.JvmThreadsMonitor|Reports JVM thread statistics, such as the number of total, daemon, started, and died threads.|
|io.druid.segment.realtime.RealtimeMetricsMonitor|Reports statistics on Realtime nodes.|
|io.druid.server.metrics.EventReceiverFirehoseMonitor|Reports how many events have been queued in the EventReceiverFirehose.|
|io.druid.server.metrics.QueryCountStatsMonitor|Reports how many queries have been successful/failed/interrupted.|
|io.druid.server.emitter.HttpEmitterMonitor|Reports internal metrics of the http or parametrized emitter (see below). Must not be used with another emitter type. See the description of the metrics here: https://github.com/druid-io/druid/pull/4973.|

Emitting Metrics

The Druid servers emit various metrics and alerts via something we call an Emitter. Three emitter implementations are included with the code: a "noop" emitter, one that just logs to log4j ("logging", which is used by default if no emitter is specified), and one that POSTs JSON events to a server ("http"). The properties for using the logging emitter are described below.

|Property|Description|Default|
|--------|-----------|-------|
|druid.emitter|Setting this value to "noop", "logging", "http" or "parametrized" will initialize one of the emitter modules. The value "composing" can be used to initialize multiple emitter modules.|noop|

Logging Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|druid.emitter.logging.loggerClass|Choices: HttpPostEmitter, LoggingEmitter, NoopServiceEmitter, ServiceEmitter. The class used for logging.|LoggingEmitter|
|druid.emitter.logging.logLevel|Choices: debug, info, warn, error. The log level at which messages are logged.|info|
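
A minimal sketch of a node configured to log its metrics, using only the properties above; the debug level is an illustrative choice, not the default:

```
# Use the logging emitter and write each metric event to log4j at debug level
druid.emitter=logging
druid.emitter.logging.loggerClass=LoggingEmitter
druid.emitter.logging.logLevel=debug
```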

Http Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|druid.emitter.http.flushMillis|How often the internal message buffer is flushed (data is sent).|60000|
|druid.emitter.http.flushCount|How many messages the internal message buffer can hold before flushing (sending).|500|
|druid.emitter.http.basicAuthentication|Login and password for authentication in "login:password" form, e.g. druid.emitter.http.basicAuthentication=admin:adminpassword.|not specified = no authentication|
|druid.emitter.http.flushTimeOut|The timeout after which an event should be sent to the endpoint, even if internal buffers are not filled, in milliseconds.|not specified = no timeout|
|druid.emitter.http.batchingStrategy|The strategy for how the batch is formatted. "ARRAY" means [event1,event2], "NEWLINES" means event1\nevent2, "ONLY_EVENTS" means event1event2.|ARRAY|
|druid.emitter.http.maxBatchSize|The maximum batch size, in bytes.|the minimum of (10% of JVM heap size divided by 2) or (5191680 (i.e. 5 MB))|
|druid.emitter.http.batchQueueSizeLimit|The maximum number of batches in the emitter queue, if there are problems with emitting.|the maximum of (2) or (10% of the JVM heap size divided by 5MB)|
|druid.emitter.http.minHttpTimeoutMillis|If the speed at which batches are filled would impose a send timeout smaller than this value, the batch is not sent to the endpoint at all, because the send would likely fail (the data cannot be sent that fast). Configure this based on the emitter/successfulSending/minTimeMs metric. Reasonable values are 10ms to 100ms.|0|
|druid.emitter.http.recipientBaseUrl|The base URL to emit messages to. Druid will POST JSON to be consumed at the HTTP endpoint specified by this property.|none, required config|
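
Putting these together, a sketch of an HTTP emitter configuration might look like the following; the endpoint URL, credentials, and flush settings are placeholders, not recommended values:

```
# POST batched JSON metric events to a (hypothetical) collector endpoint
druid.emitter=http
druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid
# Flush every 30 seconds or whenever 250 events have accumulated (illustrative values)
druid.emitter.http.flushMillis=30000
druid.emitter.http.flushCount=250
# Optional basic auth in "login:password" form (placeholder credentials)
druid.emitter.http.basicAuthentication=admin:adminpassword
```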

Parametrized Http Emitter Module

druid.emitter.parametrized.httpEmitting.* configs correspond to the configs of the Http Emitter Module (see above), except recipientBaseUrl. E.g. druid.emitter.parametrized.httpEmitting.flushMillis, druid.emitter.parametrized.httpEmitting.flushCount, etc.

The additional configs are:

|Property|Description|Default|
|--------|-----------|-------|
|druid.emitter.parametrized.recipientBaseUrlPattern|The URL pattern to send an event to, based on the event's feed. E.g. http://foo.bar/{feed}, which will send an event to http://foo.bar/metrics if the event's feed is "metrics".|none, required config|
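
For example, a sketch of a parametrized emitter that routes each event feed to its own path; the collector host is a placeholder, and the flush settings mirror the Http Emitter Module configs above:

```
# Route events by feed name, e.g. the "metrics" feed goes to http://collector.example.com/druid/metrics (placeholder host)
druid.emitter=parametrized
druid.emitter.parametrized.recipientBaseUrlPattern=http://collector.example.com/druid/{feed}
# Http Emitter Module configs are set under the httpEmitting prefix (illustrative values)
druid.emitter.parametrized.httpEmitting.flushMillis=60000
druid.emitter.parametrized.httpEmitting.flushCount=500
```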

Composing Emitter Module

|Property|Description|Default|
|--------|-----------|-------|
|druid.emitter.composing.emitters|List of emitter modules to load, e.g. ["logging","http"].|[]|
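
A sketch of a composing configuration that both logs metrics and POSTs them to an HTTP endpoint; the endpoint URL is a placeholder:

```
# Load both the logging and http emitter modules
druid.emitter=composing
druid.emitter.composing.emitters=["logging","http"]
# Each composed emitter is then configured with its own properties
druid.emitter.logging.logLevel=info
druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid
```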

Graphite Emitter

To use Graphite as an emitter, set druid.emitter=graphite. For configuration details, see the Graphite emitter extension documentation.

Metadata Storage

These properties specify the JDBC connection and other configuration around the metadata storage. The only processes that connect to the metadata storage with these properties are the Coordinator, Indexing service, and Realtime Nodes.

|Property|Description|Default|
|--------|-----------|-------|
|druid.metadata.storage.type|The type of metadata storage to use. Choose from "mysql", "postgresql", or "derby".|derby|
|druid.metadata.storage.connector.connectURI|The JDBC URI for the database to connect to.|none|
|druid.metadata.storage.connector.user|The username to connect with.|none|
|druid.metadata.storage.connector.password|The Password Provider or String password used to connect with.|none|
|druid.metadata.storage.connector.createTables|If Druid requires a table and it doesn't exist, create it?|true|
|druid.metadata.storage.tables.base|The base name for tables.|druid|
|druid.metadata.storage.tables.segments|The table to use to look for segments.|druid_segments|
|druid.metadata.storage.tables.rules|The table to use to look for segment load/drop rules.|druid_rules|
|druid.metadata.storage.tables.config|The table to use to look for configs.|druid_config|
|druid.metadata.storage.tables.tasks|Used by the indexing service to store tasks.|druid_tasks|
|druid.metadata.storage.tables.taskLog|Used by the indexing service to store task logs.|druid_taskLog|
|druid.metadata.storage.tables.taskLock|Used by the indexing service to store task locks.|druid_taskLock|
|druid.metadata.storage.tables.supervisors|Used by the indexing service to store supervisor configurations.|druid_supervisors|
|druid.metadata.storage.tables.audit|The table to use for audit history of configuration changes, e.g. Coordinator rules.|druid_audit|
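
As an illustration, a MySQL-backed metadata store could be configured as follows; the host, database name, and credentials are placeholders (and the MySQL metadata storage extension must also be loaded, which is outside the scope of this table):

```
# Use MySQL instead of the default Derby store (placeholder host, database, and credentials)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
druid.metadata.storage.connector.createTables=true
```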
