In-depth understanding of ElasticSearch keywords (2)

  • real-time pipelining capabilities

    Logstash is an open source data collection engine with real-time pipelining capabilities; events flow through a pipeline of inputs, filters, and outputs.

  • Codecs

    Codecs are basically stream filters that can operate as part of an input or output.

    Codecs enable you to easily separate the transport of your messages from the serialization process.

    Popular codecs include json, msgpack, and plain.

    A codec plugin changes the data representation of an event.
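
    As a conceptual illustration only (this is plain Go, not Logstash code, and the event fields are made up), the sketch below serializes the same event the way a json codec and a plain codec might; the event and its transport stay the same, only the encoding changes.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // event is a hypothetical log event used only for this illustration.
    type event struct {
        Host    string `json:"host"`
        Message string `json:"message"`
    }

    func main() {
        e := event{Host: "web-01", Message: "user logged in"}

        // "json"-style codec: a structured representation of the event.
        j, _ := json.Marshal(e)
        fmt.Println(string(j))

        // "plain"-style codec: the same event as a flat line of text.
        fmt.Printf("%s %s\n", e.Host, e.Message)
    }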

  • Beats

    Beats are open source data shippers that you install as agents on your servers to send operational data to Elasticsearch.

  • ILM (Index Lifecycle Management)

    Use the index lifecycle management (ILM) feature in Elasticsearch to manage your Filebeat indices as they age.

    ILM defines four index lifecycle phases:

    • Hot : The index is actively being updated and queried
    • Warm : The index is no longer being updated but is still being queried
    • Cold : The index is no longer being updated and is seldom queried. The information still needs to be searchable, but it’s okay if those queries are slower
    • Delete : The index is no longer needed and can safely be removed

    An index’s lifecycle policy specifies which phases are applicable, what actions are performed in each phase, and when it transitions between phases.
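
    As a minimal sketch of how such a policy might be created through the ILM REST API (assuming an Elasticsearch node at localhost:9200; the policy name, rollover thresholds, and delete age below are made up):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // An illustrative policy: roll the index over in the hot phase, delete it after 90 days.
    const policy = `{
      "policy": {
        "phases": {
          "hot":    { "actions": { "rollover": { "max_age": "30d", "max_size": "50gb" } } },
          "delete": { "min_age": "90d", "actions": { "delete": {} } }
        }
      }
    }`

    func main() {
        req, _ := http.NewRequest(http.MethodPut,
            "http://localhost:9200/_ilm/policy/filebeat-example-policy", // hypothetical policy name
            bytes.NewBufferString(policy))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }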

  • Data stream

    We recommend using data streams to manage time series data.

    A data stream lets you store append-only time series data across multiple indices while giving you a single named resource for requests.

    A data stream consists of one or more hidden, auto-generated backing indices.

    Each data stream requires a matching index template. The same index template can be used for multiple data streams.

  • Index template

    An index template is a way to tell Elasticsearch how to configure an index when it is created.

    There are two types of templates: index templates and component templates.
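
    A similar sketch (again against a local node, with made-up names) creates a composable index template that a data stream can match; "data_stream": {} marks matching indices as backing indices of a data stream, the lifecycle setting refers to the hypothetical policy from the previous sketch, and component templates, if any, would be listed under composed_of.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // An illustrative composable index template for a data stream.
    const template = `{
      "index_patterns": ["logs-example-*"],
      "data_stream": {},
      "template": {
        "settings": { "index.lifecycle.name": "filebeat-example-policy" }
      }
    }`

    func main() {
        req, _ := http.NewRequest(http.MethodPut,
            "http://localhost:9200/_index_template/logs-example-template",
            bytes.NewBufferString(template))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // a data stream such as logs-example-web can now be created
    }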

  • filebeat -e

    On a Linux system, including inside a Docker container, Filebeat is started from the command line with: sudo ./filebeat -e

    The -e flag logs to stderr and disables syslog/file output, which is convenient when running Filebeat in the foreground.

  • Harvester

    Filebeat consists of two main components: inputs and harvesters.

    A harvester is responsible for reading the content of a single file.

    The harvester reads each file, line by line, and sends the content to the output.

    One harvester is started for each file.
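
    The sketch below is not Filebeat code, only a toy illustration of what one harvester conceptually does: open a single file, read it line by line, and hand each line to an output (the file path is hypothetical, and a printed line stands in for the real output).

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
    )

    // harvest reads a single file line by line and sends each line to out,
    // mirroring the one-harvester-per-file model described above.
    func harvest(path string, out chan<- string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            out <- scanner.Text()
        }
        return scanner.Err()
    }

    func main() {
        out := make(chan string)
        go func() {
            if err := harvest("/var/log/example.log", out); err != nil {
                log.Println(err)
            }
            close(out)
        }()
        for line := range out {
            fmt.Println(line)
        }
    }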

  • Go Glob

    func Glob(pattern string) (matches []string, err error)
    

    Glob returns the names of all files matching pattern or nil if there is no matching file.

    The syntax of patterns is the same as in Match.
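
    A short usage example of filepath.Glob from the standard library (the pattern is arbitrary; Filebeat uses glob-based paths like this to find the log files it harvests):

    package main

    import (
        "fmt"
        "log"
        "path/filepath"
    )

    func main() {
        matches, err := filepath.Glob("/var/log/*.log")
        if err != nil {
            log.Fatal(err) // only returned when the pattern itself is malformed
        }
        for _, m := range matches {
            fmt.Println(m)
        }
    }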

  • Log Stream

    A log stream is a sequence of log events that share the same source.

  • file rotation

    When dealing with file rotation, avoid harvesting symlinks.

    In information technology, log rotation is an automated process used in system administration in which log files are compressed, moved (archived), renamed or deleted once they are too old or too big.

    New incoming log data is directed into a fresh file at the same location.

  • Seccomp

    " Understanding Seccomp "

    On Linux 3.17 and later, Filebeat can take advantage of secure computing mode, also known as seccomp.

  • Index mappings, settings, aliases

    Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

  • mapping

    Mapping is the process of defining how a document, and the fields it contains, are stored and indexed.

    A mapping definition has two parts: metadata fields and fields (the fields or properties of the document).

    Metadata fields are used to customize how a document’s associated metadata is treated, such as the _index, _id, _source, and _type fields.


  • setting

    Index Modules are modules created per index and control all aspects related to an index.

    Index level settings can be set per-index.

  • aliases

    An index alias is a secondary name used to refer to one or more existing indices.
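
    Tying the last three items together, here is a minimal sketch (local node; the index, field, and alias names are made up) that creates an index with explicit settings, mappings, and an alias in a single request:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
    )

    // Illustrative index body combining settings, mappings, and aliases.
    const indexBody = `{
      "settings": { "number_of_shards": 1, "number_of_replicas": 1 },
      "mappings": {
        "properties": {
          "message":    { "type": "text" },
          "@timestamp": { "type": "date" }
        }
      },
      "aliases": { "logs-current": {} }
    }`

    func main() {
        req, _ := http.NewRequest(http.MethodPut,
            "http://localhost:9200/logs-example-000001", // hypothetical index name
            bytes.NewBufferString(indexBody))
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // searches can now target the logs-current alias
    }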

  • Processors

    You can define processors in your configuration to process events before they are sent to the configured output.

    The libbeat library provides processors for:

    • reducing the number of exported fields
    • enhancing events with additional metadata
    • performing additional processing and decoding

    Each processor receives an event, applies a defined action to the event, and returns the event.
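
    The processor model itself can be sketched in plain Go; this is only a conceptual illustration, not libbeat’s actual API:

    package main

    import "fmt"

    // Event is a simplified stand-in for a Beats event.
    type Event map[string]interface{}

    // Processor receives an event, applies an action, and returns the event.
    type Processor func(Event) Event

    // dropField reduces the number of exported fields.
    func dropField(name string) Processor {
        return func(e Event) Event {
            delete(e, name)
            return e
        }
    }

    // addMetadata enhances the event with additional metadata.
    func addMetadata(key string, value interface{}) Processor {
        return func(e Event) Event {
            e[key] = value
            return e
        }
    }

    func main() {
        pipeline := []Processor{dropField("password"), addMetadata("env", "staging")}

        e := Event{"message": "login ok", "password": "secret"}
        for _, p := range pipeline {
            e = p(e)
        }
        fmt.Println(e) // map[env:staging message:login ok]
    }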

  • Text Analysis

    Text analysis is the process of converting unstructured text, like the body of an email or a product description, into a structured format that’s optimized for search.
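
    Very roughly, analysis turns free-form text into a stream of normalized tokens. The toy sketch below imitates only the simplest steps (lowercasing and splitting on non-alphanumeric characters); real analyzers also handle stemming, stop words, synonyms, and more.

    package main

    import (
        "fmt"
        "strings"
        "unicode"
    )

    // analyze is a toy stand-in for an analyzer: lowercase the input and
    // split it into tokens on anything that is not a letter or digit.
    func analyze(text string) []string {
        return strings.FieldsFunc(strings.ToLower(text), func(r rune) bool {
            return !unicode.IsLetter(r) && !unicode.IsDigit(r)
        })
    }

    func main() {
        fmt.Println(analyze("The QUICK brown fox, jumping over 2 lazy dogs!"))
        // Output: [the quick brown fox jumping over 2 lazy dogs]
    }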
