Lazy essentials: quickly building a distributed ELK logging center with .NET Core

This article is synchronized from my blog, where updates land first. Please credit the source when reposting!

Foreword

What is ELK

ELK is short for Elasticsearch, Logstash, and Kibana: a distributed logging solution that collects, processes, and analyzes logs from different services, then presents every aspect of the collected information in Kibana as charts or tables.

What can be done

(A) As an operations platform for massive logs, ELK can be used for:

  • Distributed querying and centralized management of log data
  • System monitoring, covering both hardware and the various application components
  • Troubleshooting
  • Security information and event management
  • Reporting

(B) As a component of a big-data operations platform, it mainly solves:

  • Log querying, troubleshooting, and online inspection
  • Server monitoring, application monitoring, error alerting, and bug management
  • Performance analysis, user behavior analysis, security vulnerability analysis, and time management

Preparation Before Installation

My system environment is CentOS Linux release 7.6.1810 (Core).

  1. Install git (used to download the GitHub project mentioned below; curl would also work, but the project updates quickly and git makes pulling updates easy)
  2. Install Docker (the project below is deployed on Docker)
  3. Install Docker Compose (used in this article to bring up the services)

With that, everything is in place; all that's missing is the east wind.

Lazy Installation

Installing ELK normally is fairly involved: the tutorials online basically have you download, install, and configure each component one by one. Browsing around GitHub, I found a lazy Docker-based way to install and deploy it all: the docker-elk project (project address).
First create a folder named elk under your root directory, then download the project into it with git. Enter the directory and you will see a structure like this:
[root@localhost docker-elk]# ls
docker-compose.yml  docker-stack.yml  elasticsearch  extensions  filebeat  kibana  LICENSE  logstash  README.md

Run the following command in the directory to build and start the services. Remember to add -d, or the services will stop when you exit the terminal. The process takes a while, since quite a few images have to be downloaded.

[root@localhost docker-elk]# docker-compose up -d

Once everything is up, the following default ports are open:

  • 5000: Logstash TCP input (the channel that receives log data)
  • 9200: Elasticsearch HTTP (the HTTP channel of ES)
  • 9300: Elasticsearch TCP transport (the TCP channel of ES)
  • 5601: Kibana (the management UI)

The relationship between these services is shown below: the application pushes its logs to Logstash, Logstash stores them in ES, and finally Kibana presents the data.
[Figure: service relationship diagram]
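Before wiring up a real application, you can smoke-test the Logstash TCP input by hand. Below is a minimal C# sketch, assuming the docker-elk pipeline accepts newline-delimited messages on tcp://127.0.0.1:5000 (the same address the NLog config uses later):

using System;
using System.Net.Sockets;
using System.Text;

class LogstashSmokeTest
{
    static void Main()
    {
        // Connect to the Logstash TCP input exposed by docker-elk (port 5000 by default)
        using (var client = new TcpClient("127.0.0.1", 5000))
        using (var stream = client.GetStream())
        {
            // Send one newline-terminated message; how it is parsed depends on the pipeline
            byte[] payload = Encoding.UTF8.GetBytes("hello from a manual smoke test\n");
            stream.Write(payload, 0, payload.Length);
        }
        Console.WriteLine("Log line sent to Logstash.");
    }
}

If the line later shows up in Kibana, the ingestion path is working end to end.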
Logstash of course also has many Beats plugins for collecting different kinds of logs, and under high concurrency you can add Redis or Kafka as a buffering middleware layer, as in the architecture diagram below.
[Figure: a more complex architecture with message-queue middleware]
After the installation completes, we can open the following two addresses in a browser. You will be prompted for a username and password:
username: elastic
password: changeme
These are the defaults; how to change the password is covered later.

  1. Open http://127.0.0.1:9200/ and you will see the ES version information and so on.
    [Figure: ES information]
  2. Open http://127.0.0.1:5601/ to reach the Kibana interface; if ES has not finished starting, Kibana will report that the ES data source cannot be found.
    [Figure: Kibana interface]

Good: installation and startup are all done. Isn't that simple? Compared with the long-winded installation guides online, there are really only two steps here: download the project and run the docker-compose file.
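If you would rather verify ES from code than from the browser, here is a minimal C# sketch that calls the root endpoint with the default elastic/changeme credentials (assuming you have not changed them yet):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class EsPing
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Default docker-elk credentials: elastic / changeme
            string token = Convert.ToBase64String(Encoding.ASCII.GetBytes("elastic:changeme"));
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // The root endpoint returns the cluster name and version information as JSON
            string body = await http.GetStringAsync("http://127.0.0.1:9200/");
            Console.WriteLine(body);
        }
    }
}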

Collecting Logs from .NET Core with NLog

First, use NuGet to install the following two packages into your project:
NLog.Extensions.Logging and NLog.Web.AspNetCore
Then create the NLog configuration file Nlog.config with the following content:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="internal-nlog.txt">

  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
  </extensions>
  <variable name="logDirectory" value="${basedir}\logs\"/>
  <!--define various log targets-->
  <targets>
    <!--write all logs (including Microsoft's) to a rolling file-->
    <target xsi:type="File"
            name="allfile"
            fileName="${logDirectory}nlog-all-${shortdate}.log"
            layout="${longdate}|${logger}|${uppercase:${level}}|${message} ${exception}" />
    <!--address: the receiving channel of the Logstash TCP input-->
    <target xsi:type="Network"
            name="elastic"
            keepConnection="false"
            address="tcp://127.0.0.1:5000"
            layout="${longdate}|${logger}|${uppercase:${level}}|${message} ${exception}" />
    <!--discard target used to swallow framework noise-->
    <target xsi:type="Null" name="blackhole" />
  </targets>
  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />
    <!--skip Microsoft logs so that only our own logs are forwarded to elastic-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="elastic" />
  </rules>
</nlog>

Then wire NLog in, in Startup > Configure:

// requires: using NLog; using NLog.Extensions.Logging;
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    LogManager.LoadConfiguration("Nlog.config"); // load the configuration file
    loggerFactory.AddNLog();                     // register NLog with the logger factory
}
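As an aside, NLog.Web.AspNetCore also ships host-builder helpers, so if you prefer keeping Startup clean you can configure NLog in Program.cs instead. A sketch of that variant, using NLogBuilder and UseNLog from the NLog.Web namespace:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using NLog.Web;

public class Program
{
    public static void Main(string[] args)
    {
        // Load Nlog.config once, before the host starts
        NLogBuilder.ConfigureNLog("Nlog.config");

        WebHost.CreateDefaultBuilder(args)
               .UseStartup<Startup>()
               .UseNLog() // registers NLog as a provider for the whole host
               .Build()
               .Run();
    }
}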

Next, do a simple test in your project:

Logger log = NLog.LogManager.GetCurrentClassLogger();
log.Debug("test log content");
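Because NLog is registered as a logging provider, logs written through ASP.NET Core's own ILogger&lt;T&gt; abstraction end up in the same targets. A hypothetical controller (class and action names here are purely illustrative):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    // ILogger<T> comes from dependency injection; with NLog registered,
    // these entries are routed to the targets defined in Nlog.config
    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        _logger.LogDebug("test log content from ILogger");
        return Ok();
    }
}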

Configuring Kibana to Analyze the Logs We Just Sent

Kibana needs data before it can show anything, so make sure the test above has written some log data first; then configure an index pattern to view it.
After logging in, click around the left-hand menu until you reach the screen below.
[Figure: Kibana management interface]
Click Create index pattern to reach the interface below, where you configure which ES index to display. The index nci-bids-log-2019xxx here was generated by Logstash; yours may be named logstash-2019xxxx. I renamed mine in the configuration to make projects easier to tell apart.
[Figure: index pattern configuration]
Match the index by name; fuzzy matching is supported, with rules that look like wildcards or regular expressions. On a successful match you get the prompt below; then click Next step.
[Figure: successful index pattern match]
Here you choose a time field: pick @timestamp, then continue to finish creating the pattern. Clicking the first menu item on the left now shows a chart plus the log details.
[Figure: log information]
Explore the other features on your own.

Configuration

The ELK configuration files are all mounted into the containers as volumes; each service folder listed above contains a corresponding config directory.
For example, for the Logstash configuration, cd logstash/config; the logstash.yml inside is its configuration file.
The version used here needs a password for ES and therefore runs the commercial (trial) X-Pack license, which is valid for 30 days. If you do not want that, go to the ES configuration directory and change the xpack settings as follows:

xpack.license.self_generated.type: basic
xpack.security.enabled: false
xpack.monitoring.collection.enabled: false

Later I will share how to add Kafka as a middleware layer, along with some of the more specific configuration details.


Source: www.cnblogs.com/ShaoJianan/p/11455250.html