.NET Core Microservice Mall Series (13): Building a Log4net + ELK + Kafka Logging Framework


In an earlier post in this series, logs were sent to ELK directly with NLog. This time we will use Docker to build ELK plus Kafka, and swap NLog out for Log4net.

A. Building Kafka

1. Pull the images

# pull the ZooKeeper image
docker pull wurstmeister/zookeeper

# pull the Kafka image
docker pull wurstmeister/kafka:2.11-0.11.0.3

2. Start the containers

# start ZooKeeper
docker run -d --name zookeeper --publish 2181:2181 --volume /etc/localtime:/etc/localtime wurstmeister/zookeeper

# start Kafka
docker run -d --name kafka --publish 9092:9092 \
--link zookeeper \
--env KAFKA_ZOOKEEPER_CONNECT=192.168.3.131:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=192.168.3.131 \
--env KAFKA_ADVERTISED_PORT=9092 \
--volume /etc/localtime:/etc/localtime \
wurstmeister/kafka:2.11-0.11.0.3

3. Test Kafka

# check the Kafka container ID
docker ps

# enter the container
docker exec -it [container ID] /bin/bash

# create a topic
bin/kafka-topics.sh --create --zookeeper 192.168.3.131:2181 --replication-factor 1 --partitions 1 --topic mykafka

# list topics
bin/kafka-topics.sh --list --zookeeper 192.168.3.131:2181

# start a console producer
bin/kafka-console-producer.sh --broker-list 192.168.3.131:9092 --topic mykafka

# open another shell into the container, then start a console consumer
bin/kafka-console-consumer.sh --zookeeper 192.168.3.131:2181 --topic mykafka --from-beginning

Type a message on the producer side; if the consumer side receives it, the setup is working.
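Besides the console scripts, you can also smoke-test the broker from .NET. Below is a minimal sketch using the Confluent.Kafka NuGet client; that package and the message contents are illustrative choices, not part of the original setup:

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class KafkaSmokeTest
{
    static async Task Main()
    {
        // broker address matches the container started above
        var config = new ProducerConfig { BootstrapServers = "192.168.3.131:9092" };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            // publish one message to the topic created earlier
            var result = await producer.ProduceAsync(
                "mykafka", new Message<Null, string> { Value = "hello from .NET" });

            Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
        }
    }
}

If the console consumer from the previous step is still running, the message should show up there immediately.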


B. Installing ELK with Docker

1. Pull the image
docker pull sebp/elk

2. Start ELK
docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -e ES_MIN_MEM=128m -e ES_MAX_MEM=2048m -d --name elk sebp/elk

If startup fails, it is usually because vm.max_map_count is too small for the elasticsearch user; it must be at least 262144.

Switch to root and execute the command:

sysctl -w vm.max_map_count=262144

Check the result:

sysctl -a | grep vm.max_map_count

It should display:

vm.max_map_count = 262144

A change made this way is lost when the virtual machine restarts. To make it permanent, add the following line to /etc/sysctl.conf:

vm.max_map_count = 262144

Wait a few dozen seconds, then browse to ports 9200 and 5601 to see the Elasticsearch and Kibana pages.
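If you prefer to check Elasticsearch from code rather than a browser, a tiny sketch with HttpClient does the same thing (the IP is the host used throughout this post):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ElasticsearchPing
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Elasticsearch answers on 9200 with its cluster info as JSON
            var body = await client.GetStringAsync("http://192.168.3.131:9200");
            Console.WriteLine(body);
        }
    }
}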

Next, Logstash needs some configuration:

1. Check the elk container ID
docker ps

2. Enter the elk container
docker exec -it [container ID] /bin/bash

3. Run
/opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

When the command starts successfully and you see Successfully started Logstash API endpoint {:port=>9600}, type: this is a dummy entry and press Enter to send a simulated test log.

Open a browser and go to http://192.168.3.131:9200/_search?pretty; you will see the log entry we just typed.

Note: if you see the error message "Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the 'path.data' setting", execute the command service logstash stop and then try again.


If the test passes, connectivity between Logstash and ES is fine. Next, we need to configure Logstash to consume messages from Kafka:

1. Go to the config directory
cd /opt/logstash/config

2. Edit the configuration file
vi logstash.conf
input {
        kafka {
                bootstrap_servers => ["192.168.3.131:9092"]
                client_id => "test"
                group_id => "test"
                consumer_threads => 5
                decorate_events => true
                topics => "mi"
        }
}

filter {
        json {
                source => "message"
        }
}

output {
        elasticsearch {
                hosts => ["localhost"]
                index => "mi-%{app_id}"
                codec => "json"
        }
}

bootstrap_servers: the Kafka broker address.

The app_id here is used in production scenarios to generate a different ES index per project: the service gets its own index, the Web another, MQ another, and so on, each distinguished by the app_id the client passes in.
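To see why the json filter plus %{app_id} works, it helps to picture the message shape on the topic. The sketch below prints a hypothetical log event; the exact field names emitted by log4net.Kafka.Core are an assumption for illustration, only app_id matters for the index name:

using System;
using Newtonsoft.Json;

class IndexNameDemo
{
    static void Main()
    {
        // once the json filter parses a message like this, app_id becomes
        // a field and the output index resolves to "mi-api-test"
        var logEvent = new
        {
            app_id = "api-test",
            level = "INFO",
            logger = "ValuesController",
            message = "a test log entry",
            time = DateTime.UtcNow
        };
        Console.WriteLine(JsonConvert.SerializeObject(logEvent));
    }
}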

Once configured, load the configuration:

/opt/logstash/bin/logstash -f  /opt/logstash/config/logstash.conf

If that works, Logstash is now consuming data from Kafka. Next, create a new .NET Core API project to test it:

1. Reference the NuGet package Microsoft.Extensions.Logging.Log4Net.AspNetCore;

2. Register Log4Net in the startup file:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((logging) =>
        {
            // filter out logs below Warning level from the System and Microsoft namespaces
            logging.AddFilter("System", LogLevel.Warning);
            logging.AddFilter("Microsoft", LogLevel.Warning);
            logging.AddLog4Net();
        })
        .UseStartup<Startup>()
        .Build();
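If the config file will live somewhere other than the default log4net.config in the content root, the same package accepts an explicit file name; the path below is just an example:

// point Log4Net at a non-default config file location
logging.AddLog4Net("config/log4net.config");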

3. Add log4net.config at the project root and set its "Copy to Output Directory" property to "Copy if newer".

log4net supports custom appenders, so you can define your own appender rules. Here I use a NuGet package a colleague wrote for sending logs to Kafka, log4net.Kafka.Core; after installing it, add the KafkaAppender configuration to log4net.config. The log4net.Kafka.Core source is on GitHub, so you can fork it and make changes.
<?xml version="1.0" encoding="utf-8" ?>
<log4net>
    <appender name="KafkaAppender" type="log4net.Kafka.Core.KafkaAppender, log4net.Kafka.Core">
        <KafkaSettings>
            <broker value="192.168.3.131:9092" />
            <topic value="mi" />
        </KafkaSettings>
        <layout type="log4net.Kafka.Core.KafkaLogLayout,log4net.Kafka.Core" >
            <appid value="api-test" />
        </layout>
    </appender>
    <root>
        <level value="ALL"/>
        <appender-ref ref="KafkaAppender" />
    </root>
</log4net>
broker: the Kafka service address; for a cluster, separate addresses with commas;
topic: the topic that the logs are written to;
appid: a unique service identifier that helps pinpoint the source of a log.
Modify the code in ValuesController to test:
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        private readonly ILogger _logger;

        public ValuesController(ILogger<ValuesController> logger)
        {
            _logger = logger;
        }

        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            _logger.LogInformation("Final Kafka test based on appId!");
            return new string[] { "value1", "value2" };
        }
    }
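To exercise the pipeline a little more, you can log at other levels from the same action; since the root level in log4net.config is ALL, everything below flows through the KafkaAppender. A sketch using the same injected _logger:

// warnings and errors travel the same Kafka -> Logstash -> ES path
_logger.LogWarning("Disk usage above 80% on {Host}", Environment.MachineName);

try
{
    throw new InvalidOperationException("simulated failure");
}
catch (Exception ex)
{
    // the exception details are attached to the log event
    _logger.LogError(ex, "Something went wrong while handling the request");
}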

OK, now run the project, call the API, and open port 5601 to view the result in Kibana:

OK, the build is a success!
Original article: www.cnblogs.com/lonelyxmas/p/11233437.html