
  1. For a high-concurrency application, would you choose to print access logs?
  2. For a distributed application, would you choose to ship all logs to a log center?

Answers:

  1. If you skip printing logs for performance reasons, that is understandable. But think carefully: when something goes wrong, will you be able to investigate quickly?
  2. You may feel that leaving the logs scattered on each machine is convenient enough, and that you can get by without a log center!

  If you still choose to print a large volume of access logs, and to ship those logs to a log center, then this article will be useful to you!

  Building a log center ourselves would not be easy, and it would cost a lot of effort on things like performance and storage capacity!

  So we chose Alibaba Cloud LogHub as our log center and send all logs to it!

The normal way to use LogHub:

  Before getting to the point of this article, let's look at LogHub's own access method and its problems!
  The official documentation recommends accessing LogHub through logProducer.

  logProducer actually does a lot of optimization already, such as batching logs into a single packet once a certain count is reached, sending asynchronously, and so on!

  The reason this article still exists is that those optimizations are not enough: sending logs can still hurt business performance, is still bounded by the memory pool, and can still eat a lot of CPU...

  Now, the integration steps:

  1. Add the Maven dependency:

        <dependency>
            <groupId>com.aliyun.openservices</groupId>
            <artifactId>aliyun-log-logback-appender</artifactId>
            <version>0.1.13</version>
        </dependency>

 

  2. Add the appender to your logback configuration:

    <appender name="LOGHUB-APPENDER" class="com.aliyun.openservices.log.logback.LoghubAppender">
        <endpoint>${loghub.endpoint}</endpoint>
        <accessKeyId>${loghub.accessKeyId}</accessKeyId>
        <accessKey>${loghub.accessKey}</accessKey>
        <projectName>${loghub.projectName}</projectName>
        <logstore>test-logstore</logstore>
        <topic>${loghub.topic}</topic>
        <packageTimeoutInMS>1500</packageTimeoutInMS>
        <logsCountPerPackage>4096</logsCountPerPackage>
        <!-- 4718592=4M, 3145728=3M, 2097152=2M -->
        <logsBytesPerPackage>3145728</logsBytesPerPackage>
        <!-- 17179869184=16G (overflow discarded), 104857600=100M, 2147483647=2G, 536870912=512M -->
        <memPoolSizeInByte>536870912</memPoolSizeInByte>
        <retryTimes>1</retryTimes>
        <maxIOThreadSizeInPool>6</maxIOThreadSizeInPool>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>
    <root level="${logging.level}">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="LOGHUB-APPENDER" />
    </root>

 

  3. Print logs in your code:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    private static final Logger logger = LoggerFactory.getLogger(MyClass.class);
    logger.warn("give me five: {}", name);

 

Problems with this seemingly efficient setup:

  1. Yes, LogHub does send logs asynchronously, but when the network is slow, a large backlog builds up in memory;
  2. The backlog itself is not the scary part. With the configuration above, once the in-memory pool reaches its limit, it stops growing. How does it manage that? Through a lock: all subsequent logging requests are blocked. Think about how terrible that is;
  3. When the network is slow we can open a few more sending threads. Yes, that eases the sending problem to a degree, but it fundamentally does not help; moreover, with more sending threads, thread scheduling overhead gets worse. And it is only an optional knob anyway;

 

To solve these problems, what can we do?

  1. Remove unnecessary log printing. That's stating the obvious; if we could do that, we would have done it long ago!
  2. When the network is slow, print fewer logs; a bit far-fetched, but you can try it!
  3. Use an asynchronous thread to receive and send the logs, which solves the problem at its root!
  4. But if an asynchronous thread does the sending, what do we do when a large volume of logs piles up?
  5. Store the pending logs in local files to absorb the backlog, then send them quickly once the network recovers!
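Point 5 can be sketched in pure JDK code. This is a minimal illustration, not the article's actual implementation; `SpillingBuffer` and all names here are hypothetical, and a real version would store binary log events rather than strings:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SpillingBuffer {
    private final BlockingQueue<String> memory;
    private final Path spillFile;

    public SpillingBuffer(int memoryCapacity, Path spillFile) {
        this.memory = new ArrayBlockingQueue<>(memoryCapacity);
        this.spillFile = spillFile;
    }

    // Try the in-memory queue first; if it is full, append the line to a
    // local file instead of blocking the business thread.
    public synchronized void add(String line) throws IOException {
        if (!memory.offer(line)) {
            Files.write(spillFile, List.of(line), StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    // Consumer side: take the next pending line from memory, or null if none.
    public String poll() {
        return memory.poll();
    }

    // Once the network recovers, move spilled lines back into memory;
    // lines that still do not fit stay on disk for the next round.
    public synchronized void replay() throws IOException {
        if (!Files.exists(spillFile)) return;
        List<String> lines = Files.readAllLines(spillFile, StandardCharsets.UTF_8);
        int i = 0;
        while (i < lines.size() && memory.offer(lines.get(i))) i++;
        if (i == lines.size()) {
            Files.delete(spillFile);
        } else {
            Files.write(spillFile, lines.subList(i, lines.size()), StandardCharsets.UTF_8);
        }
    }
}
```

The key property is that `add` never blocks the caller: when memory is full the log goes to disk, and `replay` drains the file back once there is room again.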

 

  With an asynchronous sending thread plus local disk storage for the backlog, the fundamental problems should be solved!
  But how do we implement it?
  How do we make it asynchronous?
  How do we store logs on disk?

  These are very real questions!

  If you've read this far and feel this is all too basic, feel free to stop reading!

 

Let's look at the concrete implementation:

1. How to make it asynchronous?

  As you can imagine, the basic approach is to use a queue to receive log-write requests, and then start a separate consumer thread to drain it!

  But what problem does that raise? The requests come in from concurrent business threads, so the queue must be thread-safe! Do we use synchronized? A blocking queue?

  Either way, we seem to be turning parallel work into serial work, which will hurt the application's concurrency!

  So we have to reduce the impact of this lock. We can use multiple queues, in the spirit of lock striping: if concurrent capacity is not enough, increase the number of queues (and thus locks)!
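The multi-queue idea can be sketched roughly like this. `StripedLogQueue` is a hypothetical name for illustration; producers pick a stripe by thread id so concurrent threads mostly contend on different locks:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of lock striping for log writes: N independent bounded queues
// reduce contention compared to one shared queue.
public class StripedLogQueue {
    private final BlockingQueue<String>[] queues;
    private final int stripes;

    @SuppressWarnings("unchecked")
    public StripedLogQueue(int stripes, int capacityPerQueue) {
        this.stripes = stripes;
        this.queues = new BlockingQueue[stripes];
        for (int i = 0; i < stripes; i++) {
            queues[i] = new ArrayBlockingQueue<>(capacityPerQueue);
        }
    }

    // Producer side: choose a stripe by thread id; offer() never blocks,
    // it returns false when that stripe is full.
    public boolean offer(String logLine) {
        int idx = (int) (Thread.currentThread().getId() % stripes);
        return queues[idx].offer(logLine);
    }

    // Consumer side: each consumer thread drains one stripe;
    // poll() returns null when that stripe is empty.
    public String poll(int stripe) {
        return queues[stripe].poll();
    }
}
```

To raise concurrent capacity, increase the number of stripes; each stripe can have its own consumer thread.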

  That still sounds abstract, so let's go straight to the ready-made code!

  1. Replace the original logProducer appender with an appender implemented by ourselves, mainly to solve the asynchrony problem:

    <appender name="LOGHUB-APPENDER" class="com.test.AsyncLoghubAppender">
        <endpoint>${loghub.endpoint}</endpoint>
        <accessKeyId>${loghub.accessKeyId}</accessKeyId>
        <accessKey>${loghub.accessKey}</accessKey>
        <projectName>${loghub.projectName}</projectName>
        <logstore>apollo-alarm</logstore>
        <topic>${loghub.topic}</topic>
        <packageTimeoutInMS>1500</packageTimeoutInMS>
        <logsCountPerPackage>4096</logsCountPerPackage>
        <!-- 4718592=4M, 3145728=3M, 2097152=2M -->
        <logsBytesPerPackage>3145728</logsBytesPerPackage>
        <!-- 17179869184=16G (overflow discarded), 104857600=100M, 2147483647=2G, 536870912=512M -->
        <memPoolSizeInByte>536870912</memPoolSizeInByte>
        <retryTimes>1</retryTimes>
        <maxIOThreadSizeInPool>6</maxIOThreadSizeInPool>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>
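The source of `com.test.AsyncLoghubAppender` is not included in this excerpt. As a rough, pure-JDK illustration of its core idea, the business thread only enqueues while a daemon consumer thread performs the slow send; `AsyncSender` and its API below are hypothetical, and a real appender would wire this into the logback `append()` callback with the LogHub producer as the sender:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Sketch: decouple the caller from a slow sender via a bounded queue
// and a single background consumer thread.
public class AsyncSender<E> {
    private final BlockingQueue<E> queue;
    private final Thread consumer;
    private volatile boolean running = true;

    public AsyncSender(int capacity, Consumer<E> sender) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.consumer = new Thread(() -> {
            // Keep draining until shutdown AND the queue is empty.
            while (running || !queue.isEmpty()) {
                E event = queue.poll();
                if (event == null) {
                    try {
                        Thread.sleep(10); // nothing pending; back off briefly
                    } catch (InterruptedException ie) {
                        break;
                    }
                    continue;
                }
                sender.accept(event); // the slow (network) send happens here
            }
        }, "async-sender");
        consumer.setDaemon(true);
        consumer.start();
    }

    // Never blocks the caller; returns false when the queue is full
    // (at which point a real implementation would spill to local disk).
    public boolean offer(E event) {
        return queue.offer(event);
    }

    // Stop accepting and wait for the backlog to drain.
    public void shutdown() throws InterruptedException {
        running = false;
        consumer.join();
    }
}
```

On queue overflow, `offer` returns false instead of blocking, which is exactly where the local-file spill from the earlier section would plug in.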


Origin www.cnblogs.com/rewq/p/10988269.html