Getting to Know Little's Law

  1. Introduction
  
Most developers have dealt with "performance" to a greater or lesser extent. This article introduces Little's Law, which is closely related to the common indicators used to measure performance. Before today's protagonist, Little's Law, takes the stage, we need to agree on a "basic language" for describing performance; after all, with a language barrier there can be no communication. What follows is my personal understanding; corrections are welcome where it falls short.
  
  2. "Performance" and "basic language"
  
The definition of performance differs by type of equipment: for a CPU one mainly looks at clock speed, for a disk mainly at IOPS. This article discusses the performance of back-end software services (API services, database services, and so on). Having defined the scope, we should give performance a definition: performance is a service's ability to handle requests. There are three common indicators for measuring it: the number of concurrent users, throughput, and response time.
  
2.1 Number of concurrent users
  
The number of users actually sending requests to the service. Note the difference from the number of online users: for example, if at some moment there are 1000 users online but only 100 of them are performing actions that trigger interaction with the remote service, then for the remote service the number of concurrent users at that moment is 100, not 1000.
  
2.2 Throughput

The number of requests processed per unit time.
  
  2.3 Response time
  
The corresponding English term is response time; some places also use latency. Measuring response time requires a statistical period, from which characteristic values are derived. Common characteristic values include the average, maximum, minimum, and percentile values.
  
3. The protagonist, Little's Law, takes the stage
  
3.1 Definition of Little's Law
  
For any queueing scenario, in a stable system, the number of users in the system at one time is equal to the rate at which users arrive at the system multiplied by the average time each user resides in the system. Expressed as a formula:
  
N = X * R, where

N represents the number of users simultaneously active in the system,

X represents the rate at which users arrive at the system, i.e., the system throughput in the stable (this word is very important!) state, in which the rate at which users arrive equals the rate at which users leave,

R represents the average time each user resides in the system.
  
For example, suppose you are queueing to enter a medical examination center, and you want to estimate how long you will wait from the rate at which people enter.

The center can hold about 600 people, and each examination takes 2 hours,

i.e., R = 2 h and N = 600. By the formula:

X = N / R = 600 / 2 h = 300 / h

so people enter at a rate of 300 per hour.

So if there are 300 people ahead of you, you will have to wait about an hour before entering the examination center.
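The arithmetic of this example can be sketched in code (a minimal sketch; the numbers are exactly the ones above):

```java
public class LittlesLaw {
    // Little's Law: N = X * R, so the arrival rate is X = N / R
    static double arrivalRate(double usersInSystem, double residenceHours) {
        return usersInSystem / residenceHours;
    }

    public static void main(String[] args) {
        double n = 600;  // people the center can hold
        double r = 2.0;  // hours each examination takes
        double x = arrivalRate(n, r);
        System.out.println("Entry rate: " + x + " people/hour"); // 300.0
        // 300 people ahead of you at 300 people/hour is about a 1 hour wait
        System.out.println("Wait: " + (300 / x) + " hour(s)");
    }
}
```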
  
3.2 The relationship between Little's Law and the performance indicators
  
The relationship between the three indicators from the earlier section can be expressed with Little's Law: users continuously sending requests to a server for processing is itself a queueing scenario, and for this scenario Little's Law becomes:

number of concurrent users = throughput * response time
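As a quick illustration of this form of the formula, here is a sketch with made-up figures (a throughput of 2,000 requests/sec and a 40 ms average response time are assumed values for illustration, not measurements from this article):

```java
public class ConcurrencyCheck {
    // concurrency = throughput (req/s) * response time (s)
    static double concurrentUsers(double throughputPerSec, double responseTimeSec) {
        return throughputPerSec * responseTimeSec;
    }

    public static void main(String[] args) {
        double tps = 2000.0; // assumed throughput, req/s
        double rt  = 0.040;  // assumed average response time, 40 ms
        // about 80 requests are in flight at any moment
        System.out.println(concurrentUsers(tps, rt));
    }
}
```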
  
3.3 Seeing Little's Law in examples
  
The following looks at Little's Law through two service instances: an API service and a MySQL service.
  
3.3.1 API service example
  
3.3.1.1 Preparation
  
Based on Spring Boot, expose an interface that sleeps for the duration given by a path parameter, so that the client-side load-testing tool can control the interface's response time.
  
@RestController
public class ApiLatency {

    @RequestMapping("/test/latency/{ms}")
    public String latency(@PathVariable Long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "Hello World!";
    }
}
  
JMeter was then used to apply load to the above interface with different numbers of concurrent users.
  
3.3.1.2 Results and analysis
  
In the initial stage the response time is stable and throughput increases as the number of concurrent users increases; as concurrency continues to grow, the system reaches an "inflection point" where throughput begins to decline and response time begins to rise; increasing concurrency further overloads the system, which enters the saturation region, where response time rises rapidly and throughput drops sharply. The region before the inflection point and just at it is where the system is "stable"; in that region the number of concurrent users, throughput, and average response time conform to Little's Law.
  
3.3.2 MySQL service example
  
3.3.2.1 Preparation
  
Prepare a Tencent Cloud MySQL service (version 5.6, configured with 2 cores and 4 GB) and a cloud server (tested over Tencent Cloud's internal network).
  
Use sysbench (version 0.5) to test the MySQL service.
  
## preset data

sysbench --mysql-host=192.168.0.10 --mysql-port=3306 --mysql-user=root --mysql-password=test --mysql-db=loadtest --mysql-table-engine=innodb --test=/usr/local/share/sysbench/tests/include/oltp_legacy/oltp.lua --oltp_tables_count=8 --oltp-table-size=4000000 --rand-init=on prepare

## run the shell script to test

for i in 5 7 8 10 12 25 50 128 200 400 700 1000

do

sysbench --mysql-host=192.168.0.10 --mysql-port=3306 --mysql-user=root --mysql-password=test --mysql-db=loadtest --test=/usr/local/share/sysbench/tests/include/oltp_legacy/oltp.lua --oltp_tables_count=8 --oltp-table-size=4000000 --num-threads=${i} --oltp-read-only=off --rand-type=special --max-time=180 --max-requests=0 --percentile=99 --oltp-point-selects=4 --report-interval=3 --forced-shutdown=1 run | tee -a sysbench.${i}.oltp.txt

done

## clean up data

sysbench --mysql-host=192.168.0.10 --mysql-port=3306 --mysql-user=root --mysql-password=test --mysql-db=loadtest --mysql-table-engine=innodb --test=/usr/local/share/sysbench/tests/include/oltp_legacy/oltp.lua --oltp_tables_count=8 --oltp-table-size=4000000 --rand-init=on cleanup
  
## meaning of the script parameters
  
--oltp_tables_count=8: the test uses 8 tables.

--oltp-table-size=4000000: each test table contains 4 million rows.

--num-threads=n: the number of concurrent client connections is n.

--oltp-read-only=off: off disables the read-only test model and uses the mixed read-write model.

--rand-type=special: use the "special" random data distribution.

--max-time=180: the test runs for 180 seconds.

--max-requests=0: 0 means the total number of requests is not limited; the test is bounded by max-time instead.

--percentile=99: sets the sampling percentile; the default is 95%, i.e., discard the slowest 1% of requests and take the maximum of the remaining 99%.

--oltp-point-selects=4: the number of SELECT operations per transaction in the oltp test script is 4; the default is 1.
  
3.3.2.2 Results and analysis
  
In the initial stage the response time is essentially stable and throughput increases as the number of concurrent users increases; as concurrency continues to grow, the system reaches the "inflection point" where throughput begins to decline and response time begins to rise; increasing concurrency further overloads the system, which enters the saturation region, where response time rises rapidly and throughput drops sharply. The region before the inflection point and just at it is where the system is "stable"; in that region the number of concurrent users, throughput, and average response time conform to Little's Law.
  
  4. Summary
  
4.1 Little's Law holds when the system is "stable"
  
Based on the two measured examples, the relationship between the number of concurrent users, throughput, and response time can be depicted as a figure.
  
At the beginning, in the "linear growth zone", response time is stable and throughput increases as the number of concurrent users increases;

when the system's resource utilization saturates, the system reaches the "inflection point": as the number of concurrent users continues to grow, throughput begins to decline and response time begins to rise;

increasing the number of concurrent users further overloads the system, which enters the supersaturation zone, where response time rises rapidly and throughput drops sharply.

The region before the inflection point and just at it is where the system is "stable"; in that region the number of concurrent users, throughput, and average response time conform to Little's Law.
  
4.2 The number of concurrent users is not the number of real users
  
For example, if you set the number of JMeter threads to 80, the result does not describe your service's performance under 80 real users; in a real scenario, 1000 real users may not generate as much pressure as these 80 threads (also called virtual users). I tend to understand the measured concurrency as a kind of pressure: 80 concurrent requests certainly put more pressure on a service than 60 concurrent requests do.
  
4.3 Low latency (response time) does not necessarily mean high throughput, and high latency does not necessarily mean low throughput
  
If a program has a single thread that can handle 10 events per second, we say the latency of handling a single event is 100 ms and the throughput is 10/sec.

If a program has four threads and each thread can process 5 events per second, the latency of a single event is 200 ms and the throughput is 20/sec.

If a program has a single thread that can handle 20 events per second, the latency of a single event is 50 ms and the throughput is 20/sec.

From Little's Law we know that number of concurrent users = throughput * response time, so the relationship between latency and throughput is mediated by concurrency; with concurrency set aside, there is no lawful relationship between the two.
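The three cases above follow the relation throughput = concurrency / latency; here is a minimal sketch with the thread counts and latencies from the text:

```java
public class LatencyThroughput {
    // throughput (events/s) = number of worker threads / per-event latency (s)
    static double throughput(int threads, double latencySec) {
        return threads / latencySec;
    }

    public static void main(String[] args) {
        System.out.println(throughput(1, 0.100)); // 1 thread, 100 ms/event
        System.out.println(throughput(4, 0.200)); // 4 threads, 200 ms/event
        System.out.println(throughput(1, 0.050)); // 1 thread, 50 ms/event
    }
}
```

Note how the second and third cases reach the same 20 events/sec throughput at very different latencies, which is exactly the point of this section.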
  
4.4 Throughput must be linked to response time
  
If you look only at throughput, performance numbers mean nothing without the response time. From the earlier analysis we know that, as concurrency increases, throughput first rises and then falls; that is, the same throughput value can occur at different concurrency levels and thus correspond to different response times. For example, if an interface delivers 10,000 TPS with a 5-second response time, that 10,000 TPS is meaningless.
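Plugging the numbers from this example into Little's Law makes the point concrete: sustaining 10,000 TPS at a 5-second response time implies an enormous number of in-flight requests (a sketch; the figures are the illustrative ones above):

```java
public class ThroughputContext {
    // Little's Law: concurrency = throughput (req/s) * response time (s)
    static double impliedConcurrency(double tps, double responseTimeSec) {
        return tps * responseTimeSec;
    }

    public static void main(String[] args) {
        // 10,000 TPS at a 5 s response time implies 50,000 in-flight requests
        System.out.println(impliedConcurrency(10000, 5.0)); // 50000.0
    }
}
```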
  
4.5 Response time and throughput must be linked to the success rate
  
For example, in one interface test, when the number of concurrent users was 4000 the error rate reached 40%, so the 1270.8 TPS measured at that point is meaningless.
  
4.6 More, faster, better
  
My understanding of service performance is handling requests "more, faster, and better": "more" means high throughput, "faster" means short response time, and "better" means an error rate that is as low as possible.
  
4.7 Last but not least
  
In Little's Law we used the words "average response time", but in actual work we usually use a percentile value as the statistical measure of response time, with the average only as a secondary reference. The reason should be familiar: just as the "average wage" usually has little reference value, a few extreme values can drag the average far from what is typical.
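A tiny sketch shows why the percentile is the better statistic; the latency samples below are invented for illustration, and one slow outlier drags the average far above what most users actually experienced:

```java
import java.util.Arrays;

public class LatencyStats {
    // nearest-rank percentile over an ascending-sorted sample
    static double percentile(double[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    static double average(double[] xs) {
        return Arrays.stream(xs).average().orElse(0);
    }

    public static void main(String[] args) {
        // nine fast requests and one very slow one (invented numbers, in ms)
        double[] latencies = {10, 10, 10, 10, 10, 10, 10, 10, 10, 910};
        Arrays.sort(latencies);
        System.out.println("avg = " + average(latencies));        // 100.0
        System.out.println("p50 = " + percentile(latencies, 50)); // 10.0
    }
}
```

The average (100 ms) suggests every request is tenfold slower than what 90% of users saw; the median tells the truer story, and a p99 would surface the outlier explicitly.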
  
5. Extension
  
Eric Man Wong's 2004 paper "Method for Estimating the Number of Concurrent Users" introduces a formula for estimating the number of concurrent users. That formula is equivalent to Little's Law; for the derivation, see "Equivalence of Eric's estimate of the number of concurrent users and Little's law" in the references.
  
6. References

Performance Tuning Guide

How should performance testing be done?

Know thyself: stress testing Aurora

Amazon Aurora sysbench benchmark

Tencent Cloud MySQL high-availability edition performance test report

How to monitor Tomcat performance?

Equivalence of Eric's estimate of the number of concurrent users and Little's law

Origin: www.cnblogs.com/dakunqq/p/11700370.html