On Comparing the Processing Speed and Performance of Server-Side Software

Disclaimer: This is an original article by the blogger, licensed under the CC 4.0 BY-SA agreement. When reproducing it, please attach the original source link and this statement.
This link: https://blog.csdn.net/cuiyaonan2000/article/details/100554570

Preface

          Technology selection these days pays particular attention to the QPS a piece of software can deliver; that is inevitable. This article therefore summarizes the QPS differences between services at the application level.

QPS

         QPS (queries per second) = concurrency / average response time

         E.g.:

  1. If 100 requests can be handled at a time and each request takes 100 ms, QPS = 100 / 0.1 = 1000.
  2. If 50 requests can be handled at a time and each request takes 200 ms, QPS = 50 / 0.2 = 250.
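The formula and both examples can be sketched in a few lines (a minimal illustration; the function name is my own, not from the article):

```python
def qps(concurrency: int, avg_response_time_s: float) -> float:
    """QPS = concurrency / average response time (in seconds)."""
    return concurrency / avg_response_time_s

print(qps(100, 0.1))  # 100 requests at 100 ms each -> 1000.0
print(qps(50, 0.2))   # 50 requests at 200 ms each -> 250.0
```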

 

Peak QPS

      Formula: (total daily PV × 80%) / (seconds per day × 20%) = requests per second at peak time (peak QPS)

      Machines: peak QPS / QPS of a single machine = number of machines needed

      (The rule of thumb is that 80% of a day's traffic arrives in 20% of the day's hours.)

      Q: For 300w (3,000,000) PV per day, how many QPS must a single machine handle?
      A: (3,000,000 × 0.8) / (86,400 × 0.2) ≈ 139 (QPS)

      Q: If one machine handles 58 QPS, how many machines are needed to support the load?

      A: 139 / 58 ≈ 2.4, rounded up to 3
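The peak-QPS rule of thumb and the machine count work out as follows (a sketch of my own; the function names are not from the article):

```python
import math

SECONDS_PER_DAY = 86_400

def peak_qps(daily_pv: int) -> float:
    """80% of daily PV is assumed to arrive in 20% of the day."""
    return (daily_pv * 0.8) / (SECONDS_PER_DAY * 0.2)

def machines_needed(daily_pv: int, qps_per_machine: float) -> int:
    """Machines required to serve the peak, rounded up."""
    return math.ceil(peak_qps(daily_pv) / qps_per_machine)

print(round(peak_qps(3_000_000)))      # -> 139
print(machines_needed(3_000_000, 58))  # -> 3
```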

 

 

MQ Services

        Kafka is a high-throughput, low-latency, highly concurrent, high-performance messaging middleware that is used very widely in the big-data field. A well-configured Kafka cluster can even achieve hundreds of thousands, or millions, of concurrent writes per second.

        A single RabbitMQ node generally stays within tens of thousands of QPS, while a single Kafka node can sustain a hundred thousand QPS, and can even reach a million.

     Clearly, Kafka's throughput far exceeds RabbitMQ's. That is only natural: the two message queues implement different protocols, and the messaging scenarios they handle are also very different. RabbitMQ is suited to messages with strict data requirements, such as payment or social messages whose data cannot be lost. Kafka's batch-oriented operation does not guarantee that every individual message reaches the consumer intact, so it suits scenarios with large volumes of messages, such as marketing messages.

 

NoSql database

        A single Redis instance can provide roughly 50k QPS, and up to 100k if the server separates reads and writes. In a redis-cluster, too many machines increase the cost of inter-node communication, so a cluster tops out at roughly one million QPS. Redis is an in-memory KV system, and the amount of data it handles is smaller than that of HBase or MongoDB.

        HBase is based on column-family storage and locates data by the three coordinates <key, family:qualifier, timestamp>. Because qualifiers can be extended dynamically (no schema design is needed, and any number of qualifiers can be stored), it is particularly suitable for storing sparse table data (such as web pages on the Internet). HBase does not support secondary indexes; data is read either by key, by a key range, or by a full table scan.
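The three-coordinate addressing can be illustrated with a toy in-memory model (a hypothetical sketch, not the real HBase API): rows are sparse, and each row may carry different, dynamically added qualifiers, with multiple timestamped versions per cell.

```python
# Toy model of HBase's <key, family:qualifier, timestamp> addressing.
# Illustration only -- this is not the HBase client API.
table = {}

def put(row, family, qualifier, ts, value):
    # Qualifiers are created on demand: no schema is declared up front.
    table.setdefault(row, {}).setdefault((family, qualifier), {})[ts] = value

def get(row, family, qualifier):
    # Return the value with the latest timestamp, as HBase does by default.
    versions = table.get(row, {}).get((family, qualifier))
    if not versions:
        return None
    return versions[max(versions)]

put("row1", "page", "title", 1, "Hello")
put("row1", "page", "title", 2, "Hello, world")  # a newer version of the same cell
put("row2", "page", "lang", 1, "en")             # a different qualifier entirely

print(get("row1", "page", "title"))  # -> Hello, world (latest version wins)
print(get("row2", "page", "title"))  # -> None (rows are sparse)
```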

        MongoDB currently has some advantages over HBase in SQL-like query operations: it has secondary indexes and supports a more complex query set than HBase. Its BSON data structure makes processing document-type data more direct. MongoDB also supports MapReduce, but because Hadoop integrates more closely with HBase, MongoDB requires extra handling for data sharding and the other properties MapReduce needs, and is not as direct as HBase in this respect.

The read/write profiles of HBase and MongoDB are opposites: HBase's writes outperform its random reads, while MongoDB's reads seem to outperform its writes. What HBase can accomplish with two machines, MongoDB needs more machines for; but the cost is that after HBase records data, it can only be analyzed afterwards by index range or by full table scan, not queried in real time for specific records. HBase emphasizes data analysis more than real-time data query capability.

|       | Scalability | Table design | Load balancing | Failover | Transactions | Applicable data volume |
| RDBMS | Poor | Flexible | Weak | Poor (implemented synchronously) | Supported | Tens of thousands of rows |
| HBase | Strong | Billions of rows, millions of columns; dynamic columns, so each row may differ; column families introduce multi-versioned data | Strong | HA supported by its components | MVCC; row-level transactions (with row locks) | Hundreds of millions of rows |

 

 

ORM (iBatis / MyBatis)

        The move from iBatis to MyBatis is not just a name change. MyBatis is more powerful without losing its ease of use; on the contrary, in many places it simplifies things with the help of JDK features such as generics and annotations.

       In iBatis, the namespace was optional and had no practical significance. In MyBatis, the namespace finally comes in handy: it makes binding a mapping file to an interface feel very natural. Also, in MyBatis a SQL statement may end with ";", whereas iBatis would complain about it.


time complexity

⑴ Find the basic statement of the algorithm.

The statement executed the most times is the basic statement, usually the body of the innermost loop.

⑵ Calculate the order of magnitude of the basic statement's execution count.

Only the order of magnitude matters: in the function giving the basic statement's execution count, keep only the highest power of n, and ignore all lower-order terms and the coefficient of the highest power. This simplifies the analysis of the algorithm and focuses attention on the most important point: the growth rate.

⑶ Express the algorithm's time performance with big-O notation.

Put the order of magnitude of the basic statement's execution count into big-O notation.
If the algorithm contains nested loops, the basic statement is usually the innermost loop body; if the algorithm contains sequential (parallel) loops, the time complexities of the parallel loops are added.

 

example

temp=i; i=j; j=temp;

Each of the three statements above has a frequency of 1; the execution time of this program segment is a constant independent of the problem size n. The time complexity of the algorithm is constant order, denoted T(n) = O(1). Note: if an algorithm's execution time does not grow as the problem size n grows, then even if the algorithm contains thousands of statements, its execution time is nothing but a large constant. The time complexity of such algorithms is O(1).

 

 

sum=0;                 // 1 time
for(i=1;i<=n;i++)      // n+1 times
   for(j=1;j<=n;j++)   // n² times
    sum++;             // n² times

Since T(n) = 2n² + n + 2 and Θ(2n² + n + 2) = n² (that is, drop the lower-order terms, drop the constant term, and drop the coefficient of the highest-order term), T(n) = O(n²).
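The count can be checked empirically (a quick sketch of my own, not from the article): instrument the nested loop and confirm that the basic statement runs exactly n² times.

```python
# Count how many times the basic statement (sum++) executes
# in the doubly nested loop above; it should be exactly n * n.

def nested_loop_count(n: int) -> int:
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            count += 1  # the basic statement
    return count

print(nested_loop_count(10))  # -> 100, i.e. n²
```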

 

a=0;               // 1
b=1;               // 1
for (i=1;i<=n;i++) // n+1
{
   s=a+b;          // n
   b=a;            // n
   a=s;            // n
}

T(n) = 2 + (n+1) + 3n = 4n + 3 = O(n).
The time complexity of this program is O(n).
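The operation count above can likewise be verified by instrumenting the loop (my own sketch, not from the article): two initializations, n+1 loop-condition tests, and three body statements executed n times each give 4n + 3.

```python
# Count every counted operation in the linear loop above:
# 2 initializations + (n+1) loop tests + 3 body statements * n = 4n + 3.

def linear_op_count(n: int) -> int:
    ops = 0
    a, b = 0, 1
    ops += 2                   # a=0; b=1
    for i in range(1, n + 1):
        s = a + b; ops += 1    # s=a+b
        b = a;     ops += 1    # b=a
        a = s;     ops += 1    # a=s
    ops += n + 1               # loop-condition tests
    return ops

print(linear_op_count(10))  # -> 43, i.e. 4n + 3
```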

 

 

 

 

 

 

 

 
