Linux Server Performance Tuning: Theory

First, site architecture considerations

(1) Terminology used in evaluating site performance

  1. PV (Page View)

  PV means page views: every page a user loads or refreshes is counted once. Concretely, a PV is produced when the browser issues a request to the web server and the web server receives the request and returns the corresponding page to the browser.

  2. UV (Unique Visitor)

  UV means unique visitors: each client machine that visits the site counts as one visitor. When counted per day, the statistics program covers the period from 00:00 to 24:00 and counts the same client only once during that time, no matter how many pages it views.
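  As a rough illustration of the difference, here is a minimal sketch that derives PV and UV from one day's web-server access log. The log path "access.log" and the use of the client IP as the unique-visitor key are assumptions for illustration only; real statistics programs usually identify visitors by cookie or account instead.

```python
# Minimal sketch: derive PV (total page requests) and UV (distinct clients)
# from one day's combined-format access log. The log path and the choice of
# the client IP as the "unique visitor" key are illustrative assumptions.
def pv_uv(log_path="access.log"):
    pv = 0
    visitors = set()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            pv += 1                    # every served page request is one PV
            visitors.add(fields[0])    # the same client is counted only once for UV
    return pv, len(visitors)

if __name__ == "__main__":
    pv, uv = pv_uv()
    print(f"PV={pv}, UV={uv}")
```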

  3. Concurrent connections (concurrent TCP connections)

  When a page is viewed, the browser establishes connections to the server, and each open connection counts as one concurrent connection. If the current page contains many images, the browser does not fetch them one by one over a single connection; it opens several connections to the server so that text and images are transferred in parallel and the page renders faster.
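  A quick way to see the current figure on a Linux web server is to count established TCP connections on the service ports, for example with `ss -s` or `netstat -ant`, or with a small sketch like the one below (the ports 80/443 are assumptions; adjust them to your listener).

```python
# Minimal sketch: count established TCP connections to the web ports by
# parsing /proc/net/tcp and /proc/net/tcp6 (Linux only). The port numbers
# are illustrative assumptions; `ss -s` gives the same summary from the shell.
WEB_PORTS = {80, 443}

def concurrent_connections():
    established = 0
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)                                  # skip the header line
                for line in f:
                    fields = line.split()
                    local_port = int(fields[1].split(":")[1], 16)
                    if fields[3] == "01" and local_port in WEB_PORTS:
                        established += 1                 # state 01 == ESTABLISHED
        except FileNotFoundError:
            continue
    return established

if __name__ == "__main__":
    print("concurrent connections:", concurrent_connections())
```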

  4. QPS (Queries Per Second)

  QPS, the query rate per second, measures how much traffic a server handles for a particular kind of query within a specified time. On the Internet it is commonly used as the performance metric for DNS servers. For any system, QPS is a very important parameter: it is a comprehensive indicator of the system's maximum throughput capacity.
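  The arithmetic behind the figure is simple; the sketch below shows an average and a rough peak estimate. The 80/20 peak rule (80% of a day's traffic arriving in 20% of the day) and the 10-million-PV input are planning assumptions added here for illustration, not figures from the text.

```python
# Minimal sketch of QPS arithmetic. The 80/20 peak rule of thumb and the
# example daily PV figure are illustrative assumptions.
SECONDS_PER_DAY = 86_400

def average_qps(daily_requests: int) -> float:
    return daily_requests / SECONDS_PER_DAY

def peak_qps_estimate(daily_requests: int) -> float:
    # assume 80% of the day's requests arrive within 20% of the day
    return (daily_requests * 0.8) / (SECONDS_PER_DAY * 0.2)

if __name__ == "__main__":
    daily_pv = 10_000_000                                            # hypothetical 10M PV/day
    print(f"average QPS ~ {average_qps(daily_pv):.1f}")              # ~115.7
    print(f"peak QPS estimate ~ {peak_qps_estimate(daily_pv):.1f}")  # ~463.0
```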

  Assessing data-center network quality

    1) Stability: response delay and packet-loss rate (a simple connect-latency and loss probe is sketched after this list)

    2) Bandwidth quality: test the TCP download speed and the maximum TCP download speed

    3) Access location: how far the access routers sit from the backbone network; the fewer hops in between, the better
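  As a rough stability check, the sketch below measures TCP connect latency and the fraction of failed attempts against a target host. The target host, port and probe count are assumptions for illustration; dedicated tools (ping, mtr, iperf) give a fuller picture of delay, loss and bandwidth.

```python
# Minimal stability probe: TCP connect latency and failure rate to one target.
# Host, port, attempt count and timeout are illustrative assumptions.
import socket
import time

def probe(host: str, port: int = 443, attempts: int = 10, timeout: float = 2.0):
    delays, failures = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                delays.append((time.monotonic() - start) * 1000)   # milliseconds
        except OSError:
            failures += 1
        time.sleep(0.2)                                            # pace the probes
    avg = sum(delays) / len(delays) if delays else float("nan")
    return avg, failures / attempts

if __name__ == "__main__":
    avg_ms, loss = probe("www.example.com")                        # hypothetical target
    print(f"avg connect delay: {avg_ms:.1f} ms, failure rate: {loss:.0%}")
```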

(2) CDN service options

  If your site serves a large number of images or video files, we recommend using a CDN caching/acceleration service, or placing one in front of the system, to speed up client access, relieve pressure on the core data center, and improve the user experience.

  CDN stands for Content Delivery Network. Its purpose is to add a new layer of network architecture on top of the existing Internet and publish the site's content to the network "edge" closest to the user, so that users fetch the content they need from nearby nodes. This improves the response speed of access to the site and thus the user experience.

  Renting a CDN: small and medium sites simply buy the service. CDNs have now moved to a cloud-computing, pay-as-you-go model, so the cost can be calculated precisely.

  Building your own CDN: this option is relatively expensive. To achieve a good caching effect, cache nodes must be deployed across the country, and a self-built intelligent DNS (BIND) system is required. Generally only professional video or image sites consider this option.

(3) Choosing an IDC data center

  Single-carrier (Telecom-only) IDC: suitable for sites with a relatively fixed business model and modest traffic, such as news or government websites.

  Dual-line IDC: because of interconnection problems between the two major domestic networks (China Telecom and China Netcom), Telecom users visiting a Netcom-hosted site, or Netcom users visiting a Telecom-hosted site, get slow access. This gave rise to dual data centers, dual servers, and dual-line server hosting and rental services.

  BGP data center: BGP (Border Gateway Protocol) is the routing protocol used to connect autonomous systems on the Internet. For users, choosing a BGP data center means the site is fast for users on any carrier and access is more stable, with no worry that line problems will make access fast in some regions and slow in others. This is an advantage that a traditional dual-line, dual-IP data center cannot match.

  Cloud computing services: the main platforms at present are Amazon Web Services (AWS) and Alibaba Cloud.

  Cloud services let the development team focus on the product itself rather than buying, configuring and maintaining hardware, and they also reduce the initial capital investment.

  Cloud computing is particularly suitable for websites whose traffic surges on certain days or at certain times of day.

Second, how to choose a server based on its application

(1) What will the server run?

  Load balancer: apart from NIC performance, the requirements on the rest of the server are relatively low.

  Cache server: mainly Varnish and Redis; the CPU and other requirements are modest, but the memory requirement is higher.

  Application server: it carries the heavy business logic and computation, so choose a server fast enough for the web application architecture in use.

  Special applications: besides web applications there are streaming-video encoding, server virtualization, media servers and game servers; these all have definite CPU and memory requirements, at least quad-core or more.

  Public services: mail servers, file servers, DNS servers, domain controllers and the like. Their reliability requirements need not be overly strict.

  Database server: the database server has the highest and most important requirements. It needs a fast enough CPU, enough memory, and sufficiently stable and reliable hardware. SSDs in RAID 10 are recommended, because the database places the highest demands on disk IO.

  Hadoop and Spark distributed computing: storage-dense servers are recommended.

  RabbitMQ cluster: built on Erlang, it has high memory requirements.

(2) How many users must the server support?

  Before a project is implemented, the client usually gives only a rough answer to these questions, so we have to make the design as specific and complete as we can.

(3) How much space is needed to store the data?

(4) How important is the business?

  1. Which CPU to choose

  2. How much memory is needed

  On a considerable number of running servers, CPU utilization is generally only 10-30%, yet cases where insufficient memory makes things run slowly are everywhere. If the server cannot allocate enough memory, the application has to swap data to and from disk, which makes the site unbearably slow (a quick way to check memory and swap headroom is sketched at the end of this subsection).

  For application servers such as Tomcat, Resin and WebLogic, 8 GB of memory is a reference configuration.

  For database servers, the recommendation depends on the number of database instances, table sizes, indexes and the number of users; 16 GB or more is generally recommended, and many projects in a typical company use 24-48 GB of memory.

  Special-purpose servers, such as caching servers running Redis and Memcached, should be configured with as much memory as possible.

  For a file server, 1 GB of memory is sufficient.
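  As mentioned above, a memory shortage usually shows up as swapping long before the CPU becomes the bottleneck. The sketch below reads /proc/meminfo on a Linux box to report available memory and swap in use; the 10% warning threshold is an arbitrary illustration value, not a recommendation from this article.

```python
# Minimal sketch: report memory headroom and swap usage from /proc/meminfo
# (Linux only). The 10% MemAvailable warning threshold is an arbitrary
# illustrative value.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])          # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo()
    total, avail = m["MemTotal"], m["MemAvailable"]
    swap_used = m["SwapTotal"] - m["SwapFree"]
    print(f"memory: {avail // 1024} MiB available of {total // 1024} MiB")
    print(f"swap in use: {swap_used // 1024} MiB")
    if avail < total * 0.10:
        print("warning: less than 10% of memory available, swapping is likely")
```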

  3. What kind of disk storage system

  Cache server: consider RAID 0.

  Server running Nginx + FastCGI: consider RAID 1.

  Server storing important development code, or a network storage server: consider RAID 5.

  Database server: consider SSDs, or RAID 5 / RAID 10.

(5) NIC performance considerations

  It is recommended to configure servers with two NICs: one provides the service, the other handles internal data exchange.

  For Linux cluster architectures built on Keepalived with public addresses, the NIC speed requirement is high; gigabit NICs are recommended.

(6) Server security considerations

  DDoS attacks are common domestically, so configuring a hardware firewall, such as Juniper or Cisco, is generally recommended.

(7) Plan the number of servers sensibly according to the number of racks

(8) Cost considerations: server price

Third, how hardware affects Linux performance

(1) CPU

  The CPU is the foundation of stable operating-system operation, and its speed and performance largely determine overall system performance; therefore, the more CPU cores and the higher the clock frequency, the better.

(2) Memory

  Memory size is an important factor that directly affects Linux performance. If memory is too small, system processes block, applications slow down, and the system may even stop responding.

(3) Disk IO performance

  Disk IO performance directly affects application performance. For applications that read and write frequently, if disk IO performance cannot keep up, the application stalls.

  RAID 0: the lowest cost; requires at least two disks, but offers no fault tolerance or data recovery, so it is used where data-security requirements are low.

  RAID 1: disk utilization is only 50%, so the cost is the highest; used where important data must be protected.

  RAID 5: good read efficiency, average write efficiency; requires at least three disks and tolerates one disk failure without affecting data availability.

  RAID 10: requires at least four disks; every disk has a mirror providing redundancy, it tolerates a disk failure without affecting data availability, and it offers fast read/write performance (the capacity arithmetic behind these levels is sketched below).
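  For planning purposes, the usable capacity and guaranteed fault tolerance of these levels can be worked out as in the sketch below, assuming n identical disks; the 4 TB disk size is an illustrative assumption.

```python
# Minimal sketch of the capacity arithmetic behind the RAID levels above,
# assuming n identical disks. It reproduces only usable capacity and the
# guaranteed number of tolerated disk failures, not performance.
def raid_usable(level: str, disks: int, disk_tb: float):
    """Return (usable capacity in TB, guaranteed disk failures tolerated)."""
    if level == "0":
        return disks * disk_tb, 0              # striping only, no redundancy
    if level == "1":
        return disk_tb, disks - 1              # full mirrors of a single disk
    if level == "5":
        return (disks - 1) * disk_tb, 1        # one disk's worth of parity
    if level == "10":
        return (disks // 2) * disk_tb, 1       # mirrored pairs; at least one failure survivable
    raise ValueError(f"unsupported RAID level: {level}")

if __name__ == "__main__":
    for level, disks in (("0", 2), ("1", 2), ("5", 3), ("10", 4)):
        usable, tolerated = raid_usable(level, disks, disk_tb=4.0)
        print(f"RAID {level} with {disks} x 4 TB: {usable:.0f} TB usable, "
              f"tolerates {tolerated} disk failure(s)")
```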

(4) Network bandwidth

  Applications on Linux systems are generally network-based, so network bandwidth is another important factor affecting performance. A slow or unstable network causes blocking and instability in network applications, while a stable, high-speed network ensures that applications run smoothly over it.


Source: www.cnblogs.com/hkping18/p/11587952.html