Introduction to FastCGI

FastCGI is an improved variation of CGI, designed to address its performance shortcomings.

1. Understanding the CGI mechanism

The original purpose of a web server was merely to answer requests from clients by
serving files located on a storage device. The client sends a request to download a file
and the server processes the request and sends the appropriate response: 200 OK if
the file can be served normally, 404 if the file was not found, and other variants.


This mechanism has been in use since the beginning of the World Wide Web and it
still is. However, as stated before, static websites are being progressively abandoned
at the expense of dynamic ones that contain scripts that are processed by applications
such as PHP and Python among others. The web serving mechanism thus evolved
into the following:



 When a client attempts to visit a dynamic page, the web server receives the request
and forwards it to a third-party application. The application processes the script
independently and returns the produced response to the web server, which then
forwards the response back to the client.
In order for the web server to communicate with that application, the CGI protocol
was invented in the early 1990s.

2. Common Gateway Interface (CGI)

As stated in RFC 3875 (CGI protocol v1.1), designed by the Internet Society (ISOC):


The Common Gateway Interface (CGI) allows an HTTP server and a CGI script to
share responsibility for responding to client requests. […] The server is responsible
for managing connection, data transfer, transport, and network issues related to the
client request, whereas the CGI script handles the application issues such as data
access and document processing.

CGI is the protocol that describes the way information is exchanged between the web
server (Nginx) and the gateway application (PHP, Python, and so on). In practice,
when the web server receives a request that should be forwarded to the gateway
application, it simply executes the command corresponding to the desired application,
for example, /usr/bin/php. Details about the client request (such as the User Agent
and other request information) are passed either as command-line arguments or in
environment variables, while actual data from POST or PUT requests is transmitted
via the standard input. The invoked application then writes the processed document
contents to the standard output, which is recaptured by the web server.
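The mechanism described above can be sketched as a short script. The following is a minimal, illustrative CGI-style handler in Python (the function name and response text are invented for the example): request metadata arrives through environment variables, the body of POST or PUT requests through standard input, and the response is written to standard output for the web server to relay to the client.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CGI-style request handler.

The web server passes request metadata (method, User-Agent, ...) through
environment variables, streams the body of POST/PUT requests on standard
input, and captures whatever the script writes on standard output.
"""
import os
import sys


def handle_request(environ, stdin):
    method = environ.get("REQUEST_METHOD", "GET")
    agent = environ.get("HTTP_USER_AGENT", "unknown")
    body = ""
    if method in ("POST", "PUT"):
        # CONTENT_LENGTH tells the script how many bytes to read.
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = stdin.read(length)
    # A CGI response: headers, a blank line, then the document body.
    return ("Content-Type: text/plain\r\n"
            "\r\n"
            f"method={method}; agent={agent}; body={body}")


if __name__ == "__main__":
    sys.stdout.write(handle_request(os.environ, sys.stdin))
```

Note that the script is executed anew for every request, which is precisely the weakness discussed below.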

While this technology seems simple and efficient enough at first sight, it comes with
a few major drawbacks, which are discussed as follows:

•  A unique process is spawned for each request. Memory and other context
information are lost from one request to another.

•  Starting up a process can be resource-consuming for the system. Massive
amounts of simultaneous requests (each spawning a process) could quickly
clutter a server.

•  Designing an architecture where the web server and the gateway application
would be located on different computers seems difficult, if not impossible.

3. Fast Common Gateway Interface (FastCGI)

The issues mentioned in the Common Gateway Interface (CGI) section render the
CGI protocol relatively inefficient for servers that are subject to heavy load. The
The search for a solution led Open Market, in the mid-1990s, to develop an evolution of
CGI: FastCGI. It has become a major standard over the past fifteen years and most
web servers now offer the functionality, even proprietary server software such as
Microsoft IIS.

Although the purpose remains the same, FastCGI offers significant improvements
over CGI with the establishment of the following principles:

•  Instead of spawning a new process for each request, FastCGI employs
persistent processes that come with the ability to handle multiple requests.

•  The web server and the gateway application communicate with the use
of sockets such as TCP or POSIX Local IPC sockets. Consequently, both
processes may be on two different computers on a network.

•  The web server forwards the client request to the gateway and receives the
response within a single connection. Additional requests may also follow
without needing to create additional connections. Note that on most web
servers, including Nginx and Apache, the implementation of FastCGI does
not (or at least not fully) support multiplexing.

•  Since FastCGI is a socket-based protocol, it can be implemented on any
platform with any programming language.

Throughout this chapter, we will be setting up PHP and Python via FastCGI.
Additionally, you will find the mechanism to be relatively similar in the case of 
other applications, such as Perl or Ruby on Rails.

Designing a FastCGI-powered architecture is actually not as complex as one might
imagine. As long as you have the web server and the processing application running,
the only difficulty that remains is to establish the connection between both parties.
The first step is to configure the way Nginx will communicate
with the FastCGI application. FastCGI compatibility with Nginx is introduced by 
the FastCGI module. This section details the directives that are made available by 
the module.
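As a first sketch of such a configuration (server names, paths, and addresses below are purely illustrative), a location block matching PHP scripts forwards the request to a FastCGI backend, which may listen on a TCP port or on a Unix socket:

```nginx
server {
    server_name website.com;

    location ~* \.php$ {
        # Pass the standard set of request details to the backend
        include       fastcgi_params;
        # Tell the backend which script to execute (illustrative path)
        fastcgi_param SCRIPT_FILENAME /var/www/website.com$fastcgi_script_name;
        # The FastCGI process may listen on TCP...
        fastcgi_pass  127.0.0.1:9000;
        # ...or on a local Unix socket:
        # fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```

The directives used here (fastcgi_pass, fastcgi_param) are detailed in the following sections.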

4. uWSGI and SCGI

Before reading the rest of the chapter, you should know that Nginx offers two other
CGI-derived module implementations:

•  The uWSGI module allows Nginx to communicate with applications through
the uwsgi protocol, itself derived from Web Server Gateway Interface
(WSGI). The most commonly used (if not the only) server implementing
the uwsgi protocol is the unoriginally named uWSGI server. Its latest
documentation can be found at http://uwsgi-docs.readthedocs.org.
This module will prove useful to Python adepts, seeing as the uWSGI project
was designed mainly for Python applications.

•  SCGI, which stands for Simple Common Gateway Interface, is a variant of the
CGI protocol, much like FastCGI. Younger than FastCGI since its specification
was first published in 2006, SCGI was designed to be easier to implement and
as its name suggests: simple. It is not related to a particular programming
language. SCGI interfaces and modules can be found in a variety of software
projects such as Apache, IIS, Java, Cherokee, and a lot more.

There are no major differences in the way Nginx handles the FastCGI, uwsgi, and SCGI
protocols: each has its respective module, containing similarly named
directives. The following table lists a couple of directives from the FastCGI module,
which are detailed in the following sections, and their uWSGI and SCGI equivalents:

FastCGI module     uWSGI module     SCGI module
fastcgi_pass       uwsgi_pass       scgi_pass
fastcgi_param      uwsgi_param      scgi_param

Directive names and syntaxes are identical. In addition, the Nginx development
team has been maintaining all three modules in parallel. New directives or directive
updates are always applied to all of them. As such, the following sections will be
documenting Nginx's implementation of the FastCGI protocol, but they also apply to
uWSGI and SCGI.
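To illustrate that parallelism, switching a configuration from one protocol to another mostly amounts to swapping directive prefixes. In this sketch (locations and backend addresses are illustrative), three backends are addressed through the three protocols:

```nginx
# FastCGI backend (e.g., PHP)
location /php/  { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; }

# uWSGI backend (e.g., a Python application served by uWSGI)
location /py/   { include uwsgi_params;   uwsgi_pass   127.0.0.1:9001; }

# SCGI backend
location /scgi/ { include scgi_params;    scgi_pass    127.0.0.1:9002; }
```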

5. Upstream blocks

With the FastCGI module, and as you will discover in the next chapter with the Proxy
module too, Nginx forwards requests to backend servers. It communicates with
processes using either FastCGI or simply by behaving like a regular HTTP client.
Either way, the backend server (a FastCGI application, another web server, and so on)
may be hosted on a different server in the case of load-balanced architectures:

 

 The general issue with applications (such as PHP) is that they are quite 
resource-consuming, especially in terms of CPU. Therefore, you may find  
yourself forced to balance the load across multiple servers, resulting in the  
following architecture:



 In this case, Nginx is connected to multiple backend servers. To establish such a
configuration, a new module comes into play: the upstream module.

5.1. Module syntax

The upstream module allows you to declare named upstream blocks that define 
lists of servers:
upstream phpfpm {
  server 192.168.0.50:9000;
  server 192.168.0.51:9000;
  server 192.168.0.52:9000;
}

When defining the FastCGI configuration, reference the upstream block by its name:

server {
    server_name website.com;
    location ~* \.php$ {
        fastcgi_pass phpfpm;
        […]
    }
}

In this case, requests eligible for FastCGI will be forwarded to one of the backend
servers defined in the upstream block.

A question you might ask is, how does Nginx decide which backend server is to be
employed for each request? And the answer is simple: the default method of the
Upstream module is round robin. However, this method is not necessarily the best.
Two requests from the same visitor might be processed by two different servers,
and that could be a problem for many reasons (for example, when PHP sessions are
stored on the backend server and are not replicated across the other servers).
To ensure that requests from a same visitor always get processed by the same backend
server, you may enable the ip_hash option when declaring the upstream block:
upstream phpfpm {
  ip_hash;
  server 192.168.0.50:9000;
  server 192.168.0.51:9000;
  server 192.168.0.52:9000;
}
This will distribute requests based on a hash of the visitor's IP address, so that
requests from the same client consistently reach the same backend server. However,
be aware that client IP addresses are sometimes subject to change for various
reasons, such as dynamic IP refresh, proxy switching, or the use of Tor.
Consequently, the ip_hash mechanism cannot fully guarantee that a client will
always be directed to the same upstream server. Alternatively, you may force Nginx
to select the backend server that currently has the least number of active
connections, through the use of the least_conn directive.
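The least_conn alternative is declared in the same position ip_hash would be (server addresses below are illustrative):

```nginx
upstream phpfpm {
  least_conn;
  server 192.168.0.50:9000;
  server 192.168.0.51:9000;
  server 192.168.0.52:9000;
}
```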

5.2. Server directive

The server directive that you place within upstream blocks accepts several
parameters that influence the backend selection by Nginx:
•  weight=n: This lets you indicate a numeric value that affects the 
weight of the backend server. If you create an upstream block with 
two backend servers and set the weight of the first one to 2, it will 
be selected twice as often:
upstream php {
  server 192.168.0.1:9000 weight=2;
  server 192.168.0.2:9000;
}
•  max_fails=n: This defines the number of communication failures 
that should occur (in the time frame specified with the fail_timeout
parameter below) before Nginx considers the server inoperative.

•  fail_timeout=n: This defines the time frame within which the maximum
failure count applies. If Nginx fails to communicate with the backend 
server max_fails times over fail_timeout seconds, the server is 
considered inoperative.
•  down: If you mark a backend server as down, the server is no longer used. 
This only applies when the ip_hash directive is enabled.
•  backup: If you mark a backend server as backup, Nginx will not make 
use of the server until all other servers (servers not marked as backup) 
are down or inoperative.

These parameters are all optional and can be combined:
upstream phpbackend {
  server localhost:9000 weight=5;
  server 192.168.0.1 max_fails=5 fail_timeout=60s;
  server unix:/tmp/backend backup;
}



 

Reposted from zsjg13.iteye.com/blog/2230975