34 | Macroscopic Perspective of Server-Side Development

Starting today, we enter Chapter 3 and talk about server-side development.

The development history of the server side

The division of labor in server-side development has a very short history. Incredibly short.

In 1946, the first electronic computer was introduced. In 1954, Fortran, the first high-level language, was designed (its first compiler shipped in 1957). The entire history of information technology to this day spans only about 60 to 70 years.

In 1974, the Internet was born. In 1989, the World Wide Web (WWW) was born, but it was initially limited to government and academic research purposes. It only began to enter the civilian market in 1993.

Measured from the birth of the Internet, the division of labor in server-side development has a history of a little over 40 years, and its truly active period spans only the last 20-odd years.

Yet its pace of development has been astonishing. Here is a brief list of landmark events over the years.

In 1971, email was born.

In 1974, the Internet was born.

In 1974, IBM System R, the pioneering relational database system, was born, and the SQL language was born with it.

In 1989, the World Wide Web (WWW) was born.

In 1993, NCSA HTTPd, one of the earliest web servers and the predecessor of the famous Apache open-source web server, was born.

In 1998, Akamai was founded to provide content delivery network (CDN) services. It can be regarded as the world's first enterprise cloud service, although the concept of cloud computing did not yet exist.

In 2006, Amazon released Elastic Compute Cloud (EC2 for short). This is regarded as a landmark event in the birth of cloud computing.

In 2006, Amazon also released Simple Storage Service (S3), the world's first widely available object storage service.

In 2008, Google released GAE (Google App Engine).

In 2009, the Go language was born. Derek Collison predicted that Go would come to dominate the field of cloud computing.

In 2011, Qiniu Cloud was founded, releasing a PaaS cloud storage that integrates object storage, CDN, and multimedia processing to provide enterprises with one-stop hosting for multimedia content such as images, audio, and video.

In 2013, Docker was born.

In 2013, CoreOS was born. This is the first operating system designed specifically for the server side.

In 2014, Kubernetes was born. It is now considered the de facto standard for the data center operating system (DCOS).

Reviewing this development history, we find that the forces driving server-side evolution are completely different from those driving desktop technology iterations.

Desktop technology iterates on interaction: each generation is a revolution in human-computer interaction. Server-side technology initially followed the framework of the desktop operating system, but it has gradually diverged, moving toward the data center operating system (DCOS).

Server program requirements

What are the root causes of these divergent trends?

The first is scale.

Desktop programs serve a single user, so their focus is on the continuous upgrading of user interaction experience.

A server program, by contrast, is shared among all users and serves all of them. The resources of a single physical machine are limited, so there is an upper bound on how many users one machine can serve. Once the user base passes a certain scale, a server program must be distributed across multiple machines.

The second is continuous service time.

Desktop programs serve a single user, and the user's continuous use of a single desktop program usually does not last too long.

A server program is different: it usually provides 24x7 uninterrupted service. Once the user base reaches a certain size, someone is using it every second, and there is no notion of "closing the program."

The third is quality requirements.

Each instance of a desktop program serves a single user; for 100 million users there are 100 million instances.

A server program is different: it is impossible to run 100 million instances, one per user, for 100 million users. Instead, many users share a single program instance.

This means that the two have different tolerances for program crashes.

An instance of a desktop application crashes, affecting only one user.

But the crash of a server program instance may affect hundreds of thousands or even millions of users.

This is unacceptable.

An instance of a server program may crash, but its work must immediately be taken over by other instances; otherwise the loss is too great.

Therefore, the server program must support automatic user failover. If an instance crashes, or is restarted for a feature upgrade, the users it was serving must be transferred to other instances.

Therefore, the server program must be multi-instance. The user must not be aware of the temporary unavailability of a single program instance.

From the user's perspective, the server program serves continuously, 24x7, and should never appear to be down, just like water, electricity, and gas utilities.

Server-side development architecture

In the lecture "01 | Macroscopic Perspective of Architecture Design", we summarize the complete architecture of a server program as follows:

This diagram is drawn to help you establish a natural correspondence with the desktop development architecture.

It is certainly correct, but it only looks at a single instance of the server program, not the entire server program architecture.

In the lecture "15 | Programmable Internet World", we compared the TCP/IP layer to the operating system of the network. The architecture of a network program is as follows:

A server program is of course also a network program, and it conforms to the architecture of network programs.

But it's not the whole story of server-side program architecture.

From a macro perspective, a server program should first be a multi-instance distributed program. Its macro system architecture is as follows:

Compared with desktop programs, the basic software that server programs rely on is not only the operating system and programming language, but also two more categories:

Load Balance;

Database or other form of storage (DB/Storage).

Why is load balancing needed? Why do you need a database or other form of storage? Feel free to leave a comment to discuss. We will talk about load balancing and storage in the next few lectures.

Conclusion

Today we started from the development history of the server side and the requirements of server-side programs, so that you can understand how the server-side development ecosystem evolves and where its technology iteration is heading.

The requirements discussed here are independent of any specific business; they are domain characteristics of the server side itself. Just as the desktop's domain characteristic is strong interaction, with events as input and GDI as output, the server side's domain characteristics are large-scale user requests and 24x7 uninterrupted service.

These domain characteristics are what make the architecture of server-side development inevitably different from that of the desktop.


Origin blog.csdn.net/qq_37756660/article/details/134973997