Disappearance and survival: the ups and downs of the application delivery industry

In 1996, a company called Foundry Networks was founded in Silicon Valley, developing and selling network hardware such as switches and routers.

That year fell in the early days of the Internet bubble, when new companies in the US IT industry were springing up like mushrooms, so it is easy to imagine that the founding of one more startup made little splash.

But just three years later, Foundry listed on Nasdaq and set multiple records, becoming one of the most closely watched, highest-growth startups on Wall Street, and was even named among the top 500 companies in the United States.

A few years later, in the blink of an eye, the situation changed abruptly. In 2008, Foundry was acquired by Brocade for approximately US$2.6 billion; the deal took several months to close, and the price was revised downward because of the slumping stock market. Today, Foundry has all but disappeared from view.

It survived the bursting of the bubble, but failed to withstand the acquisition.

One can only guess how Bobby Johnson, Foundry's founder, who decided early on to cut investment in Layer 4-7 switch products, feels today when he looks at F5, an old rival founded in the same year as Foundry and now thriving.

The term "Layer 4-7 switch" may sound unfamiliar to readers new to networking. Today, most people know the technology by its other name: load balancing.

As the first vendor to bring a load balancing device to market, Foundry once had a bright future ahead of it. By the usual script, application delivery, which evolved from the concept of load balancing, should have been its forte.

But change is the only constant, and history tends to take paths that most people never envisioned, technology included.

How to understand application delivery

As digitalization advances, applications multiply and traffic soars. Application delivery technology emerged to ensure that applications and content can be accessed safely and efficiently.

Application delivery sits between the network and the application, integrating the two deeply and coordinating them effectively, with the goal of delivering applications to end users efficiently, securely, and stably. To achieve this, it combines multiple technologies such as load balancing, application security management, application acceleration, and traffic control.

In IT application architecture, application delivery is at the choke point between the Internet and the data center, and therefore holds significant strategic value.

With greater responsibility comes greater challenge. For enterprises, the challenge lies mainly in how to put the value of application delivery to good use and empower business applications; for application delivery itself, it lies in reading the direction of technological development correctly.

To gain some insight into future trends, it helps to look back at how application delivery developed.

Before application delivery

In the early days of the Internet there were few users, and a single server was often enough to serve external requests. As the Internet became popular, however, more and more users came online and business traffic naturally grew.

When a single server can no longer cope with traffic growth and keep the service running, there are usually two solutions:

  1. Upgrade to a more powerful server
  2. Add more servers

These are what we commonly call vertical scaling (increasing a single server's compute, storage, network, and other resources) and horizontal scaling (adding more servers that work in parallel to meet the processing needs of a large application).

The first approach is simple and brute-force, but a single machine's performance always has an upper limit, and in most cases performance gains are not proportional to price. So the generally accepted, more practical solution is to form a cluster of multiple servers that carry the business together.

But this raises a new problem: how should the business load be distributed, and how should multiple servers coordinate? Consider first what happens without coordination. Traffic reaching the backend cannot be controlled effectively, and work is distributed unevenly: some servers are overloaded while others sit idle. That wastes resources and degrades the user experience.

Load balancing emerged to solve exactly this problem. Built on top of the existing network structure, it acts as the traffic entrance, scheduling traffic and distributing it across the machines in the cluster.

DNS load balancing implementation

Initially, DNS (Domain Name System) was used to achieve load balancing.

DNS is essentially a distributed database that maps domain names to IP addresses, making Internet access more convenient: when people visit a website, they only need to remember its domain name rather than a hard-to-remember IP address.

DNS-based load balancing works through rotating name resolution: multiple IP addresses are configured under the same domain name, and each client that resolves the name receives one of them. Different clients visiting the same website therefore get different IP addresses and reach different servers, which achieves load balancing.

It's a simple and effective approach that's easy, inexpensive to implement, and widely applicable.
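
As a minimal sketch of the mechanism (a simulation, not a real DNS server; the zone data below is hypothetical), the following Python snippet registers several IP addresses under one name and lets successive lookups cycle through them, just as round-robin DNS resolution does.

```python
from itertools import cycle

# Hypothetical zone data: one domain name mapped to several backend IPs.
ZONE = {
    "www.example.com": ["192.0.2.10", "192.0.2.11", "192.0.2.12"],
}

# One cycling iterator per name simulates the round-robin rotation.
_rotations = {name: cycle(ips) for name, ips in ZONE.items()}

def resolve(name: str) -> str:
    """Return the next IP address for the name, imitating DNS round-robin."""
    return next(_rotations[name])

if __name__ == "__main__":
    for _ in range(4):
        print(resolve("www.example.com"))  # .10, .11, .12, then .10 again
```

Each client simply gets the next address in rotation, which is exactly why the scheme cannot weigh servers by capacity or notice that one has gone down, the flaw discussed next.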

But the DNS method has a fatal flaw: it aims for an absolute average, spreading requests from the Internet across the backend servers completely evenly. It cannot account for differences in server capacity, nor does it reflect a server's current operating state.

In fact, "balanced" does not mean "equal". Spreading traffic evenly across servers regardless of their capacity cannot fundamentally solve load imbalance.

Moreover, if one of the servers goes down, DNS has no way of knowing and will keep routing traffic to it, leaving those clients without a response.

Therefore, the application scenarios of DNS load balancing are relatively limited.

Load balancing based on network switching technology

To achieve real load balancing, the industry turned to network switching technology, implementing load balancing functions at different layers of the OSI model (Open Systems Interconnection Reference Model).

The OSI model, proposed by the International Organization for Standardization (ISO) in the early 1980s, is a global standard framework that divides a computer network architecture into seven layers:

  1. Physical layer
  2. Data link layer
  3. Network layer
  4. Transport layer
  5. Session layer
  6. Presentation layer
  7. Application layer

Depending on the OSI layer at which it is implemented, load balancing falls into the following types:

  1. Layer 2 load balancing: based on the data link layer. A virtual MAC address is exposed to the outside; when the load balancer receives a request addressed to it, it rewrites the frame with the actual MAC address of a backend server.
  2. Layer 3 load balancing: based on the network layer. A virtual IP address is exposed to the outside; when the load balancer receives a request for it, it forwards the request to the actual IP address of a backend server.
  3. Layer 4 load balancing: based on the transport layer. The backend server is chosen mainly from the destination address and port in the packet, combined with the server-selection algorithm configured on the load balancing device.
  4. Layer 7 load balancing: based on the application layer, also known as "content switching". The backend server is chosen mainly from genuinely meaningful application-layer content in the message, such as URLs and HTTP headers, again combined with the configured server-selection algorithm.

Compared with Layers 2 and 3, Layer 4/7 load balancing is far more common; when we speak of load balancing today, we usually mean the latter.

Due to the different implementation levels, there are huge differences between Layer 4 load balancing and Layer 7 load balancing in terms of performance, security, functionality, and complexity.

The most obvious difference is that Layer 4 load balancing has a relatively simple architecture, is easy to manage, consumes fewer resources, and holds the advantage in network throughput and processing capacity.

Layer 7 load balancing covers all the layers the OSI model defines. It can modify network requests more flexibly and comprehensively, balance load more intelligently, and provide stronger security protection, but it consumes correspondingly more resources.
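
To make the difference concrete, here is a minimal Python sketch with hypothetical backend pools and rules (not any product's actual logic): the Layer 4 decision sees only the connection's addresses and ports, while the Layer 7 decision can inspect the request content itself, such as an HTTP path.

```python
import hashlib

# Hypothetical backend pools (illustrative addresses only).
GENERAL_POOL = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
STATIC_POOL = ["10.0.1.1", "10.0.1.2"]  # e.g., a cache tier for static assets

def pick_l4(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    """Layer 4: only the connection tuple is visible, so hash it to a server."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    idx = int(hashlib.md5(key).hexdigest(), 16) % len(GENERAL_POOL)
    return GENERAL_POOL[idx]

def pick_l7(http_path: str, client_ip: str) -> str:
    """Layer 7: the application-layer content itself can drive the choice."""
    pool = STATIC_POOL if http_path.startswith("/static/") else GENERAL_POOL
    return pool[hash(client_ip) % len(pool)]

if __name__ == "__main__":
    print(pick_l4("203.0.113.5", 51000, "198.51.100.1", 443))  # tuple-based
    print(pick_l7("/static/logo.png", "203.0.113.5"))          # content-based
```

The Layer 4 path never parses the payload, which is why it is cheaper; the Layer 7 path must terminate and parse the protocol before it can route on content.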

From load balancing to application delivery

Load balancing appeared just as the Internet bubble was beginning to inflate. Startups focused on such products emerged one after another, Foundry, Alteon, Arrowpoint, NetScaler, and F5 among them. Competition was fierce, with the distinct feel of many contenders chasing the same prize.

As mentioned earlier, Foundry was the first to propose the concept of a Layer 4/7 switch and to ship a product: the ServerIron XL, released in 1998. The product was so popular that it kept selling for a while even after Foundry was acquired by Brocade.

Most of the load balancing startups of that era did not escape eventual acquisition. What made the bigger difference was whether the takeover happened before or after the bubble burst.

A typical case is Alteon, which changed hands twice: at the peak in 2000, Nortel acquired it for a total of US$7.8 billion; less than ten years later, Nortel sold the Alteon business to Radware. The parties never disclosed the price, but rumor has it that Radware paid Nortel only tens of millions of dollars.

Today we cannot flatly attribute those companies' eventual failure to the bursting of the Internet bubble, but it is probably no exaggeration to call it the trigger.

The market for load balancing depends on the expansion of the Internet and the growth of traffic. When the bubble burst, sales of load balancing products plummeted, confronting every vendor with a struggle for survival.

Because Layer 4/7 products demand far more R&D and technical support than structurally simpler Layer 2/3 products, many vendors, including the leader Foundry, decided to scale back their Layer 4/7 product lines in order to survive the lean years. Among the older vendors, only F5, founded in the same year as Foundry, kept investing in the field.

When the Internet revived, the value of Layer 4/7 products won wide recognition and the market came back to life, but by then F5 had no rivals left.

It was also during this period, as e-commerce, streaming media sites, and other applications flourished, that business applications began placing higher demands on the network; simple load balancing could no longer keep up.

By absorbing technologies such as application acceleration and optimization and traffic management, load balancing began to evolve into application delivery.


The technical core of application delivery

As architectures developed, the application delivery technology stack grew far richer than plain load balancing, adding techniques such as TCP optimization, connection multiplexing, caching and compression, and SSL acceleration, along with web application firewalls, protocol cleaning, DDoS protection, and business encryption to keep applications secure.

So much jargon can be dizzying for readers without a technical background. In fact, the evolution of the application delivery architecture has always revolved around one fundamental goal: delivering applications to users efficiently, securely, and stably.

Every one of these techniques exists to serve that goal. We can therefore roughly divide the core technologies of application delivery into the following aspects:

  1. Application acceleration: chiefly WAN optimization and acceleration. A series of techniques provides fast remote access across regions and carriers, saves Internet bandwidth, and improves application access efficiency on the existing network.
  2. Application security: integrates the functions of a security gateway. Comprehensive security rules and policies covering both the network layer and the application layer implement full protection and ensure that applications are accessed safely.
  3. High availability: centered on load balancing, with added functions such as health checks and session persistence, it achieves cluster-level high availability from the traffic management perspective and keeps applications running stably and reliably (see the sketch after this list).
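
As a rough illustration of the high-availability idea in point 3 (hypothetical servers and a deliberately simplified check; real appliances do far more), the sketch below combines a TCP health check with weighted selection, so traffic reaches only servers that are alive and is distributed according to their capacity.

```python
import random
import socket

# Hypothetical backend pool: (host, port, weight). Weights model differing
# server capacities, the "balanced is not equal" point made earlier.
SERVERS = [("10.0.0.1", 8080, 5), ("10.0.0.2", 8080, 3), ("10.0.0.3", 8080, 1)]

def is_healthy(host: str, port: int, timeout: float = 0.5) -> bool:
    """Simplest possible health check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server() -> tuple:
    """Weighted random choice among the servers that pass the health check."""
    alive = [(h, p, w) for h, p, w in SERVERS if is_healthy(h, p)]
    if not alive:
        raise RuntimeError("no healthy backends")
    hosts = [(h, p) for h, p, _ in alive]
    weights = [w for _, _, w in alive]
    return random.choices(hosts, weights=weights, k=1)[0]
```

Session persistence would add one more piece: a mapping from client identity to a chosen server, so that repeat requests from the same client land on the same machine.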

As application environments come under growing pressure and traffic keeps rising, application delivery technology has essentially followed one route: developing new, application-oriented techniques around these three core demands.

So far we have been discussing application delivery deployed in a local data center. After the industry reached consensus on the concept, the mainstream at least followed this pattern of development, until cloud computing took off and another huge change arrived.

From application delivery to application release

In the mobile Internet era, the number of applications has exploded and competition among enterprises has intensified. To progress quickly and succeed commercially, a business must keep innovating, continuously improve the user experience, and ship quickly.

This poses huge challenges for IT architecture: how to modernize applications, how to manage IT complexity, and so on. Compared with the traditional data center, cloud computing's flexibility and scalability help enterprises meet these challenges, so it has become the mainstream trend.

Cloud computing, big data, and artificial intelligence are often called the troika of the digital economy, profoundly shaping the industry's future. In application delivery specifically, cloud computing has had the most significant impact on the technology's development, perhaps because it transforms the underlying IT infrastructure; big data and artificial intelligence, by contrast, influence application delivery more in how products improve their service capabilities.

In 2019, F5 acquired the company commercializing the open source web server Nginx. Nginx is hugely popular worldwide: Instagram, Netflix, Airbnb, and many others run it, and it powers a large share of the world's websites. Its excellent load balancing and traffic control capabilities also play a big role in cloud microservice architectures. F5's intent in acquiring Nginx was plain: accelerate its own shift toward software and cloud, and hold onto its leadership in the cloud era.

In the same year, the Cloud Native Computing Foundation (CNCF) established its Application Delivery SIG, hoping to clarify and address the key links and core problems of the application delivery life cycle while optimizing application architectures for cloud native scenarios. CNCF was founded with Google taking the lead, and Kubernetes, whose momentum keeps building, was its first open source project. This move by CNCF, from one perspective, signals the accelerating development of "cloud native application delivery".

According to Wu Ruosong, general manager of Tongzhiyun, the evolution of cloud computing and microservice architecture is ushering in the era of "application release". Application release grew out of application delivery, yet differs from it considerably.

The technical core of application release

First, to summarize briefly: the biggest difference between traditional application delivery and application release is that they target two different application scenarios, the "steady state" and the "agile state".

The steady state chiefly demands that applications be accessible efficiently, reliably, and securely; the agile state wants, on top of that, to release applications and services faster, more nimbly, and more flexibly.

The most intuitive difference, meanwhile, is architectural: the two ends of application delivery are the user and the data center, while the two ends of application release are the user and the cloud.

As a result, the application delivery era focused on relieving north-south traffic, whereas application release, given the nature of cloud service architectures, must pay more attention to east-west traffic inside the cloud.

  1. North-south traffic handles communication between the cluster and the outside world, realizing the functions of the core network.
  2. East-west traffic handles communication and link management among the microservices inside a business.

The core technologies of application release today include:

  1. Blue-green release - when a new version of an application is rolled out, the old version keeps running. Once the new version proves stable, traffic is switched over from the old one; if the new version misbehaves, traffic can be switched back promptly. This limits the impact a release has on a running application.
  2. Grayscale release - as the name suggests, a way to release applications through a smooth transition between "black" and "white". After the new version goes live, a small share of traffic is switched to it first; its behavior is observed and analyzed, and progressively more traffic is moved over until the switch reaches 100% and the old version is fully replaced. Grayscale release is also known as canary release and is an important vehicle for A/B testing.
  3. Circuit breaker mechanism - the term "circuit breaker" is familiar from stock markets, and the mechanism in application services serves essentially the same purpose: risk control. Cloud microservices often depend on one another, so if one service call suffers high latency or similar trouble, the problem cascades into other services and can even make the whole business system unavailable. A circuit breaker detects the problem in time and "breaks the circuit" on the failing service, containing the damage while freeing resources (see the sketch after this list).
  4. Service orchestration - controlling how service traffic is forwarded by defining policies, scheduling traffic in the order the policies prescribe, and coordinating the whole network strategically.
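
To make the circuit-breaker mechanism from point 3 concrete, here is a minimal Python sketch (the state names and thresholds are illustrative; production implementations add metrics, sliding windows, and more): after a run of failures the breaker opens and fails fast, and after a cooldown it allows a single trial call.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: CLOSED -> OPEN on repeated failures,
    OPEN -> HALF_OPEN after a cooldown, back to CLOSED on one success."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 10.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, func, *args, **kwargs):
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")  # protect callers
            self.state = "HALF_OPEN"  # cooldown elapsed; allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold or self.state == "HALF_OPEN":
                self.state = "OPEN"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0      # any success resets the failure count
        self.state = "CLOSED"
        return result
```

If the trial call succeeds, the breaker closes and normal traffic resumes; if it fails, the breaker opens again for another cooldown, so a struggling downstream service is never hammered while it recovers.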

As we can see, all of these technologies aim to build a faster, more flexible, and more agile release mechanism for agile application environments; the sketch below makes the traffic-splitting idea concrete. Now that cloud computing is becoming the norm, building "cloud native" application release that fully accounts for the characteristics of cloud architecture has become a major trend.
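
Both blue-green and grayscale release reduce to a weighted split between two versions. In this illustrative Python sketch (version names and percentages are hypothetical), a configurable fraction of requests goes to the new version; an operator would raise that fraction step by step from 1% toward 100%.

```python
import random

def route(canary_percent: float) -> str:
    """Send canary_percent of requests to v2, the rest to v1."""
    return "v2-new" if random.random() * 100 < canary_percent else "v1-stable"

# Grayscale rollout: gradually shift traffic as confidence in v2 grows.
for pct in (1, 10, 50, 100):
    sample = [route(pct) for _ in range(10_000)]
    print(f"{pct:3d}% canary -> {sample.count('v2-new') / 100:.1f}% of traffic hit v2")
```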

Application delivery is not going away

But we must always remember that evolution is not replacement. If mature, stable, traditional technology were discarded the moment a new concept appeared, vendors, enterprises, and the market alike would balk. Traditional application delivery still needs to exist and develop.

Technically, although future application release mainly targets agile environments, it is, as noted above, still built on a steady foundation. Enterprises' demands for application acceleration, application security, and high availability will not vanish in the cloud era; they will only keep growing as technology develops.

In terms of product form, traditional application delivery appliances still enjoy considerable demand, especially in China.

From the "new infrastructure" initiative to the "eastern data, western computing" project, application delivery equipment, as a key component of IT infrastructure facing ever-growing network traffic nationwide, can bring its traffic management and security protection value to bear in the data center, helping weave the national computing-power network and build a new computing-power system.

A further practical reason: because of factors such as data security, compliance, existing IT processes, and latency-sensitive applications, many domestic enterprises cannot adopt the public cloud, a status quo that research firms such as IDC and Gartner have described in their reports, and instead mostly deploy on-premises private clouds.

Even looking ahead, hybrid cloud is a more plausible architecture than going all-in on public cloud, and in private cloud environments demand for traditional application delivery equipment will persist.

Just as load balancing never died, application delivery will never die.

Swim till the sea turns blue

People often describe market conditions as "blue ocean" or "red ocean". Compared with the already very mature application delivery market, application release is undoubtedly more of a blue ocean.

But push the clock back a decade or more, and application delivery was the blue ocean next to load balancing; push further back, and load balancing itself was a blue ocean. From Foundry to F5, different eras have written different stories.

But at least we can be sure that the blue ocean will always belong to the leader and will always create the leader.

Interestingly, the history of the evolution from load balancing to application delivery suggests that the leaders who swim deep into the blue ocean have all been through the baptism of the red ocean first. The deeper reason, of course, is the strong "muscle" built in the red ocean: solid technical strength.

Yunke's Tongming Lake application delivery gateway, grounded in application delivery and years of technical practice, has built secure, efficient, and controllable application delivery platforms for many domestic customers. The Tongming Lake product series achieves effective collaboration between applications and networks and ensures that application systems can be accessed safely, efficiently, and controllably, giving it significant strategic value and rich application scenarios across the IT architecture. As the best collaborator of F5, the global leader in load balancing, Yunke Tongming Lake is also the leading brand in Xinchuang load balancing.

Xinchuang chose Yunke Tongming Lake.

While continuing to build its own muscle, Yunke Tongming Lake is also looking ahead, advancing toward application release for the cloud era, committed to linking up cloud, pipe, and edge, and helping customers evolve their architectures and embrace agile application environments.

In the vast, uncertainty-filled ocean of the digital economy, we must not only temper ourselves constantly but also keep spurring ourselves to swim farther.

"Swim until the water turns blue."
