Understanding the 4 Dimensions of a Web DDoS Tsunami Attack

The number and frequency of network attacks have risen sharply in recent years, and the Web DDoS tsunami attack against web applications is one of the fastest-growing types. The familiar HTTP/S flood attacks of the past are evolving into larger-scale, harder-to-mitigate Web DDoS tsunami attacks, so organizations should prepare for these attacks in advance and put appropriate protection measures in place.

What can be classified as a Web DDoS tsunami attack?

To understand Web DDoS tsunami attacks (and HTTP flood attacks in general), it helps to first understand the four dimensions that characterize them: attack volume, attack duration, the characteristics of the botnet used, and how the attack is constructed.

1. Attack volume

Over the past few months, various third-party organizations have observed several HTTPS flood attacks reaching millions of RPS (requests per second), and some of these large-scale attacks even reached tens of millions of RPS. These giant RPS levels represent extreme HTTPS floods, and the number of high-volume RPS attacks continues to grow. There is reason to believe that virtually every web application, web service, or other online property could become the target of such a massive Web DDoS tsunami. Web DDoS tsunami protection has truly become a "must have" for our industry.

The rise of the Web DDoS tsunami has greatly affected not only online property owners but also WAF and DDoS protection solution providers, whose responsibility it is to protect both their customers' online properties and their own infrastructure from these complex, high-RPS DDoS attacks. Building a detection and mitigation service for these cyber tsunamis requires special attention, expertise, and substantial investment in the right infrastructure. The goal is to avoid situations where the protection infrastructure itself becomes overwhelmed and saturated by the attack traffic before customers are actually protected. Only a large fleet of L7 entities (network proxies, etc.) and a carefully architected, hardened protection infrastructure can successfully absorb such attack volumes, and only a vendor with deep skill and experience in DDoS and L7 AppSec protection, like Huosanyun (Fire Umbrella Cloud), can meet the L7 infrastructure and mitigation requirements created by the new era of the Web DDoS tsunami.

2. Attack duration

Web DDoS tsunami attacks can last anywhere from seconds to hours or days. Although some notorious ultra-high-RPS (millions) attacks lasted less than a minute, many recent Web DDoS tsunami attacks have gone on for minutes or hours, and in several cases experienced by Huosanyun customers, the tsunami lasted for several hours.

Beyond duration, the intensity of tsunami attacks also ramps up dramatically: in most cases the attack bursts to "full power" in less than 10 seconds and then stays there. One can imagine the consequences for an unprotected website when traffic suddenly jumps to hundreds of thousands or even a million RPS in under 10 seconds during a busy period: the website goes down, becomes unresponsive to legitimate users, and customers go to another provider to get the services they need.

Defending against a tsunami attack is no easy task, and it requires a high degree of DDoS and AppSec protection expertise. Every Web DDoS tsunami mitigation infrastructure must be able to absorb dramatic surges in incoming load, sustain that capacity over varying periods of time, and do so efficiently and cost-effectively, all while keeping customers' online assets securely up and running.

3. Characteristics of the botnet used

The following are the main botnet characteristics relevant to attack detection and mitigation, as summarized by Huosanyun:

First, consider the size of the botnet. The most important indicator is the number of IPs launching the attack, which typically ranges from thousands to hundreds of thousands. These IPs can be distributed across the globe and spread over numerous Autonomous System Numbers (ASNs), which are usually owned by service providers and identify networks on the Internet. During a Web DDoS tsunami, each attacking IP can generate an RPS level that is similar to, higher than, or lower than the average RPS of a legitimate client. Consequently, treating the highest-traffic IPs (i.e., the client IPs with the highest RPS within a given time frame) as attackers, or applying other traditional mitigations such as rate-limiting source IPs with high RPS levels, can generate unnecessary false positives. In some real-world cases observed by Huosanyun, attackers generated Web DDoS tsunamis from very large botnets in which each individual bot produced only a very low RPS, precisely to evade such simple mitigation methods.
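The false-positive and false-negative risks of naive "top talker" mitigation can be illustrated with a minimal sketch. All thresholds and IP addresses here are assumed for illustration only: counting requests per source IP over a window and blocking the heaviest senders catches nothing when each bot deliberately stays below the threshold.

```python
from collections import Counter

# Hypothetical sketch of classic "top talker" mitigation: count requests per
# source IP over a window and flag IPs whose rate exceeds a fixed threshold.
WINDOW_SECONDS = 10
RPS_THRESHOLD = 50  # assumed cutoff; real values are tuned per service

def top_talkers(request_log, threshold=RPS_THRESHOLD, window=WINDOW_SECONDS):
    """request_log: iterable of (timestamp, source_ip) within one window."""
    counts = Counter(ip for _, ip in request_log)
    return {ip for ip, n in counts.items() if n / window > threshold}

# A scaled-down tsunami: 1,000 bots sending 10 requests each over a
# 10-second window (1 RPS per bot). No single IP crosses the threshold,
# so the naive mitigation flags nobody even though the aggregate is large.
log = [(t, f"10.0.{i // 256}.{i % 256}") for i in range(1000) for t in range(10)]
print(top_talkers(log))  # → set()
```

In a real tsunami the same shape scales up: 100,000 bots at 10 RPS each produce 1,000,000 RPS in aggregate while every individual IP still looks like an ordinary client, which is exactly why per-IP rate limiting alone misfires.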

Web DDoS tsunamis can also originate from source IPs assigned to or owned by many different types of sources. Probably the most common case is an attack in which the attacker's IPs belong to public proxies, such as open proxies, anonymous proxies, or open VPNs; attackers often use these to obfuscate their real identity. The attacking IPs can also belong to legitimate users (e.g., home routers of innocent, unwitting users), cloud providers, web hosting providers, and infected IoT devices. Attackers mix these different types of IPs mainly to avoid being identified and simply blocked. As a result, mitigations based only on IP-address affiliation from threat-intelligence feeds will fail to fully detect and mitigate the attack: threat-intelligence repositories are of little help when the attack comes from legitimate residential IPs, which are also the legitimate clients of most online services. Building mitigation strategies solely on IP-address intelligence can therefore generate unnecessary false negatives.
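A minimal sketch of why feed-only mitigation leaks, assuming a hypothetical threat-intelligence feed; the addresses below are documentation ranges, not real attackers:

```python
# Hypothetical sketch: mitigation based only on a threat-intelligence feed.
# KNOWN_BAD stands in for feed entries (open proxies, VPN exit nodes, etc.).
KNOWN_BAD = {"203.0.113.7", "198.51.100.9"}

def allow(ip):
    # Feed-only decision: block listed IPs, allow everything else.
    return ip not in KNOWN_BAD

print(allow("203.0.113.7"))  # → False (listed open proxy is blocked)
print(allow("192.0.2.44"))   # → True  (infected home router passes: a false negative)
```

The second lookup is the failure mode the text describes: a bot behind a residential IP is indistinguishable, at the IP-reputation level, from a legitimate customer.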

To build an HTTP attack tsunami, different hacker groups sometimes cooperate and attack a single victim simultaneously. A single attack can therefore involve multiple types of attacking IP addresses and a very high aggregate RPS, which makes it complex and challenging to handle.

4. How the attack is constructed

In the beginning, a Web DDoS tsunami attack consisted of a simple HTTP request built from a single transaction that was transmitted or replicated en masse, for example a plain HTTP GET to "/" with a very basic set of HTTP headers such as Host and Accept. On one hand, these transactions appear legitimate, so a traditional WAF or other existing means is unlikely to flag them; on the other hand, a mitigation entity can simply block or filter this one specific transaction before it reaches the protected organization's online properties, and the attack is mitigated.

Today, however, Web DDoS tsunamis are more sophisticated: attackers avoid this simple detection and mitigation by constructing more complex and realistic transactions, and they rely heavily on randomization. A variety of attack transaction structures appear within a single cyber tsunami. Attackers craft more authentic-looking transactions that include a set of "legitimate-looking" query parameters, additional HTTP headers, User-Agent and Referer headers, web cookies, and more. Attack requests use various HTTP methods (POST, PUT, HEAD, etc.) and are directed at multiple paths within the protected application. Many properties of the attacker-generated transactions are continuously randomized, sometimes per individual transaction. This high level of randomization makes simple mitigations impractical: the tsunami appears as legitimate traffic that is constantly changing, so no simple, predefined signature or rule-based mechanism can deliver clean mitigation, because the requests look legitimate and carry no indication of malicious intent.
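The randomization described above can be sketched as follows. The header values, paths, and the static rule are all hypothetical, chosen only to show how per-request randomization defeats a fixed signature:

```python
import random

# Illustrative sketch (assumed values): randomized, legitimate-looking
# requests defeat a fixed signature such as "GET / with no query string".
METHODS = ["GET", "POST", "HEAD"]
PATHS = ["/", "/search", "/api/items", "/login"]
AGENTS = ["Mozilla/5.0 (Windows NT 10.0)", "curl/8.0", "okhttp/4.9"]

def random_request(rng):
    """Build one attack transaction with randomized properties."""
    return {
        "method": rng.choice(METHODS),
        # every request carries a randomized query parameter
        "path": rng.choice(PATHS) + f"?q={rng.randrange(10**6)}",
        "headers": {"User-Agent": rng.choice(AGENTS),
                    "Cookie": f"sid={rng.getrandbits(64):x}"},
    }

# A static rule written against the "old" tsunami: plain GET to "/".
signature = lambda r: r["method"] == "GET" and r["path"] == "/"

rng = random.Random(0)
hits = sum(signature(random_request(rng)) for _ in range(10_000))
print(hits)  # → 0: the randomized query string alone makes the rule never fire
```

The point is not the specific fields but the combinatorics: with methods, paths, parameters, headers, and cookies all varying per request, any fixed signature matches a vanishing fraction of the flood.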

Another thing that makes these attacks so hard to mitigate is that even after encrypted traffic is decrypted, it still looks legitimate. Web DDoS tsunami attackers use a plethora of sophisticated evasion techniques to bypass traditional application protections. To increase complexity further, attackers change their attack patterns mid-attack or use multiple request structures simultaneously, and the situation becomes even harder when the attack is launched by multiple well-orchestrated botnets presenting several tactics at once. Because of all these tactics, a Web DDoS tsunami can contain millions of distinct transactions, all of which appear legitimate. Handling these attacks with a predefined set of filters, rather than treating them as zero-day attacks that must be mitigated specifically, leads to a large number of unnecessary false negatives during mitigation. Imagine an attack of 3 million RPS with a leakage (false-negative) rate of just 1%: that still lets 30,000 RPS of attack traffic through, more than enough to overwhelm many online assets.
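The leakage arithmetic is worth making explicit; the figures below are the illustrative ones from the text:

```python
# Illustrative figures from the text: a 3M RPS tsunami with 1% leakage.
attack_rps = 3_000_000
leak_rate = 0.01  # fraction of attack requests that evade mitigation

leaked_rps = int(attack_rps * leak_rate)
print(leaked_rps)  # → 30000 attack requests per second still reach the origin
```

For comparison, 30,000 RPS of leakage is itself far beyond what many unprotected origins can serve, so even a 99%-effective filter can leave the asset down.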

The Right Option to Protect Against Web DDoS Tsunami Attacks

Understanding the different dimensions of a Web DDoS tsunami attack is important, but even more important is understanding how to protect your organization from such attacks. Organizations need a solution that can quickly adapt to attack activity in real time. Regular on-premises or cloud-based DDoS and WAF solutions cannot do this, because these threats are extremely dynamic: their frequency is unpredictable, their attack vectors are randomized, and their source IPs and other parameters keep changing throughout the attack.

Only behavior-based algorithms with self-learning and auto-tuning capabilities can detect and mitigate these attacks. Our industry-leading application protection products and solutions are developed with one goal in mind: to detect and stop attacks before they overwhelm the infrastructure, keeping customers safe.
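As a rough illustration of behavior-based, self-tuning detection (a deliberately simplified sketch, not Huosanyun's actual algorithm): learn a baseline request rate over time and flag windows that deviate sharply from it, folding only normal samples back into the baseline so the learned rate auto-tunes without being poisoned by attack traffic.

```python
# Minimal behavioral-baseline sketch: an exponentially weighted moving
# average of per-window RPS, with a multiplicative anomaly threshold.
class RateBaseline:
    def __init__(self, alpha=0.1, k=5.0):
        self.alpha = alpha  # smoothing factor for the learned mean
        self.k = k          # how many times the baseline counts as anomalous
        self.mean = None

    def observe(self, rps):
        """Feed one window's RPS; return True if the window looks anomalous."""
        if self.mean is None:
            self.mean = float(rps)  # first window seeds the baseline
            return False
        anomalous = rps > self.k * self.mean
        if not anomalous:
            # self-learning step: only normal windows update the baseline
            self.mean += self.alpha * (rps - self.mean)
        return anomalous

b = RateBaseline()
print([b.observe(r) for r in [1000, 1100, 950, 1050, 990]])  # steady traffic: all False
print(b.observe(800_000))  # sudden tsunami burst → True
```

A production system would baseline far more than raw RPS (per-path, per-fingerprint, per-geo distributions, request structure, and so on), but the principle is the same: detect deviation from learned behavior rather than match predefined signatures.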


Origin blog.csdn.net/huosanyun/article/details/132227081