Summary and Analysis of Data Center (IDC) Traffic Problems

【Stating the Problem】

【Actual Case 1】

At 3:00 a.m., the bandwidth of one of a company's IDC rooms (a website business) suddenly jumped from its usual peak of about 150 Mbps to 1000 Mbps, as shown in the figure below:

[Figure: bandwidth graph showing the sudden jump from roughly 150 Mbps to 1000 Mbps]

The impact of the fault: hundreds of servers became unreachable and every service in the room was interrupted.

 

【Actual Case 2】

Late one night, Old Boy (the author) received an urgent plea for help from a student. The student's company website (a web-game business) normally uses a few dozen megabits of bandwidth, but traffic suddenly shot up to 100 Mbps and stayed pinned there for a long time. The student's own write-up afterwards begins as follows:

At one o'clock in the morning I received an alarm SMS: the website could not be accessed. I immediately grabbed my laptop, got online, and found that the network of the whole cabinet was unreachable. My first thought was that the IDC's own network had a problem, but when I called the facility they said their network was fine while our bandwidth usage was abnormal (the traffic graph had flat-topped at the 100 Mbps cap).

The impact of the fault: dozens of servers were unreachable, every service in the room was interrupted, and the outage lasted a long time.

【Actual Case 3】

One day, an ops friend asked for urgent help: the traffic at his company's CDN origin site had not changed, yet traffic on the CDN side had jumped by several gigabits for no apparent reason, and he had no idea what to do. Old Boy will add that he once saw a single image rack up more than 20 TB of traffic in less than a day.
The impact of the fault: since a CDN had been purchased, the extra few gigabits of traffic did not affect the business, but abnormal traffic on that scale can easily cost the company tens of thousands of yuan for nothing. Solving this kind of problem is exactly where ops shows its value.

 

Three examples should be enough for now. All three are real faults encountered in ops work: they came out of nowhere and demanded immediate handling. I have seen the same kind of problem raised in forums and chat groups many times, by ops people of every level, veterans and novices alike. Most people react on reflex without thinking it through (the reflex arc points straight at DDoS), take a long time to fix it, and end up with a prolonged outage. Even the veterans who work through it step by step tend to treat it as DDoS first, which also stretches out the resolution time. With a plan prepared in advance, recovery could be much faster. Below are some personal views.

 

【Analyzing the Problem】

1) The IDC bandwidth is saturated. There are many possible causes; the common ones are:

a. A genuine DDoS attack (I have run into a few; ones that actually caused damage are rare, and some involved extortion by the attackers).
b. An internal server is compromised and pushes out large volumes of outbound traffic (Old Boy has been called in for this more than five times).

c. Site assets (such as images) are hotlinked, or a page is promoted on a portal and generates a flood of traffic (more than three such calls).

d. A partner company pulls data from you, e.g. through an API data interface provided to them (friends at companies with such partnerships will recognize this).

e. A CDN service has been purchased and the CDN pulls aggressively from the origin (this one comes up quite often too).

f. There are a few other causes, but they are uncommon, so I will not list them.

2) CDN bandwidth is abnormal while the origin is normal.

Problems of this kind are almost always caused by objects cached on the CDN being requested very frequently. See the case at the end for how to handle it.

3) CDN bandwidth is abnormal and the origin is abnormal as well.

Possible causes: the company is running a promotion that drives heavy traffic while the hot data is not fully cached, or a CDN problem forces requests back to the origin (notes on CDN back-to-origin rates, and how to improve them, will be shared another time). The effect is high bandwidth plus heavy load on the back-end static servers, image servers, and storage.

 

【Solving the Problem】
Having analyzed the possible causes, troubleshooting becomes much easier.

a. A genuine DDoS attack.

There are 17 practical ideas for dealing with DDoS that I have shared elsewhere for reference, so I will not repeat them here. In practice, a genuine DDoS attack that actually causes damage is not the most common case.

b. An internal server is compromised and sends out large volumes of traffic.

This one may sound simple to fix. Some will say: just look at per-server traffic and deal with whichever machine is using the most bandwidth. In reality it is much harder than that: once the uplink is saturated, none of the monitoring can be reached, so you cannot see anything at all.
A better approach: after confirming with the IDC that the facility itself is fine (they usually cannot do much more for us), ask them to unplug the network cables of the servers that hold public IPs, such as the load balancers, keeping only the VPN server, and then cut the link from the internal servers to the outbound gateway, so the source of the outbound traffic is cut off.
Next, check the traffic monitoring service, identify which servers are generating the outbound traffic, and deal with them.
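As a minimal sketch of that last step (an illustration, not part of the original write-up): the snippet below samples /proc/net/dev twice on a Linux host and prints the outbound (TX) rate of each interface. Run it over the VPN on each internal server, or fold the same idea into your existing monitoring, to spot the machine that is flooding the uplink.

```python
#!/usr/bin/env python3
"""Rough per-interface TX rate check (Linux) -- a sketch, not a monitoring system."""
import time

def read_tx_bytes():
    """Return {interface: tx_bytes} parsed from /proc/net/dev."""
    stats = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:              # skip the two header lines
            name, data = line.split(":", 1)
            stats[name.strip()] = int(data.split()[8])  # 9th counter is TX bytes
    return stats

INTERVAL = 5                                        # seconds between samples
first = read_tx_bytes()
time.sleep(INTERVAL)
second = read_tx_bytes()

for iface, tx_now in sorted(second.items()):
    rate_mbps = (tx_now - first.get(iface, tx_now)) * 8 / INTERVAL / 1_000_000
    print(f"{iface:10s} {rate_mbps:8.1f} Mbit/s out")
```

Comparing the numbers across hosts quickly singles out the infected machine.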
In fact, whether this problem happens at all, and how quickly it can be located, depend heavily on a company's ops standards and procedures. When running ops training at various companies, I have found this weakness to be serious (things look fine on the surface, but internal standards and procedures are badly lacking); everyone discusses it in depth, yet actual practice still lags behind the talk.

For example, at some companies developers push code straight to production over FTP whenever they like, or developers run multiple scheduled releases themselves, with ops never told. When something then breaks, it takes a long time to locate. I suggest company leaders give this some thought.
My ops approach: if you picture the site's data center as a house, first seal the back door (the inside), then keep a close watch on the front door (lock things down, leave one small window for outsiders to look through, i.e. the service on port 80, and post a guard on duty).
Endless, arbitrary code releases at any hour are critical to a site's stability, and to how quickly ops can pinpoint a fault. According to Old Boy's informal survey, more than 50% of serious production incidents are caused by application code. This is also what I keep urging CTOs in corporate training: assign more of the responsibility for site stability to development, not only to ops. Until that mindset changes, an unstable site is unlikely to improve.
c. Site assets (such as images) are hotlinked.
This falls under basic website optimization: apache, lighttpd and nginx all have anti-hotlinking mechanisms, and you must set them up. A related story: one of my students joined a company, noticed the site had no hotlink protection, and enabled it right away without informing his boss. He proudly left me a message saying he had set up hotlink protection for the company and felt very accomplished, but it broke the company's partner-facing integrations. Fortunately it was caught in time and no great harm was done.
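Before (and after) turning on referer checks, it helps to measure how much of the image traffic is actually hotlinked. The sketch below is illustrative only: the log path and domains are hypothetical placeholders, and it assumes a combined-format access log. It tallies image requests whose Referer points at another site.

```python
#!/usr/bin/env python3
"""Estimate hotlinked image traffic from a combined-format access log (illustrative sketch)."""
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"            # assumption: adjust to your setup
OWN_HOSTS = ("example.com", "www.example.com")    # assumption: your own domains
IMG_RE = re.compile(r"\.(?:jpe?g|png|gif|bmp)\b", re.IGNORECASE)
# combined format: ... "REQUEST" STATUS BYTES "REFERER" "USER-AGENT"
LINE_RE = re.compile(r'"(?P<req>[^"]*)" \d{3} (?P<bytes>\d+|-) "(?P<ref>[^"]*)"')

hot_bytes = 0
hot_refs = Counter()

with open(LOG_PATH, errors="replace") as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m or not IMG_RE.search(m.group("req")):
            continue
        ref = m.group("ref")
        # an empty or same-site referer is fine; anything else is a hotlink candidate
        if ref in ("-", "") or any(h in ref for h in OWN_HOSTS):
            continue
        hot_bytes += 0 if m.group("bytes") == "-" else int(m.group("bytes"))
        hot_refs[ref.split("/")[2] if "//" in ref else ref] += 1

print(f"hotlinked image traffic: {hot_bytes / 1e9:.2f} GB")
for host, count in hot_refs.most_common(10):
    print(f"{count:8d}  {host}")
```

If the numbers are significant, the referer-based protection built into nginx (valid_referers), apache, or lighttpd is the usual fix; just coordinate with whoever owns the partner integrations first, as the story above shows.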
d-e. A partner company pulls your data, e.g. through an API data interface provided to a partner, or through a purchased CDN service.

The most common case is a purchased CDN service. For example, the CDN brings up a new node (possibly dozens of machines) and pulls data straight from our origin in the IDC (the better-behaved ones at least do it at night). The pull makes the origin's traffic spike, and in bad cases brings the service down. Several CDN companies have had this problem. I hope the CDN vendors who read this will improve; after all, the customer is king.

Of course, partnerships with China Telecom, China Unicom, GOOGLE, BAIDU, Ciba (词霸) and similar companies can also bring sudden traffic surges, including search-engine crawlers fetching data from partner sites. Sometimes the bandwidth itself is not even high, yet the servers or the database cannot cope: crawlers especially love to hit our on-site search, and the search feature in early open-source packages such as Discuz and various CMSes queries the database with site-wide LIKE '%...%' scans, so a handful of crawlers can take the whole thing down. That is beyond the scope of this article, though; solutions another time.
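To tell whether an origin spike comes from a new CDN node, a partner's API pulls, or a crawler, a quick tally of the access log by client IP and User-Agent usually settles it. Again this is only a sketch, under the same assumptions as above (combined log format, hypothetical log path):

```python
#!/usr/bin/env python3
"""Rank access-log clients by bytes served -- helps spot CDN origin pulls and crawlers (sketch)."""
from collections import defaultdict

LOG_PATH = "/var/log/nginx/access.log"      # assumption: adjust to your setup

bytes_by_client = defaultdict(int)
hits_by_client = defaultdict(int)

with open(LOG_PATH, errors="replace") as f:
    for line in f:
        parts = line.split('"')
        if len(parts) < 6:
            continue                          # not a combined-format line
        prefix = parts[0].split()
        if not prefix:
            continue
        ip = prefix[0]                        # leading field is the client IP
        status_bytes = parts[2].split()       # ' 200 1234 ' between request and referer
        sent = int(status_bytes[1]) if len(status_bytes) > 1 and status_bytes[1].isdigit() else 0
        agent = parts[5][:60]                 # user agent, truncated for display
        bytes_by_client[(ip, agent)] += sent
        hits_by_client[(ip, agent)] += 1

top = sorted(bytes_by_client.items(), key=lambda kv: kv[1], reverse=True)[:15]
for (ip, agent), total in top:
    print(f"{total / 1e9:7.2f} GB  {hits_by_client[(ip, agent)]:8d} hits  {ip:15s}  {agent}")
```

A few IPs pulling gigabytes with a CDN or spider User-Agent point to origin pull or crawling rather than an attack.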

f. There are a few other causes, but they are uncommon, so I will not go into them.

The points above cover the common cases; other causes are rare, so I will stop here. Typing all this is honestly no small effort.

【Training Your Inner Strength】

First, let me stress: keep working on your composure, and do not panic when a problem hits. I have met quite a few people who, in the middle of an emergency, go completely blank, their hands shaking too hard to type. How can you fix a fault in that state? Having the boss watching over your shoulder makes it even worse; a few students have even cried to me about it, because a few minutes of downtime costs tens of thousands of yuan and they felt they could not bear the responsibility.

In fact those reactions are perfectly normal; there is nothing wrong with you. I went through the same thing, and only built up composure by challenging myself again and again.
I hope you will do your homework ahead of time instead of only starting to think about solutions once a problem has already hit; improvised responses are always frantic, even for veterans. With contingency plans and drills prepared in advance, you will be far calmer when something breaks. This applies to every corner of ops: DB, web, backup, recovery, traffic, and so on.
【Mending the Fold After Losing a Sheep】

After an incident, do a thorough post-mortem so that next time you can respond faster; better still, make sure there is no next time. Ops people really do have it rough: developers are done when they clock out, while we keep our phones on 7x24 and flinch at every SMS. I even saw a portal DBA post on Weibo that an alert text can interrupt him while making love. Concretely:

1. Optimize ops procedures and standards in advance.
2. Optimize the site architecture in advance and remove single points of failure.
3. Keep enough spare bandwidth and server capacity in reserve to control the risk.
4. Build a complete monitoring strategy and response mechanism (a small sketch follows below).
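On point 4, here is a bare-bones illustration of the kind of check that would have paged the on-call engineer in the opening cases: sample the uplink counters and raise an alarm when the rate crosses a threshold. The interface name, threshold, and alert action below are hypothetical placeholders; in practice this belongs in whatever monitoring system you already run (Zabbix, Nagios, and the like).

```python
#!/usr/bin/env python3
"""Naive bandwidth-threshold alarm loop (illustration only, not production monitoring)."""
import subprocess
import time

IFACE = "eth0"          # hypothetical uplink interface
LIMIT_MBPS = 800        # hypothetical alarm threshold
INTERVAL = 10           # sampling interval in seconds

def rx_tx_bytes(iface):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise RuntimeError(f"interface {iface} not found")

prev = rx_tx_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    cur = rx_tx_bytes(IFACE)
    mbps = max(cur[0] - prev[0], cur[1] - prev[1]) * 8 / INTERVAL / 1_000_000
    prev = cur
    if mbps > LIMIT_MBPS:
        # placeholder alert action: swap in your SMS gateway or alerting hook
        subprocess.run(["logger", "-p", "user.crit",
                        f"bandwidth alarm: {IFACE} at {mbps:.0f} Mbit/s"])
```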

Try never to fight an unprepared battle. As the art of war says: know yourself and know your enemy, and you will never be in peril in a hundred battles. Why should ops be any different?

 

 
