0x00: Foreword
I read through domestic researchers' translations and analyses of a BlackHat talk (HTTP Desync Attacks: Smashing into the Cell Next Door).
It was a rewarding read, so here I pair that material with a CTF challenge to consolidate it:
[RoarCTF 2019]Easy Calc
Practice address: https://buuoj.cn/challenges
0x01: HTTP request smuggling vulnerability
In today's network environment, different servers implement the RFC standards (the documents that specify almost all of the Internet's important protocols) in different ways.
Generally speaking, packets travel the network in formats that strictly follow these protocols. But precisely because of subtle implementation differences, different servers may produce different results when processing the same HTTP request, and that is where the security risk arises.
Let's first look at the protocol this traffic follows: HTTP/1.1.
In the ancient protocol, HTTP/1.0, every HTTP request from a client required its own TCP interaction (three-way handshake) with the server. Given the network environment of the time, when a web page was not yet composed of many separate resources, this was still adequate. Today it would consume far too many server resources and no longer fits, so HTTP/1.1 appeared.
HTTP/1.1 brought two important protocol features: Keep-Alive and Pipelining.
Keep-Alive should look familiar; you often see Connection: close / Connection: keep-alive in packets.
Keep-Alive means adding the special request header Connection: Keep-Alive to an HTTP request, telling the server not to close the TCP connection after handling this request but to reuse it for subsequent HTTP requests to the same server. A single TCP handshake then serves many requests, which reduces server overhead, saves resources, and speeds up access. This feature is enabled by default in HTTP/1.1.
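Keep-Alive is easy to see in action by serving two requests over one TCP connection. Below is a minimal sketch using only Python's standard library; the throwaway local server, its handler, and the OS-assigned port are of course illustration scaffolding, not anything from the original article:

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"      # HTTP/1.1 => keep-alive by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):      # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests over ONE TCP connection: no second handshake is needed.
sock = socket.create_connection(server.server_address)
responses = 0
for _ in range(2):
    sock.sendall(b"GET / HTTP/1.1\r\nHost: test\r\nConnection: keep-alive\r\n\r\n")
    data = b""
    while b"ok" not in data:           # read until this response's body arrives
        data += sock.recv(4096)
    responses += 1
sock.close()
server.shutdown()
print(responses)  # → 2
```

If the server closed the connection after the first response, the second `sendall` on the same socket would fail; with Keep-Alive both requests succeed on one connection.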
Pipelining
With Pipelining, the client can push its HTTP requests through like a pipeline, without waiting for the server's responses. After receiving the requests, the server must strictly match requests to responses on a first-in, first-out basis and send the responses back in that order.
HTTP Pipelining is thus an asynchronous technique: multiple HTTP requests are submitted in a batch without waiting for responses. The figure (omitted here) compares requests with and without Pipelining.
This means the front end and the back end must agree on the boundary of each message. Otherwise, an attacker can construct and send a special packet that looks like one request to the front end but is interpreted as two different HTTP requests by the back end.
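The FIFO splitting described above can be sketched with a toy parser that cuts a pipelined byte stream into requests using Content-Length. This is a deliberately simplified model of how a server finds message boundaries, not a real HTTP parser:

```python
def split_requests(stream: bytes):
    """Cut a pipelined HTTP/1.1 byte stream into (headers, body) pairs,
    framing each request body by its Content-Length (0 when absent)."""
    requests = []
    while stream:
        head, sep, rest = stream.partition(b"\r\n\r\n")
        if not sep:                          # incomplete request: stop
            break
        length = 0
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value)
        requests.append((head, rest[:length]))
        stream = rest[length:]
    return requests

# Two requests pipelined into one byte stream; FIFO parsing recovers both.
pipelined = (
    b"POST /a HTTP/1.1\r\nHost: x\r\nContent-Length: 5\r\n\r\nhello"
    b"GET /b HTTP/1.1\r\nHost: x\r\n\r\n"
)
print(len(split_requests(pipelined)))  # → 2
```

The whole attack class comes down to two such parsers (front end and back end) computing different `length` values for the same bytes.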
To improve user experience, many site owners use a CDN for acceleration. A CDN works much like an nginx reverse proxy: a reverse proxy server sits in front of the origin site, and when users request static resources, the proxy server serves them from the appropriate resource server directly.
Because of Keep-Alive, and to relieve server pressure, the reverse proxy server and the back-end origin server generally reuse their TCP connection. This is easy to understand: users are widely distributed and connect at unpredictable times, so their TCP connections are hard to reuse, whereas the IP addresses of the proxy server and the back-end origin server are relatively fixed.
When we send a carefully constructed HTTP request to the proxy server, the two servers' differing implementations come into play: the proxy server may treat it as one HTTP request and forward it to the back-end origin server, but after parsing, the origin server treats only part of it as a normal request; the remainder is a smuggled request. When that remainder interferes with a normal user's request, an HTTP smuggling attack has been achieved.
0x02: Four attack methods for HTTP smuggling attacks
1: CL is not 0 (Content-Length is not 0)
Any HTTP request that does not normally carry a request body may be affected; a GET request is used as the example here.
Suppose the front-end proxy server allows GET requests to carry a request body, while the back-end server does not: it ignores the Content-Length header of a GET request entirely. This can lead to a smuggled request.
A constructed example request:
```
GET / HTTP/1.1\r\n
Host: example.com\r\n
Content-Length: 43\r\n
\r\n
GET /secret HTTP/1.1\r\n
Host: example.com\r\n
\r\n
```
\r\n means a line break: Windows uses \r\n, Unix uses \n, and classic Mac OS used \r.
Vulnerability trigger
The front-end server receives the request, reads its Content-Length, and judges it to be one complete request, then forwards it to the back-end server. Because the back-end server does not process Content-Length, and because of Pipelining, it believes it has received two requests, namely:
The first:
```
GET / HTTP/1.1\r\n
Host: example.com\r\n
```
The second:
```
GET /secret HTTP/1.1\r\n
Host: example.com\r\n
```
The second request is the smuggled one: HTTP request smuggling has been successfully exploited.
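The divergence above can be modeled with one toy parser run in two modes: honoring Content-Length on GET (the front end) versus ignoring it (the back end). Both are simplified sketches of the servers' behavior, not real HTTP implementations:

```python
# The same bytes as the example request above.
raw = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 43\r\n"
    b"\r\n"
    b"GET /secret HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"\r\n"
)

def count_requests(stream: bytes, honor_cl: bool) -> int:
    """Count how many requests a (toy) parser sees in the stream."""
    count = 0
    while stream:
        head, sep, rest = stream.partition(b"\r\n\r\n")
        if not sep:
            break
        length = 0
        if honor_cl:                     # front-end behavior: body per CL
            for line in head.split(b"\r\n"):
                if line.lower().startswith(b"content-length:"):
                    length = int(line.split(b":", 1)[1])
        count += 1                       # back-end behavior: GET body ignored
        stream = rest[length:]
    return count

print(count_requests(raw, honor_cl=True))   # → 1: front end sees one request
print(count_requests(raw, honor_cl=False))  # → 2: back end sees two
```

Content-Length: 43 exactly covers the smuggled `GET /secret` request (43 bytes), so the front end consumes everything as one request while the back end splits it in two.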
2: CL-CL
RFC 7230 specifies that a request containing two Content-Length headers with different values should be rejected with a 400 error. Some servers do not strictly implement this. Assume that neither the intermediate proxy server nor the back-end origin server returns a 400 error for such a request, but that the proxy processes the request according to the value of the first Content-Length, while the back-end origin server uses the value of the second.
```
POST / HTTP/1.1\r\n
Host: example.com\r\n
Content-Length: 8\r\n
Content-Length: 7\r\n
\r\n
12345\r\n
a
```
Trigger process
The intermediate proxy server reads a packet length of 8 and forwards the entire packet above, unchanged, to the back-end origin server.
The back-end server reads a length of 7. After consuming the first 7 characters it considers the request fully read, generates the corresponding response, and sends it. The letter a is left in the buffer: to the back-end server, this a is the beginning of the next request, whose transmission has not yet finished.
If another, normal user now sends a request to the server:
```
GET /index.html HTTP/1.1\r\n
Host: example.com\r\n
```
Because the TCP connection between the proxy server and the origin server is generally reused, the normal user's request is appended after the letter a. The request the back-end server actually processes is:
```
aGET /index.html HTTP/1.1\r\n
Host: example.com\r\n
```
The spliced packet then makes the server return an error such as "request method aGET not found".
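The splice can be reproduced with plain byte arithmetic, using the example's values (proxy trusts Content-Length: 8, origin trusts Content-Length: 7). This is a toy model of the two servers' views, not real server code:

```python
body = b"12345\r\na"

forwarded = body[:8]               # proxy forwards the full 8-byte body
consumed = forwarded[:7]           # origin reads 7 bytes: b"12345\r\n"
leftover = forwarded[7:]           # b"a" stays in the origin's buffer

# A normal user's request arrives next on the same reused TCP connection:
user_request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
spliced = leftover + user_request

print(spliced.split(b" ", 1)[0])   # → b"aGET": an unknown request method
```

The origin parses `aGET` as the method token of the spliced request, hence the "method not found"-style error.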
3: CL-TE
CL-TE: when a request packet containing both headers arrives, the front-end proxy server processes only the Content-Length header, while the back-end server complies with RFC 2616, ignores Content-Length, and processes the Transfer-Encoding header.
```
POST / HTTP/1.1\r\n
Host: example.com\r\n
Content-Length: 6\r\n
Transfer-Encoding: chunked\r\n
\r\n
0\r\n
\r\n
a
```
Sending this request several times in a row yields the error response.
Trigger process
Since the front-end server processes Content-Length, this request is complete as far as it is concerned; the request body has length 6, namely:
```
0\r\n
\r\n
a
```
When the proxy server forwards the packet to the back-end server, the back-end processes Transfer-Encoding. When it reads
```
0\r\n
\r\n
```
it considers the body fully read.
The remaining letter a is left in the buffer, waiting for the next request. When we send the request repeatedly, the requests are spliced together on the back-end server into something like:
```
aPOST / HTTP/1.1\r\n
Host: example.com\r\n
Content-Length: 6\r\n
Transfer-Encoding: chunked\r\n
```
The server then errors out while parsing, producing HTTP request smuggling.
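The same six body bytes framed two ways can be shown directly. The framing logic below is simplified (no full chunked decoder), just enough to mirror the two servers' views described above:

```python
# The CL-TE request body from the example.
body = b"0\r\n\r\na"

# Front-end view: Content-Length: 6 covers every byte, including the final a.
front_view = body[:6]
assert front_view == body           # all 6 bytes belong to this request

# Back-end view: chunked encoding ends at the zero-size chunk "0\r\n\r\n".
terminator = b"0\r\n\r\n"
end = body.index(terminator) + len(terminator)
leftover = body[end:]

print(leftover)   # → b"a", left in the buffer to prefix the next request
```

That buffered `a` is what turns the next forwarded request into `aPOST ...`.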
4: TE-CL
TE-CL: when a request packet containing both headers arrives, the front-end proxy server processes the Transfer-Encoding header, while the back-end server processes the Content-Length header.
```
POST / HTTP/1.1\r\n
Host: example.com\r\n
Content-Length: 4\r\n
Transfer-Encoding: chunked\r\n
\r\n
12\r\n
aaaaaaaaaaaaaaaaaa\r\n
0\r\n
\r\n
```
Trigger process
The front-end server processes Transfer-Encoding. When it reads
```
0\r\n
\r\n
```
it considers the read complete.
At this point the request is complete as far as the proxy server is concerned, so it is forwarded to the back-end server. The back-end server processes the Content-Length header; since the body length is 4, once it has read
```
12\r\n
```
it considers the request finished. The remaining data is treated as another request:
```
aaaaaaaaaaaaaaaaaa\r\n
0\r\n
\r\n
```
This triggers an error, achieving HTTP request smuggling.
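As with CL-TE, the TE-CL body can be framed both ways with simplified logic (again, not a full chunked decoder, just a model of the two views):

```python
# The TE-CL request body from the example: one 0x12-byte chunk, then the 0 chunk.
body = b"12\r\n" + b"a" * 18 + b"\r\n" + b"0\r\n\r\n"

# Front-end view: a chunk of size 0x12 (18 bytes) followed by the zero chunk,
# so the entire body belongs to one request.
chunk_size = int(body[:body.index(b"\r\n")], 16)
print(chunk_size)          # → 18

# Back-end view: Content-Length: 4 frames only the first 4 bytes as the body.
consumed = body[:4]        # b"12\r\n"
smuggled = body[4:]        # interpreted as the start of the next request

print(smuggled[:6])        # → b"aaaaaa"
```

The chunk-size line `12` is hexadecimal, which is why the chunk holds 18 `a` characters; the back end, trusting Content-Length: 4, leaves everything after `12\r\n` behind as a smuggled request.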
0x03: What the CTF challenge tests
The CTF challenges encountered so far test using HTTP smuggling to bypass a WAF.
The challenge presents a calculator input box. Viewing the page source reveals the calc.php page, and visiting it shows its source.
Testing shows that the code above is only a simple filter; the back end still runs a WAF that filters out letters.
This is where HTTP smuggling can be used for the bypass.
Before construction (figure omitted)
After construction (figure omitted)
As the figures show, adding two Content-Length headers is enough to perform HTTP smuggling.
In effect, the first Content-Length tricks the WAF into believing the request is a POST form; the request passes the check, reaches the back-end server for processing, and the phpinfo page is successfully rendered.
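A sketch of crafting such a duplicate-Content-Length request as raw bytes; the host, path, and body below are placeholders for illustration, not the actual challenge payload:

```python
# Build a request carrying two conflicting Content-Length headers, the shape
# of the WAF bypass described above (target details are hypothetical).
body = b"num=1"
request = (
    b"POST /calc.php HTTP/1.1\r\n"
    b"Host: target.example\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Content-Length: 0\r\n"       # the conflicting second header
    b"\r\n"
) + body

print(request.count(b"Content-Length:"))  # → 2
```

Such bytes must be sent over a raw socket: high-level HTTP clients store headers in a mapping and will not emit the duplicate header.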
0x04: Summary
Although HTTP smuggling vulnerabilities rarely come up in real-world vulnerability hunting, and their trigger conditions are demanding, they are still a good way to break through. In day-to-day testing, you can add two Content-Length headers and watch the response packet.
For developers and security operations staff, strictly following the standards laid down in the RFCs is the way to keep smuggling vulnerabilities from existing.