Reverse engineering the Yunpian slider captcha

The URL of the slider in question is as follows (base64-encoded):

aHR0cHM6Ly93d3cueXVucGlhbi5jb20vcHJvZHVjdC9jYXB0Y2hh

The actual URL is obtained by base64-decoding this string.
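For convenience, the string can be decoded in a couple of lines of Node.js:

```javascript
// Base64-decode the obfuscated URL string from above
const encoded = 'aHR0cHM6Ly93d3cueXVucGlhbi5jb20vcHJvZHVjdC9jYXB0Y2hh';
const url = Buffer.from(encoded, 'base64').toString('utf8');
```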

Here we take the embedded slider as an example. Most sliders expose two interfaces: one to fetch the captcha and one to verify it. Open the network panel, trigger the captcha, and lock onto the two requests.

After parsing the parameters of the two interfaces, we find that the get request carries four parameters. Once the captcha is verified, there is one more parameter, a token.

We first analyze the get request with the call stack and find that all of the URL parameters are generated in one place. Following each parameter: cb is a random number, i and k are derived from the encrypted e data, where e contains some browser fingerprint information, and captchaId is a fixed value. So we only need to analyze the cb, i, and k parameters.
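While working through the other parameters, cb can simply be stubbed. A hypothetical placeholder (the article only says cb is a random number, so the exact format here is an assumption):

```javascript
// Hypothetical cb stub; the real format should be lifted from the site's JS
function genCb() {
    return Math.floor(Math.random() * 1e15).toString();
}
```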

Next, follow the cb parameter and lift its generation method. Then check where the fingerprint fields in e come from. The easiest approach is to search for one of the fingerprint keys globally; this shows that the fingerprint is generated from browser environment information. You can lift that code, or simply fix the entire generated value, since it is derived from a static environment.
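For illustration only, a fingerprint object of this kind usually aggregates navigator and screen fields. The field names below are hypothetical; the real list must be lifted from the site's code as described above:

```javascript
// Hypothetical fingerprint collector (field names are illustrative, not the site's)
function collectFingerprint(nav, scr) {
    return {
        ua: nav.userAgent,
        lang: nav.language,
        platform: nav.platform,
        resolution: scr.width + 'x' + scr.height,
    };
}
```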
Then we go back to where i and k are generated. i is AES encryption, whose secret key and initialization vector are both a 16-character random string, and k is RSA encryption; the RSA public key can also be found nearby.
Here we try to restore these two encryptions in JavaScript, using the Node.js crypto module and the jsencrypt library. A simple restoration looks like this:

// AES encryption
const crypto = require('crypto');
const { JSEncrypt } = require('jsencrypt');

function encrypt(plaintext, key, iv) {
	const keyBuffer = Buffer.from(key, 'utf8');
	const ivBuffer = Buffer.from(iv, 'utf8');
	const cipher = crypto.createCipheriv('aes-128-cbc', keyBuffer, ivBuffer);
	let encrypted = cipher.update(plaintext, 'utf8', 'base64');
	encrypted += cipher.final('base64');
	return encrypted;
}

// RSA encryption
function encryptRSA(N) {
	const s = new JSEncrypt();
	s.setPublicKey('MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDnOWe/gs033L/2/xR3oi6SLAMPBY5VledUqqH6dbCNOdrGX4xW+1x6NUfvmwpHRBA2C7xWDDvOIldTl0rMtERTDy9homrVqEcW6/TY+dSVFL3e2Yg2sVaehHv7FhmATkgfC2FcXt8Wvm99QpKRSrGKpcFYJwOj2F8hJh+rTG0IPQIDAQAB');
	const encryptedData = s.encrypt(N);
	return encryptedData;
}


Then we assemble it into Python code.
After the get request succeeds, its analysis is complete. Next, we analyze the verify request, continuing to follow the stack. The encryption entry point is the same, so we only need to find the token parameter and restore the track encryption.

Looking carefully at the data returned by the get request, the token is actually in the get response. Checking the stack parameters, the e here is slightly different: most importantly, there are two extra parameters, distanceX and points. Preliminary analysis shows that distanceX is a normalized offset and points is the coordinate trajectory. Continuing to follow the stack to where the parameters are passed in, distanceX is r, and r is calculated from the width of the large background image together with the width and offset of the gap image. After multiple tests, the width of the large background image never changes, and the width of the gap image varies only slightly, so these two values can be fixed.

Note: after testing, the width of the small image is sometimes 59 and sometimes 60. The variation is small enough to fix the value, because the back end tolerates a float of a few pixels in the final verification offset anyway; there is no guarantee of landing exactly in the middle of the gap every time.

	this.imgWidth = 304;          // width of the large background image
	this.alertImgTagwidth = 59;   // width of the gap image (sometimes 60)
	distanceX = (this.imgWidth - this.alertImgTagwidth) * (this.offsetX / (this.imgWidth - 42)) / 304;
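Plugging in the fixed widths, the formula above reduces to a single function. A sketch (the function name is mine; the constants are the ones just observed):

```javascript
// distanceX normalization from the snippet above:
// imgWidth = 304 (large background image), gapWidth = 59 (gap image)
function normalizeDistanceX(offsetX, imgWidth = 304, gapWidth = 59) {
    return (imgWidth - gapWidth) * (offsetX / (imgWidth - 42)) / 304;
}
```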

Next, for the slider trajectory, you can either keep following the stack and lift the generating code, or simulate it yourself. After several breakpoint passes, we found that each point's first element is the x distance moved, the second is the y distance moved, and the third is the timestamp delta. A rough simulation looks like this:

	function reducePoints(offsetX) {
		// starting point: [x, y, timestamp]; these initial values can be fixed
		var points = [[800, 1979, 5]];
		var min = 10;
		var max = 30;
		for (var i = 1; i < 21; i++) {
			// x advances evenly toward the target offset; y drifts by 1px per step
			var x = 800 + (offsetX / 20) * i;
			// random time delta between min and max
			var randomInt = Math.floor(Math.random() * (max - min + 1)) + min;
			var point = [x, points[i - 1][1] + 1, points[i - 1][2] + randomInt];
			points.push(point);
		}
		return points;
	}

The initial x, y, and timestamp here can be fixed. The loop runs 20 times, so a total of 21 points is generated.

Note: in my experiments, if too few trajectory points are generated, verification fails. I did not test the exact threshold; generating 21 points is basically always fine.

Finally, put the code together and integrate it, then use Python to send the requests. One thing to note: the large image returned by the get request is a different size from the image rendered in the web page, so the offset found on it must be rescaled. The calculation is simple: offset / width of the large image * width of the web-page image.
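That scaling step can be sketched as follows (304 is the on-page image width noted earlier; the large-image width is whatever the get response returns):

```javascript
// Scale the gap offset measured on the downloaded image to on-page coordinates
function scaleOffset(offsetOnLarge, largeImageWidth, pageImageWidth = 304) {
    return offsetOnLarge / largeImageWidth * pageImageWidth;
}
// e.g. an offset of 150px on a 600px-wide download maps to 76px on the page
```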

Finally, the verification succeeds.


Origin blog.csdn.net/qq_36551453/article/details/135230857