Problems when casting char to unsigned int in C

While working on a socket communication project today, I had to read the width and height of an image out of a packet header. The image came out the wrong size and displayed distorted. I'm writing this down as a note to myself.

The width and height of the image are each represented by four bytes; the packet carries the width and then the height, starting at the fifth byte. Without thinking much, I reached for bit operations. The code I wrote the first time was:

int mapwidth = 0;
int mapheight = 0;

// wsabuf.buf is of type char
for (int i = 0; i <= 2; i++)
    mapwidth += ((unsigned int)wsabuf.buf[i + 4]) << (8 * (3 - i));
for (int i = 0; i <= 3; i++)
    mapheight += ((unsigned int)wsabuf.buf[i + 8]) << (8 * (3 - i));

After debugging I found that the received packet itself was completely correct: the height came out right (1280) but the width was wrong. The four width bytes were 0, 0, 2, 208, so the result should be 256*2 + 208 = 720, but I got 512. Apparently the last byte was not being added in, so I wondered whether shifting left by 0 was somehow broken. But then why was the height correct? Checking carefully, the four height bytes were 0, 0, 5, 0; the last byte is 0, so a broken shift-by-0 would have no visible effect there. By now I really suspected the shift-by-0, so I ran an experiment, adding these lines before the original code:

//////////test////////////
unsigned char a[4];
memcpy(a, &(wsabuf.buf[4]), 4);
unsigned int j = ((unsigned int)wsabuf.buf[7]) << 0;
/////////////////////

Viewing the debug results in hexadecimal:

Array a is: 0x00, 0x00, 0x02, 0xd0

j = 0xffffffd0

With one slight change, declaring j as int instead:

//////////test////////////
unsigned char a[4];
memcpy(a, &(wsabuf.buf[4]), 4);
int j = ((unsigned int)wsabuf.buf[7]) << 0;
/////////////////////

The result is exactly the same.

Now I was even more curious. Suppose j were assigned 0x00000200 directly; what would the result be?

int j = (0x00000200) << 0;

In the debugger, the result is 0x00000200.

And if the constant is 0x000002d0, the result of the shift is still 0x000002d0.

So my hunch was wrong. Shifting left by 0 really does leave every bit exactly where it was, which pins the problem on the cast in ((unsigned int)wsabuf.buf[7]).

wsabuf.buf is of type char, but for ease of observation I had been looking at array a, which is unsigned char. Change the statement int j = ((unsigned int)wsabuf.buf[7]) << 0; to:

int j = ((unsigned int)a[3]) << 0; and the result is correct. The byte 0xd0 is the bit pattern 1101 0000; read as a signed char that is -48, with the highest bit set to 1. So when the char is cast to unsigned int, the upper bits are all filled with 1s (sign extension), and the value becomes 0xffffffd0.

The rule for widening casts, from a type with fewer bytes to one with more, is that the conversion preserves the value, not the bit pattern. Whether the small and large types are both signed, both unsigned, or the large one is signed, the number represented must come out the same: the char above holds -48, so converted to int it must still be -48; an unsigned char holding 208 in decimal must convert to 208. When the signedness differs, as in converting a char to unsigned int, the compiler looks at the highest bit of the char: if it is 1, the new upper bits are filled with 1s, otherwise with 0s. In short, a widening conversion tries to keep the decimal value correct, not the original binary encoding, so to treat bytes as raw unsigned data you must go through unsigned char first.
