Exploring the show_bytes function from CSAPP Chapter 2

Chapter 2 of CSAPP presents a helper function, show_bytes, for observing the bit patterns of data. Concretely, the book implements it as follows:

#include <stdio.h>
typedef unsigned char *byte_pointer;   /* view any object as a sequence of bytes */

void show_bytes(byte_pointer start, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
    {
        printf("%.2x", start[i]);
    }
    printf("\n");
}

void show_int(int x)
{
    show_bytes((byte_pointer)&x, sizeof(int));
}

void show_double(double x)
{
    show_bytes((byte_pointer)&x, sizeof(double));
}

void show_float(float x)
{
    show_bytes((byte_pointer)&x, sizeof(float));
}

The function is not hard to understand; anyone who knows C should be able to follow it. So after reading it, I thought I had grasped it on the first pass, and wrote the following:

#include <stdio.h>
typedef char *byte_pointer;

void show_bytes(byte_pointer start, size_t len)
{
    for (int i = 0; i < len; i++)
    {
        printf("%.2x", start[i]);
    }
    printf("\n");
}

void show_int(int x)
{
    show_bytes((byte_pointer)&x, sizeof(int));
}

void show_double(double x)
{
    show_bytes((byte_pointer)&x, sizeof(double));
}

void show_float(float x)
{
    show_bytes((byte_pointer)&x, sizeof(float));
}

After writing it, I immediately tried it with an int:

int main(void)
{
    int x=1;
    show_int(x);
}

VS2017 output:
[screenshot of the program output]
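For reference: on a little-endian x86/x64 target (VS2017's default), the int value 1 has the bit pattern 0x00000001 and is laid out in memory as the bytes 01 00 00 00, so the expected output is 01000000.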
It looked fine. So I tried a float:

int main(void)
{
    float x = 1.0f;
    show_float(x);
}

VS2017 output:
[screenshot of the program output]
The output had 14 hexadecimal digits, six more than the expected 8; in other words, three extra bytes.
Where did those three extra bytes come from?
Comparing my function with the one in the book, I found one difference: the book defines byte_pointer as unsigned char *, while I used char *.
But what could the problem be? Both char and unsigned char are a single byte: eight binary bits, two hexadecimal digits. Why would three extra bytes of output appear?
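To localize the issue, here is a minimal reproduction that prints each byte separately. It assumes IEEE-754 single precision and a little-endian machine, both of which hold for VS2017 on x86/x64:

#include <stdio.h>

int main(void)
{
    float x = 1.0f;            /* bit pattern 0x3f800000 */
    char *p = (char *)&x;      /* plain char pointer, as in my version */
    for (size_t i = 0; i < sizeof(float); i++)
    {
        printf("byte %u: %.2x\n", (unsigned)i, p[i]);
    }
    return 0;
}

On such a machine it prints:

byte 0: 00
byte 1: 00
byte 2: ffffff80
byte 3: 3f

Concatenated that is 0000ffffff803f: 2 + 2 + 8 + 2 = 14 digits, and the byte 0x80 alone accounts for the six extra characters.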
After some investigation, I found that the problem lies exactly in the difference between char and unsigned char (see: The difference between unsigned char * and char * in the C language).
The specific reason is this:
Although the C language does not specify whether char is signed, char is signed on most machines, and printf handles %.2x as follows: the char argument is first promoted to int, and that int is then rendered in hexadecimal. The promotion extends the bit pattern, and for a signed type the added bits copy the sign bit: this is "sign extension" (see my other blog post for the details).
Therefore, if the leading bit of the char's bit pattern is 1, extending it to int prepends 24 one-bits, i.e., three bytes of 0xff. Since the .2 in printf's %.2x is only a minimum precision, the whole extended value is printed, and six extra f's naturally appear in front of the byte.
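Accordingly, there are two easy fixes: define byte_pointer as unsigned char * like the book does, or mask the promoted value back down to one byte. Here is a minimal sketch of the masking variant (show_bytes_fixed is my own illustrative name, not the book's):

#include <stdio.h>

typedef char *byte_pointer;

void show_bytes_fixed(byte_pointer start, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        /* start[i] is promoted to int and may be sign-extended;
           & 0xff keeps only the low byte, so exactly two hex digits print */
        printf("%.2x", start[i] & 0xff);
    }
    printf("\n");
}

int main(void)
{
    float x = 1.0f;
    show_bytes_fixed((byte_pointer)&x, sizeof(float));   /* prints 0000803f */
    return 0;
}

Either way, every byte maps to exactly two hexadecimal digits, and the float test prints the expected 0000803f.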

Origin www.cnblogs.com/z-y-k/p/11890477.html