Why does int variable throw an error, but not an int literal when assigned to a byte variable?

coderboy :

I have recently begun to learn Java, and couldn't just understand one feature of the language.

When I write the code below I don't get any error (and sensibly I shouldn't!):

byte b = 10 * 2;

However when I type in the following code, the compiler throws an error:

int i = 10;
byte b = i * 2;

When the compiler can perform a check on 10 * 2 to ensure that it fits within the range of byte, why can't it also perform a check on i * 2 and see whether it fits within the range of byte?
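For concreteness, here is a small sketch of the behavior being asked about (class and variable names are illustrative); the last line shows the usual workaround, an explicit cast:

```java
public class ByteAssignment {
    public static void main(String[] args) {
        byte a = 10 * 2;          // compiles: constant expression, and 20 fits in a byte
        int i = 10;
        // byte b = i * 2;        // compile error: "incompatible types: possible lossy conversion from int to byte"
        byte c = (byte) (i * 2);  // compiles: the cast tells the compiler to narrow explicitly
        System.out.println(a + " " + c);
    }
}
```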

Has it got something to do with the lower-level representation of bits, or something related to memory?

tplusk :

I'm not positive on anything Java-specific, but any modern compiler performs constant folding to "fold" expressions that are made up entirely of constants. I.e., 10 * 2 folds to 20, so the compiler treats it as if you had typed byte b = 20;
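As a Java-specific footnote (this part is specified language behavior, not just an optimization): an implicit int-to-byte narrowing is allowed whenever the right-hand side is a compile-time constant expression whose value fits, and a final local variable initialized with a constant counts as one. So this sketch compiles, while dropping the final would make it an error:

```java
public class FinalConstant {
    public static void main(String[] args) {
        final int i = 10;   // constant variable: final, with a constant initializer
        byte b = i * 2;     // compiles: i * 2 is a constant expression (20), which fits in a byte
        // Without 'final' on i, this line would be a compile error.
        System.out.println(b);
    }
}
```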

It is not really practical for a compiler to try to optimize around variables. Even though in your example it is relatively simple to look at and know that i is 10, a compiler that tried to track what i holds at each point would have to maintain its own symbol table of runtime values and would essentially be an interpreter. Since Java is a compiled language, this defeats the purpose.

Elaborating:

There is a difference between a compiler and an interpreter. A compiler takes source code as input and writes lower-level code behind the scenes. When that code gets run, the operations and calculations are performed. Java is a compiled language, so its compiler is not doing much computation; it is just writing bytecode that can be run on a Java Virtual Machine. Python, on the other hand, is an interpreted language. When you run a Python program, it won't try to do any type conversion for i * 2 until it actually evaluates i * 2.

Now, sometimes compilers try to get smart and have built-in "optimizations." What this means is that instead of writing instructions that perform some operation at runtime, they compute the result at compile time and write fewer instructions (so the compiler does some computation to achieve that). In your example, rather than write instructions that load the number 10, load the number 2, multiply them, and then store the result, the compiler can multiply the 10 and 2 itself and just write an instruction to store that result.

When we introduce variables, it becomes harder for the compiler to figure out what a variable holds and optimize accordingly. The actual compiling program (the Java compiler) would have to remember that i is a variable currently holding the number 10. To optimize just enough to know that i * 2 can be assigned to the byte, the compiler would have to remember every single integer variable on the off chance that it gets assigned to a byte in a later expression - at that point the optimization isn't really worth it, as the compiler is spending extra computation (extra work to compile) that doesn't really give any benefit. A symbol table (mentioned above) is essentially a table remembering the variables and what their values are.
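This is also why the compiler is conservative here: narrowing an int to a byte can silently lose information at runtime, and the explicit cast is how you acknowledge that. A small sketch of what that loss looks like (values are illustrative):

```java
public class NarrowingLoss {
    public static void main(String[] args) {
        int i = 100;
        byte b = (byte) (i * 2);  // 200 does not fit in a signed byte (-128..127)
        System.out.println(b);    // only the low 8 bits (0xC8) are kept, read as -56
    }
}
```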
