' When compiled to 32-bit code, this prints "64 bit all ones: FFFFFFFFFFFFFFFF"
' When compiled to 64-bit code, this prints "64 bit all ones: FFFFFFFF", as it should.
dim as ulongint N
'N = -1 ' Always FFFFFFFFFFFFFFFF
N = &HFFFFFFFF ' My coding mistake which tickled the bug
print "64 bit all ones: "; hex(N)
By default (when no literal suffix is given), an integer literal is typed as INTEGER (32 or 64 bits, depending on the compiler). For the 32-bit compiler only, if the literal value does not fit in 32 bits, the LONGINT type is used by default instead.
For the 32-bit compiler:
- '&hFFFFFFFF' is considered a 32-bit INTEGER, and thus corresponds to '-1'.
- To force the ULONGINT type for such a literal, you must make the type explicit by adding a suffix to the literal value ('ull'): '&hFFFFFFFFull'
(note: '&hFFFFFFFFu' or '&hFFFFFFFFul' also work, forcing the UINTEGER or ULONG type onto the literal value)
Thanks for the reply. I still think it's a bug. Surely, if a programmer is not trying to be tricky, they should get the same result regardless of the target architecture? The 32-bit compiler knows that the variable is a 64-bit unsigned integer, so filling in the spare upper bits with ones seems a bit crazy to me.
This is because an integer literal expressed in hexadecimal notation can be ambiguous between a positive value and a negative one. This is never the case for an integer literal expressed in decimal notation, because there the sign is always explicit:
' When compiled to 32-bit code, this prints "64 bit all ones: FFFFFFFFFFFFFFFF"
' When compiled to 64-bit code, this prints "64 bit all ones: FFFFFFFF", as it should.
dim as ulongint N
'N = -1 ' Always FFFFFFFFFFFFFFFF
N = &HFFFFFFFF ' My coding mistake which tickled the bug
print "64 bit all ones: "; hex(N)
N = 4294967295 '' or N = &H100000000 - 1
print "64 bit all ones: "; hex(N)