jevans4949 wrote: Ultimately, I suppose the solution would be to build specific big-endian and little-endian integer types into the compiler, which could then generate processor-specific code to handle each type.
Most variables internal to the program would continue to be "don't care" format.
erm, the last thing I want is to use 68k dwords on an x86 machine; it would be horribly slow. I'll stick to converting the data to the CPU's native endianness and working with that.
I think you miss my point. If you can tell the compiler the endianness of a field in an external file format, e.g. PNG or BMP, along with the target machine for the compile, then the compiler can generate the bswap (or whatever the 68k equivalent might be) only when needed.
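By hand, the access code such a compiler might emit for a big-endian field on a little-endian target looks something like the sketch below (the helper names are my own; nothing here is a real compiler extension). On x86, a decent optimizer typically collapses these shifts into a single bswap:

```c
#include <stdint.h>

/* Read a 32-bit big-endian field from a file buffer, endian-independently. */
static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

/* Write a native-format value back out as big-endian. */
static void store_be32(unsigned char *p, uint32_t v)
{
    p[0] = (unsigned char)(v >> 24);
    p[1] = (unsigned char)(v >> 16);
    p[2] = (unsigned char)(v >>  8);
    p[3] = (unsigned char)v;
}
```

On a big-endian target the same source would compile to a plain load and store, which is exactly the point: the swap is the compiler's problem, not the programmer's.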
Obviously, if you were referencing a field heavily and needed the speed, your program would copy the input field into a local variable in native format, then copy the local value back to the output record immediately before rewriting it. Just the same as if the number were in character format.
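That hoisting pattern might look like this (the chunk layout and the helpers are made up for illustration; the swap happens exactly once on the way in and once on the way out, never inside the hot loop):

```c
#include <stdint.h>

static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}

static void store_be32(unsigned char *p, uint32_t v)
{
    p[0] = (unsigned char)(v >> 24);
    p[1] = (unsigned char)(v >> 16);
    p[2] = (unsigned char)(v >>  8);
    p[3] = (unsigned char)v;
}

/* Hypothetical on-disk record: the field stays big-endian in the file. */
struct chunk { unsigned char crc_be[4]; };

void accumulate(struct chunk *c, const unsigned char *data, int n)
{
    uint32_t crc = load_be32(c->crc_be);   /* one swap on input       */
    for (int i = 0; i < n; i++)
        crc = (crc << 1) ^ data[i];        /* native-format hot loop  */
    store_be32(c->crc_be, crc);            /* one swap on write-back  */
}
```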
Of course, if there is no intention that a file will ever be ported from the system it's created on, then data fields can continue to be in native format.
The principle could be extended to other formats, e.g. EBCDIC character coding, other popular floating point formats, packed decimal formats ... I even encountered a mini-computer with a 6-byte integer format, with the lower 4 bytes big-endian followed by the upper 2 bytes big-endian!
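Even that oddball 6-byte format is mechanical to decode once the layout is pinned down. Going only by the description above (low 32 bits first, big-endian, then the high 16 bits, big-endian), a decoder would look roughly like:

```c
#include <stdint.h>

/* Decode the 6-byte integer described above: bytes 0..3 hold the low
   32 bits big-endian, bytes 4..5 hold the high 16 bits big-endian.
   The layout is reconstructed from the post, not from any real manual. */
static uint64_t load_mixed48(const unsigned char *p)
{
    uint32_t low  = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
                  | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    uint16_t high = (uint16_t)(((unsigned)p[4] << 8) | p[5]);
    return ((uint64_t)high << 32) | low;
}
```

Which again argues for the compiler doing it: nobody wants to hand-write (and debug) that for every field.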