Your PC is probably little endian: the least significant *byte* (not digit) is stored first in memory. Granted, that's backwards from how most people think about numbers and how we write them. Intel must have had their reasons; I can think of maybe one or two.
In short, you usually don't correct for it. The computer (or compiler) isn't confused about the endianness; it knows how to do the proper thing with a number of a particular width. There's no need to convert unless you're dealing with data from outside your machine's architecture. Data sent over a network, for example, is typically big endian and needs to be converted before it goes on the wire.
There's no way to declare a pointer as pointing to a value of a particular endianness. An interesting idea if you were dealing with in-memory data from another architecture, but other than that I can't think of a use case.
If you are parsing a little-endian number one byte at a time, you read it like this pseudocode:
Code: Select all
number = 0
for x = 0 to numbytes - 1
    number = number or (read_byte() shl 8*x)
next
If one wants to convert numbers for transmission over a network using a binary protocol, there are C library functions like htons(), htonl(), etc. that convert a number *to* big endian, and the opposite functions ntohs(), ntohl(), etc. These functions are portable and make code work across platforms with different endianness. The n means "network" (big endian) and the h means "host", which is whatever endianness your current processor uses. On big-endian machines the hton family are no-ops.
I remember working with FAT12 disk structures years ago. As I recall, it stored a pair of 12-bit cluster numbers in a three-byte sequence, little endian of course. So reading one byte at a time you'd get the low 8 bits of cluster one, then a byte mixing the low 4 bits of cluster two with the high 4 bits of cluster one, and finally the high 8 bits of cluster two. Doing an ad hoc byte-based read required a bit of work on the programmer's part. The best bet was to read three bytes at a time and then map a C struct over it, defining each field as 12 bits, and the compiler would grab the numbers without fuss (though bitfield layout is implementation-defined, so that trick only works on a compiler that packs them the way you expect). Like I say, computers don't have any problem with endianness. It's we who get confused.