Endianness would probably be a big barrier to porting to non-x86 architectures. The Internet has a standard byte order (big-endian, a.k.a. network byte order), and there are OS functions like htonl()/ntohl() that flip the bytes around on little-endian machines. But I doubt the EMU uses those functions, since it was derived mainly by packet sniffing.
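For reference, this is roughly what those conversion functions look like in use -- the function names to_wire/from_wire are just made up for illustration, not anything from the EMU:
Code:
#include <stdint.h>
#include <arpa/inet.h>  /* htonl() / ntohl() */

/* htonl() is a no-op on big-endian hosts and a byte swap on
   little-endian ones, so the same source works either way. */
uint32_t to_wire(uint32_t host_value)
{
    return htonl(host_value);   /* host -> network byte order */
}

uint32_t from_wire(uint32_t wire_value)
{
    return ntohl(wire_value);   /* network -> host byte order */
}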
One thing that could be done would be to encapsulate the necessary constants in a macro like ENDIAN_CORRECT. That macro could be defined like so:
Code:
#ifdef X86_ARCH
// little endian -- leave the value as-is
#define ENDIAN_CORRECT(a) (a)
#else
// big endian -- swap the four bytes of a 32-bit value
// (the argument is parenthesized and the masks are unsigned so the
// macro is safe on expressions, not just plain constants)
#define ENDIAN_CORRECT(a) \
    ((((a) & 0x000000FFu) << 24) | \
     (((a) & 0x0000FF00u) << 8)  | \
     (((a) & 0x00FF0000u) >> 8)  | \
     (((a) & 0xFF000000u) >> 24))
#endif
Of course, that macro only works for 32-bit integers, but when it's applied to a constant the compiler will fold the byte swap at compile time instead of doing it at run time.
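As a rough sketch of how it would be used -- the opcode value and names here are invented for the example, not taken from the actual EMU code:
Code:
// Hypothetical packet-opcode constant as it appears today (written for x86).
#define OPCODE_LOGIN_RAW 0x0F010000u

// Wrapped version: on x86 this expands to the same value, on a big-endian
// build the bytes get swapped, and either way the compiler folds it down
// to a single constant at compile time.
#define OPCODE_LOGIN ENDIAN_CORRECT(OPCODE_LOGIN_RAW)

// ...so comparisons like this keep working on both architectures:
// if (packet_opcode == OPCODE_LOGIN) { ... }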
The trick would be making sure that every instance of a constant like this gets wrapped in the macro, which would take a while. And I'd imagine there are places in the code that assume little-endianness in ways that wouldn't necessarily be easy to spot.