Quote:
Originally Posted by joligario
Just out of curiosity, why are they int32 variables instead of just int? Could that make a difference when compiling between the different machines? Does it make a difference compiling on the different machines when typecasting an int -> int32?
To understand int32, we need to look at common/types.h:
Code:
typedef unsigned char int8;
typedef unsigned char byte;
typedef unsigned short int16;
typedef unsigned int int32;
typedef unsigned char uint8;
typedef signed char sint8;
typedef unsigned short uint16;
typedef signed short sint16;
typedef unsigned int uint32;
typedef signed int sint32;
As I understand it, int in C++ is treated as signed by default on most compilers:
Quote:
By default, if we do not specify either signed or unsigned most compiler settings will assume the type to be signed
You can certainly use int for 32-bit signed integers, but by using sint32 instead you don't have to rely on compiler-specific assumptions (the standard only guarantees that int is at least 16 bits wide, not that it is exactly 32), and the explicit width and signedness make the intent obvious when you're just browsing through the source.