I know they are required sometimes; that's not what I'm talking about. I'm concerned with a more specific question. I'm writing classes that wrap a couple of the STL containers and do things a little differently. One thing that has always annoyed me is that the size() methods of the STL containers return a size_t instead of a signed integer type. This means I either have to turn off or ignore signed/unsigned comparison warnings, or use unsigned types myself, and I've never been a fan of them when they aren't needed.
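For concreteness, here's the sort of thing I mean (the function name is just for illustration). Mixing a signed index with size() in the loop condition draws a signed/unsigned comparison warning on common compilers, so you end up casting or switching the index to an unsigned type:

```cpp
#include <cstddef>
#include <vector>

// With `int i`, the condition `i < v.size()` compares signed to unsigned
// and warns (-Wsign-compare on gcc/clang, C4018 on MSVC). Using
// std::size_t for the index is the usual way to silence it.
int sum_all(const std::vector<int>& v) {
    int sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    return sum;
}
```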
For instance, take a std::vector. There is almost no way a vector could hold enough elements to be out of the range of, say, a 'signed long' but within the range of an 'unsigned long'. The only way is if it were a vector of char or some other one-byte type and took up half the address space, or a vector<bool> under an implementation that packs bits. With a vector<short>, holding more elements than the max value of a 'signed long' would already exceed the process's address space. (Okay, you probably could make an implementation of std::vector that uses a file to back its storage. But show me a vendor that actually does and I'll mail you some cookies.)
Meanwhile, using unsigned numbers means you have to deal with situations like the following otherwise-reasonable loop not working:
Code:
for (unsigned int i = N - 1; i >= 0; --i)
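(Since i is unsigned, i >= 0 is always true: when i reaches 0, --i wraps it around to the maximum value and the loop never terminates.) The usual workaround I know of, for comparison, is the "decrement in the condition" idiom, sketched here with an illustrative function name:

```cpp
#include <cstddef>
#include <vector>

// Reverse iteration with an unsigned index, without wraparound:
// i starts at v.size() and the condition `i-- > 0` both tests and
// decrements, so the body sees indices N-1 down to 0 and the loop
// exits cleanly when i hits 0.
int sum_reversed(const std::vector<int>& v) {
    int sum = 0;
    for (std::size_t i = v.size(); i-- > 0; )
        sum += v[i];
    return sum;
}
```

Reverse iterators (v.rbegin() to v.rend()) avoid the index arithmetic entirely, but the point is that the naive loop is a trap.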
On the other hand, it's kind of nice to have the type document that negative values aren't legal. But without actual enforcement (I don't count wrapping -1 around to 2^32-1 as enforcing that the value is non-negative), I don't think that's enough of a reason to prefer it, given the potential for semi-latent bugs like the loop above.
So I'm really asking two questions here. What are your thoughts on unsigned types in general, and what would the C++ people think about my wrapper classes returning signed ints from size()?
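To be concrete about the second question, this is roughly the kind of wrapper I have in mind. The class name and the choice of 'long' are just illustrative, not a settled design:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical wrapper: forwards to std::vector but reports its size
// as a signed type, on the argument above that a real vector can't
// exceed the signed range anyway.
template <typename T>
class SignedVector {
public:
    long size() const { return static_cast<long>(v_.size()); }
    void push_back(const T& x) { v_.push_back(x); }
    T& operator[](long i) { return v_[static_cast<std::size_t>(i)]; }
    const T& operator[](long i) const { return v_[static_cast<std::size_t>(i)]; }

private:
    std::vector<T> v_;
};
```

With that, the backwards loop from earlier works with a plain signed index and no wraparound worries. Is that kind of divergence from the STL convention considered acceptable?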