C++ vector size: why -1 is greater than zero
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, so the signed value is converted to unsigned; a negative two's-complement value then corresponds to a very large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>

int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
Why is 0 < v.size() - 1 when the vector v is empty?
v.size() has an unsigned return type. Subtracting 1 from an unsigned 0 wraps around to some "very large" unsigned number (unsigned arithmetic is modular), and 0 will always be less than that "very large" number.
It's a common gotcha when working with standard containers and mixing signed/unsigned indices/sizes.
vector size - 1 when size is 0 in C++
vector::size() returns size_type, an unsigned type (usually std::size_t), and unsigned integers can't represent negative numbers.
Why do I see size of vector as zero?
Because std::vector::reserve doesn't resize a vector. It simply re-allocates a larger chunk of memory for the vector's data (and copies the existing elements over if necessary). You need std::vector::resize for that:
c.resize(pattern.length());
Currently, you are accessing c out of bounds.
Alternatively, you can keep the call to reserve and use push_back instead of operator[]:
c.reserve(pattern.length());
c.push_back(-1);
c.push_back(0);
Why Vector's size() and capacity() are different after push_back()
The Standard mandates that std::vector<T>::push_back() has amortized O(1) complexity. This means the expansion has to be geometric, say doubling the amount of storage each time it has been filled.
Simple example: sequentially push_back 32 ints into a std::vector<int>. You will store all of them once, and also do 31 copies if you double the capacity each time it runs out. Why 31? Before storing the 2nd element, you copy the 1st; before storing the 3rd, you copy elements 1-2; before storing the 5th, you copy elements 1-4, etc. So you copy 1 + 2 + 4 + 8 + 16 = 31 elements, with 32 stores.
Doing the formal analysis shows that you get O(N) stores and copies for N elements. This means amortized O(1) complexity per push_back (often only a store without a copy, sometimes a store and a sequence of copies).
Because of this expansion strategy, you will have size() < capacity() most of the time. Look up shrink_to_fit and reserve to learn how to control a vector's capacity in a more fine-grained manner.
Note: with geometric growth, any factor larger than 1 will do, and there have been some studies claiming that 1.5 gives better performance because of less wasted memory (with a factor below 2, previously freed blocks can eventually be reused for a later allocation).
vector.size() is working unexpectedly in comparison
The problem is your loop:
for (int i = 0; i < v.size() - 1; ++i)
More specifically, this part of the condition: v.size() - 1.
The size function returns a value of type size_type, which, as any vector reference will tell you, is an unsigned type.
That means when you subtract 1 from the value 0, you don't get -1; instead you get a very large value, since unsigned underflow wraps around to the type's highest values.
That means your loop will indeed iterate, at least once, and lead to UB (Undefined Behavior) when you index out of bounds.
C++ illogical >= comparison when dealing with vector.size(), most likely due to size_type being unsigned
As others have pointed out, this is due to the somewhat counter-intuitive rules C++ applies when comparing values with different signedness; the standard requires the compiler to convert both values to unsigned. For this reason, it's generally considered best practice to avoid unsigned unless you're doing bit manipulations (where the actual numeric value is irrelevant). Regretfully, the standard containers don't follow this best practice.
If you somehow know that the size of the vector can never overflow int, then you can just cast the result of std::vector<>::size() to int and be done with it. This is not without danger, however; as Mark Twain said: "It's not what you don't know that kills you, it's what you know for sure that ain't true." If there are no validations when inserting into the vector, then a safer test would be:
while ( rebuildFaces.size() <= INT_MAX
        && rebuildIndex >= (int)rebuildFaces.size() )
Or, if you really don't expect the case and are prepared to abort if it occurs, design (or find) a checked_cast function, and use it.