What Are the Errors, Misconceptions or Bad Pieces of Advice Given by cplusplus.com?

Why do C++ algorithms like std::includes have at most 2*(count1+count2)-1 comparisons?

WARNING: The reputation on this answer was added before everyone who had read the thread realised they had no clue what they were talking about (including me). If there is an accepted answer at some point in the future which doesn't disagree with all documentation on this algorithm, pay more attention to that!

In Short:

The 2*(whatever) bit is easy to explain, and I have done so below. The (whatever) bit has me absolutely baffled; as far as I can tell it should be

2*(count1)

and have absolutely nothing to do with the length of the second range. Either I've missed something (most likely), or the documentation is wrong (wouldn't that be nice...).

If you want my reasoning, see below. If I'm wrong and someone has the right answer, let me know so that I can delete this post!

Why it's 2 * (count1 etc.)

If you're asking why it's 2 times (count1 + count2) -1, the key is in this line:

The elements are compared using operator< for the first version, and comp for the second. Two elements, a and b are considered equivalent if (!(a < b) && !(b < a)) or if (!comp(a,b) && !comp(b,a)).

Every time it tests two elements for equivalence, it performs two comparisons: does a order before b, and does b order before a.

The MAXIMUM number of these "comparison pairs" it has to perform is an awful lot harder to pin down. In fact, I can't figure it out. As far as I can tell, it should have nothing to do with the length of the 2nd range...

Why it's not (count1 + count2 -1) or (count1 + count2) -1

I've looked at this for days, run some tests based on the code samples on cppreference, and I honestly don't see how this can be true.

The algorithm's semantics imply (and a source dug up by the OP confirms) that it checks whether range2 is a subset of range1, and both ranges are sorted, so there is no need to check any element twice. That means the algorithm must check at most all the elements of range1, plus one extra check if an element of range2 is larger than any element in range1. Not only that, but it doesn't matter whether there are 2 or 20 elements in range2; it still does exactly the same number of comparisons.

There are two potential definitions of comparison, which will obviously give different answers:

comparison == All comparison operations in the algorithm

In this case, the following comparisons occur:

  1. Have I reached the last range2 element?
  2. Have I reached the last range1 element?
  3. Is range2 elem < range1 elem?
  4. Is range2 elem > range1 elem?

In the simple case N1 == N2 == 1, this could generate at least six comparisons (1, 2, 3, 4, 1, 2 for, say, range1 = {1} and range2 = {10}), which is far more than either stated complexity allows. So this can't be the right definition.

comparison == checking elem1 and elem2 are equal

In this case, the algorithm performs two comparisons for each element of range1 until it has either found all the elements of range2 or reached the end of range1, at which point it stops.

Thus, as far as I can tell, the complexity is independent of the length of N2, and the complexity should be

2*(N1)

Note that under this definition of "comparison", 2*(N1 + N2 - 1) only appears to hold when N2 == 1, and 2*(N1 + N2) -1 never holds, since the number of comparisons is odd only in a non-maximal-complexity case (range2 contains a number which is not in range1 AND not greater than max(range1)).

Any other definition of comparison would be selective. The only other thing I can think of is that the implementation optimises out certain steps, such as not re-checking whether it has reached the end of range2 when that iterator wasn't incremented (which would also make the algorithm depend on N2, as required). But I can't see anything else it could optimise out to bring the number down to something matching either stated complexity.

...anyone else got a better answer? I'm now as curious as the OP is about this.

What is the meaning of the address symbol & in a constructor?

The information on cplusplus.com is... sometimes not dependable. (See What's wrong with cplusplus.com? for a discussion of this.) On cppreference.com, you can see that the move constructor is, you know, just a regular move constructor.

Understanding exception safety of shared_ptr's void reset(U* p)

Per [util.smartptr.shared.mod], all four overloads of shared_ptr::reset(stuff) are exactly equivalent to

shared_ptr(stuff).swap(*this)

If constructing shared_ptr(stuff) throws (e.g., if allocating the new control block, or whatever equivalent mechanism the implementation uses, throws), then *this is unaffected, because execution never reaches the swap; and any pointer passed in stuff is still deleted appropriately, because shared_ptr's constructor guarantees that.

swap itself is nothrow, and so is destroying the temporary shared_ptr after the swap.

Mistake in cpp specification about mutex try_lock?

The Standard says the following:

30.4.1.2/14 [thread.mutex.requirements.mutex]

An implementation may fail to obtain the lock even if it is not held by any other thread. [ Note: This spurious failure is normally uncommon, but allows interesting implementations based on a simple compare and exchange (Clause 29). —end note ]

So you can even get 0 from std::try_lock when no other thread holds any of the mutexes, because each try_lock is allowed to fail spuriously.

Also, please do not use cplusplus.com; it has a long history of containing mistakes.

It's safer to use cppreference.com, which stays much closer to the Standard.

std::streampos, std::streamoff and std::streamsize to long long int?

Well, as far as C++98/03 is concerned, there is no long long int. So I'll assume you're asking about C++11.

streamsize and streamoff are required to be typedefs of integral types (streampos is not an integer, so you won't be passing that to anything that takes a long long). Since integral types are basic types, they can only be defined either by C++ itself or as a compiler-specific definition.

Thus, the only question is this: are these typedefs larger than long long? All integral types are convertible to a larger or equal-sized type (signed/unsigned notwithstanding, but all of the types here are signed, so no problem). But if it is larger... what are you going to do about it?

Assuming you can't change the signature of the function you are "injecting" it into (because if you could, there's no reason not to just take streamsize as the parameter type and thus avoid the problem), you don't have any options. You have a data value that is larger than what the function takes. There's no way of getting around it here.

You can perform a static_cast to long long to shut the compiler up, but this won't help if the actual value can't fit in a long long.

Ultimately, this is an intractable problem. You have a function that takes a parameter which is potentially too small for what you're passing. The most you can do is detect when it might be a problem via a static_assert. Something like this:

static_assert(sizeof(std::streamsize) <= sizeof(long long), "Oops.");

To be honest, I wouldn't worry about it. Odds are good that long long will be the largest integral size that your compiler natively supports.


