Correct printf format specifier for size_t: %zu or %Iu?
MS Visual Studio didn't support the %zu printf specifier before VS2013. Starting from VS2013 (i.e. _MSC_VER >= 1800), %zu is available.
As an alternative, for previous versions of Visual Studio, if you are printing small values (like the number of elements of standard containers) you can simply cast to int and use %d:
printf("count: %d\n", (int)str.size()); // less digital ink spent
// or:
printf("count: %u\n", (unsigned)str.size());
Platform-independent size_t format specifiers in C?
Yes: use the z length modifier:
size_t size = sizeof(char);
printf("the size is %zu\n", size); // decimal size_t ("u" for unsigned)
printf("the size is %zx\n", size); // hex size_t
The other length modifiers that are available are hh (for char), h (for short), l (for long), ll (for long long), j (for intmax_t), t (for ptrdiff_t), and L (for long double). See §7.19.6.1 (7) of the C99 standard.
How can one print a size_t variable portably using the printf family?
Use the z modifier:
size_t x = ...;
ssize_t y = ...;
printf("%zu\n", x); // prints as unsigned decimal
printf("%zx\n", x); // prints as hex
printf("%zd\n", y); // prints as signed decimal
Clean code to printf size_t in C++ (or: Nearest equivalent of C99's %z in C++)
Most compilers have their own specifier for size_t and ptrdiff_t arguments; Visual C++, for instance, uses %Iu and %Id respectively, while gcc allows you to use %zu and %zd.
You could create a macro:
#if defined(_MSC_VER) || defined(__MINGW32__) // __MINGW32__ must be checked before __GNUC__
#define JL_SIZE_T_SPECIFIER "%Iu"
#define JL_SSIZE_T_SPECIFIER "%Id"
#define JL_PTRDIFF_T_SPECIFIER "%Id"
#elif defined(__GNUC__)
#define JL_SIZE_T_SPECIFIER "%zu"
#define JL_SSIZE_T_SPECIFIER "%zd"
#define JL_PTRDIFF_T_SPECIFIER "%zd"
#else
// TODO figure out which to use.
#if NUMBITS == 32
#define JL_SIZE_T_SPECIFIER something_unsigned
#define JL_SSIZE_T_SPECIFIER something_signed
#define JL_PTRDIFF_T_SPECIFIER something_signed
#else
#define JL_SIZE_T_SPECIFIER something_bigger_unsigned
#define JL_SSIZE_T_SPECIFIER something_bigger_signed
#define JL_PTRDIFF_T_SPECIFIER something_bigger_signed
#endif
#endif
Usage:
size_t a;
printf(JL_SIZE_T_SPECIFIER, a);
printf("The size of a is " JL_SIZE_T_SPECIFIER " bytes", a);
Do the comparison operators work correctly when size_t overflows?
My guess is your platform has a 64-bit size_t, and you're using the wrong format specifier to print a size_t, which is undefined behavior and is resulting in the misleading output. To print a size_t, use %zu on gcc and clang, and %Iu on MSVC. Or forget all that and use std::cout to print the results.
Using %Iu on VS2015, the output I get from a 64-bit build is:
largerNum = 12
Num = 4294967295
Num + 1 = 4294967296
largerNum now = 4294967296
largerNum did not overflow: 4294967296
Is (0 < UINT_MAX)?
YES
Is (largerNum < Num)?
NO
GCC format string error oscillates between two 'format specifies ... but the argument has type ...' values, doesn't like either one
You can use the z length modifier:
logd("%zu\n", sizeof(x)); /* for a size_t value */
logd("%zd\n", ssz);       /* for an ssize_t value ssz */
The z modifier was added in C99. If you are using a version of MSVC which doesn't support any of the later C standards, the correct prefix for size_t is %Iu.
Edit: as @Weather Vane correctly pointed out, MSVC now supports %zu.
Can't print the sizeof a signed int type
This is the correct way:
printf("Size of int: %zu bytes\n", sizeof(int));
The sizeof operator yields a value of type size_t, and %zu is the correct conversion specification to print a value of type size_t.
If you have a compiler that doesn't support C99 or C11, you can do:
printf("Size of int: %lu bytes\n", (unsigned long) sizeof(int));
Can someone explain this?
This is the wrong format specifier here:
printf("%i and %i",sizeof(*dataInput)/sizeof(float),dataSize);
^^
sizeof returns a size_t, which is unsigned; the correct format specifier is %zu, or %Iu in Visual Studio.
Using the wrong format specifier invokes undefined behavior, but that does not seem to explain the output of 10 for dataSize, which does not make sense, since sizeof(*dataInput) will be the size of a float; we would therefore expect sizeof(*dataInput)/sizeof(float) to be 1. As Macattack said, an SSCCE should help resolve that output.