On most machines, `printf("%d", INT_MIN)` will cause a signed integer overflow, resulting in undefined behavior.

At line 782 the absolute value is computed with `(unsigned int)(value > 0 ? value : 0 - value)`. However, when `value == INT_MIN` on a typical two's-complement machine, `0 - value` is `-INT_MIN`, which cannot be represented as an `int`, so signed overflow occurs. (`-value` would have the same issue.) For instance, on a typical 32-bit machine, `int` can represent -2147483648 as `0x80000000`, but cannot represent 2147483648.
Compile printf.c with `gcc -ftrapv` or `gcc -fsanitize=undefined` and you'll get a runtime error for this example. The same problem exists for `printf("%ld", LONG_MIN)` and `printf("%lld", LLONG_MIN)`.
I think it should instead be `value > 0 ? (unsigned int)value : -(unsigned int)value`. Converting a negative `int` to `unsigned` effectively adds UINT_MAX + 1 (typically 2**32), and the unsigned negation then subtracts that from UINT_MAX + 1 again, yielding the correct magnitude; unsigned arithmetic in C is well defined to wrap, so no overflow occurs. There would still be a problem on a machine where `-INT_MIN` doesn't fit in `unsigned int`, but such machines should be rare. (You could cast up to `unsigned long long`, which is a little better, but you would still have the same problem with `-LLONG_MIN`.)
By the way, these cases are not exercised in the test suite; adding them might be a good idea. It's a little tricky because we don't know the value of `INT_MIN` ahead of time, so we don't know what string to expect, but we also can't safely write `printf("%d", -2147483648)` because we could be on a machine with 16-bit `int`.