An anonymous submitter was working with some vendor-supplied code for talking to a network device. They sent along this sample C code for properly populating the required header.
unsigned int head = htonl(0<<24 | (len+address_len+1)); memcpy(&header, &head, 4);
Let's start with the stupid part.
0<<24 | (len+address_len+1). Zero, bitshifted left twenty-four times, is going to be… zero. Zero ored with anything is going to be… that thing. Now, maybe they got their order of operations wrong, and thought they were oring with twenty-four and then bitshifting, but 0 bitshifted by anything is still zero. There's no reason to do any of this. It just ends up looking confusing and pointless.
But that's not the WTF.
head is declared as an unsigned int, which, unfortunately for us, is a compiler-specific size. It is, by spec, at least 16 bits of integer. But it could be more. And, as that memcpy shows, they expect it to be exactly 32 bits. There is no guarantee that this is the case, though it is for many compilers, so this is a delightful little bomb that could blow up when your code reads past the end of head.
The added bonus here is that len and address_len are declared to be ints. This is nonsensical, since packet lengths can't be negative: you can't send a packet of -5 bytes. But it's also asking for trouble, because since head is unsigned, if you do get some negative values, the wrapping is going to create some absurd packet sizes.
The rest of the code is of a similar quality, and it raises the question: if this is what they're sending to customers to show them how to use the devices, what kind of code is on that device?
This post originally appeared on The Daily WTF.