b341a09c1d
The size checking is done in the caller. The size parameter is currently an unsigned 64-bit integer, so comparing it against zero is enough in most cases. But it does not help in the following case: if the allocation request passes a huge number by mistake, e.g., after an overflow in an earlier calculation (especially a subtraction), the check in the caller will pass since the value is not zero. There is of course not enough space in the system for such a huge allocation, and usually the subsequent code will return a failure.

But if the requested size is just a little smaller than UINT64_MAX, like -2 in a signed type, the cache line roundup overflows and "resets" the size to 0. Only a header (128B now) with a zero-length body is then returned, and what immediately follows it is the previous allocation's header. This may look fine as long as the application never accesses the memory body; otherwise it causes critical issues that are not easy to debug. So this case should be rejected at the beginning: as with other oversized requests, a NULL pointer should be returned.

Fixes: fdf20fa7bee9 ("add prefix to cache line macros")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
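
The following is a minimal, self-contained sketch, not the actual malloc code path: it models the roundup after RTE_CACHE_LINE_ROUNDUP to show how a size just below UINT64_MAX wraps to zero, and adds a hypothetical overflow guard. The 64-byte cache line and the guard condition are assumptions for illustration only.

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Cache line roundup modelled on the DPDK macros; a 64-byte
	 * cache line is assumed here for illustration. */
	#define CACHE_LINE_SIZE 64
	#define CACHE_LINE_ROUNDUP(size) \
		(CACHE_LINE_SIZE * (((size) + CACHE_LINE_SIZE - 1) / CACHE_LINE_SIZE))

	int main(void)
	{
		/* A request just below UINT64_MAX, e.g. the result of an
		 * underflowing subtraction (-2 in a signed type). */
		size_t size = (size_t)-2;

		/* Adding (CACHE_LINE_SIZE - 1) wraps around, so the roundup
		 * "resets" the size to 0 instead of failing. */
		size_t rounded = CACHE_LINE_ROUNDUP(size);
		printf("requested %zu -> rounded %zu\n", size, rounded);

		/* Hypothetical guard: reject the request when the roundup
		 * wrapped, rather than handing back a zero-length body. */
		if (rounded < size) {
			printf("overflow detected, allocation should return NULL\n");
			return EXIT_FAILURE;
		}
		return EXIT_SUCCESS;
	}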