i386 turns out to not have __uint128_t, so, confusingly, use 64-bit math
instead. Since we're little endian, we can get away with it. The counters in
question would need billions of IOPs sustained for tens of billions of seconds
to overflow a 64-bit value, and such data rates are unlikely on i386 for a
good while: the fastest cards today can't do even a million IOPs. So
truncating to 64 bits is OK.

Noticed by:	dim@
Sponsored by:	Netflix, Inc
commit 4e44c386ff
parent 29aee14890
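As a rough check of the overflow argument in the message above, here is a
minimal, standalone sketch; the one-billion-IOPs rate is an illustrative
assumption, far above what the commit says any card can do today:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t iops = 1000000000ULL;		/* hypothetical sustained rate */
	uint64_t secs = UINT64_MAX / iops;	/* seconds until a 64-bit counter wraps */

	/* Roughly 1.8e10 seconds, i.e. centuries of sustained billion-IOPs load. */
	printf("%ju seconds (~%ju years) to wrap a uint64_t counter\n",
	    (uintmax_t)secs, (uintmax_t)(secs / (365ULL * 24 * 60 * 60)));
	return (0);
}

That works out to tens of billions of seconds, which is the headroom the
commit message relies on when it accepts truncated 64-bit counters on i386.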
@@ -75,10 +75,18 @@ kv_lookup(const struct kv_name *kv, size_t kv_count, uint32_t key)
 }
 
 /*
- * 128-bit integer augments to standard values
+ * 128-bit integer augments to standard values. On i386 this
+ * doesn't exist, so we use 64-bit values. The 128-bit counters
+ * are crazy anyway, since for this purpose, you'd need a
+ * billion IOPs for billions of seconds to overflow them.
+ * So, on 32-bit i386, you'll get truncated values.
 */
 #define UINT128_DIG 39
+#ifdef __i386__
+typedef uint64_t uint128_t;
+#else
 typedef __uint128_t uint128_t;
+#endif
 
 static inline uint128_t
 to128(void *p)
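The hunk stops at to128()'s prototype, so the function body is not shown
here. A minimal sketch of how such a helper could behave under the new
typedef, assuming the little-endian counter layout the commit message relies
on; this is an illustration, not the committed body:

#include <stdint.h>

#ifdef __i386__
typedef uint64_t uint128_t;		/* matches the diff's i386 fallback */
#else
typedef __uint128_t uint128_t;
#endif

static inline uint128_t
to128(void *p)
{
	uint64_t *v = p;

#ifdef __i386__
	/*
	 * Little endian: the first 8 bytes of the 16-byte counter are
	 * its low 64 bits, which is the whole value until it exceeds
	 * UINT64_MAX, i.e., the truncation the comment warns about.
	 */
	return (v[0]);
#else
	return (((uint128_t)v[1] << 64) | v[0]);
#endif
}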