Base the calculation of maxmbufmem in part on kmem_map size instead of
kernel_map size to prevent kernel memory exhaustion by mbufs and a
subsequent panic on physical page allocation failure.

On architectures without a direct map, all mbuf memory (except for jumbo
mbufs larger than PAGE_SIZE) comes from kmem_map.  Hence it is the
limiting factor.

For architectures with a direct map, the size of kmem_map is a good proxy
for available kernel memory as well.  If it is much smaller, the mbuf
limit may be sub-optimal but remains reasonable, while avoiding panics
under exhaustion.

The overall mbuf memory limit calculation may be reconsidered later;
however, due to the many different mbuf sizes and different backing KVM
maps, it is a tricky subject.

Found by:	pho's new network stress test
Pointed out by:	alc (kmem_map instead of kernel_map)
Tested by:	pho
commit 87ef00d48c
parent 2bb075a095
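
For illustration, the arithmetic described above can be sketched as a small
standalone C program.  This is not the kernel code itself: the input values
for physical memory, kmem_map size and PAGE_SIZE are invented, and the
qmin() helper merely mimics the kernel macro of the same name.

	#include <stdint.h>
	#include <stdio.h>

	typedef int64_t quad_t;		/* stand-in for the kernel's quad_t */

	/* Mimics the kernel's qmin(): smaller of two quad_t values. */
	static quad_t
	qmin(quad_t a, quad_t b)
	{
		return (a < b ? a : b);
	}

	int
	main(void)
	{
		/* Invented example inputs; the kernel derives these at boot. */
		const quad_t page_size = 4096;				/* PAGE_SIZE */
		const quad_t physmem = ((quad_t)4 << 30) / page_size;	/* pages of RAM */
		const quad_t kmem_size = (quad_t)1 << 30;		/* kmem_map span */
		quad_t realmem, maxmbufmem;
		quad_t tunable = 0;		/* 0 = kern.maxmbufmem not set */

		/* Available kernel memory: smaller of physical RAM and kmem_map. */
		realmem = qmin(physmem * page_size, kmem_size);

		/* Default limit is 1/2 of available kernel memory ... */
		maxmbufmem = realmem / 2;

		/* ... optionally overridden by the kern.maxmbufmem tunable ... */
		if (tunable != 0)
			maxmbufmem = tunable;

		/* ... but capped at 3/4 of available kernel memory. */
		if (maxmbufmem > realmem / 4 * 3)
			maxmbufmem = realmem / 4 * 3;

		printf("realmem    = %jd bytes\n", (intmax_t)realmem);
		printf("maxmbufmem = %jd bytes\n", (intmax_t)maxmbufmem);
		return (0);
	}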
@@ -118,7 +118,7 @@ tunable_mbinit(void *dummy)
 	 * At most it can be 3/4 of available kernel memory.
 	 */
 	realmem = qmin((quad_t)physmem * PAGE_SIZE,
-	    vm_map_max(kernel_map) - vm_map_min(kernel_map));
+	    vm_map_max(kmem_map) - vm_map_min(kmem_map));
 	maxmbufmem = realmem / 2;
 	TUNABLE_QUAD_FETCH("kern.maxmbufmem", &maxmbufmem);
 	if (maxmbufmem > realmem / 4 * 3)
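
Because the limit is fetched with TUNABLE_QUAD_FETCH, it can be overridden
from the loader environment.  A hedged example, with a purely illustrative
value, would be a line like the following in /boot/loader.conf:

	# Example only: cap total mbuf memory at 512 MB (value is illustrative).
	kern.maxmbufmem="536870912"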