From 2ebcc8ac4ad7f74f0df4a32d480747b6d79dfa79 Mon Sep 17 00:00:00 2001
From: Andre Oppermann
Date: Wed, 24 Apr 2013 13:54:55 +0000
Subject: [PATCH] Base the calculation of maxmbufmem in part on kmem_map size
 instead of kernel_map size to prevent kernel memory exhaustion by mbufs and
 a subsequent panic on physical page allocation failure.

On architectures without a direct map, all mbuf memory (except for
jumbo mbufs larger than PAGE_SIZE) comes from kmem_map; hence it is
the limiting factor.

For architectures with a direct map, using the size of kmem_map is a
good proxy for available kernel memory as well.  If it is much
smaller, the mbuf limit may be sub-optimal but remains reasonable,
while avoiding panics under exhaustion.

The overall mbuf memory limit calculation may be reconsidered again
later; however, due to the many different mbuf sizes and different
backing KVM maps, it is a tricky subject.

Found by:	pho's new network stress test
Pointed out by:	alc (kmem_map instead of kernel_map)
Tested by:	pho
---
 sys/kern/kern_mbuf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sys/kern/kern_mbuf.c b/sys/kern/kern_mbuf.c
index 2259aa234e54..758f66998da8 100644
--- a/sys/kern/kern_mbuf.c
+++ b/sys/kern/kern_mbuf.c
@@ -118,7 +118,7 @@ tunable_mbinit(void *dummy)
 	 * At most it can be 3/4 of available kernel memory.
 	 */
 	realmem = qmin((quad_t)physmem * PAGE_SIZE,
-	    vm_map_max(kernel_map) - vm_map_min(kernel_map));
+	    vm_map_max(kmem_map) - vm_map_min(kmem_map));
 	maxmbufmem = realmem / 2;
 	TUNABLE_QUAD_FETCH("kern.maxmbufmem", &maxmbufmem);
 	if (maxmbufmem > realmem / 4 * 3)
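
Editor's note (illustrative, not part of the patch): the minimal userland C sketch
below mirrors how tunable_mbinit() derives maxmbufmem after this change: realmem is
the smaller of physical memory and the kmem_map size, the default limit is half of
that, and any kern.maxmbufmem override is clamped to 3/4 of realmem.  The physmem
and kmem_size values are made-up placeholders standing in for the kernel globals,
and the loader tunable fetch is represented only by a comment.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096

static int64_t
qmin(int64_t a, int64_t b)
{

	return (a < b ? a : b);
}

int
main(void)
{
	/* Placeholder values standing in for the kernel globals. */
	int64_t physmem = (4LL * 1024 * 1024 * 1024) / PAGE_SIZE; /* pages; ~4 GB RAM */
	int64_t kmem_size = 1LL * 1024 * 1024 * 1024;	/* 1 GB kmem_map (assumed) */
	int64_t realmem, maxmbufmem;

	/* Limit by whichever is smaller: physical memory or kmem_map. */
	realmem = qmin(physmem * PAGE_SIZE, kmem_size);

	/* Default mbuf memory limit: half of the limiting memory. */
	maxmbufmem = realmem / 2;

	/*
	 * In the kernel a loader tunable may override the default:
	 *	TUNABLE_QUAD_FETCH("kern.maxmbufmem", &maxmbufmem);
	 * but the result is never allowed to exceed 3/4 of realmem.
	 */
	if (maxmbufmem > realmem / 4 * 3)
		maxmbufmem = realmem / 4 * 3;

	printf("realmem    = %jd bytes\n", (intmax_t)realmem);
	printf("maxmbufmem = %jd bytes\n", (intmax_t)maxmbufmem);
	return (0);
}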