6fa8fbd029
GEOM detects ENOMEM returned by a provider in g_io_deliver(). In that case it increments the 'pace' counter on each ENOMEM and reschedules the request. The 'pace' counter is decremented for each request going down, but as long as 'pace' is greater than zero, GEOM will handle at most 10 requests per second.

For GEOM GATE users that proxy to local GEOM providers (like ggatel(8) and HAST) this can turn into an almost permanent slowdown of the GEOM down queue. Once we reach the GEOM GATE queue limit, we return ENOMEM to GEOM, which means we have, e.g., 1024 I/O requests sitting in the GEOM GATE queue. To make room in the queue and stop returning ENOMEM we of course have to process those requests, but they are handled by userland daemons, which serve them by reading from and writing to local GEOM providers. For example with HAST, a new request comes to /dev/hast/data, which is a GEOM GATE provider. GEOM GATE passes the request to hastd(8), and hastd(8) reads/writes from/to /dev/da0. Once the GEOM GATE queue limit is reached, hastd(8) has to read/write from/to /dev/da0 to free up a slot, but that request is also very slow, because GEOM is now slowing down all requests. We end up with a full queue that we can only drain at 10 requests per second, which in practice looks like a deadlock.

Fix this by allowing userland daemons that work with both GEOM GATE and local GEOM providers to specify an unlimited queue size, so that GEOM GATE never returns ENOMEM to GEOM.

MFC after:	1 week
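To make the feedback loop concrete, here is a minimal standalone model of draining a full GEOM GATE queue while the down queue is paced. All names and numbers are illustrative; this is not the actual GEOM code.

    #include <stdio.h>

    int
    main(void)
    {
    	int queued = 1024;	/* GEOM GATE queue is already full */
    	int pace = 1;		/* GEOM saw ENOMEM, pacing is active */
    	double seconds = 0.0;

    	while (queued > 0) {
    		/*
    		 * To free one slot, hastd(8) must complete one local
    		 * I/O on /dev/da0, but that I/O goes through the paced
    		 * down queue: at most 10 requests per second.
    		 */
    		if (pace > 0)
    			seconds += 1.0 / 10.0;
    		queued--;
    		/*
    		 * New requests keep refilling the queue and returning
    		 * ENOMEM, so pace never really drops; the model keeps
    		 * it at 1 throughout.
    		 */
    	}
    	printf("drained 1024 requests in ~%.0f seconds\n", seconds);
    	return (0);
    }

With a 1024-entry queue and the 10 requests/second pace, draining takes roughly 102 seconds, during which every incoming request keeps the queue full and the pacing active.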
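And a sketch of the enqueue-side fix. The field names and the "queue size 0 means unlimited" convention are assumptions for illustration, not taken from the actual diff; the point is only that an unlimited queue never hits the limit check and so never hands ENOMEM back to GEOM.

    #include <errno.h>

    /* Hypothetical model of the per-device softc, not the real struct. */
    struct g_gate_softc_model {
    	unsigned int	sc_queue_size;	/* 0 == unlimited (assumed) */
    	unsigned int	sc_queue_count;
    };

    static int
    g_gate_enqueue_model(struct g_gate_softc_model *sc)
    {
    	/* Only a finite queue can fill up and return ENOMEM to GEOM. */
    	if (sc->sc_queue_size != 0 &&
    	    sc->sc_queue_count >= sc->sc_queue_size)
    		return (ENOMEM);
    	sc->sc_queue_count++;		/* accept the request */
    	return (0);
    }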
g_gate.c
g_gate.h