This attempts to fix the IPI handling code to correctly differentiate
between bitmapped IPIs and function IPIs. The Xen IPIs were on low numbers
which clashed with the bitmapped IPIs.
This commit bumps those IPI numbers up to 240 and above (just like in the i386
code) and fiddles with the ipi_vectors[] logic to call the correct function.
This still isn't "right". Specifically, the IPI code may work fine for TLB
shootdown events, but the rendezvous/lazypmap IPIs are issued via ipi_*()
routines which don't set up the call_func data (function id, addr1, addr2)
the way the TLB shootdown path does. So the Xen SMP support is still broken.
PR: 135069
The code path this was copied from (sys/i386/i386/mp_machdep.c:ipi_selected())
handles bitmapped IPIs and normal IPIs via separate notification paths. Xen
SMP handles them the same way.
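A minimal sketch of the renumbering and table dispatch described above; the
constant names, table size, and handler signature are illustrative assumptions,
not the committed Xen code:

/* Illustrative only: names and sizes below are assumptions. */
#define XEN_IPI_BASE    240     /* first IPI vector, matching the i386 range */
#define XEN_NR_IPIS     8       /* assumed number of IPI slots */

static void (*ipi_vectors[XEN_NR_IPIS])(void *arg);

static void
xen_ipi_dispatch(unsigned int vector, void *arg)
{
        unsigned int idx;

        if (vector < XEN_IPI_BASE)
                return;                         /* not an IPI vector */
        idx = vector - XEN_IPI_BASE;
        if (idx < XEN_NR_IPIS && ipi_vectors[idx] != NULL)
                ipi_vectors[idx](arg);          /* call the matching handler */
}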
Lots of new features compared to 9.4.x, including:
Full NSEC3 support.
Automatic zone re-signing.
New update-policy methods tcp-self and 6to4-self.
DHCID support.
More detailed statistics counters including those supported in BIND 8.
Faster ACL processing.
Efficient LRU cache-cleaning mechanism.
NSID support.
We do not install these files so there is little use to keeping them in
the tree, and the drafts directory in particular is the source of a lot
of churn for each new version.
Do NFSv4 Closes in the experimental client's VOP_INACTIVE().
I also replaced several uses of ap->a_vp with a local vp variable,
since I thought that made the code more readable.
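A tiny illustration of that readability change; the handler name and body are
placeholders, only the ap->a_vp-to-local-vp pattern is the point:

/* Sketch only: the handler name and body are invented. */
static int
example_inactive(struct vop_inactive_args *ap)
{
        struct vnode *vp = ap->a_vp;    /* local copy instead of repeated ap->a_vp */

        /* ... issue the NFSv4 Close for vp here ... */
        (void)vp;
        return (0);
}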
Approved by: kib (mentor)
o add safety belt in vdetach for failed state block allocation
o fix dynamic change to tdma config; ERESTART may not result in
  kicking the state machine, so we need to explicitly mark the
  beacon for update
Sponsored by:
In all the places/cases where IPI messages are generated, at least be consistent
with how the call_data pointer is assigned and cleared (ie, all done inside
the spinlock).
Ensure that it's NULL before continuing, just to try to identify situations
where things are going horribly wrong.
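A sketch of the invariant described above; the lock, structure, and function
names (call_lock, call_data, send_call_ipi) are assumptions for illustration,
not the committed code:

/* Sketch only: structure and lock names are assumptions. */
struct call_data_struct {
        void    (*func)(void *);
        void    *arg1;
        void    *arg2;
};

static struct mtx call_lock;
static struct call_data_struct *call_data;

static void
send_call_ipi(struct call_data_struct *data)
{
        mtx_lock_spin(&call_lock);
        /* Assignment and clearing both happen with the lock held. */
        KASSERT(call_data == NULL, ("stale call_data from a previous IPI"));
        call_data = data;
        /* ... notify the target CPUs and wait for acknowledgement ... */
        call_data = NULL;
        mtx_unlock_spin(&call_lock);
}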
Polling handlers can currently keep the CPU for a longer period than
necessary. Additionally, interfaces are kept polled (in the tick) even if
no more packets are available.
In order to avoid such situations, a new generic mechanism can be
implemented proactively, keeping track of the time spent on each
packet and dividing up the time available in each tick, stopping the
processing as soon as possible.
In order to implement such a mechanism, the polling handler needs to
change so that it returns the number of packets processed.
While the intended logic is not part of this patch, the polling KPI is
broken by this commit, adding an int return value and the new flag
IFCAP_POLLING_NOCOUNT (which signals that the return value is
meaningless for the installed handler and checking should be skipped).
Bump __FreeBSD_version in order to signal this change.
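A sketch of what a driver's polling handler looks like under the changed KPI;
"foo" is a placeholder driver name, and the body is illustrative only:

/* Sketch only: "foo" is a placeholder driver. */
static int
foo_poll(struct ifnet *ifp, enum poll_cmd cmd, int count)
{
        int rx_done = 0;

        /* Process at most 'count' packets, counting what was actually done. */
        /* ... rx_done += foo_rxeof(ifp, count); ... */

        return (rx_done);       /* ignored if IFCAP_POLLING_NOCOUNT is set */
}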
Reviewed by: emaste
Sponsored by: Sandvine Incorporated
permissions, such as VWRITE_ACL. For filesystems that don't
implement it, there is a default implementation, which works
as a wrapper around VOP_ACCESS.
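A heavily hedged sketch of what such a default implementation could look like;
the vop argument structure, the helper that maps extended bits down to
VREAD/VWRITE/VEXEC, and the function name are all assumptions, not the
committed code:

/* Sketch only: every name here is an assumption for illustration. */
static int
default_accessx(struct vop_accessx_args *ap)
{
        accmode_t accmode = ap->a_accmode;

        /* Map extended bits such as VWRITE_ACL down to VREAD/VWRITE/VEXEC. */
        accmode = unixify_accmode(accmode);     /* assumed helper */
        if (accmode == 0)
                return (0);                     /* nothing left to check */
        return (VOP_ACCESS(ap->a_vp, accmode, ap->a_cred, ap->a_td));
}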
Reviewed by: rwatson@
Formerly, this tried to clear the flags on the symlink's target
instead of the symlink itself.
As before, this only happens for root or for the unlink(1) variant of rm.
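A hedged userland sketch of the behaviour described above, assuming the fix
amounts to clearing the flags on the link itself (e.g. via lchflags(2)) rather
than following it; this is an illustration, not the committed rm(1) code:

#include <sys/stat.h>
#include <unistd.h>

/* Sketch only: clear the "undeletable" flags on the entry itself. */
static int
clear_flags(const char *path, const struct stat *sb)
{
        unsigned long flags = sb->st_flags &
            ~(UF_APPEND | UF_IMMUTABLE | UF_NOUNLINK |
              SF_APPEND | SF_IMMUTABLE | SF_NOUNLINK);

        if (S_ISLNK(sb->st_mode))
                return (lchflags(path, flags)); /* operate on the link itself */
        return (chflags(path, flags));
}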
PR: bin/111226 (part of)
Submitted by: Martin Kammerhofer
Approved by: ed (mentor)
MFC after: 3 weeks
IPIs in Xen are implemented through hypervisor event channels.
The MP code creates a pair of IRQs for each base IPI per CPU
(one for IPI function dispatch calls, one for IPI bitmap dispatch calls).
Using PCPU_GET() was returning the IRQ of the IPI handler for the
current CPU; thus calls to ipi_cpu() ended up sending the message to the
calling CPU itself.
Instead, the IPI needs to be looked up in the target CPU's ipi-to-irq map.
Note: This doesn't fix Xen SMP (far from it!) but it at least
sends IPIs to the right places. Next - sending IPIs.
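A sketch of the lookup described above; the table name and layout and the
notification helper are assumptions, the point is indexing by the target CPU
rather than asking PCPU_GET() about the current one:

/* Sketch only: table name/shape and helper are assumptions. */
#define XEN_NR_IPIS     8                       /* assumed number of IPIs */

static int ipi_to_irq[MAXCPU][XEN_NR_IPIS];     /* per-CPU IPI-to-IRQ bindings */

static void
xen_ipi_notify(u_int target_cpu, u_int ipi)
{
        int irq = ipi_to_irq[target_cpu][ipi];  /* target CPU, not PCPU_GET() */

        if (irq >= 0)
                notify_remote_via_irq(irq);     /* event-channel notification */
}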
PR: 135069