It seems igb supports TSO6, but the capability got lost in
the iflib update. Restore this capability.
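A minimal sketch of what restoring the capability amounts to, using the
generic ifnet accessors (illustrative only; the exact mechanism in the
iflib-converted driver differs):

    /* Advertise and enable TSO over IPv6 alongside the existing TSO4. */
    if_setcapabilitiesbit(ifp, IFCAP_TSO4 | IFCAP_TSO6, 0);
    if_setcapenablebit(ifp, IFCAP_TSO4 | IFCAP_TSO6, 0);
    if_sethwassistbits(ifp, CSUM_IP_TSO | CSUM_IP6_TSO, 0);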
PR: 231476
Reported by: lev
Reviewed by: erj
Approved by: re (gjb)
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D17242
It is currently unused and reserved for future use to preserve the KBI/KPI.
Also add several spare pointers so that the structure can be extended
later if needed.
Approved by: re (gjb)
Various capabilities were not being handled correctly in the
SIOCSIFCAP handler. Specifically:
IFCAP_RXCSUM and IFCAP_RXCSUM_IPV6 could be set even if not supported
It was impossible to disable IFCAP_RXCSUM and/or IFCAP_RXCSUM_IPV6 via
ifconfig, since it issues one ioctl() per command-line flag rather than
combining them into a single call (see the sketch after this list).
IFCAP_VLAN_HWCSUM could not be modified via the ioctl()
Setting any combination of the three IFCAP_WOL flags would set only
IFCAP_WOL_MCAST | IFCAP_WOL_MAGIC. For example, setting only
IFCAP_WOL_UCAST would result in both IFCAP_WOL_MCAST and IFCAP_WOL_MAGIC
being enabled, but IFCAP_WOL_UCAST would not be enabled.
Because if_vlancap() was called before if_togglecapenable(), vlan flags
were sometimes not applied correctly.
Interfaces were being unnecessarily stopped and restarted for WoL changes.
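The sketch below shows the intended handling using the generic ifnet
accessors (illustrative only, not the driver's actual code;
foo_set_capabilities() is a hypothetical name):

static int
foo_set_capabilities(if_t ifp, struct ifreq *ifr)
{
    int mask;

    /* Only toggle bits that changed and that the hardware supports. */
    mask = (ifr->ifr_reqcap ^ if_getcapenable(ifp)) &
        if_getcapabilities(ifp);

    if (mask & IFCAP_RXCSUM)
        if_togglecapenable(ifp, IFCAP_RXCSUM);
    if (mask & IFCAP_RXCSUM_IPV6)
        if_togglecapenable(ifp, IFCAP_RXCSUM_IPV6);
    if (mask & IFCAP_VLAN_HWCSUM)
        if_togglecapenable(ifp, IFCAP_VLAN_HWCSUM);

    /* Toggle each WoL bit individually so only the requested ones change. */
    if (mask & IFCAP_WOL_UCAST)
        if_togglecapenable(ifp, IFCAP_WOL_UCAST);
    if (mask & IFCAP_WOL_MCAST)
        if_togglecapenable(ifp, IFCAP_WOL_MCAST);
    if (mask & IFCAP_WOL_MAGIC)
        if_togglecapenable(ifp, IFCAP_WOL_MAGIC);

    /* Propagate VLAN capabilities only after the toggles took effect. */
    if_vlancap(ifp);
    return (0);
}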
PR: 231151
Submitted by: Kaho Toshikazu <kaho@elam.kais.kyoto-u.ac.jp>
Reported by: Shirkdog <mshirk@daemon-security.com>
Reviewed by: gallatin
Approved by: re (gjb)
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D17158
The old code appears to assume that vmem_alloc() would import
size-aligned KVA chunks from the parent kernel_arena, but vmem doesn't
provide this guarantee.
Also remove the unused global RWX arena and add comments explaining why
we have per-domain arenas.
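A sketch of the general point above, not of the committed fix: if
alignment is required, request it explicitly instead of assuming
vmem_alloc() provides it (argument values are illustrative):

    /* Ask the arena for KVA that is explicitly size-aligned. */
    error = vmem_xalloc(kernel_arena, size, size /* align */, 0, 0,
        VMEM_ADDR_MIN, VMEM_ADDR_MAX, M_BESTFIT | M_NOWAIT, &addr);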
Reported by: alc
Reviewed by: alc, kib (previous version)
Approved by: re (gjb)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17249
not freed. This can happen since the list traversal and locking
was converted to epoch(9). If the inp is marked "freed", skip it.
This prevents a NULL pointer deref panic in ip6_savecontrol_v4()
trying to access the socket hanging off the inp, which was gone
by the time we got there.
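A minimal sketch of the check (the list head and the per-inp work are
placeholders; the freed marker is assumed to be the INP_FREED flag in
inp_flags2):

    CK_LIST_FOREACH(inp, pcblist, inp_list) {   /* epoch-protected walk */
        if (__predict_false(inp->inp_flags2 & INP_FREED))
            continue;       /* unlinked but not yet freed: skip it */
        INP_RLOCK(inp);
        deliver(inp, m);    /* placeholder for the real per-inp work */
        INP_RUNLOCK(inp);
    }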
Reported by: andrew
Tested by: andrew
Approved by: re (gjb)
Ensure that pages backing the same virtual large page come from the
same physical domain, as kmem_malloc_domain() does.
PR: 231038
Reviewed by: alc, kib
Approved by: re (gjb)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17248
A lot of functions have the following check:
cmpq %rax,%rdi /* verify address is valid */
ja fusufault
The label is present earlier in the kernel .text, which makes this a
backwards jump. Absent any information in the branch predictor, the CPU
predicts it as taken. Since the branch is almost never taken in
practice, this results in a completely avoidable misprediction.
Move the label past all of its consumers, so that the branch is
predicted as not taken.
Approved by: re (kib)
- Explicitly load an empty initial state into FP registers when taking
the fault on the first FP instruction in a thread. Setting
SSTATUS.FS to INITIAL is just a marker to let context switch restore
code know that it can load FP registers with zeroes instead of
memory loads. It does not imply that the hardware will reset all
registers to zero on first access. In addition, set the state to
CLEAN instead of INITIAL after the first FP instruction.
cpu_switch() doesn't do anything for INITIAL and only restores from
the pcb if the state is CLEAN. We could perhaps change cpu_switch
to call fpe_state_clear if the state was INITIAL and leave SSTATUS.FS
set to INITIAL instead of CLEAN after the first FP instruction.
However, adding this complexity to cpu_switch() doesn't seem worth
the supposed gain.
- Only save the current FPU registers in fill_fpregs() if the request
is for the current thread's registers (see the sketch after this
list). Previously, if a debugger requested FP registers via ptrace(),
it got a copy of the debugger's FP registers rather than the
debuggee's.
- Zero the entire FP register set structure returned for ptrace() if a
thread hasn't used FP registers rather than leaking garbage in the
fp_fcsr field.
- If a debugger writes FP registers via ptrace(), always mark the pcb
as having valid FP registers and set SSTATUS.FS to CLEAN so
that the registers will be restored when the debugged thread
resumes.
- Be more explicit about clearing the SSTATUS.FS field before setting
it to CLEAN on the first FP instruction trap.
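A sketch of the fill_fpregs() behaviour described above (the pcb field
names, PCB_FP_STARTED and fpe_state_save() are assumptions, not
necessarily the in-tree names):

int
fill_fpregs(struct thread *td, struct fpreg *regs)
{
    struct pcb *pcb;

    pcb = td->td_pcb;
    if (pcb->pcb_fpflags & PCB_FP_STARTED) {
        /*
         * Flush the live FP state only when asked about the current
         * thread; otherwise the pcb copy is the authoritative one.
         */
        if (td == curthread)
            fpe_state_save(td);
        memcpy(regs->fp_x, pcb->pcb_x, sizeof(regs->fp_x));
        regs->fp_fcsr = pcb->pcb_fcsr;
    } else
        memset(regs, 0, sizeof(*regs)); /* no FP use yet: no garbage */
    return (0);
}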
Submitted by: br, markj
Approved by: re (rgrimes)
Sponsored by: DARPA
Differential Revision: https://reviews.freebsd.org/D17141
Zero the entire FP register set structure returned for ptrace() if a
thread hasn't used FP registers rather than leaking garbage in the
fp_sr and fp_cr fields.
Reviewed by: emaste, andrew
Approved by: re (rgrimes)
MFC after: 2 weeks
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D17140
This simplifies the runtime logic and reduces the number of
runtime-constant branches.
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D16736
This keeps the initialization coupled together with the kmem_* KPI
implementation, which is the main user of these arenas.
No functional change intended.
Reviewed by: alc
Approved by: re (gjb)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17247
route cache updates.
Bring over locking changes applied to udp_output() for the route cache
in r297225 and fixed in r306559 which achieve multiple things:
(1) acquire an exclusive inp lock earlier depending on the expected
conditions; we add a comment explaining this in udp6,
(2) having acquired the exclusive lock earlier eliminates a slight
chance of a race condition which was present in v4 for multiple
years as well and is now gone, and
(3) only pass the inp_route6 to ip6_output() if we are holding an
exclusive inp lock, so that possible route cache updates in case
of routing table generation number changes can happen safely (see
the sketch below).
In addition this change (as in the legacy IP counterpart) decomposes the
tracking of the inp and pcbinfo locks and adds extra assertions that the
two together are acquired correctly.
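Point (3) boils down to a call of the following shape (sketch;
INP_WLOCKED() and the inp_route6 cache member mirror the legacy IP code
in udp_output()):

    /* Pass the cached route only while holding the exclusive inp lock. */
    error = ip6_output(m, optp,
        INP_WLOCKED(inp) ? &inp->inp_route6 : NULL,
        flags, inp->in6p_moptions, NULL, inp);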
PR: 230950
Reviewed by: karels, markj
Approved by: re (gjb)
Pointyhat to: bz (for completely missing this bit)
Differential Revision: https://reviews.freebsd.org/D17230
Since an ifunc-capable linker is now required on i386, bring this code in
line with the amd64 counterpart.
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D16736
The current cache logic checks the total number of stacks in the kernel,
which even on small boxes significantly exceeds the 128 limit (e.g. an
8-way box with zfs has almost 800 stacks allocated).
Stacks are cached earlier for each main thread.
As a result the code is rarely executed, but when it is (on boxes like
the above) it always fails. Since there are no provisions made for NUMA and
release time is approaching, just do a quick check to avoid acquiring the
lock.
Approved by: re (kib)
pm_pcid is unsigned.
Reviewed by: cem, markj
CID: 1395727
Noted by: cem
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
MFC after: 3 days
Differential revision: https://reviews.freebsd.org/D17235
UFS quotaoff iterates over all mp vnodes, and dereferences and clears
the pointers to the corresponding dquots. If SU work items transiently
reference some of the dquots, quotaoff() would eventually fail, but all
processed vnodes are already stripped of their dquots. The state is
problematic, since quotas are left enabled, but there are no dquots
where blocks and inodes can be accounted. The result is assertion
failures and NULL pointer dereferences.
Fix it by suspending writes around the quotaoff() call. Since the
filesystem is synced, no dangling references to dquots from SU
work items can be left behind, which means that quotaoff succeeds.
The complication there is that the quotaoff VFS op is performed with the
mount point busied, while to suspend we need to start a write on the mp.
If vn_start_write() is called on a busied mp, the system might deadlock
against a parallel unmount request. Handle this by unbusying the mp
before starting the write, which in turn requires changing the quotaoff()
interface to return with the mount point not busied, the same as was done
for quotaon().
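A sketch of the resulting ordering (illustrative; vfs_write_suspend_umnt()
is assumed to be the suspension primitive used, and quotaoff_body() is a
hypothetical stand-in for the existing dquot teardown):

    vfs_unbusy(mp);                     /* avoid deadlocking against unmount */
    error = vfs_write_suspend_umnt(mp); /* syncs the fs, drains SU work items */
    if (error == 0) {
        error = quotaoff_body(td, mp, type);
        vfs_write_resume(mp, 0);
    }
    return (error);                     /* mp is returned not busied */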
Reviewed by: mckusick
Reported and tested by: pho
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D17208
We drop the keg lock when we go to actually allocate the slab, allowing
other threads to advance the cursor. This can cause us to exit the
round-robin loop before having attempted allocations from all domains,
resulting in a hang during a subsequent blocking allocation attempt from
a depleted domain.
Reported and tested by: Jan Bramkamp <crest@bultmann.eu>
Reviewed by: alc, cem
Approved by: re (gjb)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17209
The amd64 kernel started using ifunc for a variety of functions with
arch-specific implementations, and we would like to make use of the
same functionality on i386 and as much as possible avoid divergence
between i386 and amd64. In particular, future changes for security
improvements and mitigations may rely on ifunc support.
Approved by: re (kib)
Sponsored by: The FreeBSD Foundation
Limits can be safely obtained with lim_cur() from the thread. racct is
compiled in but disabled by default; note that racct enablement is a
boot-only tunable.
This eliminates the second most common place of taking the lock during
package builds. While here, don't take the lock in mlockall() either.
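A sketch of the pattern (foo_check_memlock() is a hypothetical wrapper;
the point is that lim_cur() needs no proc lock, and racct needs it only
when actually enabled):

static int
foo_check_memlock(vm_map_t map, vm_size_t size)
{

    /* Resource limits can be read lock-free via the current thread. */
    if (ptoa(pmap_wired_count(vm_map_pmap(map))) + size >
        lim_cur(curthread, RLIMIT_MEMLOCK))
        return (ENOMEM);
#ifdef RACCT
    /* racct is compiled in but off by default; only then take the lock. */
    if (racct_enable) {
        int error;

        PROC_LOCK(curproc);
        error = racct_set(curproc, RACCT_MEMLOCK,
            ptoa(pmap_wired_count(vm_map_pmap(map))) + size);
        PROC_UNLOCK(curproc);
        if (error != 0)
            return (ENOMEM);
    }
#endif
    return (0);
}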
Reviewed by: kib
Approved by: re (gjb)
Differential Revision: https://reviews.freebsd.org/D17210
Add a state check before enqueuing the transmit task in the
bxe_link_attn() routine, and a state check before invoking
bxe_nic_unload() in bxe_shutdown().
Submitted by: Vaishali.Kulkarni@cavium.com
Approved by: re (gjb)
Doing so can deadlock when the thread already owns another vnode lock,
e.g. during a rename, as was demonstrated by the reporter. In fact,
there seems to be no need to force the call to getinoquota() always,
because vn_open() locks the vnode exclusively, and this is the most
important case. To add to the point, directories where the dirent is
added or removed are locked exclusively as well.
Reported by: bwidawsk
Tested by: bwidawsk, pho (as part of the larger patch)
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
MFC after: 1 week
The atpic_register_sources callback tries to avoid registering interrupt
sources that would collide with an I/O APIC. However, the previous
implementation was failing to register IRQs 8-15 since the slave PIC
saw valid IRQs from the master and assumed an I/O APIC was present. To
fix, go back to registering all 8259A interrupt sources in one loop when
the master's register_sources method is invoked.
PR: 231291
Approved by: re (kib)
MFC after: 1 month
Also change the behaviour slightly: instead of freeing "config" if the
last nvlist doesn't pass the tests, return the last config that did pass
those tests. This matches the comment at the beginning of the function.
PR: 230704
Diagnosed by: avg
Reviewed by: asomers, avg
Tested by: Mark Martinec <Mark.Martinec@ijs.si>
Approved by: re (gjb)
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D17202
The patch removes all checks for pti/pcid/invpcid from the context switch
path. I verified this by looking at the generated code, compiled with
the in-tree clang. The invpcid_works1 trick required an inline attribute
on pmap_activate_sw_pcid_pti() to work.
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D17181
There is no need to use %rax for temporary values, and avoiding doing
so shortens the function.
Handle the explicit 'check for tail' depessimization for backwards copying.
This reduces the diff against userspace.
Tested with the glibc test suite.
Approved by: re (kib)
This will be used in the following conversion of pmap_activate_sw().
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
Approved by: re (gjb)
Differential revision: https://reviews.freebsd.org/D17181
There is a braino in the non-erms variant which breaks the
functionality.
Will be fixed at a later time with a different patch.
Reported by: Manfred Antar
Approved by: re (implicit)
There is no need to use %rax for temporary values, and avoiding doing
so shortens the function.
Handle the explicit 'check for tail' depessimization for backwards copying.
This reduces the diff against userspace.
Approved by: re (kib)
when there is work to do. This reduces the purge thread's CPU
consumption to one third. This will help keep the thread's CPU usage
under control now that the default hash size has increased.
Reviewed by: kp
Approved by: re (kib)
Differential Revision: https://reviews.freebsd.org/D17097
Intel docs claim such a memset (rep stosb with a length of 4096 bytes)
is special-cased by microarchitectures. Intel also switched Linux to
use it for this purpose.
Approved by: re (gjb)
Not all event descriptions have a sample rate (such as inst_retired.any);
this restores the legacy behavior of using 65536 in that case. It also
prevents accidental API misuse that could lead to a panic.
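A sketch of the fallback (the field name is hypothetical):

    /* No sample rate in the event description: use the legacy default. */
    if (ev->pm_ev_samplerate == 0)
        ev->pm_ev_samplerate = 65536;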
PR: 230985
Reported by: markj
Reviewed by: markj
Approved by: re (gjb)
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D16958
We ought to be consistent across our Tier-1 and nearly-Tier-1
architectures, so enable Capsicum for 32-bit armv6/armv7 by default.
PR: 204008
Reviewed by: ian, oshogbo
Approved by: re (gjb)
Relnotes: Yes
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D17023
The previous default of "balanced" appears to have caused pathological
behavior, including very poor performance and 100% CPU load in the
arc_reclaim_thread.
The symptoms appeared when the daily periodic run started.
With this change, the system--and the ARC in particular--behaved
normally during a manual daily periodic run.
From Mark Johnston: The port of the balanced strategy is incomplete,
since arc_prune_async() is a no-op on FreeBSD. (This also seems
to imply that r337653 is a no-op.) After 12 is branched we can
port the remaining bits and consider changing the default back.
Submitted by: markj (essentially)
Reviewed by: markj
Approved by: re (gjb)
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D17156