Some Linux futex ops atomically verify that the value at the futex address
uaddr (uval) equals the expected value val. Comparing a signed uval against an
unsigned val may lead to an unexpected result, most often a deadlock.
So copy in the word at uaddr as an unsigned int so the parameters are compared
correctly. While here, change the ktr records to print the parameters in a
more readable format.
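A minimal sketch of the intended check (names are illustrative, not the exact
kernel code); the point is that the fetched futex word and the expected value
are compared as the same unsigned type:

static int
futex_check_value(uint32_t *uaddr, uint32_t val)
{
        uint32_t uval;          /* unsigned, same type as 'val' */
        int error;

        /* Fetch the current futex word from user space. */
        error = copyin(uaddr, &uval, sizeof(uval));
        if (error != 0)
                return (EFAULT);

        /* Unsigned vs. unsigned: no sign-related surprises. */
        if (uval != val)
                return (EWOULDBLOCK);
        return (0);
}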
Tested by: eadler@
MFC after: 3 days
situation checked by assert is verified to not take place in
vm_map_wire(), and protection permissions on the wired entry can be
revoked afterward.
Reported by: markj
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
was not allocating space for the parameter save area in the stack frame.
If the compiler chose to save the signal handler's argument on the stack,
it would overwrite the first 32 bits of the sigaction struct with the
argument, corrupting the struct for a subsequent invocation.
PR: powerpc/183040
MFC after: 8 days
BUS_DMA_KMEM_ALLOC. They serve the same purpose, but using the flag
means that the map can be NULL again, which in turn enables significant
optimizations for the common case of no bouncing.
Obtained from: Netflix, Inc.
MFC after: 3 days
- Switch from timeout() to callout_*() for per-request timers (see the
  sketch after this list).
- Use device_find_child() in the identify routine.
- Use device_printf() instead of passing device_get_nameunit() to
printf().
- Expand the SBP_LOCK coverage, simplifying the locking.
- Uninline STAILQ_FOREACH_SAFE().
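A minimal sketch of the timeout(9)-to-callout(9) conversion mentioned above
(function and parameter names are illustrative, not the exact driver code):

static void
sbp_timeout_fn(void *arg)
{
        /* Handle the expired request; runs with SBP_LOCK held. */
}

static void
sbp_arm_timer(struct callout *co, struct mtx *sbp_lock, void *ocb)
{
        /*
         * The timeout(9) style used to be roughly:
         *      handle = timeout(sbp_timeout_fn, ocb, 5 * hz);
         * With callout(9) the callout is bound to the driver lock, so the
         * handler runs with SBP_LOCK held and cancellation races go away.
         */
        callout_init_mtx(co, sbp_lock, 0);
        callout_reset(co, 5 * hz, sbp_timeout_fn, ocb);
}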
Tested by: sbruno
Add a new zfs property, "redundant_metadata", which can have the values "all"
or "most". The default will be "all", which is the current behavior. When set
to all, ZFS stores an extra copy of all metadata. If a single on-disk block
is corrupt, at worst a single block of user data (which is recordsize bytes
long) can be lost.
Setting to "most" will cause us to only store 1 copy of level-1 indirect
blocks of user data files. This can improve performance of random writes,
because less metadata has to be written. In practice, at worst about
100 blocks (of recordsize bytes each) of user data can be lost if a single
on-disk block is corrupt.
The exact behavior of which metadata blocks are stored redundantly may change
in future releases.
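For example, the property can be inspected and changed with the usual zfs(8)
commands (the dataset name below is only an example):

# zfs get redundant_metadata tank/data
# zfs set redundant_metadata=most tank/data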
Illumos issue: 3835 zfs need not store 2 copies of all metadata
MFC after: 2 weeks
guest for which the rules regarding xsetbv emulation are known. In
particular future extensions like AVX-512 have interdependencies among
feature bits that could allow a guest to trigger a GP# in the host with
the current approach of allowing anything the host supports.
- Add proper checking of the Intel MPX and AVX-512 XSAVE features in the
  xsetbv emulation (see the sketch after this list) and allow these features
  to be exposed to the guest if they are enabled in the host.
- Expose a subset of known-safe features from leaf 0 of the structured
extended features to guests if they are supported on the host, including
RDFSBASE/RDGSBASE, BMI1/2, AVX2, AVX-512, HLE, ERMS, and RTM. Aside
from AVX-512, these features are all new instructions available for use
in ring 3 with no additional hypervisor changes needed.
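A minimal sketch of the XCR0 interdependency checking involved (bit names
follow the Intel SDM; this is not the exact vmm code):

#define XFEATURE_X87            (1UL << 0)
#define XFEATURE_SSE            (1UL << 1)
#define XFEATURE_AVX            (1UL << 2)
#define XFEATURE_AVX512         ((1UL << 5) | (1UL << 6) | (1UL << 7))

/* Return 0 if the guest-requested XCR0 value is acceptable, EINVAL if not. */
static int
validate_xcr0(uint64_t xcr0, uint64_t host_limits)
{
        if (xcr0 & ~host_limits)                /* only host-enabled features */
                return (EINVAL);
        if ((xcr0 & XFEATURE_X87) == 0)         /* x87 state must always be set */
                return (EINVAL);
        if ((xcr0 & XFEATURE_AVX) && !(xcr0 & XFEATURE_SSE))
                return (EINVAL);                /* AVX requires SSE */
        /* The three AVX-512 state bits must be set or cleared together ... */
        if ((xcr0 & XFEATURE_AVX512) != 0 &&
            (xcr0 & XFEATURE_AVX512) != XFEATURE_AVX512)
                return (EINVAL);
        /* ... and they require AVX. */
        if ((xcr0 & XFEATURE_AVX512) && !(xcr0 & XFEATURE_AVX))
                return (EINVAL);
        return (0);
}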
Reviewed by: neel
Netmap gets its own hardware-assisted virtual interface and won't take
over or disrupt the "normal" interface in any way. You can use both
simultaneously.
For kernels with DEV_NETMAP, cxgbe(4) carves out an ncxl<N> interface
(note the 'n' prefix) in the hardware to accompany each cxl<N>
interface. These two ifnets per port share the same wire but really
are separate interfaces in the hardware and software. Each gets its own
L2 MAC addresses (unicast and multicast), MTU, checksum caps, etc. You
should run netmap on the 'n' interfaces only; that's what they are for.
With this, pkt-gen is able to transmit > 45Mpps out of a single 40G port
of a T580 card. Two-port tx is at ~56Mpps total (28M + 28M) as of now.
Single port receive is at 33Mpps but this is very much a work in
progress. I expect it to be closer to 40Mpps once done. In any case,
the current effort can already saturate multiple 10G ports of a T5 card
at the smallest legal packet size. T4 gear is totally untested.
trantor:~# ./pkt-gen -i ncxl0 -f tx -D 00:07:43:ab:cd:ef
881.952141 main [1621] interface is ncxl0
881.952250 extract_ip_range [275] range is 10.0.0.1:0 to 10.0.0.1:0
881.952253 extract_ip_range [275] range is 10.1.0.1:0 to 10.1.0.1:0
881.962540 main [1804] mapped 334980KB at 0x801dff000
Sending on netmap:ncxl0: 4 queues, 1 threads and 1 cpus.
10.0.0.1 -> 10.1.0.1 (00:00:00:00:00:00 -> 00:07:43:ab:cd:ef)
881.962562 main [1882] Sending 512 packets every 0.000000000 s
881.962563 main [1884] Wait 2 secs for phy reset
884.088516 main [1886] Ready...
884.088535 nm_open [457] overriding ifname ncxl0 ringid 0x0 flags 0x1
884.088607 sender_body [996] start
884.093246 sender_body [1064] drop copy
885.090435 main_thread [1418] 45206353 pps (45289533 pkts in 1001840 usec)
886.091600 main_thread [1418] 45322792 pps (45375593 pkts in 1001165 usec)
887.092435 main_thread [1418] 45313992 pps (45351784 pkts in 1000834 usec)
888.094434 main_thread [1418] 45315765 pps (45406397 pkts in 1002000 usec)
889.095434 main_thread [1418] 45333218 pps (45378551 pkts in 1001000 usec)
890.097434 main_thread [1418] 45315247 pps (45405877 pkts in 1002000 usec)
891.099434 main_thread [1418] 45326515 pps (45417168 pkts in 1002000 usec)
892.101434 main_thread [1418] 45333039 pps (45423705 pkts in 1002000 usec)
893.103434 main_thread [1418] 45324105 pps (45414708 pkts in 1001999 usec)
894.105434 main_thread [1418] 45318042 pps (45408723 pkts in 1002001 usec)
895.106434 main_thread [1418] 45332430 pps (45377762 pkts in 1001000 usec)
896.107434 main_thread [1418] 45338072 pps (45383410 pkts in 1001000 usec)
...
Relnotes: Yes
Sponsored by: Chelsio Communications.
uart2: <Intel AMT - PM965/GM965 KT Controller> port 0x1830-0x1837
mem 0xfe024000-0xfe024fff irq 17 at device 3.3 on pci0
uart2: console (115200,n,8,1)
Tested as tty and serial console. Seems "fine".
- Put "_LE_" into the register access macros to indicate little endian
byte order is expected by the hardware.
- Avoid using the bounce buffer when not strictly needed. Try to move
data directly using bus-space functions first.
- Ensure we preserve the reserved bits in the power down mode
  register. Otherwise the hardware goes into a non-recoverable state.
- Always use 32-bit access when writing or reading registers or FIFOs,
  because the hardware is 32-bit oriented and doesn't really understand 8-
  and 16-bit accesses.
- Correct writes to the memory address register. There is no need to
shift the register offset.
- Correct interval for interrupt endpoints.
- Optimise 90ns internal memory buffer read delay.
- Rename PDT to PTD, which is how the datasheet writes it.
- Add missing programming for activating host controller PTDs.
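A minimal sketch of what the "_LE_" register accessors express (macro and
softc field names are illustrative):

/* 32-bit little-endian register access; the hardware is 32-bit oriented. */
#define ISP_READ_LE_4(sc, off)                                          \
        le32toh(*(volatile uint32_t *)((sc)->sc_regs + (off)))
#define ISP_WRITE_LE_4(sc, off, val)                                    \
        (*(volatile uint32_t *)((sc)->sc_regs + (off)) = htole32(val))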
Sponsored by: DARPA, AFRL
mappings. Instead, they should first map to an RSS bucket and then
query the RSS bucket -> CPU ID mapping to figure out the target CPU.
When (if?) RSS rebalancing or some other (non-round-robin) distribution of
work from buckets to CPU IDs is implemented, various bits of code - both
userland and kernel - will need to know how this mapping works.
So, to support this:
* Add a new function rss_m2bucket() - this maps an mbuf to a given bucket.
Anything which is currently doing hash -> CPU work may instead wish to
do hash -> bucket, and then query the bucket->cpuid map for which
CPU it belongs on. Or, map it to a bucket, then re-pin that bucket ->
CPU during a rebalance operation.
* For userland applications which wish to exploit affinity to RSS buckets,
  the bucket -> CPU ID mapping is now available via a sysctl:
  net.inet.rss.bucket_mapping lists the bucket to CPU ID mapping as
  a list of bucket:cpu pairs (see the sketch after this list).
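A minimal sketch of how a userland program might read that mapping (this
assumes the sysctl returns a printable string of space-separated bucket:cpu
pairs):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
        char buf[1024], *p, *tok;
        size_t len = sizeof(buf);
        int bucket, cpu;

        if (sysctlbyname("net.inet.rss.bucket_mapping", buf, &len,
            NULL, 0) == -1) {
                perror("sysctlbyname");
                return (1);
        }
        for (p = buf; (tok = strsep(&p, " ")) != NULL; ) {
                if (sscanf(tok, "%d:%d", &bucket, &cpu) == 2)
                        printf("bucket %d -> cpu %d\n", bucket, cpu);
        }
        return (0);
}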
Use armv7_setttb(), which sets the proper PT attributes.
Get rid of unused CPU functions and use nullop instead.
Replace obsolete pj4b_/arm11_ functions with the appropriate armv7_ ones.
API function 'vie_calculate_gla()'.
While the current implementation is simplistic, it forms the basis for doing
segmentation checks if the guest is in 32-bit protected mode.
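A minimal sketch of the address computation such an API performs (simplified;
names are illustrative, and the segmentation and canonical checks are omitted):

/* GLA = segment base + effective address, truncated to the address size. */
static uint64_t
calc_gla(uint64_t segbase, uint64_t base, uint64_t index, int scale,
    int64_t disp, int addrsize)
{
        uint64_t ea, gla;

        ea = base + index * scale + (uint64_t)disp;
        if (addrsize == 4)
                ea &= 0xffffffffUL;     /* 32-bit addressing wraps at 4GB */
        else if (addrsize == 2)
                ea &= 0xffff;
        gla = segbase + ea;
        if (addrsize != 8)              /* outside 64-bit mode GLA is 32 bits */
                gla &= 0xffffffffUL;
        return (gla);
}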
of the guest linear address space. These APIs in turn use a new ioctl
'VM_GLA2GPA' to convert the guest linear address to guest physical.
Use the new copyin/copyout APIs when emulating the ins/outs instructions in
bhyve(8).
taskqueue worker thread(s) to.
For now it isn't a taskqueue/taskthread error to fail to pin
to the given cpuid.
Thanks to rpaulo@, kib@ and jhb@ for feedback.
Tested:
* igb(4), with local RSS patches to pin taskqueues.
TODO:
* ask the doc team for help in documenting the new API call.
* add a taskqueue_start_threads_cpuset() method which takes
a cpuset_t - but this may require a bunch of surgery to
bring cpuset_t into scope.
'struct vm_guest_paging'.
Check for canonical addressing in vmm_gla2gpa() and inject a protection
fault into the guest if a violation is detected.
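A minimal sketch of the canonical-address check for 48-bit linear addresses
(not the exact vmm code):

#include <stdbool.h>
#include <stdint.h>

/* Canonical: bits 63:47 are a sign extension of bit 47. */
static bool
gla_is_canonical(uint64_t gla)
{
        int64_t hi = (int64_t)gla >> 47;

        return (hi == 0 || hi == -1);
}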
If the page table walk is restarted in vmm_gla2gpa() then reset 'ptpphys' to
point to the root of the page tables.
indicate the faulting linear address.
If the guest PML4 entry has the PG_PS bit set then inject a page fault into
the guest with the PGEX_RSV bit set in the error_code.
Get rid of redundant checks for PG_RW violations when walking the page
tables.
memory ordering model allows writes to different devices to complete out
of order, leading to a situation where the write that clears an interrupt
source at a device can complete after the write that unmasks and EOIs the
interrupt at the interrupt controller, resulting in a spurious re-interrupt.
This adds a generic barrier function specific to the needs of interrupt
controllers, and calls that function from the GIC and TI AINTC controllers.
There may still be other SoC-specific controllers that need to make the call.
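A minimal sketch of the idea behind such a barrier (the name is illustrative;
the real function lives with the interrupt controller code):

/*
 * Make sure the write that cleared the interrupt source at the device has
 * completed before the unmask/EOI write to the interrupt controller is
 * issued, so a level-triggered source cannot re-fire spuriously.
 */
static __inline void
intr_eoi_barrier(void)
{
        __asm __volatile("dsb" : : : "memory");
}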
Reviewed by: cognet, Svatopluk Kraus <onwahe@gmail.com>
MFC after: 3 days
Idle priority is not even time-share, so if the system is busy in any way,
those events may never be executed. Since in some cases the system waits
for events processed by that thread, this may cause deadlocks.
to be consistent with mutex destruction in ipf_log_soft_destroy(). As a
result, mutex destruction in ipf_log_soft_fini() is redundant.
Approved by: glebius (mentor)
Obtained from: darrenr (author)
the kmem object lock is held. Do the pmap_remove() before acquiring the
kmem object lock.
MFC after: 1 week
Sponsored by: EMC / Isilon Storage Division
The CUSE library is a wrapper for the devfs kernel functionality which
is exposed through /dev/cuse. In order to function, the CUSE kernel
code must either be enabled in the kernel configuration file or loaded
separately as a module. Currently none of the committed items are
connected to the default builds, except for installing the needed
header files. The CUSE code will be connected to the default world and
kernel builds in a follow-up commit.
The CUSE module was written by Hans Petter Selasky, somewhat inspired
by similar functionality found in FUSE. The CUSE library can be used
for many purposes. Currently CUSE is used when running Linux kernel
drivers in user space, which need to create a character device node to
communicate with their applications. CUSE has full support for almost
all devfs functionality found in the kernel:
- kevents
- read
- write
- ioctl
- poll
- open
- close
- mmap
- private per file handle data
Requested by several people. Also see "multimedia/cuse4bsd-kmod" in
ports.
the UART FIFO.
The emulation is constrained in a number of ways: it is 64-bit only, it doesn't
check for all exception conditions, and it is limited to I/O ports emulated in
userspace.
Some of these constraints will be relaxed in followup commits.
Requested by: grehan
Reviewed by: tychon (partially and a much earlier version)