fragment reassembly queues.
This allows policies to label reassembly queues, perform access
control checks when matching fragments to a queue, update a queue
label when fragments are matched, and label the resulting
reassembled datagram.
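As a rough illustration of the life cycle these entry points cover,
here is a small userland toy; the mypol_* names, the one-field label,
and the hook signatures are invented for the example and are not the
MAC framework KPI:

    #include <stdio.h>

    /* Toy label: a single compartment id. */
    struct label { int id; };

    struct fragment { struct label label; int off, len; };
    struct ipq { struct label label; int nfrags; };

    /* Label a new reassembly queue from its first fragment. */
    static void
    mypol_ipq_create(const struct fragment *f, struct ipq *q)
    {
        q->label = f->label;
        q->nfrags = 1;
    }

    /* Access control check when matching a fragment to a queue. */
    static int
    mypol_ipq_match(const struct fragment *f, const struct ipq *q)
    {
        return (f->label.id == q->label.id);
    }

    /* Update the queue label when a fragment is accepted. */
    static void
    mypol_ipq_update(const struct fragment *f, struct ipq *q)
    {
        (void)f;        /* labels are equal here, nothing to merge */
        q->nfrags++;
    }

    /* Label the reassembled datagram from the queue label. */
    static void
    mypol_ipq_reassemble(const struct ipq *q, struct label *dgram)
    {
        *dgram = q->label;
    }

    int
    main(void)
    {
        struct fragment f1 = { { 7 }, 0, 1480 }, f2 = { { 7 }, 1480, 520 };
        struct ipq q;
        struct label dgram;

        mypol_ipq_create(&f1, &q);
        if (mypol_ipq_match(&f2, &q))
            mypol_ipq_update(&f2, &q);
        mypol_ipq_reassemble(&q, &dgram);
        printf("queue label %d, %d fragments, datagram label %d\n",
            q.label.id, q.nfrags, dgram.id);
        return (0);
    }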
Obtained from: TrustedBSD Project
sooner to simplify locking and eliminate the need for a rather
chatty comment about why we have to handle the global lock in a
special way for the benefit of ipfw and pf cred rules.
MFC after: 3 days
(unless explicitly locked to mode 11b) so when we join the bss the
channel attached to the scan cache entry may need to be demoted.
o demote to 11b if the ap is advertising 11b rates
o skip the ap if it's 11b but we're locked to 11g (could consider this
advisory but for now treat it as mandatory)
o handle an odd edge case: if there is a fixed transmit rate for 11g,
  the rate check against the 11b ap will fail; try to demote to 11b
  and retry the rate check
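Roughly, the matching logic looks like the following userland toy;
the types, the rate encoding, and the helper names are invented for
illustration and are not the net80211 code:

    #include <stdbool.h>
    #include <stdio.h>

    enum phymode { MODE_AUTO, MODE_11B, MODE_11G };

    /* Rates in 0.5 Mb/s units; anything above 22 (11 Mb/s) is OFDM. */
    struct toy_ap { int rates[16]; int nrates; };

    struct toy_sta {
        enum phymode lock;      /* locked mode, or MODE_AUTO */
        int fixed_rate_11g;     /* fixed tx rate for 11g, or -1 */
    };

    static bool
    ap_is_11b(const struct toy_ap *ap)
    {
        for (int i = 0; i < ap->nrates; i++)
            if (ap->rates[i] > 22)
                return (false);     /* advertises an 11g rate */
        return (true);
    }

    /* The fixed rate only applies in the mode it was configured for. */
    static bool
    rate_check(const struct toy_ap *ap, const struct toy_sta *sta,
        enum phymode mode)
    {
        if (mode != MODE_11G || sta->fixed_rate_11g < 0)
            return (true);
        for (int i = 0; i < ap->nrates; i++)
            if (ap->rates[i] == sta->fixed_rate_11g)
                return (true);
        return (false);
    }

    /* Returns true if we may join, with *mode possibly demoted. */
    static bool
    match_ap(const struct toy_ap *ap, const struct toy_sta *sta,
        enum phymode *mode)
    {
        *mode = MODE_11G;

        /* Skip the ap if it is 11b but we are locked to 11g. */
        if (ap_is_11b(ap) && sta->lock == MODE_11G)
            return (false);

        /* Demote if the ap only advertises 11b rates. */
        if (ap_is_11b(ap))
            *mode = MODE_11B;

        /* Edge case: fixed 11g rate fails; demote and retry. */
        if (!rate_check(ap, sta, *mode)) {
            if (*mode == MODE_11B)
                return (false);
            *mode = MODE_11B;
            return (rate_check(ap, sta, *mode));
        }
        return (true);
    }

    int
    main(void)
    {
        struct toy_ap bonly = { { 2, 4, 11, 22 }, 4 };
        struct toy_sta sta = { MODE_AUTO, 108 };    /* fixed 54 Mb/s */
        enum phymode mode;
        bool join;

        join = match_ap(&bonly, &sta, &mode);
        printf("join=%d mode=%s\n", join,
            mode == MODE_11B ? "11b" : "11g");
        return (0);
    }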
Reviewed by: sephe, thompsa
G3 as well as the internal ADB keyboards and mice in PowerBooks and iBooks. This
also brings in Mac GPIO support, for which we should eventually have a better
interface.
Obtained from: NetBSD (CUDA and PMU drivers)
- Consistently add parentheses to return statements.
- Use NULL instead of 0 when comparing pointers, also avoiding
unnecessary casts.
- Do not use pointers as booleans.
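For illustration, the cleanup amounts to transformations like this
(hypothetical code, not taken from the tree):

    #include <stddef.h>

    struct foo { char *f_name; };

    /* Before: pointer used as a boolean, 0 instead of NULL,
     * unparenthesized returns. */
    static char *
    find_old(struct foo *fp)
    {
            if (!fp->f_name)
                    return 0;
            return fp->f_name;
    }

    /* After the cleanup. */
    static char *
    find_new(struct foo *fp)
    {
            if (fp->f_name == NULL)
                    return (NULL);
            return (fp->f_name);
    }

    int
    main(void)
    {
            struct foo f = { "bar" };

            return (find_new(&f) == find_old(&f) ? 0 : 1);
    }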
Reviewed by: rwatson (earlier version)
MFC after: 2 months
the net80211 layer has complete control over the handling of mgt frames
(in particular, the ac, tx rate, and retry count); this also allows us
to purge the M_LINK0 flag that was attached to mbufs to mark them as
needing encryption for shared key auth
o change ieee80211_send_setup to take a tid parameter so it can be used
to setup QoS frames
o correct BAR frame construction for AMPDU
o retransmit BAR frames until ACK'd or timeout (use tunables to
  control behaviour, default is very aggressive; see the sketch
  after this list)
o defer seq# update until BAR frame is ACK'd
o add BAR response handling callback for driver to interpose and
push new state to device or push pending aggregates
While here also:
o add backpointer to node in the per-tid tx aggregation data structure
o move ampdu tx state setup/teardown work to separate functions
o yank useless code for setting fixed rate through media opts: this
mechanism didn't scale to HT rates and couldn't handle multiple bands;
fixed tx rates are set with the IEEE80211_IOC_TXPARAMS ioctl
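A rough userland model of the retransmit-until-ACK'd-or-timeout
behaviour referred to above; the names and tunables are invented and
the real code drives this from net80211 state and a timer:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy tunables standing in for the sysctl knobs. */
    static int bar_timeout_ms = 200;    /* per-attempt timeout */
    static int bar_max_tries = 50;      /* aggressive default */

    struct toy_tid {
        int bar_tries;      /* attempts so far */
        int txseq;          /* current sequence number */
        int bar_seq;        /* seq# carried in the pending BAR */
        bool bar_acked;
    };

    /* Stand-in for handing the BAR frame to the driver. */
    static void
    send_bar(struct toy_tid *tid)
    {
        tid->bar_tries++;
        printf("BAR try %d (seq %d, timeout %d ms)\n",
            tid->bar_tries, tid->bar_seq, bar_timeout_ms);
    }

    /*
     * Response/timeout callback: commit the deferred seq# update on
     * ACK, otherwise retransmit until the try limit is reached.
     */
    static void
    bar_response(struct toy_tid *tid, bool acked)
    {
        if (acked) {
            tid->bar_acked = true;
            tid->txseq = tid->bar_seq;  /* deferred seq# update */
            return;
        }
        if (tid->bar_tries < bar_max_tries)
            send_bar(tid);              /* retransmit */
        else
            printf("BAR abandoned after %d tries\n", tid->bar_tries);
    }

    int
    main(void)
    {
        struct toy_tid tid = { 0, 100, 164, false };

        send_bar(&tid);
        bar_response(&tid, false);      /* timeout: retransmit */
        bar_response(&tid, true);       /* ACK'd: update seq# */
        printf("acked=%d txseq=%d\n", tid.bar_acked, tid.txseq);
        return (0);
    }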
I noticed on a system at home that restarting named(8) causes the
/var/named/dev mount to be moved to the bottom of the mount list,
because it gets remounted. When I received the daily security email this
morning, I was quite amazed to see that the security report listed the
differences, even though nothing out of the ordinary had happened.
If we just pipe the `mount -p' output through sort(1), we will only
receive notifications about mounts when something has really
changed.
are possibly still being created. The d_secperunit field
contains the number of sectors of the disk and not of the
slice/partition to which the disklabel applies.
Rather than reject the disklabel, we now silently adjust
the field. Existing code, like bsdlabel(8), does not check
the label that extensively and adjusts fields as a side
effect as well. In other words, the field is apparently not
that important, so gpart should not be too strict about it.
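The adjustment presumably amounts to something like the following
toy; the slice sector count is an assumed input and only d_secperunit
comes from disklabel(5):

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Silently trim d_secperunit to the size of the slice/partition
     * holding the label instead of rejecting the label.
     */
    static uint32_t
    adjust_secperunit(uint32_t d_secperunit, uint32_t slice_sectors)
    {
        if (d_secperunit > slice_sectors)
            return (slice_sectors);
        return (d_secperunit);
    }

    int
    main(void)
    {
        /* Label written for the whole disk, slice is smaller. */
        printf("%u\n",
            (unsigned)adjust_secperunit(156301488, 39075750));
        return (0);
    }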
Reported by: nyan@
Reported by: Andriy Gapon <avg@icyb.net.ua>
Olaf Kirch noticed that the i915_set_status_page() function of the i915
kernel driver calls ioremap with an address offset that is supplied by
userspace via ioctl. The function zeroes the mapped memory via memset
and tells the hardware about the address. It turns out that access
to that ioctl is not restricted to root, so users could probably
exploit it to do nasty things. We haven't tried to write actual
exploit code, though.
It only affects the Intel G33 series and newer.
Approved by: bz (secteam)
Obtained from: Intel drm repo
Security: CVE-2008-3831
Memory Interface (CFI). The flash memory can be read from and
written to through /dev/cfi# and an ioctl() exists so processes
can read the query information.
The driver supports the AMD and Intel command sets, though only
the AMD command set has been tested.
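For example, reading the first bytes of the flash from userland could
look like this; the device name /dev/cfi0 is an assumption and the
query ioctl is left out since its request and argument structure come
from the driver's header:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        unsigned char buf[16];
        ssize_t n;
        int fd;

        fd = open("/dev/cfi0", O_RDONLY);
        if (fd == -1) {
            perror("open /dev/cfi0");
            return (1);
        }
        n = read(fd, buf, sizeof(buf));     /* raw flash contents */
        if (n > 0) {
            for (ssize_t i = 0; i < n; i++)
                printf("%02x ", buf[i]);
            printf("\n");
        }
        close(fd);
        return (0);
    }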
Obtained from: Juniper Networks, Inc.
established a valid link or not.
In rl_start_locked(), don't try to send packets unless we have a
valid link. While I'm here, add a check that verifies whether the
driver can accept Tx requests by inspecting the
IFF_DRV_OACTIVE/IFF_DRV_RUNNING flags.
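The gate follows the usual ifnet driver idiom; a self-contained toy
version of it (the flag names and the link field are stand-ins, not
the real softc):

    #include <stdbool.h>
    #include <stdio.h>

    #define TOY_DRV_RUNNING 0x01    /* interface is up and running */
    #define TOY_DRV_OACTIVE 0x02    /* transmitter is already busy */

    struct toy_softc {
        int drv_flags;
        bool link_up;
    };

    /*
     * Hand packets to the hardware only when the interface is
     * running, not already busy, and the PHY reported a valid link.
     */
    static bool
    can_transmit(const struct toy_softc *sc)
    {
        if ((sc->drv_flags & (TOY_DRV_RUNNING | TOY_DRV_OACTIVE)) !=
            TOY_DRV_RUNNING)
            return (false);
        return (sc->link_up);
    }

    int
    main(void)
    {
        struct toy_softc sc = { TOY_DRV_RUNNING, false };

        printf("no link: %d\n", can_transmit(&sc));
        sc.link_up = true;
        printf("link up: %d\n", can_transmit(&sc));
        sc.drv_flags |= TOY_DRV_OACTIVE;
        printf("busy:    %d\n", can_transmit(&sc));
        return (0);
    }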
- The hardware does not support DAC, so limit the DMA address space
  to 4GB.
- Removed the BUS_DMA_ALLOCNOW flag.
- Created separate Tx buffer and Rx buffer DMA tags. Previously a
  single DMA tag was used, which made it impossible to specify
  different DMA restrictions.
- Apply the 4 byte alignment limitation to the Tx buffer.
- Apply the 8 byte alignment limitation to the Rx buffer.
- Added bus_dmamap_load_mbuf_sg(9) support on the Tx side.
- Preallocate Tx DMA maps, as creating DMA maps takes a very long
  time on architectures that require real DMA maps.
- Adjusted the guard buffer size to 1522 + 8 bytes, as it should
  include the VLAN tag and additional reserved bytes in the Rx
  buffer.
- Plugged a memory leak in device detach. Previously the wrong
  buffer address was used to free the allocated memory.
- Added rl_list_rx_init() and use it to clear the Rx buffer.
- Don't destroy DMA maps in rl_txeof(), as the DMA maps should be
  reused. There is no reason to destroy/recreate the DMA maps in
  this driver.
- Removed rl_dma_map_rxbuf()/rl_dma_map_txbuf() callbacks.
- The hardware does not support descriptor-based DMA on the Tx
  side; the Tx buffer address has to be aligned on a 4 byte
  boundary, and short frames need manual padding. Because of these
  hardware limitations rl(4) always used to invoke m_defrag(9) to
  get a single 4 byte aligned buffer. However, m_defrag(9) takes a
  lot of CPU cycles on slow machines and not all packets need its
  help. Armed with that information, don't invoke m_defrag(9) if
  the following conditions are true (see the sketch below).
   1. Buffer is not fragmented.
   2. Buffer is aligned on a 4 byte boundary.
   3. Manual padding is not necessary.
   4. Or padding is necessary, but the upper stack passed a
      writable buffer and there is enough room for the padding.
  This change, combined with the preallocated DMA maps, greatly
  increased the Tx performance of the driver on sparc64.
- Moved the bus_dmamap_sync(9) call in rl_start_locked() to
  rl_encap() and corrected the memory synchronization operation
  specifier of bus_dmamap_sync(9).
- Removed the bus_dmamap_unload(9) call in rl_stop(). There is no
  need to reload/unload the Rx buffer, as rl(4) always has to copy
  from it; it just needs proper bus_dmamap_sync(9) calls before
  copying the received frame.
With this change, rl(4) should work on systems with more than 4GB
of memory.
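The sketch mentioned above, as a userland toy; the structure and
constants are stand-ins for the real mbuf chain tests and
M_WRITABLE():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN_FRAMELEN 60     /* shorter frames need padding */

    /* Simplified view of an outgoing packet buffer. */
    struct toy_pkt {
        int nsegs;              /* 1 if the chain is a single buffer */
        uintptr_t addr;         /* start address of the data */
        int len;                /* frame length */
        bool writable;          /* may we append padding in place? */
        int trailing_space;     /* free bytes after the data */
    };

    /* Return true only when the buffer must be defragmented. */
    static bool
    needs_defrag(const struct toy_pkt *p)
    {
        if (p->nsegs != 1)
            return (true);      /* 1. fragmented */
        if ((p->addr & 3) != 0)
            return (true);      /* 2. not 4 byte aligned */
        if (p->len >= MIN_FRAMELEN)
            return (false);     /* 3. no padding needed */
        /* 4. padding needed: ok only if writable with room for it. */
        return (!(p->writable &&
            p->trailing_space >= MIN_FRAMELEN - p->len));
    }

    int
    main(void)
    {
        struct toy_pkt ok = { 1, 0x1000, 1200, true, 0 };
        struct toy_pkt runt = { 1, 0x1000, 42, false, 64 };

        printf("aligned single buffer: defrag=%d\n", needs_defrag(&ok));
        printf("short read-only runt:  defrag=%d\n", needs_defrag(&runt));
        return (0);
    }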
PR: kern/128143