This patch introduces support for the Epson RX-8803 RTC controller, accessible
over the I2C bus. It has a resolution of 1 second.
Support for interrupt-based alarms was not implemented.
Submitted by: Kornel Duleba <mindal@semihalf.com>
Reviewed by: manu
Obtained from: Semihalf
Sponsored by: Alstom Group
Differential Revision: https://reviews.freebsd.org/D24364
Add basic TCA6416 GPIO expander support over the I2C bus. The driver handles
enabling and disabling pins, setting the pin mode to IN or OUT, and
toggling pins. External interrupts are not supported.
Submitted by: Dawid Gorecki <dgr@semihalf.com>
Reviewed by: manu, mmel
Obtained from: Semihalf
Sponsored by: Alstom Group
Differential Revision: https://reviews.freebsd.org/D24363
For TLS v1.3 the 12 bytes of the initial vector (IV) should just be copied
as-is from the kernel: the gcm_iv field holds the first 4 bytes,
and the remaining 8 bytes go into the subsequent implicit_iv field.
There is no need to consider the byte order of the 12-byte IV, as was
initially done.
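A minimal sketch of the intended copy (the container structure and variable
names here are illustrative; only the gcm_iv/implicit_iv field names come
from the text above):

    /*
     * Illustrative only: split the 12-byte TLS 1.3 IV verbatim, with no
     * byte swapping.  "iv" is the 12-byte IV supplied by the kernel;
     * "params" stands in for the structure carrying gcm_iv/implicit_iv.
     */
    memcpy(&params->gcm_iv, iv, 4);          /* first 4 bytes */
    memcpy(&params->implicit_iv, iv + 4, 8); /* remaining 8 bytes */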
Sponsored by: Mellanox Technologies
Setting so_snd.sb_lowat to at least 1/8 of the socket buffer size allows the
send thread to make more active use of PDU coalescing, which dramatically
reduces TCP lock contention and the number of context switches when the socket
is full and the PDUs are small.
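A rough sketch of the idea, assuming direct access to the connection's socket
(exact placement in the iSCSI code is not shown here):

    /* Sketch: keep the send low-water mark at >= 1/8 of the buffer size. */
    SOCKBUF_LOCK(&so->so_snd);
    so->so_snd.sb_lowat = max(so->so_snd.sb_hiwat / 8, so->so_snd.sb_lowat);
    SOCKBUF_UNLOCK(&so->so_snd);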
MFC after: 1 week
Sponsored by: iXsystems, Inc.
This is all very long-standing, touchy bug territory that is still poorly
documented. Ok, here goes.
The basic bug:
* deleting a VAP causes the RX path (and TX path too) to be restarted
without a full chip reset, which causes RX hangs on the AR9380 and later.
(ie, the ones with the newer DMA engine.)
The basic fix:
* do an RX flush when stopping RX in ath_vap_delete() to match what happens
when RX is stopped elsewhere. This ensures any pending frames are completed
and we restart at the right spot; it also ensures we don't push new RX buffers
into the hardware if we're stopping receive.
The other issues I found:
* Don't bother checking the RX packet ring in the deferred read taskqueue;
that's specifically supposed to be for completing frames rather than
just yanking them off the receive ring.
* Cancel/drain any pending deferred read taskqueue. This isn't done inside
any locks, so we should be super careful here. It stops the hardware
from being reprogrammed in another thread/CPU at the same time as we're
stopping RX.
* .. (yes, this should be better serialised, but that's for another day. maybe.)
* Add more debugging to trace what's going on here.
And the fun bit:
* Reinitialise the RX FIFO ONLY if we've been reset or stopped, rather than just
reset. I noticed that after all the above was done I was STILL seeing RXEOL.
RXEOL isn't enabled on the AR9380 so I'd only see it if I was sending TX frames
(ie a ping where it'd be transmitted but never received) so I was not being
spammed by RXEOL. So, as long as stuff is stopped, restart it.
This seems to be doing the right thing in both AP and STA modes.
What I should do next, if I ever get time:
* as I said above, serialise the receive stop/start to include taskqueues
* monitor RXEOL on the AR9380 and, if I keep seeing it spammed / lockups, just
go do a full chip reset to get things back on track. It sucks, but it
is better than nothing.
Tested:
* AR9380 AP/STA mode, adding/deleting a hostap VAP to trigger the TX/RX
queue stop/start; whilst also running an iperf through it. Lots of times.
Lots. Of.. Times.
I have to dig into why I'm seeing it on chips as late as the AR9380-era
stuff (as it's marked as an AR5416 bug, but who knows!) but I'm seeing
aggregate TX frames complete with no blockack bit set. So, treat everything
as a failure and do a hardware reset for good measure.
Tested:
* AR9380, STA mode
* AR9580 (5GHz), AP mode
I wasn't enforcing the maximum packet length when using static rates, so
although the driver itself was enforcing it OK, the statistics were
sometimes going into the wrong bin.
Tested:
* AR9380, STA mode
- Consistently use 'void *' for key schedules / key contexts instead
of a mix of 'caddr_t', 'uint8_t *', and 'void *'.
- Add a ctxsize member to enc_xform similar to what auth transforms use
and require callers to malloc/zfree the context. The setkey callback
now supplies the caller-allocated context pointer and the zerokey
callback is removed. Callers now always use zfree() to ensure
key contexts are zeroed.
- Consistently use C99 initializers for all statically-initialized
instances of 'struct enc_xform'.
- Change the encrypt and decrypt functions to accept separate in and
out buffer pointers. Almost all of the backend crypto functions
already supported separate input and output buffers and this makes
it simpler to support separate buffers in OCF.
- Remove the xform_userland.h shim that permitted transforms to be compiled
  in userland; transforms no longer call malloc/free directly, so the shim
  is no longer needed.
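A rough sketch of the resulting caller-side pattern (member and callback
signatures are approximations, not the exact OCF definitions):

    /* Allocate the key context using the transform's advertised size. */
    void *ctx;
    int error;

    ctx = malloc(exf->ctxsize, M_CRYPTO_DATA, M_NOWAIT | M_ZERO);
    if (ctx == NULL)
            return (ENOMEM);
    /* setkey now fills in the caller-allocated context. */
    error = exf->setkey(ctx, key, keylen);
    if (error != 0) {
            zfree(ctx, M_CRYPTO_DATA);
            return (error);
    }
    /* ... use ctx with the encrypt/decrypt callbacks ... */
    zfree(ctx, M_CRYPTO_DATA);      /* zeroes the key schedule on free */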
Reviewed by: cem (earlier version)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D24855
This change adds the Hyper-V socket feature to FreeBSD. A new socket address
family, AF_HYPERV, and its kernel support are added.
Submitted by: Wei Hu <weh@microsoft.com>
Reviewed by: Dexuan Cui <decui@microsoft.com>
Relnotes: yes
Sponsored by: Microsoft
Differential Revision: https://reviews.freebsd.org/D24061
Fix returning from xenstore device with locks held, which triggers the
following panic:
# cat /dev/xen/xenstore
^C
userret: returning with the following locks held:
exclusive sx evtchn_ringc_sx (evtchn_ringc_sx) r = 0 (0xfffff8000650be40) locked @ /usr/src/sys/dev/xen/evtchn/evtchn_dev.c:262
Note this is not a security issue since access to the device is
limited to root by default.
Sponsored by: Citrix Systems R&D
MFC after: 1 week
Previously the driver handled the rfkill bit internally, but did not expose
the state change to the net80211 and interface layers.
This change uses the net80211 KPI for rfkill signaling.
The code is modeled after similar code in iwn and wpi.
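A minimal sketch of the net80211 call involved (the softc layout and rfkill
flag below are illustrative; only the KPI name is assumed from iwn/wpi):

    /* Report the new radio (rfkill) state to net80211 and userland. */
    struct ieee80211com *ic = &sc->sc_ic;   /* illustrative softc field */

    ieee80211_notify_radio(ic, !rfkill_asserted);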
Reviewed by: adrian
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D24923
My previous logic was a bit wrong. It caused transmissions that failed due
to a mix of short and long retries to count intermediate rates as OK if the
LONG retry count indicated some retries had made it to this intermediate rate,
but the SHORT retry count was the one that caused the whole transmit to fail.
Now status is passed in again - and this is the status for the whole transmission -
and then update_stats() does some quick math to see if the current transmission
series hit its long retry count or not before updating things as a success
or failure.
into account and remove the requirement that the MCS rate is "higher" if we're
considering a new rate.
Ok, another fun one.
* In order to make higher MCS rates reliable without software retries, the TX
schedules (inconsistently!) use hard-coded lower rates at the end of the
schedule. Now, hard-coding is a problem because (a) it means that aggregate
formation is limited by the SLOWEST rate, so I never formed large A-MPDU
frames for 3-stream rates, and (b) if the AP disables lower rates as base
rates, it complains about "unknown rix" for every frame you transmit at that
rate.
So, for now just disable the third and fourth schedule entry for AMPDUs.
Now I'm forming 32k and 64k aggregates for the higher density MCS rates
much more reliably.
It would be much nicer if the rate schedule stuff wasn't fixed but instead
I'd just populate ath_rc_series[] when I fetch the rates. This is all a
holdover of ye olde pre-11n stuff and I really just need to nuke it.
But for now, ye hack.
* The check for "is this MCS rate better" based on the MCS index itself is just
garbage. It meant things like going MCS0->7 would be fine, and say 0->8->16 is
fine (as they're equivalent encodings at 1, 2 and 3 spatial streams), BUT it
meant going something like MCS7->11 would fail even though it's likely that
MCS11 would just be better, both for EWMA/BER and throughput.
So for now just use the average tx time. The "right" way to do this comparison
would be to compare PHY bitrates rather than MCS / rate indexes, but I'm not
there yet. The bit rates ARE available in the PHY index, but honestly
I have a lot of other cleaning up to do here before I think about that.
* Don't include the RTS/CTS retry count (and thus time) in the average tx time
calculation. It just makes temporary failures make the rate look bad by
QUITE A LOT, as RTS/CTS exchanges are (a) long, and (b) mostly irrelevant
to the actual rate being tried. If we keep hitting RTS/CTS failures then
there's something ELSE wrong on the channel, not our selected rate.
* Fix formatting, cause reasons;
* Put back the "and the chosen rate is within 90% of the current rate" logic;
* Ensure the best rate and the current rate aren't the same; this ...
* ... fixes the packets_since_switch[] tracking to actually count how many
frames have passed since the rate switched, so now I know how stable stuff is; and
* Ensure that MCS can go up to a higher MCS at this or any other spatial stream.
My previous quick hack attempt was doing > rather than >= so you had to go
to both a higher root MCS rate (0..7) and spatial stream. Eg, you couldn't
go from MCS0 (1ss) to MCS8 (2ss) this way.
The best rate and switching rate logic still have a bunch more work to do
because they're still quite touchy when it comes to average tx time but at least
now it's choosing higher rates correctly when it wants to try a higher rate.
Tested:
* AR9380, STA mode
Some laptops don't send ACPI "lid status changed" notifications upon
opening the lid if the system is currently suspended. In r358219
this was partially fixed by updating the "lid_status" variable upon
resume even if there is no "status changed" notification from ACPI.
Unfortunately the fix in r358219 did not include notifying userland
via devd; this causes problems on systems using upowerd (e.g. KDE),
since upowerd remembers the most recent devd notification about the
lid status rather than querying the sysctl to get the current status.
This showed up as two symptoms when KDE's "When laptop lid closed: Sleep"
option is set:
1. 50% of the time, closing the lid would not trigger S3 sleep.
2. 50% of the time, plugging/unplugging AC power would trigger S3 sleep.
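A minimal sketch of the kind of userland notification this implies (field
names assume the usual acpi_lid softc layout; treat it as illustrative rather
than the exact diff):

    /* Push the current lid status out via devd on resume as well. */
    acpi_UserNotify("Lid", sc->lid_handle, sc->lid_status);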
PR: 246477
MFC after: 3 days
My initial rate control code was .. suboptimal. I wanted to at least get MCS
rates sent, but it didn't do anywhere near enough to handle low signal level links
or remotely keep accurate statistics.
So, 8 years later, here's what I should've done back then.
* Firstly, I wasn't at all tracking packet sizes other than the two buckets
(250 and 1600 bytes). So, extend it to include 4096, 8192, 16384, 32768 and
65536 (the extended bins are sketched after this list). I may go add 2048 at
some point if I find it's useful.
This is important for a few reasons. First, when forming A-MPDU or A-MSDU
aggregates the frame sizes are larger, and thus the TX time calculation is
increasingly, woefully wrong. Secondly, the behaviour of 802.11 channels
isn't some fixed thing, both due to channel conditions and the radios themselves.
Notably, there were some observations done a few years ago on 11n chipsets
which noticed that longer aggregates showed an increase in failed A-MPDU
sub-frame reception as you got further along in the transmit time. It could be due to
a variety of things - transmitter linearity, channel conditions changing,
frequency/phase drift, etc - but the observation was to potentially form
shorter aggregates to improve BER.
* .. and then modify the ath TX path to report the length of the aggregate sent,
so the statistics kept line up with the correct bucket.
* Then on the rate control look-up side - I was also only using the first frame
length for an A-MPDU rate control lookup, which isn't good enough here.
So, add a new method that walks the TID software queue for that node to
find out what the likely length of data available is. It isn't ALL of the
data in the queue because we'll only ever send enough data to fit inside the
block-ack window, so limit how many bytes we return to roughly what ath_tx_form_aggr()
would do.
* .. and cache that in the first ath_buf in the aggregate so it and the eventual
AMPDU length can be returned to the rate control code.
* THEN, modify the rate control code to look at them both when deciding which bucket
to attribute the sent frame to. I'm erring on the side of caution and using the
size bucket that the lookup is based on.
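The extended size buckets from the first bullet, roughly (a sketch; the array
name follows ath_rate_sample convention):

    /* Old bins were { 250, 1600 }; extended per the text above. */
    static const int packet_size_bins[] =
        { 250, 1600, 4096, 8192, 16384, 32768, 65536 };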
Ok, so now the rate lookups and statistics are "more correct". However, MCS rates
are not the same as 11abg rates in that they're not a monotonically incrementing
set of faster rates and you can't assume that just because a given MCS rate fails,
the next higher one wouldn't work better or be a lower average tx time.
So, I had to do a bunch of surgery to the best rate and sample rate math.
This is the bit that's a WIP.
* First, simplify the statistics updates (update_stats()) to do a single pass on
all rates.
* Next, make sure that each rate's average tx time is updated based on /its/
failures/successes. Eg if you sent a frame with { MCS15, MCS12, MCS8 } and MCS8
succeeded, MCS15 and MCS12 would have their average tx time updated for /their/
part of the transmission, not the whole transmission.
* Next, EWMA wasn't being fully calculated based on the /failures/ in each of the
rate attempts. So, if MCS15 and MCS12 failed above but MCS8 didn't, then ensure
that the statistics note that /all/ subframes failed at those rates, rather than
only the eventual set of successfully transmitted frames. This ensures the EWMA
/and/ average TX time are updated correctly. (A rough sketch of this EWMA update
follows after this list.)
* When picking a sample rate and initial rate, probe rates around the current MCS
but limit it to MCS0..7 /for all spatial streams/, rather than doing crazy things
like hitting MCS7 and then probing MCS8 - MCS8 is basically MCS0 but with two
spatial streams. It's a /lot/ slower than MCS7. Also, the reverse is true - if
we're at MCS8 then don't probe MCS7 as part of it; it's not likely to succeed.
* Fix bugs in pick_best_rate() where I was /immediately/ choosing the highest MCS
rate if there weren't any frames yet transmitted. I was defaulting to 25% EWMA and
.. then each comparison would accept the higher rate. Just skip those; sampling
will fill in the details.
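For the EWMA bullet above, the per-rate update is conceptually along these
lines (the weights, field and variable names are illustrative, not the
driver's exact code):

    /*
     * Illustrative EWMA of per-rate subframe success, in percent.
     * Every subframe attempted at this rate counts, including the ones
     * that failed before a later rate in the series succeeded.
     */
    pct = (100 * nframes_ok) / nframes_attempted;
    stats->ewma_pct = (stats->ewma_pct * 75 + pct * 25) / 100;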
So, this seems to work a lot better. It's not perfect; I'm still seeing a lot of
instability around higher MCS rates because there are bursts of loss/retransmissions
that aren't /too/ bad. But i'll keep iterating over this and tidying up my hacks.
Ok, so why is this still something I'm poking at rather than porting minstrel_ht?
ath_rate_sample tries to minimise airtime, not maximise throughput. I have
extended it with an EWMA based on sub-frame success/failures - high MCS rates
that have partially successful receptions still show super short average frame
times, but a /lot/ of retransmits have to happen for that to work.
So for MCS rates I also track this EWMA and ensure that the rates I'm choosing
don't have super crappy packet failures. I don't mind not matching
minstrel_ht's peak throughput; instead I want to see if I can make "minimise
airtime" work well.
Tested:
* AR9380, STA mode
* AR9344, STA mode
* AR9580, STA/AP mode
This function is responsible for setting pc_domain in each pcpu
structure. Call it from the main function that starts APs, rather than
a separate SYSINIT. This makes it easier to close the window where
UMA's per-CPU slab allocator may be called while pc_domain is
uninitialized. In particular, the allocator uses pc_domain to allocate
domain-local pages, so allocations before this point end up using domain
0 for everything.
Reviewed by: kib
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D24757
__builtin_unreachable doesn't raise any compile-time warnings/errors on its
own, so problems with its usage can't be easily detected. While it would be
nice for this situation to change and compilers to at least add a warning
for trivial cases where local state means the instruction can't be reached,
this isn't the case at the moment and likely will not happen.
This commit adds an __assert_unreachable, whose intent is incredibly clear:
it asserts that this instruction is unreachable. On INVARIANTS builds, it's
a panic(), and on non-INVARIANTS it expands to __unreachable().
Existing users of __unreachable() are converted to __assert_unreachable,
to improve debuggability if this assumption is violated.
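The shape of the macro is roughly (a sketch consistent with the description
above):

    #ifdef INVARIANTS
    #define __assert_unreachable() \
            panic("executing segment marked as unreachable at %s:%d (%s)\n", \
                __FILE__, __LINE__, __func__)
    #else
    #define __assert_unreachable()  __unreachable()
    #endif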
Reviewed by: mjg
Differential Revision: https://reviews.freebsd.org/D23793
So, replicate the ATI vendor snoop configuration for the AMD vendor.
I think that this should fix a number of cases where users currently
have to resort to polling or disabling MSI.
MFC after: 1 week
Right now (well, since I did this in 2011/2012) the rate control code
makes some super bad choices for 11n aggregates/rates, and it tracks
statistics even more questionably.
It's been long enough and I'm now trying to use it again daily, so let's
start by:
* telling the rate control code if it's an aggregate or not;
* being clearer about the TID - yes it can be extracted from the
ath_buf but this way it can be overridden by the caller without
changing the TID itself.
(This is for doing experiments with voice/video QoS at some point..)
* Return an optional field to limit how long the aggregate is in
microseconds. Right now the rate control code supplies a rate table
and the ath aggr form code will look at the rate table and limit
the aggregate size to 4ms at the slowest rate. Yeah, this is pretty
terrible.
* Add some more TODO comments around handling txpower, rate and
handling filtered frames status so if I continue to have spoons for
this I can go poke at it.
Yes, people shouldn't use bitfields in C for structure parsing.
If someone ever wants a cleanup task then it'd be great to remove them
from this vendor code and other places in the ar9285/ar9287 HALs.
Alas, here we are.
AH_BYTE_ORDER wasn't defined, and neither were the two values it can take.
So when compiling ath_ee_print_9300 it'd default to the big-endian struct
layout and get a WHOLE lot of stuff wrong.
So:
* move AH_BYTE_ORDER into ath_hal/ah.h where it can be used by everyone.
* ensure that AH_BYTE_ORDER is actually defined before using it!
This should work on both big and little endian platforms.
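A sketch of the kind of definition this implies in ah.h (constant values
follow the usual HAL convention; treat as illustrative):

    /* Make sure AH_BYTE_ORDER is always defined before it is tested. */
    #define AH_LITTLE_ENDIAN    1234
    #define AH_BIG_ENDIAN       4321

    #ifndef AH_BYTE_ORDER
    #if _BYTE_ORDER == _BIG_ENDIAN
    #define AH_BYTE_ORDER       AH_BIG_ENDIAN
    #else
    #define AH_BYTE_ORDER       AH_LITTLE_ENDIAN
    #endif
    #endif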
There are no in-kernel consumers.
Reviewed by: cem
Relnotes: yes
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D24775
It no longer has any in-kernel consumers via OCF. smbfs still uses
single DES directly, so sys/crypto/des remains for that use case.
Reviewed by: cem
Relnotes: yes
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D24773
There are no longer any in-kernel consumers. The software
implementation was also a non-functional stub.
Reviewed by: cem
Relnotes: yes
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D24771
Although a few drivers supported this algorithm, there were never any
in-kernel consumers. cryptosoft and cryptodev never supported it,
and there was not a software xform auth_hash for it.
Reviewed by: cem
Relnotes: yes
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D24767
Pursuant to r360398, implement driver-specific versions of the
ifdi_needs_restart iflib device method.
Some (if not most?) Intel network cards don't need reinitializing when a
VLAN is added or removed from the device hardware, so these implement
ifdi_needs_restart in a way that tell iflib not to bring the interface
up or down when a VLAN is added or removed, regardless of whether the
VLAN_HWFILTER interface capability flag is set or not.
This could potentially solve several PRs relating to link flaps that
occur when VLANs are added/removed to devices.
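A sketch of what such a driver-specific method looks like (modeled on the
iflib KPI from r360398; the function name follows the e1000 naming convention
and is illustrative):

    static bool
    em_if_needs_restart(if_ctx_t ctx __unused, enum iflib_restart_event event)
    {
            switch (event) {
            case IFLIB_RESTART_VLAN_CONFIG:
                    /* No interface reinit when a VLAN is added/removed. */
                    return (false);
            default:
                    return (true);
            }
    }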
Signed-off-by: Eric Joyner <erj@freebsd.org>
PR: 240818, 241785
Reviewed by: gallatin@, olivier@
MFC after: 3 days
MFC with: r360398
Sponsored by: Intel Corporation
Differential Revision: https://reviews.freebsd.org/D24659
r360870 added linux/slab.h to linux/bitmap.h, and the former includes
linux/types.h. The qlnx driver was redefining some of those types, so remove
the redefinitions and add an explicit linux/types.h include.
Pointy hat: manu
Reported by: Austin Shafer <ashafer@badland.io>
Some ethernet switches have very large register windows; for example
the AR8316 switch MIB starts at 0x20000.
Submitted by: Mori Hiroki <yamori813@yahoo.co.jp>
The attach method uses GPIO_GET_BUS() to get a "newbus" device
that provides a pin. But on hints-based systems a GPIO controller
driver might not be fully initialized yet and does not yet know about the
gpiobus hanging off it. Thus, GPIO_GET_BUS() cannot be called yet.
The reason is that controller drivers typically create a child gpiobus
using gpiobus_attach_bus() and that leads to the following call chain:
gpiobus_attach_bus() -> gpiobus_attach() ->
bus_generic_attach(gpiobus) -> gpioiic_attach().
So, gpioiic_attach() is called before gpiobus_attach_bus() returns.
I observed this bug with nctgpio driver on amd64.
I think that the problem was introduced in r355276.
The fix is to avoid calling GPIO_GET_BUS() from the attach method.
Instead, we know that on hints-based systems only the parent gpiobus can
provide the pins.
Nothing is changed for FDT-based systems.
MFC after: 1 week
Sometimes, especially when there is not much memory left in the system,
allocating mbuf jumbo clusters (like 9KB or 16KB) can take a lot of time
and it is not guaranteed to succeed. In that situation the fallback will
work, but if the refill needs to take place for a lot of descriptors at once,
the time spent in m_getjcl looking for memory can cause system
unresponsiveness due to the high priority of the Rx task. This can also lead
to a driver reset, because the Tx cleanup routine is blocked and the timer
service could detect that Tx packets aren't being cleaned up. The reset
routine can further create more unresponsiveness - Rx rings are refilled
there, so m_getjcl will again burn the CPU.
This was causing NVMe driver timeouts and resets, because the network driver
has higher priority.
Instead of 16KB jumbo clusters for the Rx buffers, 9KB clusters are
enough - the ENA MTU is set to 9K anyway, so it's very unlikely that
more space than 9KB will be needed.
However, 9KB jumbo clusters can still cause issues, so by default page-size
mbuf clusters will be used for the Rx descriptors. This can have a
small (~2%) impact on the throughput of the device, so to restore the
original behavior one must set "hw.ena.enable_9k_mbufs" to "1" in the
"/boot/loader.conf" file.
As a part of this patch (important fix), the version of the driver
was updated to v2.1.2.
Submitted by: cperciva
Reviewed by: Michal Krawczyk <mk@semihalf.com>
Reviewed by: Ido Segev <idose@amazon.com>
Reviewed by: Guy Tzalik <gtzalik@amazon.com>
MFC after: 3 days
PR: 225791, 234838, 235856, 236989, 243531
Differential Revision: https://reviews.freebsd.org/D24546
The bus is independent of the device, so all devices can be attached to
either a PCI bus or an MMIO bus. For example, QEMU's virtio-rng-device
gives the MMIO variant of virtio-rng-pci, and is now detected.
Reviewed by: andrew, br, brooks (mentor)
Approved by: andrew, br, brooks (mentor)
Differential Revision: https://reviews.freebsd.org/D24730
The non-legacy virtio MMIO specification drops the use of PFNs and
replaces them with physical addresses. Whilst many implementations are
so-called transitional devices, also implementing the legacy
specification, TinyEMU[1] does not. Device-specific configuration
registers have also changed to being little-endian, and must be accessed
using a single aligned access for registers up to 32 bits, and two
32-bit aligned accesses for 64-bit registers.
[1] https://bellard.org/tinyemu/
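A sketch of the required access pattern for a 64-bit device-specific config
register (the helper name, softc layout and resource handle are illustrative):

    /* Two aligned 32-bit little-endian accesses, low word first. */
    static uint64_t
    vtmmio_read_config_8(struct vtmmio_softc *sc, bus_size_t off)
    {
            uint32_t lo, hi;

            lo = le32toh(bus_read_4(sc->res[0], off));
            hi = le32toh(bus_read_4(sc->res[0], off + 4));
            return (((uint64_t)hi << 32) | lo);
    }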
Reviewed by: br, brooks (mentor)
Approved by: br, brooks (mentor)
Differential Revision: https://reviews.freebsd.org/D24681
With the removal of in-tree consumers of DES, Triple DES, and
MD5-HMAC, the only algorithm this driver still supports is SHA1-HMAC.
This is not very useful as a standalone algorithm (IPsec AH-only with
SHA1 would be the only user).
This driver has also not been kept up to date with the original driver
in OpenBSD which supports a few more cards and AES-CBC on newer cards.
The newest card currently supported by this driver was released in
2005.
Reviewed by: cem
MFC after: 1 week
Relnotes: yes
Sponsored by: Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D24691