this case, allocate a plain mbuf and copy the frame into it, then send the
copy up the stack, leaving the original mbuf+cluster in place in the
receive ring for immediate re-use. This saves a trip through 2 of the
3 zones of the compound mbuf allocator, a trip through busdma, and a trip
through one of the 3 mbuf destructors. For our load at Netflix, this can
lower CPU consumption by as much as 20%. The copy algorithm is based on
investigative work from Luigi Rizzo earlier in the year.
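A minimal sketch of the copy path described above, assuming a
hypothetical RX_COPY_THRESHOLD constant and a hypothetical helper called
from the RX completion path; the actual driver code differs in detail:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /* Hypothetical threshold: frames at or below this size get copied. */
    #define RX_COPY_THRESHOLD   160

    /*
     * If the received frame is small, return a private copy for the stack
     * and leave the original mbuf+cluster in the RX ring for immediate
     * re-use.  Returns NULL to fall back to the normal zero-copy path.
     */
    static struct mbuf *
    rx_copy_small(struct mbuf *ring_m, int len)
    {
        struct mbuf *m;

        if (len > RX_COPY_THRESHOLD)
            return (NULL);
        m = m_gethdr(M_NOWAIT, MT_DATA);
        if (m == NULL)
            return (NULL);      /* allocation failed, fall back */
        bcopy(mtod(ring_m, caddr_t), mtod(m, caddr_t), len);
        m->m_len = m->m_pkthdr.len = len;
        /* Caller sets m_pkthdr.rcvif and leaves ring_m in the ring. */
        return (m);
    }

The win comes from never detaching the cluster: the ring slot keeps its
mbuf, cluster and DMA map, so nothing has to be reallocated or remapped
before the next frame arrives.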
Reviewed by: jfv
Obtained from: Netflix
- Add a couple of new devices
- Flow control changes in shared and core code
- Bug fix to Flow Director for 82598
- Shared code sync to internal with required core change
Thanks to those helping in the testing and improvements to this driver!
MFC after: 5 days
Put a bandaid to prevent ixgbe(4) from completely locking up the system
under high load. Our platform has a few CPU cores and a single active
ixgbe(4) port with 4 queues. Under a high enough traffic load, at about
7.5Gbps and 700,000 packets/sec (outbound), the entire system would
deadlock. What we found was that each CPU was in an endless loop on a
different ix taskqueue thread. The OACTIVE flag had gotten set on each
queue, and the ixgbe_handle_queue() function was continuously rescheduling
itself via taskqueue_enqueue(). Since all CPUs were busy with their
taskqueue threads, the ixgbe_local_timer() function couldn't run to clear
the OACTIVE flag.
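The requeue pattern behind the livelock looks roughly like the sketch
below (simplified, with hypothetical helpers): while OACTIVE stays set
the handler always finds work it cannot complete and re-enqueues itself,
so the callout that would clear the flag never gets a CPU.

    /*
     * Simplified sketch of the rescheduling loop; not the actual driver
     * code.  process_queue() and enable_queue_irq() are hypothetical.
     */
    static void
    handle_que_sketch(void *context, int pending)
    {
        struct ix_queue *que = context;     /* per-queue softc */

        if (process_queue(que))
            /* More work (or OACTIVE still set): run again.  With every
             * CPU stuck here, ixgbe_local_timer() never runs. */
            taskqueue_enqueue(que->tq, &que->que_task);
        else
            enable_queue_irq(que);
    }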
Submitted by: scottl
MFC after: 1 week
Add TSO6 and LRO/IPv6 support.
Fix the module Makefile to at least properly include opt_inet6.h
and allow builds without INET or INET6.
Sponsored by: The FreeBSD Foundation
Sponsored by: iXsystems
Reviewed by: gnn (as part of the whole)
MFC after: 3 days
header will make the data go over the 64k limit announced to busdma as
maxsize, and the transaction will fail.
With TSO this can result in a TCP regression due to the lost packet.
According to the data sheets, the ixgbe(4) 82598 and 82599 can handle up
to 256k, so increase the maximum.
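A hedged sketch of the shape of the fix: the TX DMA tag is created with a
larger maxsize so that a TSO payload plus headers fits in one busdma
transaction (the constant name, value and segment count below are
illustrative, not the driver's):

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <machine/bus.h>

    /* Illustrative: raise maxsize from 64k to 256k, which the
     * 82598/82599 data sheets say the hardware can handle. */
    #define TSO_MAXSIZE_SKETCH  (256 * 1024)

    static int
    tx_dma_tag_create_sketch(device_t dev, bus_dma_tag_t *tag)
    {
        return (bus_dma_tag_create(
            bus_get_dma_tag(dev),   /* parent */
            1, 0,                   /* alignment, boundary */
            BUS_SPACE_MAXADDR,      /* lowaddr */
            BUS_SPACE_MAXADDR,      /* highaddr */
            NULL, NULL,             /* filter, filterarg */
            TSO_MAXSIZE_SKETCH,     /* maxsize: was 64k */
            32,                     /* nsegments (illustrative) */
            PAGE_SIZE,              /* maxsegsize */
            0,                      /* flags */
            NULL, NULL,             /* lockfunc, lockarg */
            tag));
    }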
Reported by: Jon Kåre Hellan, UNINETT (jon.kare.hellan uninett.no)
Tested by: Jon Kåre Hellan, UNINETT (jon.kare.hellan uninett.no)
MFC after: 1 week
Contrary to what I wrote in my previous commit, the 82599
does include the CRC in the length. The operating mode is
reset in ixgbe_init_locked() and so we need to hook into
the places where the two registers (HLREG0 and RDRXCTL) are
modified.
Enable descriptor prefetch in the transmit path, using the same
values as in the Intel driver 3.8.21 for Linux. The fact that it
is standard in the above driver suggests that it has no bad side
effects.
But of course there must be a reason for enabling a feature beyond
"it does not harm", so here is a good one:
Prefetching enables full line rate even using a single queue (14.88
Mpps, compared to ~12 Mpps without prefetch). This in turn is
terribly useful when one wants to schedule traffic.
For obvious reasons the difference is only visible with netmap
or other high speed solutions, but presumably the advantage
should be in the order of a fraction of a microsecond when
starting transmission on an empty queue.
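A hedged sketch of what enabling descriptor prefetch typically amounts to
on this hardware, assuming it is done through the per-queue TXDCTL
thresholds; the values here are illustrative rather than the exact ones
taken from the Linux driver:

    #include "ixgbe.h"  /* assumed driver header for the IXGBE_* macros */

    static void
    tx_prefetch_enable_sketch(struct ixgbe_hw *hw, int queue)
    {
        uint32_t txdctl;

        txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(queue));
        txdctl |= 32;       /* PTHRESH: prefetch threshold (bits 6:0) */
        txdctl |= 1 << 8;   /* HTHRESH: host threshold (bits 14:8) */
        IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(queue), txdctl);
    }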
Discussed with Jack Vogel.
MFC after: 1 week
USERSPACE:
1. add support for devices with different number of rx and tx queues;
2. add better support for zero-copy operation, adding an extra field
to the netmap ring to indicate how many buffers we have already processed
but not yet released (with help from Eddie Kohler);
3. The two changes above unfortunately require an API change, so while
at it add a version field and some spares to the ioctl() argument
to help detect mismatches (see the sketch after this list).
4. update the manual page for the two changes above;
5. update sample applications in tools/tools/netmap
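A minimal user-space sketch of the version check enabled by item 3 above,
using the nr_version field and the NETMAP_API constant from the netmap
headers (the rest of the setup is omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <net/netmap.h>
    #include <net/netmap_user.h>

    /* Register an interface; the kernel rejects stale binaries whose
     * compiled-in NETMAP_API no longer matches the running kernel. */
    static int
    netmap_register_sketch(int fd, const char *ifname)
    {
        struct nmreq req;

        memset(&req, 0, sizeof(req));
        strncpy(req.nr_name, ifname, sizeof(req.nr_name) - 1);
        req.nr_version = NETMAP_API;
        if (ioctl(fd, NIOCREGIF, &req) == -1) {
            perror("NIOCREGIF (possible API version mismatch)");
            return (-1);
        }
        return (0);
    }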
KERNEL:
1. simplify the internal structures moving the global wait queues
to the 'struct netmap_adapter';
2. simplify the functions that map kring<->nic ring indexes;
3. normalize the device-specific code, which helps maintenance;
4. start exploring the impact of micro-optimizations (prefetch etc.)
in the ixgbe driver.
Using 'legacy' descriptors on the tx ring and prefetching slots gives
about 20% speedup at 900 MHz. Another 7-10% would come from removing
the explicit calls to bus_dmamap* in the core (they are effectively
NOPs in this case, but it takes an expensive load of the per-buffer
dma maps to figure out that they are all NULL).
Rx performance not investigated.
I am postponing the MFC so I can import a few more improvements
before merging.
Introduce some functions to map NIC ring indexes into netmap ring
indexes and vice versa. This way we can implement the bounds
checks in only one place (and hopefully in a correct way).
In passing, make the code and comments more uniform across the
various drivers.
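A hedged sketch of the mapping helpers: keep one per-ring offset between
the NIC's numbering and netmap's numbering and do the wrap-around in
exactly one pair of inline functions (names and fields here are
illustrative, not the netmap ones):

    /*
     * Sketch of NIC<->netmap ring index translation.  'hwofs' is the
     * offset accumulated across NIC resets, 'num_slots' the ring size.
     */
    struct kring_sketch {
        int hwofs;          /* NIC index - netmap index offset */
        int num_slots;      /* ring size */
    };

    /* NIC index -> netmap index */
    static inline int
    idx_n2k(const struct kring_sketch *kr, int idx)
    {
        idx += kr->hwofs;
        if (idx < 0)
            return (idx + kr->num_slots);
        else if (idx >= kr->num_slots)
            return (idx - kr->num_slots);
        return (idx);
    }

    /* netmap index -> NIC index */
    static inline int
    idx_k2n(const struct kring_sketch *kr, int idx)
    {
        idx -= kr->hwofs;
        if (idx < 0)
            return (idx + kr->num_slots);
        else if (idx >= kr->num_slots)
            return (idx - kr->num_slots);
        return (idx);
    }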
- remove experimental code for disabling CRC
- use the correct constant for conversion between interrupt rate
and EITR values (the previous values were off by a factor of 2)
- make dev.ix.N.queueM.interrupt_rate a read/write sysctl variable.
Changing individual values affects the queue immediately,
and propagates to all interfaces at the next reinit (see the sketch
below).
- add dev.ix.N.queueM.irqs rdonly sysctl, to export the actual
interrupt counts
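A hedged sketch of a read/write interrupt_rate handler: derive the
current rate from EITR, let sysctl_handle_int() accept a new value, and
write it straight back so the queue is affected immediately. The
EITR<->rate conversion macros below are hypothetical stand-ins for the
data-sheet constants.

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include "ixgbe.h"      /* assumed driver header */

    /* EITR_TO_RATE()/RATE_TO_EITR() are hypothetical conversion macros. */
    static int
    sysctl_interrupt_rate_sketch(SYSCTL_HANDLER_ARGS)
    {
        struct ix_queue *que = arg1;    /* per-queue softc */
        uint32_t reg, rate;
        int error;

        reg = IXGBE_READ_REG(&que->adapter->hw, IXGBE_EITR(que->msix));
        rate = EITR_TO_RATE(reg);
        error = sysctl_handle_int(oidp, &rate, 0, req);
        if (error != 0 || req->newptr == NULL)
            return (error);     /* read-only access or error */
        /* Program the queue right away; reinit propagates it elsewhere. */
        IXGBE_WRITE_REG(&que->adapter->hw, IXGBE_EITR(que->msix),
            RATE_TO_EITR(rate));
        return (0);
    }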
Netmap-related changes for ixgbe:
- use the "new" format for TX descriptors in netmap mode.
- pass interrupt mitigation delays to the user process doing poll()
on a netmap file descriptor.
On the RX side this means we will not check the ring more than once
per interrupt. This gives the process a chance to sleep and process
packets in larger batches, thus reducing CPU usage.
On the TX side we take this even further: completed transmissions are
reclaimed every half ring even if the NIC interrupts more often.
This saves even more CPU without any additional tx delays.
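A hedged sketch of the half-ring reclaim policy (all names simplified):
completed TX slots are only scanned once the number of not-yet-reclaimed
slots exceeds half the ring, independent of how often the NIC interrupts.

    struct txring_sketch {
        unsigned int head;      /* next slot to hand to the NIC */
        unsigned int reclaimed; /* first slot not yet reclaimed */
        unsigned int num_slots; /* ring size */
    };

    static unsigned int hw_read_tx_head(struct txring_sketch *);   /* hypothetical */

    static void
    tx_reclaim_sketch(struct txring_sketch *tr)
    {
        unsigned int pending;

        /* Slots transmitted but not yet given back to the application. */
        if (tr->head >= tr->reclaimed)
            pending = tr->head - tr->reclaimed;
        else
            pending = tr->head + tr->num_slots - tr->reclaimed;

        /* Only pay the scan cost when at least half the ring is waiting. */
        if (pending < tr->num_slots / 2)
            return;
        tr->reclaimed = hw_read_tx_head(tr);
    }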
Generic Netmap-related changes:
- align the netmap_kring to cache lines so that there is no false sharing
(possibly useful for multiqueue NICs and MSIX interrupts, which are
handled by different cores). It's a minor improvement but it does not
cost anything.
Reviewed by: Jack Vogel
Approved by: Jack Vogel
1. correct the initialization of RDT when there is an ixgbe_init()
while a netmap client is active. This code was previously
in ixgbe_initialize_receive_units() but RDT is overwritten
shortly afterwards in ixgbe_init_locked()
2. add code (not active yet) to disable CRCSTRIP while in netmap mode.
From all the evidence I could gather, it seems that when the 82599 has to
write a data block that is not a full cache line, it first reads
the line (64 bytes) and then writes back the updated version.
This hurts reception of min-sized frames, which are only 60 bytes
if the CRC is stripped: I could never get above 11 Mpps
(received from one queue) with CRCSTRIP enabled, while 64+4-byte
packets reach 14.2 Mpps (the theoretical maximum).
Leaving the CRC in gets us 14.88 Mpps for 60+4-byte frames
(and penalizes 64+4). The min-size case is important not just because
it looks good in benchmarks, but also because this is the size
of pure acks.
Note we cannot leave CRCSTRIP disabled by default because that is
incompatible with some other features (LRO etc.)
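A hedged sketch of the (not yet active) hook: when entering or leaving
netmap mode, rewrite HLREG0 and RDRXCTL together so the two always agree,
which is also why the hook must sit wherever ixgbe_init_locked()
reprograms those registers.

    #include "ixgbe.h"  /* assumed driver header for the IXGBE_* macros */

    static void
    crcstrip_sketch(struct ixgbe_hw *hw, int netmap_on)
    {
        uint32_t hl = IXGBE_READ_REG(hw, IXGBE_HLREG0);
        uint32_t rxc = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);

        if (netmap_on) {
            /* Keep the CRC: 60+4-byte frames then fill a cache line
             * and avoid the read-modify-write penalty noted above. */
            hl &= ~IXGBE_HLREG0_RXCRCSTRP;
            rxc &= ~IXGBE_RDRXCTL_CRCSTRIP;
        } else {
            /* Normal operation: strip the CRC (needed by LRO etc.). */
            hl |= IXGBE_HLREG0_RXCRCSTRP;
            rxc |= IXGBE_RDRXCTL_CRCSTRIP;
        }
        IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hl);
        IXGBE_WRITE_REG(hw, IXGBE_RDRXCTL, rxc);
    }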
the memory allocator used by netmap. No functional change apart from
two small bug fixes:
- in if_re.c add a missing bus_dmamap_sync()
- in netmap.c comment out a spurious free() in an error handling block
- {ixgbe,ixv}_header_split is passed to TUNABLE_INT, so declare it
int, not bool (see the sketch below).
- {ixgbe,ixv}_tx_ctx_setup() returns a boolean value, so declare it
bool, not int.
- {ixgbe,ixv}_tso_setup() returns a bool, so declare it bool, not boolean_t.
- {ixgbe,ixv}_txeof() returns a bool, so declare it bool, not boolean_t.
- Do not re-define bool if the symbol already exists.
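The sketch referenced above: TUNABLE_INT() stores a full int through the
supplied pointer, so the backing variable must really be an int (the
tunable name here is illustrative).

    #include <sys/param.h>
    #include <sys/kernel.h>

    /* A bool would be overrun when the tunable writes sizeof(int) bytes. */
    static int ixgbe_header_split = 0;
    TUNABLE_INT("hw.ix.hdr_split", &ixgbe_header_split);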
MFC after: 2 weeks
Sponsored by: Isilon Systems, LLC
A link reset is now completely transparent to the netmap client:
even if the NIC resets its own ring (e.g. restarting from 0),
the client will not see any change in the current rx/tx positions,
because the driver will keep track of the offset between the two.
2. make the device-specific code more uniform across different drivers
There were some inconsistencies in the implementation of the netmap
support routines, now drivers have been aligned to a common
code structure.
3. import netmap support for ixgbe. This is implemented as a very
small patch for ixgbe.c (233 lines, 11 chunks, mostly comments:
in total the patch has only 54 lines of new code), as most of
the code is in an external file, sys/dev/netmap/ixgbe_netmap.h,
following some initial comments from Jack Vogel about making
changes less intrusive.
(Note, I have emailed Jack multiple times asking whether he had
comments on this structure of the code; I got no reply so
I assume he is fine with it.)
Support for other drivers (em, lem, re, igb) will come later.
"ixgbe" is now the reference driver for netmap support. Both the
external file (sys/dev/netmap/ixgbe_netmap.h) and the device-specific
patches (in sys/dev/ixgbe/ixgbe.c) are heavily commented and should
serve as a reference for other device drivers.
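A hedged sketch of the attach-time glue that the external file provides:
fill a struct netmap_adapter with the ring sizes and the driver's
sync/register callbacks and hand it to netmap_attach(). Field names and
the netmap_attach() signature follow the netmap code of that period, so
treat them as approximate.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include "ixgbe.h"                      /* assumed driver header */
    #include <dev/netmap/netmap_kern.h>     /* assumed kernel netmap header */

    /* ixgbe_netmap_{txsync,rxsync,reg} are the driver-supplied callbacks. */
    static void
    ixgbe_netmap_attach_sketch(struct adapter *adapter)
    {
        struct netmap_adapter na;

        bzero(&na, sizeof(na));
        na.ifp = adapter->ifp;
        na.num_tx_desc = adapter->num_tx_desc;
        na.num_rx_desc = adapter->num_rx_desc;
        na.nm_txsync = ixgbe_netmap_txsync;     /* TX ring sync */
        na.nm_rxsync = ixgbe_netmap_rxsync;     /* RX ring sync */
        na.nm_register = ixgbe_netmap_reg;      /* enter/leave netmap mode */
        netmap_attach(&na, adapter->num_queues);
    }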
Tested on i386 and amd64 with the pkt-gen program in tools/tools/netmap,
the sender does 14.88 Mpps at 1050 MHz and 14.2 Mpps at 900 MHz
on an i7-860 with 4 cores and an 82599 card. I haven't yet tried more
aggressive optimizations such as adding 'prefetch' instructions
in the time-critical parts of the code.
The current code was rounding down the maximum frame size instead of
rounding up, resulting in a read size of 1024 bytes in the non-jumbo
frame case, and splitting the packets across multiple mbufs.
The above problem consequently exposed another issue: when
packets were split across multiple mbufs, all of the mbufs in the
chain had the M_PKTHDR flag set.
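A hedged sketch of the fix's shape: round the maximum frame size up, not
down, when converting it to the 1KB-granular receive buffer size field
(names and the granularity macro are illustrative):

    #include <sys/param.h>          /* howmany() */

    #define RX_BUF_GRANULARITY  1024    /* receive buffer size unit, 1KB */

    /* Rounding down (the bug) made a 1514-byte frame span two mbufs;
     * rounding up programs a single 2KB buffer instead. */
    static inline unsigned int
    rx_bufsize_units_sketch(unsigned int max_frame)
    {
        return (howmany(max_frame, RX_BUF_GRANULARITY));
    }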
Submitted by: original patch by Ray Ruvinskiy at BlueCoat dot com
Reviewed by: jfv, kmacy, rwatson
Approved by: re (rwatson)
MFC after: 5 days
crusty, and this still isn't perfect, but it's at least a bit
more recent.
Secondly, a few improvements to the driver from Andrew Boyer:
support for a hint to allow devices to not attach, a VLAN_HWTSO
capability so vlans can use TSO, and a fix in the interrupt handler
to make sure the stack TX queue is processed. Oh, and also
make sure IPv6 does not cause a re-init in the ioctl routine.
Thanks for your efforts Andrew!
Thanks to Claudio Jeker for noticing that the ixgbe_xmit() routine
was not correctly swapping the dma map from the first to the
last descriptor in a multi-descriptor transmission; this has been
corrected.
- Also a couple minor tweaks to the TX code from the same source.
- Add the INET ioctl code which has been missing from this driver,
and which caused IP aliases to reset the interface.
- Last, some minor logic changes that just reflect upcoming
hardware support, but have no other functional effect now.
MFC after: 1 week
CRITICAL FIX - with the stats changes the older 82598 will panic
and trash the stack on driver load; the FCOE registers ONLY exist
on the 82599 and must not be read otherwise.
kern/153951 - to correct the incorrect media type on adapters
with pluggable modules, I have eliminated the old static
table in favor of a new dynamic shared code routine. This
also has the benefit of detecting changes when a different
module is inserted.
Performance enhancements to the Flow Director code from my
Linux coworker (the developer of that code).
Fixes from Michael Tuexen - a data corruption problem on the
82599 (CRITICAL), a fix so the buf size correctly adjusts as
the cluster size changes, and one so max descriptors are set properly.
Also added 16K clusters for those REALLY big jumbos :)
In the RX path, the RX LOCK was not being released, and this
caused LOR problems. Add the code that igb already has.
Sync with the in-house shared code; this was necessary for the
Flow Director fix.
MFC after: 2 days
finding. The test to compare the mbuf m_len against
a fixed value and then returning needs to be removed.
When using VLANs with HW_TAGGING and IPv6, the
ICMP6 packets actually fail this condition: the constant
assumes that the tag is IN the frame, and it's not, so
the length is actually tiny. Furthermore, I'm not sure
what the point of just returning was.
MFC after: 3 days
- This adds a VM SRIOV interface, ixv. It is however
transparent to the user: it links with ixgbe.ko,
and when ixgbe is loaded in a virtualized guest with
SRIOV configured this will be detected.
- Sync shared code to latest
- Many bug fixes and improvements, thanks to everyone
who has been using the driver and reporting issues.
configuration function. For failed memory allocations, em(4)/lem(4)
called panic(9), which is not acceptable on a production box.
igb(4)/ixgb(4)/ix(4) allocated the required memory on the stack, which
consumed 768 bytes of stack memory and looks too big.
To address these issues, allocate the multicast array memory at device
attach time and make multicast configuration succeed under any
conditions. This change also removes the excessive use of stack
memory.
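A hedged sketch of the attach-time allocation (constants and field names
are illustrative):

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/bus.h>
    #include <sys/malloc.h>
    #include <sys/systm.h>

    #define ETH_ADDR_LEN_SKETCH 6       /* illustrative */
    #define MAX_MCAST_SKETCH    128     /* illustrative table size */

    struct softc_sketch {
        device_t dev;
        uint8_t *mta;       /* multicast table, freed at detach */
    };

    /* Allocate once at attach; never panic and never use ~768 bytes of
     * kernel stack in the multicast configuration path. */
    static int
    mcast_table_alloc_sketch(struct softc_sketch *sc)
    {
        sc->mta = malloc(ETH_ADDR_LEN_SKETCH * MAX_MCAST_SKETCH,
            M_DEVBUF, M_NOWAIT | M_ZERO);
        if (sc->mta == NULL) {
            device_printf(sc->dev, "cannot allocate multicast table\n");
            return (ENOMEM);
        }
        return (0);
    }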
Reviewed by: jfv
This adds the ability to limit the advertised speed of an SFP+ to 1G,
effectively "forcing" link at that lower speed. It is off by default
and is enabled by the sysctl dev.ix.0.force_gig=1; setting it to 0 will
set it back to the norm.
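A hedged sketch of how such a knob can be wired up: a sysctl handler that
re-advertises either 1G only or the full speed set through the shared
code when the value changes. The handler body and the softc field are
illustrative; the IXGBE_LINK_SPEED_* constants come from the shared code
and the ixgbe_setup_link() signature is approximate for that era.

    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include "ixgbe.h"      /* assumed driver header */

    static int
    sysctl_force_gig_sketch(SYSCTL_HANDLER_ARGS)
    {
        struct adapter *adapter = arg1;
        int error, force = adapter->force_gig;  /* hypothetical field */
        ixgbe_link_speed speed;

        error = sysctl_handle_int(oidp, &force, 0, req);
        if (error != 0 || req->newptr == NULL)
            return (error);
        adapter->force_gig = force;
        speed = force ? IXGBE_LINK_SPEED_1GB_FULL :
            (IXGBE_LINK_SPEED_1GB_FULL | IXGBE_LINK_SPEED_10GB_FULL);
        /* Ask the shared code to renegotiate at the new advertised speed. */
        ixgbe_setup_link(&adapter->hw, speed, TRUE, TRUE);
        return (0);
    }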