compilation accordingly. The net effect is that tracing is not
present by default but can easily be compiled in.
The tracer uses getenv() and printf(), which cannot be used in all
cases (i.e., from the debugger), so this change improves the
applicability of the unwinder.
This change is made on the vendor branch and given back to the
vendor for possible inclusion in future versions.
immediately after the kernel map has been sized, and is the optimal
place to autosize the memory allocations that occur within the kernel
map.
Suggested by: bde
without Giant held.
A quick outline of the locking strategy:
Since all IOMMUs are synchronized, there is a single lock, iommu_mtx,
which protects the hardware registers (where needed) and the global and
per-IOMMU software states. As soon as the IOMMUs are divorced, each struct
iommu_state will have its own mutex (and the remaining global state
will be moved into the struct).
The dvma rman has its own internal mutex; the TSB slots may only be
accessed by the owner of the corresponding resource, so neither needs
extra protection.
Since there is a second access path to maps via LRU queues, the consumer-
provided locking is not sufficient; therefore, each map which is on a
queue is additionally protected by iommu_mtx (in part, there is one
member which only the map owner may access). Each map on a queue may
be accessed and removed from or repositioned in a queue in any context as
long as the lock is held; only the owner may insert a map.
To reduce lock contention, some bus_dma functions remove the map from
the queue temporarily (on behalf of the map owner) for some operations and
reinsert it when they are done. Shorter operations and operations which are
not done on behalf of the lock owner are completely covered by the lock.
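The queue handling above boils down to a familiar pattern: an LRU
list protected by a single mutex, where only the owner of a map may
insert it, but any context holding the mutex may unlink or requeue
it. The following is a minimal userland sketch of that pattern using
pthreads and <sys/queue.h>; the names (struct dmamap, map_lru,
lru_mtx) are illustrative only and do not correspond to the actual
iommu code.

    #include <sys/queue.h>
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lru_mtx = PTHREAD_MUTEX_INITIALIZER;

    struct dmamap {
            TAILQ_ENTRY(dmamap) link;        /* protected by lru_mtx */
            bool                onq;         /* protected by lru_mtx */
            void               *owner_priv;  /* only the map owner touches this */
    };

    static TAILQ_HEAD(, dmamap) map_lru = TAILQ_HEAD_INITIALIZER(map_lru);

    /* Only the owner of 'm' may insert it on the queue. */
    static void
    map_enqueue(struct dmamap *m)
    {
            pthread_mutex_lock(&lru_mtx);
            if (!m->onq) {
                    TAILQ_INSERT_TAIL(&map_lru, m, link);
                    m->onq = true;
            }
            pthread_mutex_unlock(&lru_mtx);
    }

    /* Any context holding the lock may unlink or reposition a map. */
    static void
    map_requeue(struct dmamap *m)
    {
            pthread_mutex_lock(&lru_mtx);
            if (m->onq) {
                    TAILQ_REMOVE(&map_lru, m, link);
                    TAILQ_INSERT_TAIL(&map_lru, m, link);
            }
            pthread_mutex_unlock(&lru_mtx);
    }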
To facilitate the locking, reorganize the streaming buffer handling;
while at it, fix an old oversight which would cause the streaming
buffer to always be flushed, regardless of whether streaming was
enabled in the TSB entry. The streaming buffer is still disabled for
now, since there are a number of drivers which lack critical
bus_dmamap_sync() calls.
Additional testing by: jake
series, the 8139C+ has a descriptor-based DMA mechanism, and its
performance is actually pretty respectable. Note: the 8139D chip does
not support C+ mode. Only the 8139C+ and 8169 gigE chips support C+ mode.
Supported features:
- RX and TX checksum offload
- hardware VLAN tag insertion/extraction
- TX interrupt moderation using the 8139's on-board timer
Everything should be properly busdma'ed and endian-independent, so
things should work ok on non-x86 platforms. Unfortunately, my call
for testers on this code was met with deafening silence, and I don't
have access to any non-x86 FreeBSD boxes at the moment, so this is
speculation.
The device detection code has been cleaned up a little as well
(thanks to Michal Mertl for the patches).
There are also updates to the rl(4) man page (which I accidentally
checked in before when I updated the dc(4) man page. Oops.)
Todo: finish support for the 8169 gigabit ethernet chip. This
mainly requires writing an rlgphy driver to handle the 8169's built-in
PHY. This will have to wait until I actually get my hands on an 8169
card for testing though. (I still can't find a source for one in the
U.S. Suggestions/pointers welcome.)
- MN-110 10/100 USB ethernet (ADMtek Pegasus II, if_aue)
- MN-120 10/100 cardbus (ADMtek Centaur-C, if_dc)
- MN-130 10/100 PCI (ADMtek Centaur-P, if_dc)
Also update dc(4) man page to mention support for MN-120 and MN-130.
* Always use polled mode. The intr approach did not work for many
controllers and required the hw.acpi.ec.event_driven workaround.
* Only use an edge (not level) triggered GPE handler
* Add sc->ec_mtx for locking operations to a single EC. There were
many race conditions earlier between an SCI event and EcRead/Write.
* Use 1 ms as the global lock timeout
* Only acquire global lock if _GLK != 0
* Update EcWaitEvent to use an incremental backoff delay in its
poll loop (see the sketch below). Wait 50 ms max instead of 10.
Most ECs respond in < 5 us (50 us when heavily loaded), but some
occasionally time out even with a 10 ms timeout. For delays past 1 ms,
use msleep instead of DELAY to give SCI interrupts a chance to occur.
* Add EcCommand to send a command and wait for the appropriate event.
* The hw.acpi.ec.event_driven tunable is no longer applicable and
has been removed.
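A rough userland illustration of the incremental backoff used by
EcWaitEvent; ec_status_ready() is a placeholder for the EC status
check, and the real driver uses DELAY()/msleep() in the kernel rather
than usleep():

    #include <stdbool.h>
    #include <unistd.h>

    /* Placeholder: in the driver this would read the EC status register. */
    static bool
    ec_status_ready(void)
    {
            return (false);
    }

    static int
    ec_wait_event(void)
    {
            useconds_t slp = 5;                     /* start at 5 us */
            useconds_t total = 0;

            while (total < 50 * 1000) {             /* give up after ~50 ms */
                    if (ec_status_ready())
                            return (0);
                    usleep(slp);
                    total += slp;
                    if (slp < 1000)
                            slp *= 2;               /* back off toward ~1 ms steps */
            }
            return (-1);                            /* timed out */
    }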
Ideas from: Linux
bus_dma_tag_create. We need to be sure that our packets are
kept in sequence (that's how ATM is supposed to work), and
therefore use BUS_DMA_NOWAIT in all calls to bus_dmamap_load.
For memory allocated with bus_dmamem_alloc, using anything
other than NULL arguments for the locking is bogus anyway, because
this memory should never need bouncing and hence the load should
never be deferred.
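For illustration, a fragment of the kind of setup this refers to;
RING_SIZE, sc->ring_tag and ring_load_cb are made-up names, while
bus_dma_tag_create()'s trailing lockfunc/lockfuncarg arguments and
the BUS_DMA_NOWAIT load flag are the real busdma interfaces:

    /*
     * NULL lockfunc/lockfuncarg are fine here because memory obtained
     * from bus_dmamem_alloc() never bounces, so the load callback can
     * never be deferred.
     */
    error = bus_dma_tag_create(NULL,                 /* parent */
        PAGE_SIZE, 0,                                /* alignment, boundary */
        BUS_SPACE_MAXADDR_32BIT, BUS_SPACE_MAXADDR,  /* lowaddr, highaddr */
        NULL, NULL,                                  /* filter, filterarg */
        RING_SIZE, 1, RING_SIZE,                     /* maxsize, nsegs, maxsegsz */
        0,                                           /* flags */
        NULL, NULL,                                  /* lockfunc, lockfuncarg */
        &sc->ring_tag);

    /* Loads that must not be deferred pass BUS_DMA_NOWAIT: */
    error = bus_dmamap_load(sc->ring_tag, sc->ring_map, sc->ring,
        RING_SIZE, ring_load_cb, sc, BUS_DMA_NOWAIT);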
Allow the receipt of OAM and RM cells on raw connections. Caveat: it seems
that RM cells are still processed by the hardware even when we open the
connection as UBR.
register, present only on 3c90xB and later NICs. This meant that you could
not use a 1500 byte MTU with VLANs on original 3c905/3c900 cards (boomerang
chipset). The boomerang chip does support large frames though, just not
in the same way: you can set the 'allow large frames' bit in the MAC
control register to receive frames up to 4K in size.
Changes:
- Set the 'allow large frames' bit for boomerang chips and increase
the packet size register for cyclone and later chips. This allows
us to use IFCAP_VLAN_MTU on all supported xl(4) NICs.
- Actually set the IFCAP_VLAN_MTU flag in the capabilities word
in xl_attach().
- Change the method used to detect older boomerang chips. My 3c575C
cardbus NIC was being incorrectly identified as a 3c90x chip instead
of a 3c90xB because the capabilities word in its EEPROM reports a
bizarre value. In addition to checking for the supportsNoTxLength
bit, also check for the absence of the supportsLargePackets bit.
Either case denotes a 3c90xB chip (see the sketch below).
- Make RX and TX checksums configurable via the SIOCSIFCAP ioctl.
- Avoid an unnecessary le32toh() in xl_rxeof(): we already have the
received frame size in the lower 16 bits of rxstat, no need to
read it again.
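A condensed sketch of the detection and large-frame changes above;
the XL_* macro and flag names here are stand-ins for whatever
if_xlreg.h actually defines:

    /* Either condition marks the chip as a 3c90xB (cyclone or later). */
    if ((caps & XL_CAPS_NO_TXLENGTH) != 0 ||
        (caps & XL_CAPS_LARGE_PKTS) == 0)
            sc->xl_type = XL_TYPE_90XB;
    else
            sc->xl_type = XL_TYPE_90X;

    /*
     * Cyclone and later: bump the packet size register.
     * Boomerang: set the 'allow large frames' MAC control bit.
     */
    if (sc->xl_type == XL_TYPE_90XB) {
            CSR_WRITE_2(sc, XL_W3_MAXPKTSIZE, XL_PACKET_SIZE);
    } else {
            u_int16_t macctl;

            macctl = CSR_READ_2(sc, XL_W3_MAC_CTRL);
            macctl |= XL_MACCTRL_ALLOW_LARGE_PACK;
            CSR_WRITE_2(sc, XL_W3_MAC_CTRL, macctl);
    }
    ifp->if_capabilities |= IFCAP_VLAN_MTU;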
Tested with 3c905-TX, 3c900-TPO, 3c980C and 3c575C NICs.
instead of SENDMAIL_MC, but don't remove it on 'make clean' since the
user may not have the original .mc file and removing it could be
dangerous (e.g., make SENDMAIL_CF=/etc/mail/sendmail.cf clean).
Noticed by: peter
MFC after: 3 days
on the implied sign extension. The single unified VADDR() macro was
not able to avoid sign extending the VM_MAXUSER_ADDRESS/USRSTACK values.
Be explicit about UVADDR() (positive address space) and KVADDR()
(kernel negative address space) to make mistakes show up more
spectacularly.
Increase user VM space from 1/2TB (512GB) to 128TB.
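The sign-extension point can be demonstrated with a tiny standalone
program; the single-index UVADDR()/KVADDR() below are simplified
stand-ins for the real four-level macros in amd64/include/pmap.h:

    #include <inttypes.h>
    #include <stdio.h>

    #define PML4SHIFT   39

    /*
     * User addresses stay positive; kernel addresses have bit 47
     * explicitly extended into the upper 16 bits, making them negative.
     */
    #define UVADDR(l4)  ((uint64_t)(l4) << PML4SHIFT)
    #define KVADDR(l4)  (((uint64_t)-1 << 47) | ((uint64_t)(l4) << PML4SHIFT))

    int
    main(void)
    {
            printf("UVADDR(255) = %#018" PRIx64 "\n", UVADDR(255));
            printf("KVADDR(511) = %#018" PRIx64 "\n", KVADDR(511));
            return (0);
    }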
corresponding release code. This was preventing the use of more than
1/2TB of user VM. I also spent a week staring at this code only to
eventually find that I'd mistakenly typed a P as an R.
rather than a non-existing pte. There is code elsewhere in i386/amd64
pmap that neglects to handle the large page cases because it knows that
it will see PG_PS in the returned "pte".
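Caller-side, the convention looks roughly like this; pmap_pte() and
PG_PS are the real names, but the fragment itself is only an
illustration, not actual pmap code:

    pt_entry_t *pte;

    pte = pmap_pte(pmap, va);
    if (pte != NULL && (*pte & PG_PS) != 0) {
            /* Large mapping: "pte" really points at the 2/4MB pde. */
    } else if (pte != NULL) {
            /* Ordinary 4KB page table entry. */
    }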
- Use atomic ops to update the bigpipe count
- Make the bigpipe count sysctl readable
- Remove a duplicate comparison in an if statement
- Comment two SYSCTLs.
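A minimal sketch of the first two items; the nbigpipe name and the
kern.ipc location are assumptions, while atomic_add_int(),
atomic_subtract_int() and SYSCTL_UINT() are the standard kernel
interfaces:

    static u_int nbigpipe;
    SYSCTL_UINT(_kern_ipc, OID_AUTO, nbigpipe, CTLFLAG_RD,
        &nbigpipe, 0, "Number of big pipes currently allocated");

    /* When a pipe buffer is grown to the big size... */
    atomic_add_int(&nbigpipe, 1);

    /* ...and when such a pipe is torn down. */
    atomic_subtract_int(&nbigpipe, 1);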