resource allocation on x86 platforms:
- Add a new helper API that Host-PCI bridge drivers can use to restrict
  resource allocation requests to a set of address ranges for different
  resource types (see the sketch following this list).
- For the ACPI Host-PCI bridge driver, use Producer address range resources
in _CRS to enumerate valid address ranges for a given Host-PCI bridge.
This can be disabled by including "hostres" in the debug.acpi.disabled
tunable.
- For the MPTable Host-PCI bridge driver, use entries in the extended
MPTable to determine the valid address ranges for a given Host-PCI
bridge. This required adding code to parse extended table entries.
As with the new PCI-PCI bridge driver, these changes take effect only
when the NEW_PCIB kernel option is enabled (which it is by default on
amd64 and i386).
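A rough sketch of the helper API's intended use follows. The names
pcib_host_res_init() and pcib_host_res_decodes() reflect the intent of
the new API, but the exact signatures here are assumptions for
illustration:

    /*
     * Hedged sketch: record the ranges this bridge decodes so that
     * later child allocations can be restricted to them.  The example
     * ranges are made up; a real driver would walk the Producer
     * entries in _CRS (or the extended MPTable).
     */
    static int
    bridge_record_decoded_ranges(device_t pcib,
        struct pcib_host_resources *hr)
    {
            int error;

            error = pcib_host_res_init(pcib, hr);
            if (error != 0)
                    return (error);
            error = pcib_host_res_decodes(hr, SYS_RES_IOPORT, 0x0000,
                0xffff, 0);
            if (error == 0)
                    error = pcib_host_res_decodes(hr, SYS_RES_MEMORY,
                        0x80000000ul, 0xfebffffful, 0);
            return (error);
    }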
Approved by: re (kib)
resource allocation from an x86 Host-PCI bridge driver so that it can be
reused by the ACPI Host-PCI bridge driver (and eventually the MPTable
Host-PCI bridge driver) instead of duplicating the same logic. Note that
this means that hw.acpi.host_mem_start is now replaced with the
hw.pci.host_mem_start tunable that was already used in the non-ACPI case.
This also removes hw.acpi.host_mem_start on ia64 where it was not
applicable (the implementation was very x86-specific).
While here, adjust the logic to apply the new start address on any
"wildcard" allocation even if that allocation comes from a subset of
the allowable address range.
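A minimal sketch of the adjusted check (illustrative only;
'hostmem_start' stands in for the value of the hw.pci.host_mem_start
tunable):

    static rman_res_t hostmem_start;    /* hw.pci.host_mem_start */

    /*
     * Hedged sketch: bump the start of any wildcard allocation, even
     * when 'end' already narrows the request to a subset of the
     * allowable range.
     */
    static void
    adjust_wildcard_start(int type, rman_res_t *start, rman_res_t end)
    {
            if (type == SYS_RES_MEMORY && *start < hostmem_start &&
                end >= hostmem_start)
                    *start = hostmem_start;
    }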
Reviewed by: imp (1)
driver would verify that requests for child devices were confined to any
existing I/O windows, but the driver relied on the firmware to initialize
the windows and would never grow the windows for new requests. Now the
driver actively manages the I/O windows.
This is implemented by allocating a bus resource for each I/O window from
the parent PCI bus and suballocating that resource to child devices. The
suballocations are managed by creating an rman for each I/O window. The
suballocated resources are mapped by passing the bus_activate_resource()
call up to the parent PCI bus. Windows are grown when needed by using
bus_adjust_resource() to adjust the resource allocated from the parent PCI
bus. If the adjust request succeeds, the window is adjusted and the
suballocation request for the child device is retried.
When growing a window, the rman_first_free_region() and
rman_last_free_region() routines are used to determine if the front or
end of the existing I/O window is free. Using that information, the
smallest ranges that would need to be added to the front or the back of
the window are computed. The driver first tries to grow the window in
whichever direction requires the smaller growth, falling back to the
other direction if that fails.
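A hedged sketch of the front-growth case (field and helper names are
simplified from the real driver, and alignment handling is elided):

    /*
     * Try to extend an existing window downward by 'count' bytes and
     * make the new space available for suballocation; the caller then
     * retries the child's request.
     */
    static int
    grow_window_front(device_t pcib, struct pcib_window *w,
        rman_res_t count)
    {
            rman_res_t s, e, new_base;
            int error;

            /* Only useful if the front of the window is free. */
            if (rman_first_free_region(&w->rman, &s, &e) != 0 ||
                s != w->base)
                    return (ENOSPC);

            new_base = w->base - count;
            error = bus_adjust_resource(pcib, w->type, w->res, new_base,
                rman_get_end(w->res));
            if (error != 0)
                    return (error);

            /* Grow the rman to cover the newly decoded space. */
            error = rman_manage_region(&w->rman, new_base, w->base - 1);
            if (error == 0)
                    w->base = new_base;
            return (error);
    }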
Subtractive bridges will first attempt to satisfy requests for child
resources from I/O windows (including attempts to grow the windows). If
that fails, the request is passed directly up to the parent PCI bus.
The PCI-PCI bridge driver will try to use firmware-assigned ranges for
child BARs first and only allocate a "fresh" range if that specific range
cannot be accommodated in the I/O window. This allows systems where the
firmware assigns resources during boot but later wipes the I/O windows
(some ACPI BIOSen are known to do this) to "rediscover" the original I/O
window ranges.
The ACPI Host-PCI bridge driver has been adjusted to correctly honor
hw.acpi.host_mem_start and the I/O port equivalent when a PCI-PCI bridge
makes a wildcard request for an I/O window range.
The new PCI-PCI bridge driver is only enabled if the NEW_PCIB kernel option
is enabled. This is a transition aid to allow platforms that do not
yet support bus_activate_resource() and bus_adjust_resource() in their
Host-PCI bridge drivers (and possibly other drivers as needed) to use the
old driver for now. Once all platforms support the new driver, the
kernel option and old driver will be removed.
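On platforms where it is not on by default, the option is set in the
kernel configuration file in the usual way:

    options 	NEW_PCIB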
PR: kern/143874 kern/149306
Tested by: mav
method is used by the PCI bus driver to query the power management system
to determine the proper device state to be used for a device during suspend
and resume. For the ACPI PCI bridge drivers this calls
acpi_device_pwr_for_sleep(). This removes ACPI-specific knowledge from
the PCI and PCI-PCI bridge drivers.
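A plausible shape for the ACPI implementation (a minimal sketch;
error handling and the exact lookup of the ACPI bus device are
simplified and should be treated as assumptions):

    static int
    acpi_pcib_power_for_sleep(device_t pcib, device_t dev, int *pstate)
    {
            device_t acpi_dev;

            /* Ask the ACPI layer which D-state 'dev' should use. */
            acpi_dev = devclass_get_device(devclass_find("acpi"), 0);
            acpi_device_pwr_for_sleep(acpi_dev, dev, pstate);
            return (0);
    }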
Reviewed by: jkim
bus_generic_resume() since the pci_link(4) driver was added.
- Change the ACPI PCI-PCI bridge driver to inherit most of its methods
from the generic PCI-PCI bridge driver. In particular, this will now
restore PCI config registers for ACPI PCI-PCI bridges.
Tested by: Oleg Sharoyko osharoiko of gmail
a _BBN value of 0 if it was for the first bridge encountered since some
older systems returned _BBN of 0 for all bridges. However, some newer
systems enumerate bridges with non-zero _BBN before bus 0 which is
perfectly valid. Handle both cases by trusting the first bridge that has
a _BBN of 0 and falling back to reading from non-standard config registers
only for subsequent bridges with a _BBN of 0. We also only perform this
check for segment (domain) 0. We assume that _BBN is always correct
for segments other than 0.
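A hedged sketch of the decision (domain 0 only;
read_busno_from_config() is a hypothetical stand-in for the existing
fallback that reads non-standard config registers):

    static bool seen_bbn_zero;

    static int
    resolve_busno(device_t dev, int domain, int bbn)
    {
            if (domain != 0 || bbn != 0)
                    return (bbn);   /* trust _BBN as-is */
            if (seen_bbn_zero)
                    /* hypothetical fallback helper */
                    return (read_busno_from_config(dev));
            seen_bbn_zero = true;   /* first bridge claiming bus 0 */
            return (0);
    }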
Tested by: Josef Moellers josef.moellers at fujitsu
MFC after: 1 week
support machines having multiple independently numbered PCI domains
and don't support reenumeration without ambiguity amongst the
devices as seen by the OS and represented by PCI location strings.
This includes introducing a function pci_find_dbsf(9) which works
like pci_find_bsf(9) but additionally takes a domain number argument
and limiting pci_find_bsf(9) to only search devices in domain 0 (the
only domain in single-domain systems). Bge(4) and ofw_pcibus(4) are
changed to use pci_find_dbsf(9) instead of pci_find_bsf(9) in order
to no longer report false positives when searching for siblings and
duplicate devices in the same domain, respectively.
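For example, a hedged sketch of a full-location lookup (the
domain/bus/slot/function values are arbitrary):

    static device_t
    find_my_sibling(void)
    {
            device_t dev;

            /* Search for the device at domain 1, bus 0, slot 3, func 0. */
            dev = pci_find_dbsf(1, 0, 3, 0);
            if (dev == NULL)
                    /* pci_find_bsf() now searches domain 0 only. */
                    dev = pci_find_bsf(0, 3, 0);
            return (dev);
    }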
Along with this change the sole host-PCI bridge driver converted to
actually make use of PCI domain support is uninorth(4), the others
continue to use domain 0 only for now and need to be converted as
appropriate later on.
Note that this means that the format of the location strings as used
by pciconf(8) has been changed and that consumers of <sys/pciio.h>
potentially need to be recompiled.
Suggested by: jhb
Reviewed by: grehan, jhb, marcel
Approved by: re (kensmith), jhb (PCI maintainer hat)
- Simplify the amount of work that has to be done for each architecture by
pushing more of the truly MI code down into the PCI bus driver.
- Don't bind MSI-X indices to IRQs so that we can allow a driver to map
multiple MSI-X messages into a single IRQ when handling a message
shortage.
The changes include:
- Add a new pcib_if method: PCIB_MAP_MSI() which is called by the PCI bus
to calculate the address and data values for a given MSI/MSI-X IRQ.
The x86 nexus drivers map this into a call to a new 'msi_map()' function
in msi.c that does the mapping.
- Retire the pcib_if method PCIB_REMAP_MSIX() and remove the 'index'
parameter from PCIB_ALLOC_MSIX(). MD code no longer has any knowledge
of the MSI-X index for a given MSI-X IRQ.
- The PCI bus driver now stores more MSI-X state in a child's ivars.
Specifically, it now stores an array of IRQs (called "message vectors" in
the code) that have associated address and data values, and a small
virtual version of the MSI-X table that specifies the message vector
that a given MSI-X table entry uses. Sparse mappings are permitted in
the virtual table.
- The PCI bus driver now configures the MSI and MSI-X address/data
registers directly via custom bus_setup_intr() and bus_teardown_intr()
methods. pci_setup_intr() invokes PCIB_MAP_MSI() to determine the
address and data values for a given message as needed. The MD code
no longer has to call back down into the PCI bus code to set these
values from the nexus' bus_setup_intr() handler.
- The PCI bus code provides a callout (pci_remap_msi_irq()) that the MD
code can call to force the PCI bus to re-invoke PCIB_MAP_MSI() to get
new values of the address and data fields for a given IRQ. The x86
MSI code uses this when an MSI IRQ is moved to a different CPU, requiring
a new value of the 'address' field.
- The x86 MSI pseudo-driver loses a lot of code, and in fact the separate
MSI/MSI-X pseudo-PICs are collapsed down into a single MSI PIC driver
since the only remaining diff between the two is a substring in a
bootverbose printf.
- The PCI bus driver will now restore MSI-X state (including programming
entries in the MSI-X table) on device resume.
- The interface for pci_remap_msix() has changed. Instead of accepting
indices for the allocated vectors, it accepts a mini-virtual table
(with a new length parameter). This table is an array of u_ints, where
each value specifies which allocated message vector to use for the
corresponding MSI-X message. A vector of 0 forces a message to not
have an associated IRQ. The device may choose to only use some of the
IRQs assigned, in which case the unused IRQs must be at the "end" and
will be released back to the system. This allows a driver to use the
same remap table for different shortage values. For example, if a driver
wants 4 messages, it can use the same remap table (which only uses the
first two messages) for the cases when it only gets 2 or 3 messages; in
the latter case the PCI bus will release the 3rd IRQ back to the
system.
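A hedged example of the new interface for that case, a driver that
wants 4 messages but whose queues can share 2:

    static int
    setup_msix(device_t dev)
    {
            /* Map the 4 MSI-X table entries onto message vectors 1, 2. */
            u_int vectors[4] = { 1, 2, 1, 2 };
            int count, error;

            count = 4;
            error = pci_alloc_msix(dev, &count);
            if (error != 0)
                    return (error);
            if (count < 4)
                    /* Allocated vectors the table doesn't use (e.g. a
                       3rd) are released back to the system. */
                    error = pci_remap_msix(dev, 4, vectors);
            return (error);
    }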
MFC after: 1 month
- First off, device drivers really do need to know if they are allocating
  MSI or MSI-X messages; for example, MSI requires allocating a power-of-2
  count of messages where MSI-X does not. To address this, split out the
  MSI-X support from pci_msi_count() and pci_alloc_msi() into new
  driver-visible functions pci_msix_count() and pci_alloc_msix() (see the
  example after this list). As a result,
pci_msi_count() now just returns a count of the max supported MSI
messages for the device, and pci_alloc_msi() only tries to allocate MSI
messages. To get a count of the max supported MSI-X messages, use
pci_msix_count(). To allocate MSI-X messages, use pci_alloc_msix().
pci_release_msi() still handles both MSI and MSI-X messages, however.
As a result of this change, drivers using the existing API will only
use MSI messages and will no longer try to use MSI-X messages.
- Because MSI-X allows for each message to have its own data and address
values (and thus does not require all of the messages to have their
MD vectors allocated as a group), some devices allow for "sparse" use
of MSI-X message slots. For example, if a device supports 8 messages
but the OS is only able to allocate 2 messages, the device may make the
best use of its 2 IRQs if it enables the messages at slots 1 and 4 rather
than the default of using the first N slots (or indices), 1 and 2. To
support this, add a new pci_remap_msix() function that a driver may call
after a successful pci_alloc_msix() (but before allocating any of the
SYS_RES_IRQ resources) to allow the allocated IRQ resources to be
assigned to different message indices. Continuing the earlier example,
after pci_alloc_msix() returned a value of 2, the driver would call
pci_remap_msix() passing in an array of integers { 1, 4 } as the new
message indices to use. The rids for the SYS_RES_IRQ resources
will always match the message indices. Thus, after the call to
pci_remap_msix() the driver would be able to access the first message
in slot 1 at SYS_RES_IRQ rid 1, and the second message at slot 4 at
SYS_RES_IRQ rid 4. Note that the message slots/indices are 1-based
rather than 0-based so that they will always correspond to the rid
values (SYS_RES_IRQ rid 0 is reserved for the legacy INTx interrupt).
To support this API, a new PCIB_REMAP_MSIX() method was added to the
pcib interface to change the message index for a single IRQ.
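A hedged sketch of the split interface in use (probe MSI-X first and
fall back to MSI):

    static int
    alloc_interrupts(device_t dev, int *nmsgs)
    {
            int count, error;

            count = pci_msix_count(dev);
            if (count > 0)
                    error = pci_alloc_msix(dev, &count);
            else {
                    count = pci_msi_count(dev);
                    error = pci_alloc_msi(dev, &count);
            }
            if (error == 0)
                    *nmsgs = count;
            return (error);
    }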
Tested by: scottl
pcib_alloc_msix() methods instead of using the method from the generic
PCI-PCI bridge driver as the PCI-PCI methods will be gaining some PCI-PCI
specific logic soon.
- Add 3 new functions to the pci_if interface along with suitable wrappers
to provide the device driver visible API:
- pci_alloc_msi(dev, int *count) backed by PCI_ALLOC_MSI(). '*count'
here is an in and out parameter. The driver stores the desired number
of messages in '*count' before calling the function. On success,
'*count' holds the number of messages allocated to the device. Also on
success, the driver can access the messages as SYS_RES_IRQ resources
starting at rid 1. Note that the legacy INTx interrupt resource will
not be available when using MSI. This function will allocate either
MSI or MSI-X messages depending on the device's capabilities and the
'hw.pci.enable_msix' and 'hw.pci.enable_msi' tunables. If the device
supports MSI-X, the driver should activate the memory resource that
holds the MSI-X table and pending bit array (PBA) before calling this
function (see the example after this list).
- pci_release_msi(dev) backed by PCI_RELEASE_MSI(). This function
releases the messages allocated for this device. All of the
SYS_RES_IRQ resources need to be released for this function to succeed.
- pci_msi_count(dev) backed by PCI_MSI_COUNT(). This function returns
the maximum number of MSI or MSI-X messages supported by this device.
MSI-X is preferred if present, but this function will honor the
'hw.pci.enable_msix' and 'hw.pci.enable_msi' tunables. This function
should return the largest value that pci_alloc_msi() can return
(assuming the MD code is able to allocate sufficient backing resources
for all of the messages).
- Add default implementations for these 3 methods to the pci_driver generic
PCI bus driver. (The various other PCI bus drivers such as for ACPI and
OFW will inherit these default implementations.) This default
implementation depends on 4 new pcib_if methods that bubble up through
the PCI bridges to the MD code to allocate IRQ values and perform any
needed MD setup:
- PCIB_ALLOC_MSI() attempts to allocate a group of MSI messages.
- PCIB_RELEASE_MSI() releases a group of MSI messages.
- PCIB_ALLOC_MSIX() attempts to allocate a single MSI-X message.
- PCIB_RELEASE_MSIX() releases a single MSI-X message.
- Add default implementations for these 4 methods that simply pass the
request up to the parent bridge, and use these defaults in the various
MI PCI bridge drivers.
- Add MI functions for use by MD code when managing MSI and MSI-X
interrupts:
- pci_enable_msi(dev, address, data) programs the MSI capability address
and data registers for a group of MSI messages
- pci_enable_msix(dev, index, address, data) initializes a single MSI-X
message in the MSI-X table
- pci_mask_msix(dev, index) masks a single MSI-X message
- pci_unmask_msix(dev, index) unmasks a single MSI-X message
- pci_pending_msix(dev, index) returns true if the specified MSI-X
message is currently pending
- Save the MSI capability address and data registers in the pci_cfgreg
block in a PCI device's ivars and restore the values when a device is
resumed. Note that the MSI-X table is not currently restored during
resume.
- Add constants for MSI-X register offsets and fields.
- Record interesting data about any MSI-X capability blocks we come
across in the pci_cfgreg block in the ivars for PCI devices.
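A hedged example of the driver-visible flow (request a single message
and fall back to the legacy INTx interrupt at rid 0):

    static struct resource *
    alloc_irq(device_t dev)
    {
            int count, rid;

            count = 1;
            if (pci_alloc_msi(dev, &count) == 0)
                    rid = 1;        /* MSI messages start at rid 1 */
            else
                    rid = 0;        /* legacy INTx interrupt */
            return (bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
                RF_ACTIVE | (rid == 0 ? RF_SHAREABLE : 0)));
    }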
Tested on: em (i386, MSI), bce (amd64/i386, MSI), mpt (amd64, MSI-X)
Reviewed by: scottl, grehan, jfv
MFC after: 2 months
various pcib drivers to use their own private devclass_t variables for
their modules.
- Use the DEFINE_CLASS_0() macro to declare drivers for the various pcib
drivers while I'm here.
with some Dell servers that booted w/o a problem[*] on 5.4, but failed
with 6.0-BETA.
On the PCI bus, when we do lazy resource allocation, we narrow the
range requested as we pass through bridges to reflect how the bridges
are programmed and what addresses they pass. However, when we're
doing an allocation on a bus that's directly connected to a host
bridge, no such translation can take place. We already had a fallback
range for memory requests, but none for ioports. As such, provide a
fallback for I/O ports so we don't allocate location 0, which will
have undesired side effects when the resources are actually used.
This fixes a problem with booting a Dell server with usb in the
kernel. However, it is an unsatisfying solution. I don't like the
hard-coded value, and I think we should start narrowing the resources
returned to avoid the so-called ISA alias area (where the range &
0x0300 must be 0, IIRC). Doing such filtering will have to wait for
another day.
This may be a good 6 candidate, maybe after it's had a chance to be
refined.
Tested by: glebius@
- Use a new-bus device driver for the ACPI PCI link devices. The devices
are called pci_linkX. The driver includes suspend/resume support so that
the ACPI bridge drivers no longer have to poke the links to get them
to handle suspend/resume. Also, the code to handle which IRQs a link is
routed to and choosing an IRQ when a link is not already routed is all
contained in the link driver. The PCI bridge drivers now ask the link
driver which IRQ to use once they determine that a _PRT entry does not
use a hardwired interrupt number.
- The new link driver includes support for multiple IRQ resources per
link device as well as preserving any non-IRQ resources when adjusting
the IRQ that a link is routed to.
- The entire approach to routing when using a link device is now
link-centric rather than pci bus/device/pin specific. Thus, when
using a tunable to override the default IRQ settings, one now uses
a single tunable to route an entire link rather than routing a single
device that uses the link (which has great foot-shooting potential if
the user tries to route the same link to two different IRQs using two
different pci bus/device/pin hints). For example, to adjust the IRQ
that \_SB_.LNKA uses, one would set 'hw.pci.link.LNKA.irq=10' from the
loader.
- As a side effect of having the link driver, unused link devices will now
be disabled when they are probed.
- The algorithm for choosing an IRQ for a link that doesn't already have an
IRQ assigned is now much closer to the one used in $PIR routing. When a
link is routed via an ISA IRQ, only known-good IRQs that the BIOS has
already used are used for routing instead of using probabilities to
guess at which IRQs are probably not used by an ISA device. One change
from $PIR is that the SCI is always considered a viable ISA IRQ, so that
if the BIOS does not set up any IRQs the kernel will degenerate to routing
all interrupts over the SCI. For non-ISA IRQs, interrupts are picked
from the possible pool using a simplistic weighting algorithm.
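A hedged sketch of the weighting idea only; acpi_link_weight() is a
hypothetical helper standing in for the driver's bookkeeping of how
many devices already share each IRQ:

    static int
    pick_link_irq(int *irqs, int nirqs)
    {
            int best, best_weight, i;

            best = -1;
            best_weight = INT_MAX;
            for (i = 0; i < nirqs; i++) {
                    /* 'irqs' holds the IRQs listed as valid by _PRS. */
                    if (acpi_link_weight(irqs[i]) < best_weight) {
                            best = irqs[i];
                            best_weight = acpi_link_weight(irqs[i]);
                    }
            }
            return (best);
    }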
Tested by: ru, scottl, others on acpi@
Reviewed by: njl
allocate unallocated memory resources from the top 32MB of the address
space rather than the top 2GB. While the latter works on some
chipsets, it fails badly on others. 32MB is more conservative and
matches what cheap hardware from this era is hardwired to pass.
incomplete in that the PRT routing was not aware of link programming.
Fix this by doing all routing through the link devices. The new algorithm
for setting up links is:
1. Read _CRS to get current setting. If invalid (not in _PRS), then set
to 0.
2. Attempt to call _DIS on the link. If successful, mark the link as not
routed. Otherwise, assume it still is.
Then when a routing request occurs:
3. Update weights for all IRQs
4. Attempt to route the initial IRQ if valid
5. If that fails, walk through the sorted list, attempting to route IRQs.
6. Configure the trigger/polarity based on _PRS.
Other changes:
* Add acpi_pci_find_prt() to look up the PRT entry for a given device and
acpi_pci_link_route() to select/route the best IRQ for it.
* Remove duplicated code in acpi_pcib_route_interrupt() that picked the
first IRQ from _PRS.
* Remove unneeded arguments from acpi_pcib_resume() and friends.
* Ignore _STA on link devices but report if it seems strange.
* Add a prt_source handle to the PRT structure since the ACPI struct
ACPI_PCI_ROUTING_TABLE uses a fixed-size entry for it. We'll need to
dynamically size this object if we want to use it the same way ACPI-CA
does. Null-terminate the source.
Tested by: Luo Hong <luohong99_at_mails.tsinghua.edu.cn>,
Jeffrey Katcher <jmkatcher_at_yahoo.com>
Info from: jhb, Len Brown (Intel)
this more accurately reflects the underlying hardware of most ACPI
machines that don't have child PCI busses.
We still need a better way to get this information from ACPI/hardware.
Unify the code to disable GPEs with the enable code. Shutdown is handled
the same way. ACPI now does all wake/sleep prep for child devices so
now they no longer need to call external functions in the suspend/resume
path. Add the flags to non-ACPI busses (i.e., pci).
allocation was passed up to nexus. Now, we probe sysresource objects and
manage the resources they describe in a local rman pool. This helps
with devices that attach/detach varying resources (like the _CST
object) and with module loads/unloads (see the sketch after the list
below). The allocation/release routines now check to see if
the resource is described in a child sysresource object and if so,
allocate from the local rman. Sysresource objects add their resources to
the pool and reserve them upon boot. This means sysresources need to be
probed before other ACPI devices.
Changes include:
* Add ordering to the child device probe. The current order is: system
resource objects, embedded controllers, then everything else.
* Make acpi_MatchHid take a handle instead of a device_t arg.
* Replace acpi_{get,set}_resource with the generic equivalents.
o Save and restore BARs for suspend/resume as well as for D3->D0
transitions.
o preallocate resources that the PCI devices use to avoid resource
conflicts
o lazy allocation of resources not allocated by the BIOS.
o set unattached drivers to state D3. Set power state to D0
before probe/attach. Right now there are two special cases
for this (display and memory devices) that need work in other
areas of the tree.
Please report any bugs to me.
to PCI bridge can be read by evaluating the _BBN method of the host to PCI
device. Unfortunately, there appear to be some lazy/ignorant/moronic/
whatever BIOS writers that return 0 for _BBN for all host to PCI bridges in
the system. On a system with a single host to PCI bridge this is not a
problem as the child bus of that single bridge will be bus 0 anyway.
However, on systems with multiple host to PCI bridges and l/i/m/w BIOS
writers this is a major problem resulting in all but the first host to
PCI bridge failing to attach. So, this adds a workaround.
If the _BBN of a host to PCI bridge is zero and pcib0 already exists
and is not us, then we use _ADR to look up our PCI function and slot
(we currently assume we are on bus 0) and use that to call
host_pcib_get_busno() to try and extract our bus number from config
registers on the host to PCI bridge device. If that fails, then we make
an evil assumption that ACPI's _SB_ namespace lays out the host to PCI
bridges in ascending order and use our pcib unit number as our bus
number.
Approved by: re
This allocates the best IRQ for boot-disabled devices (those with IRQ 0).
The allocated IRQ will be used for PCI interrupt routing when ACPI is
enabled.
Note that verbose messaging is enabled for the time being so that
people can easily notice strange behavior if it happens.
- Add an ACPI PCI-PCI bridge driver (the previous driver just handled
Host-PCI bridges) that is a PCI driver that is a subclass of the generic
PCI-PCI bridge driver. It overrides probe, attach, read_ivar, and
route_interrupt.
- The probe routine only succeeds if our parent is an ACPI PCI bus which
we test for by seeing if we can read our ACPI_HANDLE as an ivar.
- The attach routine saves a copy of our handle and calls the new
acpi_pcib_attach_common() function described below.
- The read_ivar routine handles normal PCI-PCI bridge ivars and adds an
ivar to return the ACPI_HANDLE of the bus this bridge represents (see
the sketch below).
- The route_interrupt routine fetches the _PRT (PCI Interrupt Routing
Table) from the bridge device's softc and passes it off to
acpi_pcib_route_interrupt() to route the interrupt.
- Split the old ACPI Host-PCI bridge driver into two pieces. Part of
the attach routine and most of the route_interrupt routine remain in
acpi_pcib.c and are shared by both ACPI PCI bridge drivers.
- The attach routine verifies the PCI bridge is present, reads in
the _PRT for the bridge, and attaches the child PCI bus.
- The route_interrupt routine uses the passed in _PRT to route a PCI
interrupt.
The rest of the driver is the ACPI Host-PCI bridge specific bits that
live in acpi_pcib_acpi.c.
- We no longer duplicate pcib_maxslots but use it directly.
- The driver now uses the pcib devclass instead of its own devclass.
This means that PCI busses are now only children of pcib devices.
- Allow the ACPI_HANDLE for the child PCI bus to be read as an ivar
of the child bus.
- Fetch the _PRT for routing PCI interrupts directly from our softc
instead of walking the devclass to find ourselves and then fetching our
own softc.
With this change and the new ACPI PCI bus driver, ACPI can now properly
route interrupts for devices behind PCI-PCI bridges. That is, an
Itanium2 machine with some 10 PCI busses can now boot OK and route all
the PCI interrupts. Hopefully this will also fix problems people are having with
CardBus bridges behind PCI-PCI bridges not properly routing interrupts
when ACPI is used.
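A hedged sketch of the added ivar handling (the ivar constant and
softc field names are illustrative):

    static int
    acpi_pcib_read_ivar(device_t dev, device_t child, int which,
        uintptr_t *result)
    {
            struct acpi_pcib_softc *sc = device_get_softc(dev);

            /* Expose the ACPI handle of the bus this bridge represents. */
            if (which == ACPI_IVAR_HANDLE) {
                    *result = (uintptr_t)sc->ap_handle;
                    return (0);
            }
            /* Everything else is handled by the generic PCI-PCI code. */
            return (pcib_read_ivar(dev, child, which, result));
    }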
Tested on: i386, ia64
LNK device (an interrupt source provider, of sorts) is present before using it,
but the code actually tested the status (_STA) of the PCI bridge device
doing the routing, not the actual LNK device. Fix it to check the status
of the LNK device.
Use ACPI_SUCCESS/ACPI_FAILURE consistently.
The AcpiGetInto* interfaces are obsoleted by ACPI_ALLOCATE_BUFFER.
Use _ADR as well as _BBN to get our bus number.