to declaring a proper module. The module event handler is part of the
gpart core and will add the scheme to an internal list on module load
and will remove the scheme from the internal list on module unload.
This makes it possible to dynamically load and unload partitioning
schemes.
to it for tasting. This is useful when the class, through means outside
the scope of GEOM, can claim providers previously unclaimed.
The g_retaste() function posts an event which is handled by
g_retaste_event().
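For illustration, a minimal sketch of a class asking for a retaste (the class
name g_foo_class is hypothetical):

	#include <sys/param.h>
	#include <geom/geom.h>

	extern struct g_class g_foo_class;

	static void
	foo_reclaim_providers(void)
	{
		/* Posts the retaste event; unclaimed providers are re-offered. */
		(void)g_retaste(&g_foo_class);
	}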
Event suggested by: phk
exhaustion is encountered. There was a fix made previously for this
problem but the solution (breaking out of the receive loop) does not
seem to work. The mbuf reuse strategy is already adopted by other drivers
such as if_bge. The problem was recreated and the patch is also
verified in the same test environment.
layouts different than the defaults:
o hint.npe.0.mac="A", "B", etc. specifies the window for MAC register accesses
o hint.npe.0.mii="A", "B", etc. specifies PHY registers
o hint.npe.1.phy=%d specifies the PHY to map to a port
This allows devices like NSLU to be set up w/o code changes and will
also be used for forthcoming support for more Avila boards.
Reviewed by: imp
MFC after: 1 week
BO_LOCK/UNLOCK/MTX when manipulating the bufobj.
- Create a new lock in the bufobj to lock bufobj fields independently.
This leaves the vnode interlock as an 'identity' lock while the bufobj
is an io lock. The bufobj lock is ordered before the vnode interlock
and also before the mnt ilock.
- Exploit this new lock order to simplify softdep_check_suspend().
- A few sync related functions are marked with a new XXX to note that
we may not properly interlock against a non-zero bv_cnt when
attempting to sync all vnodes on a mountlist. I do not believe this
race is important. If I'm wrong this will make these locations easier
to find.
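To illustrate the new lock order noted above, a minimal sketch (illustrative
only, not taken from this diff):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/vnode.h>
	#include <sys/bufobj.h>

	static void
	foo_lock_order(struct vnode *vp)
	{
		/* The bufobj (io) lock is ordered before the vnode interlock. */
		BO_LOCK(&vp->v_bufobj);
		VI_LOCK(vp);
		/* ... inspect io state and vnode identity together ... */
		VI_UNLOCK(vp);
		BO_UNLOCK(&vp->v_bufobj);
	}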
Reviewed by: kib (earlier diff)
Tested by: kris, pho (earlier diff)
code.
The bug:
There exists a race condition for timeout/untimeout(9) due to the
way that the softclock thread dequeues timeouts.
The softclock thread sets the c_func and c_arg of the callout to
NULL while holding the callout lock but not Giant. It then drops
the callout lock and acquires Giant.
It is at this point where untimeout(9) on another cpu/thread could
be called.
Since c_arg and c_func are cleared, untimeout(9) does not touch the
callout and returns as if the callout is canceled.
The softclock then tries to acquire Giant and likely blocks due to
the other cpu/thread holding it.
The other cpu/thread then likely deallocates the backing store that
c_arg points to and finishes working and hence drops Giant.
Softclock resumes and acquires Giant and calls the function with
the now free'd c_arg and we have corruption/crash.
The fix:
We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
callouts. We need to free the callout after the softclock processes
it to deal with the race here.
Obtained from: Juniper Networks, iedowse
Reviewed by: jhb, iedowse
MFC after: 2 weeks
around the check for the BV_BKGRDINPROG in the brelse() and bqrelse().
See the comment for the explanation why it is safe.
Tested by: pho
Submitted by: jeff
ffs_extread() when setting the IN_ACCESS flag by checking whether the
IN_ACCESS is already set. The possible race there is admissible.
Tested by: pho
Submitted by: jeff
to enter thread_suspend_check().
- Set TDF_ASTPENDING along with TDF_NEEDSUSPCHK so we can move the
thread_suspend_check() to ast() rather than userret().
- Check TDF_NEEDSUSPCHK in the sleepq_catch_signals() optimization so
that we don't miss a suspend request. If this is set use the
expensive signal path.
- Set NEEDSUSPCHK when creating a new thread in thr in case the
creating thread is due to be suspended as well but has not yet.
Reviewed by: davidxu (Authored original patch)
lock in the 8259A drivers as these drivers are only used on UP systems.
This slightly reduces the penalty of an SMP kernel (such as GENERIC) on
a UP x86 machine.
resource to a CPU. The default method is to pass the request up to the
parent similar to BUS_CONFIG_INTR() so that all busses don't have to
explicitly implement bus_bind_intr. A bus_bind_intr(9) wrapper routine
similar to bus_setup/teardown_intr() is added for device drivers to use.
Unbinding an interrupt is done by binding it to NOCPU. The IRQ resource
must be allocated, but it can happen in any order with respect to
bus_setup_intr(). Currently it is only supported on amd64 and i386 via
nexus(4) methods that simply call the intr_bind() routine.
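For example, a hypothetical driver fragment (names are made up, error handling
trimmed); the IRQ resource must already be allocated, while bus_setup_intr()
may happen before or after the bind:

	#include <sys/param.h>
	#include <sys/bus.h>

	static int
	foo_bind_irq(device_t dev, struct resource *irq_res, int cpu)
	{
		/* Binding to NOCPU would undo a previous binding. */
		return (bus_bind_intr(dev, irq_res, cpu));
	}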
Tested by: gallatin
putting the correct size in the fib header. Presumably the older firmware
silently ignored a bad size field.
(This change tested with a 3805 controller. Passthrough devices were
created when running firmware build 12814, but not 15323 or later. With
this change they're created for both old and new firmware versions.)
Submitted by: Adaptec
FSACTL_LNX_SEND_LARGE_FIB, and FSACTL_LNX_SEND_RAW_SRB, and correct size
checks on FIBs passed in from userspace. Both changes were obtained from
Adaptec's driver build 15317. Adaptec's commandline RAID tool arcconf uses
these ioctls when creating a RAID-10 array (and probably other operations
too).
so the annoying message is not printed.
o Don't warn about FUTEX_FD not being implemented
and return ENOSYS instead of 0 (eg. success).
o Clear FUTEX_PRIVATE_FLAG as we actually implement
only private futexes, so there is no reason to
return ENOSYS when an app asks for a private futex.
We don't reject shared futexes because they worked
just fine with our implementation so far.
Approved by: kib (mentor)
Tested by: bsam
MFC after: 1 week
work on architectures with a write-back cache as the PIO writes end up
in the cache which the sync(BUS_DMASYNC_POSTREAD) in usb_transfer_complete
then discards; compensate in the xfer methods that do PIO by pushing the
writes out of the cache before usb_transfer_complete is called.
This fixes USB on xscale and likely other places.
Sponsored by: hobnob
Reviewed by: cognet, imp
MFC after: 1 month
obtain the reference. In particular, this fixes the panic reported in
the PR. Remove the comments stating that this needs to be done.
PR: kern/119422
MFC after: 1 week
all uses) involve a read but usbd_start_transfer only does a PREWRITE; change
this to BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE as I'm not sure if any
users do write+read.
Reviewed by: cognet, imp
MFC after: 1 month
rqindex back in struct thread.
- Compile kern_switch.c independently again and stop #include'ing it from
schedulers.
- Remove the ts_thread backpointers and convert most code to go from
struct thread to struct td_sched.
- Cleanup the ts_flags #define garbage that was causing us to sometimes
do things that expanded to td->td_sched->ts_thread->td_flags in 4BSD.
- Export the kern.sched sysctl node in sysctl.h
This one-line change makes the following code, found in many ethernet device
drivers (at least em, igb, ixgbe, and cxgb), gratuitous:
	case SIOCSIFADDR:
		if (ifa->ifa_addr->sa_family == AF_INET) {
			/*
			 * XXX
			 * Since resetting hardware takes a very long time
			 * and results in link renegotiation we only
			 * initialize the hardware only when it is absolutely
			 * required.
			 */
			ifp->if_flags |= IFF_UP;
			if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) {
				EM_CORE_LOCK(adapter);
				em_init_locked(adapter);
				EM_CORE_UNLOCK(adapter);
			}
			arp_ifinit(ifp, ifa);
		} else
			error = ether_ioctl(ifp, command, data);
		break;
thread_fini(). The schedulers initialize themselves properly during
sched_fork_thread() anyhow. fini is only called when we're returning
the memory to the allocator which surely doesn't care what state the
memory is in.
is only used by 4bsd.
- Create a new runq_choose_fuzz() function rather than polluting runq_choose()
with 4BSD specific code.
- Move the fuzz sysctl into sched_4bsd.c
- Remove some dead code from kern_switch.c
maxsockets limit, not maxfiles limit. The question remains why those
limits are handled differently (with error code for maxfiles but with
sleep for maxsockets), but those would be addressed in a separate commit
if necessary.
Requested by: rwatson, jeff
before doing the very expensive cursig() and related locking. NEEDSIGCHK
is updated whenever our signal mask changes or when a signal is delivered and
should be sufficient to avoid the more expensive tests. This eliminates
another source of PROC_LOCK contention in multithreaded programs.
- In the last revision the code was changed to use maxfilesperproc rather than
the per-process file limit to restrict the size of the poll array. This
eliminates a significant source of process lock contention in multithreaded
programs and is cheaper. This had been committed with the wrong batch of
changes.
a simple (wmesg, count) tuple in a hash to keep track of how many times
we sleep at each wait message. We hash on message and not channel. No
line number information is given as typically wait messages are not used in
more than one place. Identical strings defined at different addresses will
show up with separate counters.
- Use debug.sleepq.enable to enable, debug.sleepq.reset to reset, and debug.sleepq.stats to dump stats.
- Do an unsynchronized check in sleepq_switch() prior to switching before
calling sleepq_profile() which uses a global lock to synchronize the hash.
Only sleeps which actually cause a context switch are counted.
1.38 in 2001. Break out of the FOREACH_THREAD_IN_PROC loop when we've
discovered a new proc in the chain.
- Increment i and check for maxlockdepth once per matching process not
once per thread. This didn't properly terminate the loop before.
- Fix a bug which has existed potentially since rev 1.1. waitblock->lf_next
can be NULL when a thread has been woken-up but not yet scheduled. Check
for this condition rather than blindly dereferencing.
Found by: libMicro
requiring the per-process spinlock to only requiring the process lock.
- Reflect these changes in the proc.h documentation and consumers throughout
the kernel. This is a substantial reduction in locking cost for these
fields and was made possible by recent changes to threading support.
2's complement.
The 2's complement transform is done so a "count down" sampling interval
can be converted into a "count up" PMC value. A 2's complemented 'count down'
value is written to the PMC counter; then the read-back counter is reverted
via another 2's complement.
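For illustration, a userland sketch of the arithmetic (not hwpmc code):

	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint32_t interval = 1000;		/* desired "count down" interval */
		uint32_t pmc = (uint32_t)-interval;	/* 2's complement written to the PMC */

		pmc += 250;				/* hardware counts 250 events upward */
		printf("remaining: %u\n", (uint32_t)-pmc);	/* reverted: prints 750 */
		return (0);
	}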
PR: kern/121660
Reviewed by: jkoshy
Approved by: jkoshy
MFC after: 1 week
vm/vm_contig.c, vm/vm_page.c, and vm/vm_pageq.c. Today, vm/vm_pageq.c
has withered to the point that it contains only four short functions,
two of which are only used by vm/vm_page.c. Since I can't foresee any
reason for vm/vm_pageq.c to grow, it is time to fold the remaining
contents of vm/vm_pageq.c back into vm/vm_page.c.
Add some comments. Rename one of the functions, vm_pageq_enqueue(),
that is now static within vm/vm_page.c to vm_page_enqueue().
Eliminate PQ_MAXCOUNT as it no longer serves any purpose.
- Always include the ie_disable and ie_eoi methods in 'struct intr_event'
and collapse down to one intr_event_create() routine. The disable and
eoi hooks simply aren't used currently in the !INTR_FILTER case.
- Expand 'disab' to 'disable' in a few places.
- Use function casts for arm and i386:intr_eoi_src() instead of wrapper
routines to trim one extra indirection.
Compiled on: {arm,amd64,i386,ia64,ppc,sparc64} x {FILTER, !FILTER}
Tested on: {amd64,i386} x {FILTER, !FILTER}
the referenced data is only obtained/changed in the device open handler,
and the ioctl handler can only run after the open handler. Also fix a
few nearby style issues.
Submitted by: Matt Jacob
drivers.
In the giant_XXX wrappers for the device methods of the D_NEEDGIANT
drivers, do not dereference the cdev->si_devsw. It is racing with
the destroy_devl() clearing of the si_devsw. Instead, use the
dev_refthread() and return ENXIO for the destroyed device. [1]
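A minimal sketch of that pattern (illustrative; assumes the dev_refthread()
interface of this era, which returns the cdevsw directly):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/errno.h>
	#include <sys/conf.h>

	static int
	giant_ioctl_sketch(struct cdev *dev, u_long cmd, caddr_t data,
	    int fflag, struct thread *td)
	{
		struct cdevsw *csw;
		int error;

		csw = dev_refthread(dev);
		if (csw == NULL)
			return (ENXIO);	/* lost the race with destroy_dev() */
		error = csw->d_ioctl(dev, cmd, data, fflag, td);
		dev_relthread(dev);
		return (error);
	}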
The check for the D_INIT in the prep_cdevsw() was not synchronized with
the call of the fini_cdevsw() in destroy_devl(), which under rapid device
creation/destruction may result in the use of an uninitialized cdevsw [2].
Change the protocol for the prep_cdevsw(), requiring it to be called
under dev_mtx, where the check for D_INIT is done.
Do not free the memory allocated for the gianttrick cdevsw while holding
the dev_mtx, put it into the free list to be freed later. Reuse the
d_gianttrick pointer to keep the size and layout of the struct cdevsw
(requested by phk). Free the memory in the dev_unlock_and_free(), and do
all the free after the dev_mtx is dropped (suggested by jhb).
Reported by: bsdimp + many [1], pho [2]
Reviewed by: phk, jhb
Tested by: pho
MFC after: 1 week
for a configurable number of seconds, spin the disk down. Spin it back
up on the next request.
Notice that the timeout is only armed by a request, so to spin down a
disk you may have to do:
atacontrol spindown ad10 5
dd if=/dev/ad10 of=/dev/null count=1
To disable spindown, set timeout to zero:
atacontrol spindown ad10 0
In order to debug any trouble caused, this code is somewhat noisy on the
console.
Enabling spindown on a disk containing / or /var/log/messages is not
going to do anything sensible.
Spinning a disk up and down all the time will wear it out, so use this feature sensibly.
Approved by: sos
10 microseconds is too short.
Always set the cpu to the highest frequency so that we get through
boot and don't handicap cpus where powerd(8) is not used.
monitor mode. This solves a problem where sometimes mangled frames
were passed.
Submitted by: Werner Backes <werner_at_bit-1.de>
Tested by: Werner Backes <werner_at_bit-1.de>
PR: kern/121608
Approved by: thompsa (mentor)
will have a special section, named .PPC.EMB.apuinfo, which will
tell GDB that a BookE processor is targeted and which will
result in GDB using a different register definition. In order
to support remote GDB for BookE, we need the GDB stub in the
kernel look for that section and use the BookE definitions.
uidinfo structure. This entirely removes contention observed on the
ui_mtxp mutex (as it is now gone).
- Convert the uihashtbl_mtx mutex to a rwlock, as most of the time we just
need to read-lock it.
Reviewed by: jhb, jeff, kris & others
Tested by: kris
this means that it no longer grabs the lagg rwlock. Use two port table arrays
which list the active ports for Tx and switch between them with an atomic op.
Now the lagg rwlock is only exclusively locked for management (ioctls) and
queuing of lacp control frames isn't needed.
a jail, etc. by simply calling setpriority(PRIO_PROCESS, <PID>, 0) and
checking the return value: 0 means that the process exists and -1 that
it doesn't exist.
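A userland sketch of the probe this change closes off (illustrative only; per
the text above, 0 meant the process exists and -1 that it does not):

	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/resource.h>
	#include <stdio.h>
	#include <stdlib.h>

	int
	main(int argc, char **argv)
	{
		pid_t pid;

		if (argc < 2)
			return (1);
		pid = (pid_t)atoi(argv[1]);
		if (setpriority(PRIO_PROCESS, pid, 0) == 0)
			printf("%d exists\n", (int)pid);
		else
			printf("%d not visible\n", (int)pid);
		return (0);
	}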
Reviewed by: rwatson
MFC after: 1 week
Instead of checking each page for PG_UNMANAGED, perform a one-time
check whether the object is OBJT_PHYS. (PG_UNMANAGED pages only
belong to OBJT_PHYS objects.)
with style(9) recommendation that macros not contain the
terminating ';', leaving that to the invoker. All SYSINIT()
consumers must now provide a trailing ';'.
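A hypothetical consumer now looks like this (note the explicit trailing ';'):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/kernel.h>

	static void
	foo_init(void *arg __unused)
	{
		printf("foo: initialized\n");
	}
	SYSINIT(foo_init_si, SI_SUB_KLD, SI_ORDER_ANY, foo_init, NULL);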
Unlike the change to remove the ';'s from callers, this change
shouldn't be MFC'd unless we don't mind requiring source changes
to third party modules that might still depend on SYSINIT()
providing its own ';'.
after each SYSINIT() macro invocation. This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.
MFC after: 1 month
Discussed with: imp, rink
Otherwise the parameter is a no-op, since the zone by default limits the number
of descriptors to some 12K entries. An attempt to allocate more ends up
sleeping on zonelimit.
MFC after: 2 weeks
all. The reference in ia64 code is due to cutNpaste in its history
and can safely be removed.
Reviewed by: cognet, raj, marcel, jhb and maybe one other whom I'm forgetting
- Add a new intr_event method ie_assign_cpu() that is invoked when the MI
code wishes to bind an interrupt source to an individual CPU. The MD
code may reject the binding with an error. If an assign_cpu function
is not provided, then the kernel assumes the platform does not support
binding interrupts to CPUs and fails all requests to do so.
- Bind ithreads to CPUs on their next execution loop once an interrupt
event is bound to a CPU. Only shared ithreads are bound. We currently
leave private ithreads for drivers using filters + ithreads in the
INTR_FILTER case unbound.
- A new intr_event_bind() routine is used to bind an interrupt event to
a CPU.
- Implement binding on amd64 and i386 by way of the existing pic_assign_cpu
PIC method.
- For x86, provide a 'intr_bind(IRQ, cpu)' wrapper routine that looks up
an interrupt source and binds its interrupt event to the specified CPU.
MI code can currently (ab)use this by doing:
intr_bind(rman_get_start(irq_res), cpu);
however, I plan to add a truly MI interface (probably a bus_bind_intr(9))
where the implementation in the x86 nexus(4) driver would end up calling
intr_bind() internally.
Requested by: kmacy, gallatin, jeff
Tested on: {amd64, i386} x {regular, INTR_FILTER}
In that case return and continue processing the packet without IPsec.
PR: 121384
MFC after: 5 days
Reported by: Cyrus Rahman (crahman gmail.com)
Tested by: Cyrus Rahman (crahman gmail.com) [slightly older version]
"Fast IPsec: Initialized Security Association Processing." printf.
People kept asking questions about this after the IPsec shuffle.
This still is the Fast IPsec implementation so no worries that it would
be any slower now. There are no functional changes.
Discussed with: sam
MFC after: 4 days
No need to compile 'dead' code.
I am leaving it in because we will have to review the concept and
should use the common function in various places.
MFC after: 5 days
receivers from being given interrupts if any CPUs in the system were not
tagged as interrupt receivers that I introduced when switching the x86
interrupt code to track CPUs via FreeBSD CPU IDs rather than local APIC
IDs. In practice this only affects systems with Hyperthreading (though
disabling HTT in the BIOS would work around the issue) as that is the only
case currently where one can have CPUs that aren't tagged as interrupt
receivers. On a Dell SC1425 test box with 2 x Xeon w/ HTT (so 4 logical
CPUs of which 2 were interrupt receivers) the result was that all
device interrupts were sent to CPU 0.
MFC after: 1 week
Pointy hat to: jhb
different "platforms" on x86 machines. The existing code already handles
having two platforms: ACPI and legacy. However, the existing approach was
rather hardcoded and difficult to extend. These changes take the approach
that each x86 hardware platform should provide its own nexus(4) driver (it
can inherit most of its behavior from the default legacy nexus(4) driver)
which is responsible for probing for the platform and performing
appropriate platform-specific setup during attach (such as adding a
platform-specific bus device). This does mean changing the x86 platform
busses to no longer use an identify routine for probing, but to move that
logic into their matching nexus(4) driver instead.
- Make the default nexus(4) driver in nexus.c on i386 and amd64 handle the
legacy platform. Its probe routine now returns BUS_PROBE_GENERIC so it
can be overridden.
- Expose a nexus_init_resources() routine which initializes the various
resource managers so that subclassed nexus(4) drivers can invoke it from
their attach routine.
- The legacy nexus(4) driver explicitly adds a legacy0 device in its
attach routine.
- The ACPI driver no longer contains a new-bus identify method. Instead
it exposes a public function (acpi_identify()) which is a probe routine
that the MD nexus(4) drivers can use to probe for ACPI. All of the
probe logic in acpi_probe() is now moved into acpi_identify() and
acpi_probe() is just a stub.
- On i386 and amd64, an ACPI-specific nexus(4) driver checks for ACPI via
acpi_identify() and claims the nexus0 device if the probe succeeds. It
then explicitly adds an acpi0 device in its attach routine.
- The legacy(4) driver no longer knows anything about the acpi0 device.
- On ia64 if acpi_identify() fails you basically end up with no devices.
This matches the previous behavior where the old acpi_identify() would
fail to add an acpi0 device again leaving you with no devices.
Discussed with: imp
Silence on: arch@
callout_* API (e.g. callout_init_mtx(9)). This was one of the numerous
items on the http://wiki.freebsd.org/SMPTODO list.
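A minimal sketch of the callout(9) pattern (hypothetical softc and names):

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/kernel.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/callout.h>

	struct foo_softc {
		struct mtx	sc_mtx;
		struct callout	sc_tick;
	};

	static void
	foo_tick(void *arg)
	{
		struct foo_softc *sc = arg;

		/* sc_mtx is held here because of callout_init_mtx(). */
		callout_reset(&sc->sc_tick, hz, foo_tick, sc);
	}

	static void
	foo_start_ticking(struct foo_softc *sc)
	{
		mtx_init(&sc->sc_mtx, "foo", NULL, MTX_DEF);
		callout_init_mtx(&sc->sc_tick, &sc->sc_mtx, 0);
		mtx_lock(&sc->sc_mtx);
		callout_reset(&sc->sc_tick, hz, foo_tick, sc);
		mtx_unlock(&sc->sc_mtx);
	}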
Reviewed by: imp, obrien, jhb
MFC after: 1 week
virtual 86 mode to query the BIOS directly. This is needed for certain
HP machines whose BIOS only provide an SMAP when invoked from real mode.
On such machines the loader will be able to query the SMAP successfully
due to the recent BTX changes, but the kernel will not.
One thing I'm not sure of is if we can skip the INT 12h probe altogether
if we have the SMAP from the loader as it seems that we do the INT 12h
probe to set up enough state so we can use vm86 to call the BIOS.
MFC after: 1 week
failing to load on a kernel that has "nodevice mem" in the config. It will
now properly bring in the mem(4) module.
Submitted by: antoine
Reviewed by: imp
MFC after: 1 week
ABI and the direction flag; that is, it now assumes that the direction
flag is cleared at the entry of a function and does not clear it once
more itself if needed. This new behaviour conforms to the i386/amd64 ABI.
Modify the signal handler frame setup code to clear the DF {e,r}flags
bit on the amd64/i386 for the signal handlers.
jhb@ noted that it might break old apps if they assumed DF == 1 would be
preserved in the signal handlers, but that such apps should be rare and
that older versions of gcc would not generate such apps.
Submitted by: Aurelien Jarno <aurelien aurel32 net>
PR: 121422
Reviewed by: jhb
MFC after: 2 weeks
- Close a sleepqueue signal race by interlocking with the per-process
spinlock. This was mistakenly omitted from the thread_lock patch and
has been a race since.
MFC after: 1 week
PR: bin/117603
Reported by: Danny Braniss <danny@cs.huji.ac.il>
PhysMask fields based on the number of physical address bits supported
by the current CPU. The old code assumed 36 bits on i386 and 40 bits on
amd64. In truth, all Intel CPUs up until recently used 36 bits (a newer
Intel CPU uses 38 bits) and all the Opteron CPUs used 40 bits.
In at least one case (the new Intel CPU) having the size of the mask field
wrong resulted in writing questionable values into the MTRR registers on
the application processors (BSP as well if you modify the MTRRs via
memcontrol or running X, etc.). The result of the questionable physmask
was that all of memory was apparently treated as uncached rather than
write-back resulting in a very significant performance hit.
Fix this by constructing a run-time mask for the PhysBase and PhysMask
fields based on the number of physical address bits supported by the CPU.
All 64-bit capable CPUs provide a count of PA bits supported via the
0x80000008 extended CPUID feature, so use that if it is available. If that
feature is not available, then assume 36 PA bits.
While I'm here, expand the (now-unused) macros for the PhysBase and
PhysMask fields to the current largest possible value (52 PA bits).
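For illustration, a sketch of the run-time mask construction (pa_bits would
come from the CPUID 0x80000008 leaf, or default to 36):

	#include <stdint.h>

	static uint64_t
	mtrr_valid_physmask(unsigned int pa_bits)
	{
		/* Bits [pa_bits-1:12] are valid; the low 12 bits are reserved. */
		return (((1ULL << pa_bits) - 1) & ~0xfffULL);
	}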
MFC after: 1 week
PR: i386/120516
Reported by: Nokia
hangs (one at boot, one at shutdown) in recent machines. First, only try
to take ownership of the EHCI controller if the BIOS currently owns the
controller. On a HP DL160 G5, the machine hangs when we try to take
ownership. Second, don't bother trying to give up ownership of the
controller during shutdown. It's not strictly required and a Dell DCS S29
hangs on shutdown after the config write.
Both of these changes match the behavior of the Linux EHCI driver. I also
think both of these hangs are caused by bugs in the BIOS' SMM handler
causing it to get stuck in an infinite loop in SMM.
MFC after: 1 week
accept a mouse using the boot subclass. Instead, restore the original
hid_is_collection() test and fallback to testing the interface class,
subclass, and protocol if that fails.
MFC after: 1 week
PR: usb/118670
might be currently programmed into the registers.
Underlying firmware (U-Boot) would typically program MAC address into the
first unit only, and others are left uninitialized. It is now possible to
retrieve and program MAC address for all units properly, provided they were
passed on in the bootinfo metadata.
Reviewed by: imp, marcel
Approved by: cognet (mentor)
We're now more robust against cases of non-sorted and/or non-continuous
numbering of those entries.
Reviewed by: imp, marcel
Approved by: cognet (mentor)
This was introduced as a workaround a long time ago for some Alpha firmware
(which is now gone), and actually prevented net_close() from ever being
called.
Certain firmware (U-Boot) needs local shutdown operations to be performed on a
network controller upon transaction end: such platform-specific hooks are
supposed to be called via netif_close() (from within net_close()).
This change effectively reverts the following CVS commit:
sys/boot/common/dev_net.c
revision 1.7
date: 2000/05/13 15:40:46; author: dfr; state: Exp; lines: +2 -1
Only probe network settings on the first open of the network device.
The alpha firmware takes a seriously long time to open the network device
the first time.
Also suppress excessive output while netbooting via loader, unless debugging.
While there, make sys/boot/uboot more style(9) compliant.
Reviewed by: imp
Approved by: cognet (mentor)
While the KSE project was quite successful in bringing threading to
FreeBSD, the M:N approach taken by the kse library was never developed
to its full potential. Backwards compatibility will be provided via
libmap.conf for dynamically linked binaries and static binaries will
be broken.
sched_sleep(). This removes extra thread_lock() acquisition and
allows the scheduler to decide what to do with the static boost.
- Change the priority arguments to cv_* to match sleepq/msleep/etc.
where 0 means no priority change. Catch -1 in cv_broadcastpri() and
convert it to 0 for now.
- Set a flag when sleeping in a way that is compatible with swapping
since direct priority comparisons are meaningless now.
- Add a sysctl to ule, kern.sched.static_boost, that defaults to on which
controls the boost behavior. Turning it off gives better performance
in some workloads but needs more investigation.
- While we're modifying sleepq, change signal and broadcast to both
return with the lock held as the lock was held on enter.
Reviewed by: jhb, peter
Before this patch the callback returned the result of the last finished call chain.
Now it returns the last nonzero result from all call chain results in this request.
Since this improvement gives reliable error reporting, it is now possible
to remove the dirty workaround in ng_socket, made to return ENOBUFS error statuses
of request-response operations. That workaround was responsible for returning
ENOBUFS errors to completely unrelated requests working at the same time
on the socket.
set a default name. If the IRQ is added as a consequence of
configuring the IRQ without there ever being a handler
assigned to it, we will not have a name. This breaks the
fragile intrcnt/intrnames logic.
state change and reliable error recovery.
o Moved vr_softc structure and relevant macros to header file.
o Use PCIR_BAR macro to get BARs.
o Implemented suspend/resume methods.
o Implemented automatic Tx threshold configuration which will be
activated when it suffers from a Tx underrun. On Tx underrun the driver
will try to restart only the Tx path and resort to the previous
full-reset (both Rx/Tx) operation if restarting the Tx path has failed.
o Removed old bit-banging MII interface. Rhine provides simple and
efficient MII interface. While I'm here, show the PHY address and PHY
register number when a read/write operation fails.
o Define VR_MII_TIMEOUT constant and use it in MII access routines.
o Always honor link up/down state reported by the mii layer. The link
state information is used in vr_start() to determine whether we
got a valid link.
o Removed vr_setcfg() which is now handled in vr_link_task(), link
state taskqueue handler. When mii layer reports link state changes
the taskqueue handler reprograms MAC to reflect negotiated duplex
settings. Flow-control changes are not handled yet and it should
be revisited when mii layer knows the notion of flow-control.
o Added a new sysctl interface to get statistics of an instance of
the driver. (sysctl dev.vr.0.stats=1)
o Chip name was renamed to reflect the official name of the chips
described in VIA Rhine I/II/III datasheet.
REV_ID_3065_A -> REV_ID_VT6102_A
REV_ID_3065_B -> REV_ID_VT6102_B
REV_ID_3065_C -> REV_ID_VT6102_C
REV_ID_3106_J -> REV_ID_VT6105_A0
REV_ID_3106_S -> REV_ID_VT6105M_A0
The following chip revisions were added.
#define REV_ID_VT6105_B0 0x83
#define REV_ID_VT6105_LOM 0x8A
#define REV_ID_VT6107_A0 0x8C
#define REV_ID_VT6107_A1 0x8D
#define REV_ID_VT6105M_B1 0x94
o Always show the chip revision number in device attach. This should help
identify revision-specific issues.
o Check whether EEPROM reloading is complete by inspecting the state
of VR_EECSR_LOAD bit. This bit is self-cleared after the EEPROM
reloading. Previously vr(4) blindly spun for 200us which may or may
not be enough to complete the EEPROM reload.
o Removed if_mtu setup. It's done in ether_ifattach().
o Use our own callout to drive watchdog timer.
o In vr_attach disable further interrupts after reset. For VT6102 or
newer hardware, disable the MII state change interrupt as well because
mii state handling is done by the mii layer.
o Add more sane register initialization for VT6102 or newer chips.
- Have NIC report error instead of retrying forever.
- Let hardware detect MII coding error.
- Enable MODE10T mode.
- Enable memory-read-multiple for VT6107.
o PHY address for VT6105 or newer chips is located at fixed address 1.
For older chips the PHY address is stored in VR_PHYADDR register.
Armed with this information, there is no need to re-read the
VR_PHYADDR register in the miibus handler to get the PHY address. This
saves one register access cycle for each MII access.
o Don't reprogram VR_PHYADDR register whenever access to a register
located at a PHY address is made. The Rhine family allows reprogramming
the PHY address location via the VR_PHYADDR register depending on the
VR_MIISTAT_PHYOPT bit of the VR_MIISTAT register. This used to lead to
numerous phantom PHYs attached to miibus during the phy probe phase, and
the driver used to limit the allowable PHY address in the mii register accessors
for certain chip revisions. This removes one more register access
cycle for each MII access.
o Correctly set VLAN header length.
o bus_dma(9) conversion.
- Limit DMA access to be in range of 32bit address space. Hardware
doesn't support DAC.
- Apply descriptor ring alignment requirements (16 byte alignment)
- Apply Rx buffer address alignment requirements (4 byte alignment)
- Apply Tx buffer address alignment requirements (4 byte alignment)
for the Rhine I chip. Rhine II or III has no Tx buffer address
alignment restrictions, though.
- Reduce the number of allowable DMA segments to 8.
- Removed the atomic(9) used in descriptor ownership management
as that is the job of bus_dmamap_sync(9).
With these changes vr(4) should work on all platforms.
o Rhine uses two separated 8bits command registers to control Tx/Rx
MAC. So don't access it as a single 16bit register.
o For non-strict alignment architectures vr(4) no longer requires a
time-consuming copy operation for received frames to align the IP
header. This greatly improves Rx performance on i386/amd64
platforms. However the alignment is still necessary for
strict-alignment platforms (e.g. sparc64). The alignment is handled
in the new function vr_fixup_rx().
o vr_rxeof() now rejects multiple-segmented (fragmented) frames as
vr(4) is not ready to handle this situation. The datasheet says nothing
about when/why this happens.
o In vr_newbuf() don't set VR_RXSTAT_FIRSTFRAG/VR_RXSTAT_LASTFRAG
bits as it's set by hardware.
o Don't pass checksum offload information to the upper layer for
fragmented frames. The hardware-assisted checksum is valid only
when the frame is a non-fragmented IP frame. Also mark the checksum
as valid for corrupted frames so that upper layers don't need
to recompute the checksum in software.
o Removed vr_rxeoc(). RxDMA doesn't seem to need to be idle before
sending VR_CMD_RX_GO command. Previously it used to stop RxDMA
first which in turn resulted in long delays in Rx error recovery.
o Rewrote Tx completion handler.
- Always check VR_TXSTAT_OWN bit in status word prior to
inspecting other status bits in the status word.
- Collision counter updates were corrected as VT3071 or newer
ones use different bits to notify collisions.
- Unlike other chip revisions, VT86C100A uses a different bit to
indicate Tx underrun. For VT3071 or newer ones, check both the
VR_TXSTAT_TBUFF and VR_TXSTAT_UDF bits to see whether a Tx
underrun happened. In case of Tx underrun requeue the failed
frame and restart stalled Tx SM. Also double Tx DMA threshold
size on each failure to mitigate future Tx underruns.
- Disarm watchdog timer only if we have no queued packets,
otherwise don't touch watchdog timer.
o Rewrote interrupt handler.
- status word in Tx/Rx descriptors indicates more detailed error
state required to recover from the specific error. There is no
need to rely on interrupt status word to recover from Tx/Rx
error except PCI bus error. Other event notifications like
statistics counter overflows or link state events will be
handled in main interrupt handler.
- Don't touch VR_IMR register if we are in suspend mode. Touching
the register may hang the hardware if we are in suspended state.
Previously, it seems, touching the VR_IMR register in the interrupt
handler was a work-around for a panic that occurred during the system
shutdown stage on SMP systems. I think that work-around would hide the
root cause of the panic and I couldn't reproduce the panic
with multiple attempts on my box.
o While padding space to meet minimum frame size, zero the pad data
in order to avoid possibly leaking sensitive data.
o Rewrote vr_start_locked().
- Don't try to queue packets if the number of available Tx descriptors
is smaller than the number required.
o Don't reinitialize hardware whenever media configuration is
changed. Media/link state changes are reported from mii layer if
this happens and vr_link_task() will perform necessary changes.
o Don't reinitialize hardware if only the PROMISC bit was changed.
Toggling the PROMISC bit in hardware is sufficient to reflect the
request.
o Rearranged the IFCAP_POLLING/IFCAP_HWCSUM handling in vr_ioctl().
o Generate Tx completion interrupts for every VR_TX_INTR_THRESH-th
frame. This reduces Tx completion interrupts under heavy network
loads.
o Since vr(4) doesn't request Tx interrupts for every queued frame,
reclaim any pending descriptors not handled in Tx completion
handler before actually firing up watchdog timeouts.
o Added vr_tx_stop()/vr_rx_stop() to wait for the end of active
TxDMA/RxDMA cycles (draining). These routines are used in vr_stop()
to ensure sane state of MAC before releasing allocated Tx/Rx
buffers. vr_link_task() also takes advantage of these functions to
get to idle state prior to restarting Tx/Rx.
o Added vr_tx_start()/vr_rx_start() to restart Rx/Tx. By separating
Rx operation from Tx operation vr(4) no longer needs to full-reset
the hardware in case of Tx/Rx error recovery.
o Implemented WOL.
o Added VT6105M specific register definitions. VT6105M has the
following hardware capabilities.
- Tx/Rx IP/TCP/UDP checksum offload.
- VLAN hardware tag insertion/extraction. Due to lack of information
on getting the extracted VLAN tag in the Rx path, VLAN hardware support
has not been implemented yet.
- CAM (Content Addressable Memory) based 32 entry perfect multicast/
VLAN filtering.
- 8 priority queues.
o Implemented CAM based 32 entry perfect multicast filtering for
VT6105M. If the number of multicast entries is greater than 32, vr(4)
uses traditional hash-based filtering.
o Reflect real Tx/Rx descriptor structure. Previously vr(4) used to
embed other driver (private) data into these structures. This type
of embedding makes it hard to work on LP64 systems.
o Removed unused vr_mii_frame structure and MII bit-banging
definitions.
o Added new PCI configuration registers that controls mii operation
and mode selection.
o Reduced the number of Tx/Rx descriptors to 128 from 256. From my
testing, increasing the number of descriptors above 64 didn't help
performance at all. Experimentation shows 128 Rx
descriptors seem to help a lot in reducing Rx FIFO overruns under
high system load. It seems the poor Tx performance of Rhine
hardware comes from a limitation of the hardware. You wouldn't
saturate the link with vr(4) no matter how fast a CPU or how large a
number of descriptors is used.
o Added vr_statistics structure to hold various counter values.
No regression was reported but one variant of Rhine III (VT6105M)
found on RouterBOARD 44 does not work yet (reported by Milan Obuch).
I hope this will be resolved in the near future.
I'd like to say big thanks to Mike Tancsa who kindly donated Rhine
hardware to me. Without his enthusiastic testing and feedback,
overhauling vr(4) would never have been possible. Also thanks to Masayuki
Murayama who provided some good comments on the hardware's internals.
This driver is the result of the combined effort of many users who provided
a lot of feedback, so I'd like to say special thanks to them.
Hardware donated by: Mike Tancsa (mike AT sentex dot net)
Reviewed by: remko (initial version)
Tested by: Mike Tancsa(x86), JoaoBR ( joao AT matik DOT com DOT br )
Marcin Wisnicki ( mwisnicki+freebsd AT gmail DOT com )
Stefan Ehmann ( shoesoft AT gmx DOT net )
Florian Smeets ( flo AT kasimir DOT com )
Phil Oleson ( oz AT nixil DOT net )
Larry Baird ( lab AT gta DOT com )
Milan Obuch ( freebsd-current AT dino DOT sk )
remko (initial version)
tdq_runq_add to select the runq rather than hoping we set it properly
when we adjusted the priority. This involves the same number of
branches as before so should perform identically without the extra
fragility.
Tested by: bz
Reviewed by: bz
the cpufreq drivers to reliably use properties of PCI devices for quirks,
etc.
- For the legacy drivers, add CPU devices via an identify routine in the
CPU driver itself rather than in the legacy driver's attach routine.
- Add CPU devices after Host-PCI bridges in the acpi bus driver.
- Change the ichss(4) driver to use pci_find_bsf() to locate the ICH and
check its device ID rather than having a bogus PCI attachment that only
checked for the ID in probe and always failed. As a side effect, you
can now kldload ichss after boot.
- Fix the ichss(4) driver to use the correct device_t for the ICH (and not
for ichss0) when doing PCI config space operations to enable SpeedStep.
MFC after: 2 weeks
Reviewed by: njl, Andriy Gapon avg of icyb.net.ua
present in cpu_feature2. Also, use CPUID2_EST rather than a magic
number.
- Don't free the ACPI settings list in detach if we are going to fail the
request. Otherwise an attempt to kldunload est would free the array
but the driver would keep trying to use it.
MFC after: 1 week
routines (V86 requests from the client and hardware interrupt handlers):
- Install trampoline real mode interrupt handlers at IDT vectors 0x20-0x2f
to handle hardware interrupts by invoking the appropriate vector (0x8-0xf
or 0x70-0x78). This allows the 8259As to use vectors 0x20-0x2f in real
mode as well as protected mode while ensuring that the master 8259A
doesn't share IDT space with CPU exceptions in protected mode.
- Since we no longer need to reserve space for page tables and a page
directory after dropping paging support, move the TSS and protected mode
IDT up by 16k. Grow the ring 1 link stack by 16k as a result.
- Repurpose the ring 1 link stack to be used as a real mode stack when
invoking real mode routines either via a V86 request or a hardware
interrupts. This simplifies a few things as we avoid disturbing the
original user stack.
- Add some more block comments to explain how the code interacts with the
V86 structure as this wasn't immediately obvious from the prior comments
(e.g. that we explicitly copy the seg regs for real mode out of the V86
struct onto the stack to be popped off when going into real mode, etc.).
Also, document some of the stack frames we create going to real mode and
back.
- Remove all of the virtual 86 related code including having to simulate
various instructions and BIOS calls on a trap from virtual 86 mode.
- Explicitly panic if a user client attempts to perform a V86 CALL
request that isn't a far call.
- Bump version to 1.2.
Assuming this works ok this should fix some of the long standing issues
with USB booting as well as etherboot.
MFC after: 2 weeks
Submitted by: kib (some parts from his original real mode patch)
- Only calculate timeshare priorities once per tick or when a thread is woken
from sleeping.
- Keep the ts_runq pointer valid after all priority changes.
- Call tdq_runq_add() directly from sched_switch() without passing in via
tdq_add(). We don't need to adjust loads or runqs anymore.
- Sort tdq and ts_sched according to utilization to improve cache behavior.
Sponsored by: Nokia
- Normalize the preemption/ipi setting code by introducing sched_shouldpreempt()
so the logic is identical and not repeated between tdq_notify() and
sched_setpreempt().
- In tdq_notify() don't set NEEDRESCHED as we may not actually own the thread
lock; this could have caused us to lose td_flags settings.
- Garbage collect some tunables that are no longer relevant.
the NOPs used are 0x01.
While we could simply pad with EOLs (which are 0x00), rather use an
explicit 0x00 constant there to not confuse people with 'EOL padding'.
Put in a comment saying just that.
Problem discussed on: src-committers with andre, silby, dwhite as
follow up to the rev. 1.161 commit of tcp_var.h.
MFC after: 11 days
the appropriate bit in the DEVACTB register.
This change allows the C2 state on those systems to work as expected.
Reviewed by: njl
Submitted by: Andriy Gapon <avg at icyb.net.ua>
MFC after: 1 week
Specifically, since the delete-behind heuristic is never applied to a
device-backed object, there is no point in checking whether each of the
object's pages is fictitious. (Only device-backed objects have
fictitious pages.)
know if it has siblings that need an actual probe. Introduce a special
return value called BUS_PROBE_NOWILDCARD. If the driver returns
this, the probe is only successful for devices that have had a
specific devclass set for them.
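A hypothetical probe routine using the new return value:

	#include <sys/param.h>
	#include <sys/bus.h>

	static int
	foo_probe(device_t dev)
	{
		device_set_desc(dev, "foo pseudo-device");
		/*
		 * Succeed only when the device was explicitly wired (hinted)
		 * to this driver's devclass; never match wildcard devices.
		 */
		return (BUS_PROBE_NOWILDCARD);
	}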
Reviewed by: current@, jhb@, grehan@
in*() and out*() primitives should not be used, other than by
ISA drivers. In this case they were used for memory-mapped I/O
and were not even used in the spirit of the primitives.
if netgraph reported error while delivering to destination.
Reset the 'next send' counter to the last one requested by the peer on ack
timeout, to resend all subsequent packets after the lost one again without
additional hints.
Solaris and AIX.
fcntl(fd, F_DUP2FD, arg) and dup2(fd, arg) are functionally equivalent.
Document it.
Add some regression tests (identical to the dup2(2) regression tests).
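A tiny illustration (fd is assumed valid; both calls end up with descriptor 10):

	#include <fcntl.h>
	#include <unistd.h>

	int
	dup_to_ten(int fd)
	{
		int newfd;

		newfd = fcntl(fd, F_DUP2FD, 10);	/* Solaris/AIX style */
		if (newfd == -1)
			return (-1);
		return (dup2(fd, 10));			/* equivalent call */
	}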
PR: 120233
Submitted by: Jukka Ukkonen
Approved by: rwatson (mentor)
MFC after: 1 month
HPT drivers would sometimes test the value of a preprocessor definition but
not always make sure that the definition existed in the first place, leading
to warnings on newer compilers. I blindly assumed the same with this driver,
and it turned out to be wrong and to enable some code that doesn't work.
process lock leading to a hang. This bug was introduced in
kern_sig.c:1.351, when the call to expand_name() was moved earlier
but this particular error case was not updated.
It so happens that U-Boot disables the D-cache when booting
an ELF image, so this change makes sure we run with the
D-cache enabled from now on. It shows too...
While here, remove the duplicate definition of the hw.model
sysctl.
variable is set. On my Mac Mini this puts the CPU in NAP mode when
the kernel is idle and, any technical or environmental reasons
aside, avoids that I have to listen to the fan all day :-)
trashing and improve performance.
Remove the waitflag argument from ng_ksocket_incoming2(); it means nothing
as the function call was queued by netgraph.
Remove the node validity check, as node validity is guaranteed by netgraph.
Update comments.