making the relevant files standard. This avoids duplication and
makes it easier to override/disable unwanted schemes. Since ARM
doesn't have a DEFAULTS configuration file, leave the source
files for the BSD and MBR partitioning schemes in files.arm for
now.
- Add codec id for AD1988B, along with fixing its line-in and other
issues (with proper quirks). [2]
Submitted by: [1] barbara.xxx1975@libero.it
[2] Oliver Brandmueller ob@e-Gitt.NET
MFC after: 3 days
In particular:
- Add an explanatory table for locking of struct vmmeter members
- Apply new rules for some of those members
- Remove some useless comments
Heavily reviewed by: alc, bde, jeff
Approved by: jeff (mentor)
be called with an incorrect segment end value. tcp_reass() may
trim segments when they overlap with already existing ones in the
reassembly queue. Instead of saving the segment end value before
the call to tcp_reass(), compute it on the fly based on the effective
segment length afterwards.
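A minimal sketch of the idea (variable names assumed, not the actual
diff): save only the segment start, and derive the end from the
possibly trimmed length once tcp_reass() has returned.

    save_start = th->th_seq;
    thflags = tcp_reass(tp, th, &tlen, m);
    tp->t_flags |= TF_ACKNOW;
    /* tlen now reflects the effective (possibly trimmed) length. */
    if (tlen > 0 && (tp->t_flags & TF_SACK_PERMIT))
            tcp_update_sack_list(tp, save_start, save_start + tlen);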
This bug was not really problematic as no information got lost and
the eventual SACK information computation was correct nonetheless.
MFC after: 1 week
instead of an authentication function. There is a design reason
and a practical reason for that. First, the module belongs in
account management because it checks the availability of the account
and does no authentication. Second, there are existing and potential
PAM consumers that skip PAM authentication for good or bad reasons.
E.g., sshd(8) just prefers internal routines for public key auth;
OTOH, cron(8) and atrun(8) do implicit authentication when running
a job on behalf of its owner, so their inability to use PAM auth
is fundamental, but they can benefit from PAM account management.
Document this change in the manpage.
Modify /etc/pam.d files accordingly, so that pam_nologin.so is listed
under the "account" function class.
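For instance, the affected entry ends up looking roughly like this in
each pam.d file (control flag and spacing vary per file):

    account		required	pam_nologin.so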
Bump __FreeBSD_version (mostly for ports, as this change should be
invisible to C code outside pam_nologin).
PR: bin/112574
Approved by: des, re
is really a memory mapped I/O address. The bug is in the GAS that
describes the address and in particular the SpaceId field. The field
should not say the address is an I/O port when it clearly is not.
Given an additional check for the IA64_BUS_SPACE_IO case in the bus
access functions, and the fact that I/O ports are pretty much unused
on ia64 in general, make the calculation of the I/O port address a
function. This avoids inlining the work-around into every driver,
and also helps reduce overall code bloat.
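A hypothetical sketch of the kind of helper meant here (the function
name is assumed), following the architected ia64 sparse I/O port
encoding:

    /*
     * Translate a legacy I/O port number into the memory-mapped
     * address it corresponds to, so drivers need not inline this.
     */
    static __inline vm_offset_t
    ia64_port_to_address(u_int port)
    {
            return (ia64_port_base | ((port >> 2) << 12) | (port & 0xfff));
    }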
builds had been succeeding if run serially but could fail if run in
parallel because the bge module build might start before ofw_bus_if.h
got created as part of the mainline kernel build.
Diagnosis and patch by: ru
to the build.
This allocator uses a binary buddy system with a twist. First and
foremost, this allocator is required to support the implementation of
superpages. As a side effect, it enables a more robust implementation
of contigmalloc(9). Moreover, this reimplementation of
contigmalloc(9) eliminates the acquisition of Giant by
contigmalloc(..., M_NOWAIT, ...).
The twist is that this allocator tries to reduce the number of TLB
misses incurred by accesses through a direct map to small, UMA-managed
objects and page table pages. Roughly speaking, the physical pages
that are allocated for such purposes are clustered together in the
physical address space. The performance benefits vary. In the most
extreme case, a uniprocessor kernel running on an Opteron, I measured
an 18% reduction in system time during a buildworld.
This allocator does not implement page coloring. The reason is that
superpages have much the same effect. The contiguous physical memory
allocation necessary for a superpage is inherently colored.
Finally, the one caveat is that this allocator does not effectively
support prezeroed pages. I hope this is temporary. On i386, this is
a slight pessimization. However, on amd64, the beneficial effects of
the direct-map optimization outweigh the ill effects. I speculate
that this is true in general of machines with a direct map.
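As a rough illustration of the buddy mechanism (not the committed
code): for a block spanning 2^order pages, the buddy is found by
flipping the corresponding address bit, which is what lets free
blocks be coalesced into larger, naturally aligned ones.

    /* Illustration only: buddy of a block starting at "pa" that
     * spans (1 << order) pages. */
    static __inline vm_paddr_t
    vm_buddy_of(vm_paddr_t pa, int order)
    {
            return (pa ^ ((vm_paddr_t)PAGE_SIZE << order));
    }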
Approved by: re
caches with data caches after writing to memory. This typically
is required to make breakpoints work on ia64 and powerpc. For
those architectures the function is implemented.
This patch fixes places where they should be called atomically by
changing their locking requirements (both now assume the per-process
spinlock is held) and introducing rufetchcalc(), which wraps both
calls so that they are performed atomically.
Reviewed by: jeff
Approved by: jeff (mentor)
in tcp_output(). This is currently not strictly necessary but paves
the way to simplify the entire SYN options handling quite a bit.
Clarify comment. No change in effective behaviour with this commit.
RFC1323 requires the window field in a SYN (i.e., a <SYN> or
<SYN,ACK>) segment itself never be scaled.
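Sketched in simplified form (not the exact diff; local names assumed),
the rule is that the window advertised in a SYN segment goes out
unscaled, and the receive window scale is applied only once the
connection is established:

    if (flags & TH_SYN)
            /* RFC1323: the window in a <SYN>/<SYN,ACK> is never scaled. */
            th->th_win = htons((u_short)min(recwin, TCP_MAXWIN));
    else
            th->th_win = htons((u_short)(recwin >> tp->rcv_scale));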
and simplify handling of the send/receive window scaling. No
change in effective behaviour.
RFC1323 requires the window field in a SYN (i.e., a <SYN> or
<SYN,ACK>) segment itself never be scaled.
Noticed by: yar
- Unsafe use of ruadd() in thread_exit()
- Non-atomicity of thread_exit() in the exit1() operations
This patch addresses these problems by allocating p_fd as part of the
process and modifying the way it is accessed.
A small chunk of this patch resolves a race on p_state in kern_wait(),
since we have to be sure about the process being zombified.
Submitted by: jeff
Approved by: jeff (mentor)
a timer issues a shutdown and a simultaneous close on the socket
happens. This race condition is inherent in the current socket/
inpcb life cycle system but can be handled well.
Reported by: kris
Tested by: kris (on 8-core machine)
- Reorder send-failed notifications to be in the correct order.
- Fix the calculation of the init-ack to be done right off the
mbuf lengths instead of the precalculated value. This
will fix one 64-bit platform issue.
error doing so. It seems an increasing number of phones have this
quirk, and we're not keeping up. There appears to be nothing bad that
happens for non-quirked phones.
Minor cleanups:
o prefer device_printf over printf
o kill devinfo stuff
o minor other preening.
need to do it at all anymore. Remove it from here. Expand
USB_ATTACH_SETUP inline now that it is one line and we're moving away
from the compat macros. Remove some bzero calls that turn out not to
be necessary.
what we print, don't print it anymore. And don't compute it anymore.
And don't malloc/free memory for it anymore. While I'm here, prefer
device_printf where appropriate.
the value of ph_nhooks to zero, not the address. This removes
extraneous calls to pfil_run_hooks (and an rw lock) from the
network stack's critical path when no pfil hooks are active.
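A minimal sketch of the fast-path test meant here (surrounding code
assumed): callers look at the hook count itself and skip the pfil
machinery, including its rwlock, when it is zero.

    if (inet_pfil_hook.ph_nhooks > 0) {
            if (pfil_run_hooks(&inet_pfil_hook, &m, ifp, PFIL_IN, NULL) != 0)
                    return;
            if (m == NULL)
                    return;         /* consumed by a hook */
    }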
Reviewed by: csjp
Sponsored by: Myricom Inc.
implementing some of them using existing ones.
- Allow ZFS to be compiled on all archs; use atomic operations
protected by a global mutex on archs where we don't have, or can't
have, all the atomic operations ZFS needs.
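A minimal sketch of that fallback, with assumed names (not necessarily
the committed compat code): on an arch lacking a needed native 64-bit
atomic, the operation is emulated under one global mutex.

    static struct mtx atomic_mtx;       /* assumed global fallback mutex */
    MTX_SYSINIT(zfs_atomic, &atomic_mtx, "zfs atomic fallback", MTX_DEF);

    uint64_t
    atomic_add_64_nv(volatile uint64_t *target, int64_t delta)
    {
            uint64_t newval;

            mtx_lock(&atomic_mtx);
            newval = (*target += delta);
            mtx_unlock(&atomic_mtx);
            return (newval);
    }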
algorithm would not go through the proper initialization.
- The initialization was incorrect as well, causing problems in
satellite networks with > 1 sec RTT
- Get rid of magic numbers in RTT calculations.
A change to dconschat(8) will follow so that it can bomb
this address over FireWire to reset a wedged system.
Though this method is just a hack and far from perfect,
it should be useful if you don't want to go to the machine room
just to reset or power-cycle a machine that has no
remotely managed power supply. It is also much better than doing:
# fwcontrol -m target-eui64
# dd if=/dev/zero of=/dev/fwmem0.2 bs=1m
Now, it's safe to call the fwohci interrupt (polling) routine while
ddb/gdb is active. After this change, a dcons connection over FireWire
can survive bus resets even in the kernel debugger.
This means that it is not too late to plug a FireWire cable after a panic
to investigate the problem.
Actually there is a small window (between the jump to the kernel from
the loader and the initialization of dcons_crom) in which no one can
take care of a bus reset. Except for that window, the FireWire console
should keep working from the loader to reboot, even across a panic and
a bus reset (as long as LOADER_FIREWIRE_SUPPORT is enabled).
embedded storage in struct ucred. This allows audit state to be cached
with the thread, avoiding locking operations with each system call, and
makes it available in asynchronous execution contexts, such as deep in
the network stack or VFS.
Reviewed by: csjp
Approved by: re (kensmith)
Obtained from: TrustedBSD Project
cache size limit but this bucket row is empty. Normally we want to
recycle the oldest entry in the bucket row. If there isn't any, the
TAILQ_REMOVE leads to a panic by trying to remove a non-existent
element. Fix this by just returning NULL and failing the insert.
This is not a problem as the TCP hostcache is only advisory.
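A minimal sketch of the fix (field and type names assumed): bail out
of the insert when the bucket row has nothing left to recycle instead
of letting TAILQ_REMOVE run on an empty queue.

    if (hostcache.cache_count >= hostcache.cache_limit) {
            /* Recycle the oldest entry in this bucket row, if any. */
            hc_entry = TAILQ_LAST(&hc_head->hch_bucket, hc_qhead);
            if (hc_entry == NULL)
                    return (NULL);  /* advisory cache; just fail the insert */
            TAILQ_REMOVE(&hc_head->hch_bucket, hc_entry, rmx_q);
    }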
Submitted by: jhb
grab sched_lock. This would serialize calls to pmap_switch from
cpu_switch(). With the introduction of thread_lock, this is not
possible anymore, because thread_lock is not a single lock. It
varies. Secondly and most importantly, it's not needed at all. The
only requirement for pmap_switch() is that it's not preempted
while in the middle of updating the CPU and PCPU. In other words,
it's a critical region. No locking required.
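A minimal sketch of the requirement (surrounding context assumed): a
caller of pmap_switch() only needs a critical section around it, not
a lock.

    critical_enter();
    prevpm = pmap_switch(newpm);    /* updates CPU and PCPU state */
    critical_exit();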
This is enabled by default. It should be disabled for
those who are uneasy with peeking/poking from FireWire.
Please note sbp(4) and dcons(4) over FireWire need
this feature.
- Moved BCM5706S/5708S SerDes support to brgphy (since they are not technically
TBI interfaces)
- Added 2.5G support for BCM5708S
Comments:
Since this driver is shared with bge, I tested several available
controllers supported by bge and all worked as expected; however,
the list was not exhaustive. Wider testing is needed.
MFC after: 4 weeks
In the Rx path it allocates a new mbuf with m_getcl(9), so the length
of the mbuf is MCLBYTES, which is greater than the segment size
specified by the DMA tag. This segment size mismatch caused a
voluntary panic. Fix the panic by setting the mbuf length to
TULIP_DATA_PER_DESC.
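A minimal sketch of the fix (surrounding code assumed): after the
cluster is allocated, trim the mbuf down to the DMA tag's segment
size before loading it.

    m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
    if (m == NULL)
            return (ENOBUFS);
    /* MCLBYTES exceeds the DMA segment size; use the descriptor size. */
    m->m_len = m->m_pkthdr.len = TULIP_DATA_PER_DESC;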
Reported by: Arne H Juul <arnej AT yahoo-inc DOT com>
Tested by: Arne H Juul <arnej AT yahoo-inc DOT com>
- lock its own locks and drop Giant.
- create its own taskqueue thread.
- split interrupt routine
- use interrupt filter as a fast interrupt.
- run watchdog timer in taskqueue so that it is
serialized with the bottom half.
- add extra sanity check for transaction labels.
disable ad-hoc workaround for unknown tlabels.
- add sleep/wakeup synchronization primitives
- don't reset OHCI in fwohci_stop()
when copying out association data.
- Fixes a small bug that prevented the SCTP_UNORDERED indication
from going up to the app on a recv in the sinfo_flags field.
seems not to be enough to verify its consistency.
- Define AC97_MIXER_SIZE as SOUND_MIXER_NRDEVICES (25), since we
don't need more than that. Stop making wild and random guesses about
its size since we're strictly bound to it.
application specific SEND_OP_COND (CMD55 + ACMD41), go ahead and allow
100 tries. This gives a timeout of a second rather than the ~100ms
the old style produces.
I've had one old 16MB SD card which needs the extra time. I've now
had reports from the field that other cards need this too.
Originally done at BSDCan 2007 while waiting to give my embedding
madness minitalk.
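Roughly, the retry loop becomes something like this sketch (the
helper name and per-try delay are assumed), giving on the order of a
second before giving up:

    /* Try the application-specific SEND_OP_COND up to 100 times. */
    for (i = 0; i < 100; i++) {
            err = mmc_send_app_op_cond(sc, ocr, &rocr);
            if (err != MMC_ERR_NONE || (rocr & MMC_OCR_CARD_BUSY) != 0)
                    break;
            DELAY(10000);   /* ~10 ms per try, ~1 s total */
    }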
Add the machine-specific definitions for configuring the new physical
memory allocator.
Set the size of phys_avail[] and dump_avail[] using one of these
definitions.
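A hypothetical example of the sort of machine-dependent definitions
meant here (names and values are illustrative, not the committed
ones):

    #define VM_PHYSSEG_MAX          16      /* max physical memory segments */
    #define PHYS_AVAIL_COUNT        (VM_PHYSSEG_MAX * 2 + 2)

    vm_paddr_t phys_avail[PHYS_AVAIL_COUNT];
    vm_paddr_t dump_avail[PHYS_AVAIL_COUNT];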
which support a 2.5Gbps mode over fiber using next page extensions during
autonegotiation. Typically only found in blade systems which also include
a Broadcom 2.5Gbps capable switch.
MFC after: 2 weeks
oldthread should point at before we return.
- When cpu_switch() is called the td_lock pointer in the old thread may
point at the blocked lock. This prevents other processors from
switching into this thread while we're still switching out. Wait
until we're done deactivating the vmspace before we release the
thread by assigning to td_lock.
- Before we can activate the new vmspace we must make sure that the new
thread is not assigned to the blocked lock. It may be in the process
of switching out on another cpu. Spin until the new thread is
available, as sketched below.
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
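The blocked-lock handoff described above looks roughly like this
sketch (simplified C, not the per-arch assembly):

    /*
     * The incoming thread's td_lock may still point at blocked_lock
     * while another CPU finishes switching it out; spin until that
     * clears before activating its vmspace.
     */
    while (newtd->td_lock == &blocked_lock)
            cpu_spinwait();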
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Add a new parameter to cpu_switch() that is used to release the lock on
the outgoing thread and properly acquire the lock on the incoming
thread. This parameter is not required for schedulers that don't do
per-cpu locking and architectures which do not support it may continue
to use the 4BSD scheduler. This feature is presently not supported
on ia64.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- There is no globally visible scheduler lock any longer. For now the
watchdog can only check Giant. This model of checking particular locks
is flawed and should be revisited. Other metrics should be considered.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Use sched_throw() rather than replicating the same cpu_throw() code for
each architecture. This also allows the scheduler to use any locking it
may want to.
- Use the thread_lock() rather than sched_lock when preempting.
- The scheduler lock is not required to synchronize release_aps.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Add new spinlocks to support thread_lock() and adjust ordering.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Attempt to return the ttyinfo() selection algorithm to something sane
as it has been broken and disabled for some time. Adapt this algorithm
in such a way that it does not conflict with per-cpu scheduler locking.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Use a global umtx spinlock to protect the sleep queues now that there
is no global scheduler lock.
- Use thread_lock() to protect thread state.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Use a global kse spinlock to protect upcall and thread assignment. The
per-process spinlock can not be used because this lock must be acquired
via mi_switch() where we already hold a thread lock. The kse spinlock
is a leaf lock ordered after the process and thread spinlocks.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Replace the tail-end of fork_exit() with a scheduler specific routine
which can do the appropriate lock manipulations.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Protect the cp_time tick counts with atomics instead of a global lock.
There will only be one atomic per tick and this allows all processors
to execute softclock concurrently (see the sketch after this list).
- In softclock, protect access to rusage and td_*tick data with the
thread_lock(), expanding the scope of the thread lock over the whole
function.
- Do some creative re-arranging in hardclock() to avoid excess locking.
- Protect the p_timer fields with the per-process spinlock.
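For the cp_time change, the per-tick update becomes a single atomic
operation, roughly like this sketch (the exact index depends on the
usual user/nice/sys/intr/idle accounting):

    /* One atomic per tick; no global lock needed around the counters. */
    atomic_add_long((volatile u_long *)&cp_time[CP_USER], 1);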
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
- Use thread_lock() rather than sched_lock for per-thread scheduling
synchronization.
- Use the per-process spinlock rather than the sched_lock for per-process
scheduling synchronization.
- Move some common code into thread_suspend_switch() to handle the
mechanics of suspending a thread. The locking here is incredibly
convoluted and should be simplified.
Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)