remove some parentheses; fix some whitespace errors; fix only one case of
a boolean comparison of a non-boolean).
Improve an error message by quoting ".", and by not printing large positive
values as negative ones.
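A minimal sketch of the second fix; the variable name and message text are
made up, only the quoting and the %ju-plus-cast idiom are the point:

    /* Hypothetical example (err(3), <stdint.h>): cast to uintmax_t and
     * print with %ju so a large positive value does not come out as a
     * negative number. */
    warnx("bad entry size %ju in \".\"", (uintmax_t)size);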
Approved by: re (kensmith) (blanket)
namespace pollution in <sys/vnode.h>.
Sort the include of <sys/mutex.h> instead of unsorting it after
<sys/vnode.h> and depending on the pollution there.
Approved by: re (kensmith) (blanket)
the use of divert sockets to deadlocks. A number of LORs have been reported
between divert and several other network subsystems, including IPSEC, pfil,
multicast, ipfw and others. Other deadlocks could occur because of recursive
entry into the IP stack. This change should take care of most, if not all, of
these issues.
A summary of the changes follows:
- We disallow multicast operations on divert sockets. It really doesn't make
semantic sense to allow this, since typically you would set multicast
parameters on multicast endpoints.
NOTE: As part of this change, we actually disallow multicast options on
any socket that IS a divert socket OR IS NOT of type SOCK_RAW or SOCK_DGRAM.
- We check to see if any socket options have been specified on the socket,
and if there were (which is very uncommon and also probably doesn't make
sense to support), we duplicate the mbuf carrying the options.
- We then drop the INP/INFO locks over the call to ip_output(). It should be
noted that since we no longer support multicast operations on divert sockets
and we have duplicated any socket options, we no longer need the reference
to the pcb to be coherent.
- Finally, we replaced the direct call to ip_input() with netisr queueing.
This should remove the recursive entry into the IP stack from divert.
By dropping the locks over the call to ip_output() we eliminate all the lock
ordering issues above. By switching over to netisr on the inbound path,
we can no longer recursively enter the ip_input() code via divert.
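A minimal sketch of the new inbound path (error handling and statistics are
elided; this is not the literal committed code):

    /* Needs <net/netisr.h>. */
    int error;

    /* Hand the packet to the IP netisr for deferred processing instead
     * of calling ip_input() directly, so divert no longer re-enters
     * the IP stack on the inbound path. */
    error = netisr_queue(NETISR_IP, m);
    if (error != 0) {
        /* Not queued; handle the drop here. */
    }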
I have tested this change by using the following command:
ipfwpcap -r 8000 - | tcpdump -r - -nn -v
This should exercise the input and re-injection (outbound) path, which is
very similar to the workload performed by natd(8). Additionally, I have
run some OSPF daemons, which rely heavily on raw sockets and
multicast.
Approved by: re@ (kensmith)
MFC after: 1 month
LOR: 163
LOR: 181
LOR: 202
LOR: 203
Discussed with: julian, andre et al (on freebsd-net)
In collaboration with: bms [1], rwatson [2]
[1] bms helped out with the multicast decisions
[2] rwatson submitted the original netisr patches and came up with some
of the original ideas on how to combat this issue.
for bakeoff.. using the next sequential ones)
- In cookie processing 1-2-1, we did not increment the stcb
refcnt before releasing the tcb lock. We need to do this
to keep the tcb from being freed by an abort or the like; unlikely,
but worth doing (see the sketch after this list). Also get rid of an
unneeded INP_WLOCK.
- The extra receive info included the rcvinfo, which killed the
padding/alignment. We now redefine all the fields properly
so that they both align properly to 128 bytes.
- A peeled-off socket would not close without an error, due to
its misguided idea that sctp_disconnect() was not supported
on it. This fixes it so that it goes through the proper path.
- When an assoc was being deleted after an abort (via a timer), a
small race condition existed where we might take a packet for
the old assoc (since we are waiting for a cleanup timer). This
state especially happens in mac. We now add a state to the assoc
so these packets can properly be handled as OOTB.
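A minimal sketch of the refcount-before-unlock pattern from the first item;
the lock macros and field names follow the SCTP code from memory and may
differ in detail:

    /* Hold a reference so the stcb cannot be freed (e.g. by an abort)
     * while the TCB lock is dropped. */
    atomic_add_int(&stcb->asoc.refcnt, 1);
    SCTP_TCB_UNLOCK(stcb);
    /* ... work done without the TCB lock held ... */
    SCTP_TCB_LOCK(stcb);
    atomic_subtract_int(&stcb->asoc.refcnt, 1);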
Approved by: re@freebsd.org (Ken Smith)
previously conditionally acquired Giant based on debug.mpsafenet. As that
has now been removed, they are no longer required. Removing them
significantly simplifies error handling in the socket layer, eliminating
quite a bit of unwinding of locking in error cases.
While here, clean up the now-unneeded opt_net.h, which was previously used
for the NET_WITH_GIANT kernel option. Clean up some related gotos for
consistency.
Reviewed by: bz, csjp
Tested by: kris
Approved by: re (kensmith)
Recently the AP in my Merced box seems to have grown a habit
of getting unexpected interrupts, such as redundant wake-ups
and legacy interrupts that require an INTA cycle.
While here, replace DELAY(0) with cpu_spinwait() so that it's
clear what we're doing, as well as to let the code take
advantage of cpu_spinwait() when it gets implemented.
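A minimal sketch of the substitution; the flag being tested is hypothetical:

    /* Busy-wait for the wake-up.  cpu_spinwait() states the intent and
     * lets the architecture insert a pause/hint instruction once it is
     * implemented, where the old DELAY(0) was just an empty delay. */
    while (ap_wakeup_pending == 0)
        cpu_spinwait();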
Approved by: re (blanket)
There's no advantage in allowing nested external interrupts.
In fact, it leads to a potential stack overrun.
While here, put the interrupt vector in the trapframe, so as
to compensate for the 36 cycle latency of reading cr.ivr.
Further simplify assembly code by dealing with ASTs from C.
Approved by: re (blanket)
vm_object_terminate() on a device-backed object at the same time that
another processor, call it Pa, is performing dev_pager_alloc() on the
same device. The problem is that vm_pager_object_lookup() should not be
allowed to return a doomed object, i.e., an object with OBJ_DEAD set,
but it does. In detail, the unfortunate sequence of events is: Pt in
vm_object_terminate() holds the doomed object's lock and sets OBJ_DEAD
on the object. Pa in dev_pager_alloc() holds dev_pager_sx and calls
vm_pager_object_lookup(), which returns the doomed object. Next, Pa
calls vm_object_reference(), which requires the doomed object's lock, so
Pa waits for Pt to release the doomed object's lock. Pt proceeds to the
point in vm_object_terminate() where it releases the doomed object's
lock. Pa is now able to complete vm_object_reference() because it can
now complete the acquisition of the doomed object's lock. So, now the
doomed object has a reference count of one! Pa releases dev_pager_sx
and returns the doomed object from dev_pager_alloc(). Pt now acquires
dev_pager_mtx, removes the doomed object from dev_pager_object_list,
releases dev_pager_mtx, and finally calls uma_zfree with the doomed
object. However, the doomed object is still in use by Pa.
Repeating my key point, vm_pager_object_lookup() must not return a
doomed object. Moreover, the test for the object's state, i.e.,
doomed or not, and the increment of the object's reference count
should be carried out atomically.
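A minimal sketch of the required behaviour in the lookup path (simplified;
not the literal committed code):

    VM_OBJECT_LOCK(object);
    if ((object->flags & OBJ_DEAD) != 0) {
        /* Never hand out a doomed object. */
        VM_OBJECT_UNLOCK(object);
        return (NULL);
    }
    /* Take the reference while the object lock is still held, so the
     * OBJ_DEAD test and the reference bump are atomic with respect to
     * vm_object_terminate(). */
    vm_object_reference_locked(object);
    VM_OBJECT_UNLOCK(object);
    return (object);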
Reviewed by: kib
Approved by: re (kensmith)
MFC after: 3 weeks
us to do the data serialization once after writing multiple
region registers, as is done in pmap_switch(). All existing
calls to ia64_set_rr() are followed by calls to ia64_srlz_d().
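A sketch of the pattern as used in pmap_switch(); the region-register
encoding shown is from memory and may not match the code exactly:

    u_int i;

    /* Install the new region IDs, then do a single data serialization
     * covering all of the writes instead of one per ia64_set_rr(). */
    for (i = 0; i < 5; i++)
        ia64_set_rr(IA64_RR_BASE(i),
            (pmap->pm_rid[i] << 8) | (PAGE_SHIFT << 2) | 1);
    ia64_srlz_d();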
Approved by: re (blanket)
Previously, any parse error would result in the calling program exiting with
an unpleasant message. This change makes libdisk issue a warning and ignore
lines it cannot parse instead of bluntly terminating the unfortunate
program.
This change allows you to use sysinstall if you have an NTFS partition with
a space in its name (such as 'Win Xp'). In such a case, a line like the
following will appear in the kern.geom.conftxt output:
2 LABEL ntfs/Win Xp 209818635264 512 i 0 o 0
As the fields are space-separated, libdisk would go berserk and exit the
program.
This would happen when using FreeBSD 7.0 snapshot images (as GEOM_LABEL is in
the installation kernel as well), thus making it impossible to install FreeBSD
without renaming your NTFS partitions.
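A minimal sketch of the tolerant parsing; the field list and variable names
are illustrative, not the actual libdisk parser:

    char type[32], name[256];
    int level;
    uintmax_t length;
    unsigned secsize;

    /* Inside the loop that reads kern.geom.conftxt line by line:
     * warn about and skip lines that don't match the expected layout
     * (e.g. labels containing spaces) instead of terminating the
     * calling program. */
    if (sscanf(line, "%d %31s %255s %ju %u", &level, type, name,
        &length, &secsize) != 5) {
        fprintf(stderr, "libdisk: ignoring unparseable conftxt line\n");
        continue;
    }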
Reported by: Dwight Berendse <dwight at berendse dot org>
Nod from: phk
Reviewed by: imp
Approved by: re (bmah), imp (mentor)
MFC after: 1 month
otherwise mmap() gets called multiple times, which eventually fails due
to address space exhaustion on i386.
Approved by: re (kensmith)
MFC after: 1 week
Also rename the related functions in a similar way.
There are no functional changes.
For a packet coming in with IPsec tunnel mode, the default is
to only call into the firewall with the "outer" IP header and
payload.
With this option turned on, in addition to the "outer" parts,
the "inner" IP header and payload are passed to the
firewall too when going through ip_input() the second time.
The option was never related only to a gif(4) tunnel within
an IPsec tunnel, and thus the name was very misleading.
Discussed at: BSDCan 2007
Best new name suggested by: rwatson
Reviewed by: rwatson
Approved by: re (bmah)
sector, instead of failing the whole mount if it is garbage. Fields
in the fsinfo sector are only advisory, so there are better sanity
checks than this, and we already silently fix up the only other advisory
field in the fsinfo (the free cluster count).
This wasn't handled quite right in rev.1.92, 1.117, or in NetBSD. 1.92
also failed the whole mount for the non-garbage magic value 0xffffffff.
1.117 fixed this well enough in practice, since garbage values shouldn't
occur, but left the error handling larger and more convoluted
than necessary. Now we handle the magic value as a special case of
fixing up all out-of-bounds values.
Also fix up the estimated next free cluster number when there is no
fsinfo sector. We were using 0, but CLUST_FIRST is safer.
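A minimal sketch of the fix-up, using the msdosfs mount fields from memory:

    /* The fsinfo "next free cluster" hint is advisory only; if it is
     * out of bounds (including the 0xffffffff "unknown" magic value),
     * fall back to the first data cluster instead of failing the
     * mount. */
    if (pmp->pm_nxtfree < CLUST_FIRST ||
        pmp->pm_nxtfree > pmp->pm_maxcluster)
        pmp->pm_nxtfree = CLUST_FIRST;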
Approved by: re (kensmith)
instead of per IOMMU, so we no longer need to program all of them
identically in systems having multiple IOMMUs. This continues the
rototilling of nexus(4) done about 5 months ago, which among other
things changed nexus(4) and the drivers for host-to-foo bridges
to provide bus_get_dma_tag methods, allowing DMA tags to be handled
in a hierarchical way and to be linked with devices.
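A sketch of the bus_get_dma_tag glue this builds on; the driver name and
softc field are made up:

    /* Return the bridge's per-IOMMU DMA tag to its children, so tags
     * are resolved through the device hierarchy rather than through a
     * single global tag. */
    static bus_dma_tag_t
    foo_get_dma_tag(device_t bus, device_t child __unused)
    {
        struct foo_softc *sc;

        sc = device_get_softc(bus);
        return (sc->sc_dma_tag);
    }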
This still doesn't move the silicon bug workarounds for Sabre (and
in the uncommitted schizo(4) for Tomatillo) bridges into special
bus_dma_tag_create() and bus_dmamap_sync() methods though, as w/o
fully newbus'ified bus_dma_tag_create() and bus_dma_tag_destroy()
this still requires too much hackery, i.e. per-child parent DMA
tags in the parent driver.
- Let the host-to-foo drivers supply the maximum physical address
of the IOMMU accompanying the bridges. Previously iommu(4)
hard-coded an upper limit of 16GB, which actually only applies
to the IOMMUs of the Hummingbird and Sabre bridges. The Psycho
variants as well as the U2S in fact can translate up to 2TB,
i.e. to 41-bit physical addresses. According to the recently
available Tomatillo documentation, these bridges even translate
to 43-bit physical addresses, and it hints at the Schizo bridges
doing 43 bits as well.
This fixes the issue the FreeBSD 6.0 todo list item "Max RAM on
sparc64" was referring to and pretty much obsoletes the lack of
support for bounce buffers on sparc64.
Thanks to Nathan Whitehorn for pointing me at the Tomatillo manual.
Approved by: re (kensmith)
requiring DC_TX_ALIGN or DC_TX_COALESCE, which was previously done
in dc_start_locked(), into dc_encap().
o In dc_encap():
- If m_defrag() fails, just drop the packet like other NIC drivers
do. This should only happen when there's an mbuf shortage, in which
case it was possible to end up with an IFQ full of packets which
couldn't be processed because they couldn't be defragmented, as they
were taking up all the mbufs themselves. This includes adjusting
dc_start_locked() to not try to prepend the mbuf (chain) if
dc_encap() has freed it (see the sketch after this list).
- Likewise, if bus_dmamap_load_mbuf() fails because dc_dma_map_txbuf()
failed, free the mbuf possibly allocated by the above call to
m_defrag() and drop the packet.
o In dc_txeof():
- Don't clear IFF_DRV_OACTIVE unless there are at least 6 free TX
descriptors. Further down the road dc_encap() will bail if there
are only 5 or fewer free TX descriptors, causing dc_start_locked()
to abort and prepend the dequeued mbuf again, so it makes no sense
to pretend we could process mbufs again when in fact we won't.
While at it, replace this magic 5 with a macro DC_TX_LIST_RSVD.
- Just always assign idx to sc->dc_cdata.dc_tx_cons; it doesn't
make much sense to exclude the idx == sc->dc_cdata.dc_tx_cons
case.
o In dc_dma_map_txbuf() there's no need to set sc->dc_cdata.dc_tx_err
to error if the latter is != 0; bus_dmamap_load_mbuf() already
returns the same error value in that case anyway.
o For less overhead, convert to use bus_dmamap_load_mbuf_sg() for
loading RX buffers.
o Remove some banal and/or outdated comments.
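A minimal sketch of the drop-on-defrag-failure behaviour referenced above;
the m_head convention is illustrative, not the literal driver code:

    struct mbuf *m;

    m = m_defrag(*m_head, M_DONTWAIT);
    if (m == NULL) {
        /* Defragmentation failed (mbuf shortage): drop the packet
         * rather than requeueing it, so the IFQ can't fill up with
         * undefragmentable packets that hog all the mbufs. */
        m_freem(*m_head);
        *m_head = NULL;    /* tells dc_start_locked() not to prepend */
        return (ENOBUFS);
    }
    *m_head = m;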
Approved by: re (kensmith)
MFC after: 1 week
to clear RL_TDESC_VLANCTL_TAG). This fixes sending packets on the
native VLAN when running both a tagged and an untagged VLAN over the
same trunk and descriptors are recycled.
Approved by: re (kensmith)
MFC after: 1 week
d_mmap methods. prep_cdevsw() already installs the shims that
acquire/drop Giant for the methods of a driver that specified the
D_NEEDGIANT flag.
Reviewed by: alc
Approved by: re (kensmith)
- If the path cost is calculated while the link is down, set a pending flag so
it is calculated again when the link comes back up.
- To avoid using 00:00:00:00:00:00 as the bridge id, all interfaces are scanned
and the lowest address wins; all zeros is too low to qualify.
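A minimal sketch of the bridge-id selection; the list and field names are
only approximations of the bridge code:

    static const uint8_t zero_mac[ETHER_ADDR_LEN];
    const char *best = NULL, *lla;

    /* Scan all member interfaces (bif/sc come from the bridge softc)
     * and pick the numerically lowest MAC address as the bridge id,
     * skipping all-zero addresses. */
    LIST_FOREACH(bif, &sc->sc_iflist, bif_next) {
        lla = IF_LLADDR(bif->bif_ifp);
        if (memcmp(lla, zero_mac, ETHER_ADDR_LEN) == 0)
            continue;    /* 00:00:00:00:00:00 never wins */
        if (best == NULL || memcmp(lla, best, ETHER_ADDR_LEN) < 0)
            best = lla;
    }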
Approved by: re (rwatson)
ia64_cpu.h. This improves readability and consistency and aids in
auditing the code.
Add instruction-serialization after writing to cr.pta.
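A sketch of the cr.pta serialization, assuming a setter named ia64_set_pta()
in ia64_cpu.h:

    /* Make the new VHPT base/size visible to the instruction stream
     * before any translation that may depend on it. */
    ia64_set_pta(pta);
    ia64_srlz_i();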
Delay enabling interrupts until after we setup the clocks and after
we program the task priority register.
Approved by: re (blanket)
ia64_cpu.h. This improves readability and consistency and aids in
auditing the code.
Add data-serialization after writing to the region registers and
add instruction-serialization after writing to cr.pta.
Approved by: re (blanket)
ia64_cpu.h. This improves readability and consistency and aids in
auditing the code.
Add data-serialization after writing to cr.tpr.
Approved by: re (blanket)
tdq_group structure. Hyper-threaded cores won't really benefit from
separate locks anyway.
- Separate out the migration case from sched_switch() to simplify the main
switch code. We only migrate here if called via sched_bind().
- When preempted, place the preempted thread back at the head of the same
queue (see the sketch below).
- Improve the cpu group and topology infrastructure.
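A generic sketch of the head-vs-tail insertion from the preemption item; the
queue layout shown is illustrative, not the actual ULE run queues:

    /* A preempted thread goes back at the head of its old queue so it
     * resumes as soon as the preempting thread is done; anything else
     * is queued at the tail as usual. */
    if (flags & SRQ_PREEMPTED)
        TAILQ_INSERT_HEAD(&tdq->tdq_threads, td, td_runq);
    else
        TAILQ_INSERT_TAIL(&tdq->tdq_threads, td, td_runq);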
Tested by: many on current@
Approved by: re
message explained why the size is 1 sector, but the code used a
size of 1 cluster.
I/O sizes larger than necessary may cause serious coherency problems
in the buffer cache. Here I think there were only minor efficiency
problems, since a too-large fsinfo buffer could only get far enough
to overlap buffers for the same vnode (the device vnode), so mappings
are coherent at the page level although not at the buffer level, and
the former is probably enough due to our limited use of the fsinfo
buffer.
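A sketch of the corrected read; the msdosfs mount fields are quoted from
memory and the block-number units are glossed over:

    /* Read the fsinfo block with a one-sector buffer.  A buffer sized
     * to a full cluster overlaps other buffers on the device vnode and
     * risks coherency problems in the buffer cache. */
    error = bread(pmp->pm_devvp, pmp->pm_fsinfo, pmp->pm_BytesPerSec,
        NOCRED, &bp);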
Approved by: re (kensmith)
- Copy before testing a pointer. This closes a race window.
- Use msleep() with the node interlock instead of tsleep() (see the
sketch after this list).
- Do proper locking around access to tn_vpstate.
- Assert the vnode VOP lock for dir_{at,de}tach to catch
inconsistent locking.
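A minimal sketch of the msleep() conversion referenced above; the interlock
field is from memory and the predicate is hypothetical:

    /* Sleep with the node interlock held; msleep() drops and reacquires
     * the mutex atomically around the sleep, unlike unlocking by hand
     * and then calling tsleep(). */
    mtx_lock(&node->tn_interlock);
    while (!tmpfs_node_ready(node))
        msleep(node, &node->tn_interlock, PRIBIO, "tmpfsnd", 0);
    mtx_unlock(&node->tn_interlock);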
Suggested by: kib
Submitted by: delphij
Reviewed by: Howard Su
Approved by: re (tmpfs blanket)