This is useful when making copies of a packet where some leading
space has been preallocated for inserting protocol headers.
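As a hedged usage sketch (the header struct here is hypothetical, error handling abbreviated):

    struct mbuf *n;

    n = m_copypacket(m, M_DONTWAIT);        /* copy keeps the preallocated leading space */
    if (n == NULL)
            return (ENOBUFS);
    M_PREPEND(n, sizeof(struct newhdr), M_DONTWAIT);  /* header goes into that space */
    if (n == NULL)
            return (ENOBUFS);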
Note that there are in fact almost no users of m_copypacket.
MFC candidate.
in mi_switch() just before calling cpu_switch() so that the first switch
after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of
cpu_switch(), specifically set the p_oncpu and p_lastcpu members of
proc in mi_switch(), and handle the sched_lock state change across a
context switch in mi_switch() (sketched below).
- Since cpu_switch() no longer handles the sched_lock state change, we
have to set up an initial state for sched_lock in fork_exit() before we
release it.
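Schematically, as a hedged sketch of the new split (not the literal diff; names approximate):

    u_int sched_nest;

    p->p_lastcpu = p->p_oncpu;              /* CPU bookkeeping now lives in mi_switch() */
    sched_nest = sched_lock.mtx_recurse;    /* carry sched_lock state across the switch */
    cpu_switch();                           /* pure context switch, no lock handling */
    sched_lock.mtx_recurse = sched_nest;    /* restored once this process runs again */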
Please note:
When committing changes to this file, it is important to note that
Linux is not FreeBSD -- its system call numbers (and sometimes names)
are different on different platforms. When in doubt (and you always need
to be) check the arch-specific unistd.h and entry.S files in the linux
kernel sources to see what the syscall numbers really are.
is sent to a process, psignal() needs to schedule an AST for the
process if the process is runnable, not just if it is current, so that
pending signals get checked for on the next return of the process to
user mode. This wasn't practical until recently because the AST flag
was per-cpu so setting it for a non-current process would usually just
cause a bogus AST for the current process.
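Roughly, as a hedged sketch of the idea (not the committed diff):

    if (p->p_stat == SRUN)
            aston(p);       /* pending signal gets noticed on the next return to user mode */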
For non-current processes looping in user mode, it took accidental
(?) magic to deliver signals at all. Signals were usually delivered
late as a side effect of rescheduling (need_resched() sets astpending,
etc.). In pre-SMPng, delivery was delayed by at most 1 quantum (the
need_resched() call in roundrobin() is certain to occur within 1
quantum for looping processes). In -current, things are complicated
by normal interrupt handlers being threads. Missing handling of the
complications makes roundrobin() a bogus no-op, but preemptive
scheduling sort of works anyway due to even larger bogons elsewhere.
always on curproc. This is needed to implement signal delivery properly
(see a future log message for kern_sig.c).
Debogotified the definition of aston(). aston() was defined in terms
of signotify() (perhaps because only the latter already operated on
a specified process), but aston() is the primitive.
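Schematically, a hedged sketch of the intended layering (the flag name is approximate and per-arch):

    #define aston(p)        ((p)->p_astpending = 1)         /* the primitive */
    #define signotify(p)    aston(p)                        /* built on top of it */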
Similar changes are needed in the ia64 versions of cpu.h and trap.c.
I didn't make them because the ia64 is missing the prerequisite changes
to make astpending and need_resched per-process and those changes are
too large to make without testing.
tsc_present in the right places (together with other variables of the
same linkage), and don't use messy ifdefs just to avoid exporting it in
some cases.
actually in the kernel. This structure is a different size than
what is currently in -CURRENT, but this should hopefully be the last time
any application breakage is caused there. As soon as any major
inconveniences are removed, the definition of the in-kernel struct
ucred should be conditionalized upon defined(_KERNEL).
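The direction, as a hedged sketch only (member list abbreviated, not authoritative):

    struct ucred {
            u_int   cr_ref;                 /* reference count */
            uid_t   cr_uid;                 /* effective user id */
            short   cr_ngroups;             /* number of groups */
            gid_t   cr_groups[NGROUPS];     /* groups */
    #ifdef _KERNEL
            struct  uidinfo *cr_uidinfo;    /* kernel-only bookkeeping */
    #endif
    };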
This also changes struct export_args to remove dependency on the
constantly-changing struct ucred, as well as limiting the bounds
of the size fields to the correct size. This means: a) mountd and
friends won't break all the time, b) mountd and friends won't crash
the kernel all the time if they don't know what they're doing wrt
actual struct export_args layout.
Reviewed by: bde
Add new PRC_UNREACH_ADMIN_PROHIB in sys/sys/protosw.h
Remove condition on TCP in src/sys/netinet/ip_icmp.c:icmp_input
In src/sys/netinet/ip_icmp.c:icmp_input set code = PRC_UNREACH_ADMIN_PROHIB
or PRC_UNREACH_HOST for all unreachables except ICMP_UNREACH_NEEDFRAG (see the sketch below)
Rename sysctl icmp_admin_prohib_like_rst to icmp_unreach_like_rst
to reflect the fact that we also react to ICMP unreachables that
are not administratively prohibited. Also update the comments to
reflect this.
In sys/netinet/tcp_subr.c:tcp_ctlinput add code to treat
PRC_UNREACH_ADMIN_PROHIB and PRC_UNREACH_HOST differently.
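The resulting icmp_input() mapping, as a hedged sketch (not the literal diff):

    case ICMP_UNREACH:
            switch (icp->icmp_code) {
            case ICMP_UNREACH_NEEDFRAG:
                    code = PRC_MSGSIZE;
                    break;
            case ICMP_UNREACH_NET_PROHIB:
            case ICMP_UNREACH_HOST_PROHIB:
            case ICMP_UNREACH_FILTER_PROHIB:
                    code = PRC_UNREACH_ADMIN_PROHIB;
                    break;
            default:
                    code = PRC_UNREACH_HOST;
                    break;
            }
            break;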
PR: 23986
Submitted by: Jesper Skriver <jesper@skriver.dk>
case there is nothing to do. This happens normally when the card shares
the interrupt line with other devices.
This code saves a couple of microseconds per interrupt even on a
fast CPU. You normally would not care, except under heavy tinygram
traffic where you can have some 50,000-100,000 interrupts per second...
In passing, correct a spelling error.
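The pattern described above, as a hedged sketch (device and register names are made up):

    static void
    foo_intr(void *arg)
    {
            struct foo_softc *sc = arg;

            if ((CSR_READ_2(sc, FOO_ISR) & FOO_ISR_OURS) == 0)
                    return;         /* shared line, somebody else's interrupt */
            /* ...the expensive part of the handler runs only past this point... */
    }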
lookup vop so that it defaulted to using vop_eopnotsupp for strange
lookups like the ones for open("/dev/null/", ...) and stat("/dev/null/",
...). This mainly caused the wrong errno to be returned by vfs syscalls
(EOPNOTSUPP is not in POSIX, and is not documented in connection with
specfs in open.2 and is not documented in stat.2 at all). Also, lookup
vops are apparently required to set *ap->a_vpp to NULL on error, but
vop_eopnotsupp is too broken to do this.
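A well-behaved lookup vop for a non-directory then looks roughly like this (hedged sketch, not necessarily the committed code):

    static int
    spec_lookup(struct vop_lookup_args *ap)
    {

            *ap->a_vpp = NULL;      /* lookup vops must clear this on error */
            return (ENOTDIR);
    }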
register our sub-busses in the reversed order. In the future, we may provide
a hint to CAM on how to order the scans for multi-function adapters that also
set this flag, but trying to do it the "twin channel" way will lead to
a panic.
allocation, as required.
If m_getm() receives NULL as a first argument, then it allocates `len'
(second argument) bytes worth of mbufs + clusters and returns the chain
only if it was able to allocate everything.
If the first argument is non-NULL, then it should be an existing mbuf
chain (e.g. pre-allocated mbuf sitting on a ring, on some list, etc.) and
so it will allocate `len' bytes worth of clusters and mbufs, as needed,
and append them to the tail of the passed in chain, only if it was able
to allocate everything requested.
If allocation fails, only what was allocated by the routine will be freed,
and NULL will be returned.
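Hedged usage sketch, following the semantics above (argument order assumed to be m_getm(chain, len, how, type)):

    struct mbuf *top;

    /* Build a brand-new chain able to hold 'len' bytes, or get nothing at all. */
    top = m_getm(NULL, len, M_DONTWAIT, MT_DATA);
    if (top == NULL)
            return (ENOBUFS);       /* anything partially allocated was already freed */

    /* Append 'len' more bytes worth of mbufs/clusters to an existing chain. */
    if (m_getm(top, len, M_DONTWAIT, MT_DATA) == NULL) {
            m_freem(top);           /* 'top' itself was left untouched on failure */
            return (ENOBUFS);
    }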
Also, get rid of the existing m_getm() in the netncp code and replace calls
to it with calls to this new generic code.
Heavily Reviewed by: bp
clear MCPCIA_INT_MASK0 helps things substantially. So, why not indeed?
Rearrange irq and cookie calculation to use shifts/masks instead
of division. Fix things to correctly remember the intpin for that
one in a million non-INTA PCI device.
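In other words, something along these lines (values purely illustrative):

    irq    = vector >> 4;           /* replaces: vector / 16 */
    intpin = vector & 0x3;          /* replaces: vector % 4; remembered even for non-INTA devices */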
made no sense in the context of wrapping them within the _SYBRIDGE macro-
or anything like it- so we concluded that this must have been a typo
in the docs. This also doesn't use the same bridge offset as anything
else.
Add some defines for the INT_CTL register.
address is configured on an interface. This is useful for routers with
dynamic interfaces. It is now possible to say:
0100 allow tcp from any to any established
0200 skipto 1000 tcp from any to any
0300 allow ip from any to any
1000 allow tcp from 1.2.3.4 to me 22
1010 deny tcp from any to me 22
1020 allow tcp from any to any
and not have to worry about the behaviour if dynamic interfaces configure
new IP numbers later on.
The check is semi-expensive (it traverses the interface address list)
so it should be protected as in the above example if high performance
is a requirement.
run-time. This is a temporary solution until proper kernel Unicode interfaces
are in place, and as such it was purposely designed to be as tiny as possible
(3 lines of code, not counting comments). The port with conversion routines
for the most popular single-byte languages will be added later today.
Reviewed by: bp, "Michael C . Wu" <keichii@iteration.net>
Approved by: bp
one the number of variables needed for top and other setgid kmem
utilities that could only be accessed via /dev/kmem previously.
Submitted by: Thomas Moestl <tmoestl@gmx.net>
Reviewed by: freebsd-audit
- All processes go into the same array of queues, with different
scheduling classes using different portions of the array. This
allows user processes to have their priorities propagated up into
interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
32. We used to have 4 separate arrays of 32 queues each, so this
may not be optimal. The new run queue code was written with this
in mind; changing the number of run queues only requires changing
constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter. This
is intended to be used to create per-cpu run queues. Implement
wrappers for compatibility with the old interface which pass in
the global run queue structure (see the wrapper sketch after this list).
- Group the priority level, user priority, native priority (before
propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
This was used to detect when a process' priority had lowered and
it should yield. We now effectively yield on every interrupt.
- Activate propagate_priority(). It should now have the desired
effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
idle loop. It interfered with propagate_priority() because
the idle process needed to do a non-blocking acquire of Giant
and then other processes would try to propagate their priority
onto it. The idle process should not do anything except idle.
vm_page_zero_idle() will return in the form of an idle priority
kernel thread which is woken up at appropriate times by the vm
system.
- Update struct kinfo_proc to the new priority interface. Deliberately
change its size by adjusting the spare fields. Had it remained the same
size while the layout changed, userland processes that use it would have
parsed the data incorrectly. The size constraint should really
be changed to an arbitrary version number. Also add a debug.sizeof
sysctl node for struct kinfo_proc.
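The compatibility wrapper mentioned above looks roughly like this (hedged sketch; primitive names approximate):

    void
    setrunqueue(struct proc *p)
    {

            runq_add(&runq, p);     /* old interface forwards to the global run queue */
    }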
not be retried. It is an indication that there was an error that was
corrected during the execution of the command. This is per the ANSI SCSI-2
spec.
It's possible that these should also be noted to the console (as indicative,
perhaps, of growing media defect lists in drives), but the default in this
case of printing errors only if bootverbose is set is probably enough.
Also, there'd been a missing ERESTART for that clause anyway.
2. If you have an ABORTED COMMAND, it's almost invariably a SCSI parity
error. You should never be silent about these since users should do something
about this if it occurs (moving that power cord *away* from the SCSI cable is
always a good first start). This should print irrespective of bootverbose
because it's an actual real error even if we retry a transmission.
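Schematically, as a hedged sketch of the policy (local variable names made up):

    case SSD_KEY_RECOVERED_ERROR:
            error = 0;              /* already corrected by the device; never retry */
            if (bootverbose)
                    print_sense = 1;
            break;
    case SSD_KEY_ABORTED_COMMAND:
            error = ERESTART;       /* retry the command... */
            print_sense = 1;        /* ...but always tell the user: likely SCSI parity trouble */
            break;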
Reviewed by: audit@freebsd.org, gibbs@freebsd.org
- Missing cpu_to_scr() added (endian-ness).
Improvement (fix|workaround??):
- Blindly firing a PPR can lead to some messy situations due to
various causes or misfeatures, for example:
* The 53C1010-[33|66] supports offset 62 in DT mode, but only
offset 31 in ST mode. As a result, a PPR(DT, offset 62) that is
answered with a PPR(ST, any offset > 31) must be rejected.
* A device that doesn't know about PPR should reject it, but
may also be confused by this message.
When a PPR encounters problems, the driver now patches the goal
transfer settings for legacy negotiations to be performed later
with the offending target. This gives bad situations a chance
to be fixed automagically.
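The first case above amounts to roughly the following check (hedged sketch, identifiers illustrative):

    /* PPR answer claims ST transfers with an offset only DT supports: reject it. */
    if ((opts & PPR_OPT_DT) == 0 && ofs > 31)
            goto reject_it;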
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by: jhb
were performed to determine whether the received packet should be answered with a RST. This
created erroneous ratelimiting and false alarms in some cases. The code
has now been reorganized so that the checks for validity come before
the call to badport_bandlim. Additionally, a few changes in the symbolic
names of the bandlim types have been made, as well as a clarification of
exactly which type each RST case falls under.
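The reorganized flow, as a hedged sketch (the helper and bandlim constant names here are illustrative):

    if (segment_merits_rst(tp, th)) {       /* all validity checks happen first */
            if (badport_bandlim(BANDLIM_RST_CLOSEDPORT) < 0)
                    goto drop;              /* over the rate limit: stay silent */
            goto dropwithreset;
    }
    goto drop;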
Submitted by: Mike Silbersack <silby@silby.com>
and function argument declarations. Make sure that functions that are
supposed to return a pointer return NULL in case of failure. Don't cast
NULL. Finally, get rid of annoying `register' uses.
isp_iid_set/isp_iid for fibre channel- this is because we now
fake a port database entry for ourselves. Add the additional loop
states between LOOP_PDB_RCVD and LOOP_READY.
Change and comment on a wad of Fibre Channel isp_control functions.
Change and comment on some of the ISPASYNC Fibre Channel events.
the unit number doesn't get reused.
Make sure that if we've compiled for ISP_TARGET_MODE we set the
default role to be ISP_ROLE_INITIATOR|ISP_ROLE_TARGET.
Do some misc other cleanups.
and depending on role, make sure link is up, scan the fabric (if we're
connected to a fabric), scan the local loop (if appropriate), merge
the results into the local port database, then check once again
to make sure we have f/w at FW_READY state and the loopstate
is LOOP_READY.
Comment out usage of ISP_SMPLOCK- I have my doubts that this works sanely
as yet because CAM itself still needs Giant. I *was* dropping my lock
and grabbing Giant when doing the upcall for completion, but this all
seems ridiculous until CAM is fixed.
if we're ISP_ROLE_NONE. Change ISPASYNC_LOGGED_INOUT to ISPASYNC_PROMENADE.
Make sure we note if something is a fabric device.
Target mode:
Finally fix (to a first approximation) SCSI Target Mode again- we needed
to correctly check against CAM_TARGET_WILDCARD and CAM_LUN_WILDCARD
so that targbh won't confuse us. Comment out the drainqueue stuff for
now. Use isp_fc_runstate instead of isp_control/ISPCTL_FCLINK_TEST.
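The check in question looks roughly like this (hedged sketch):

    if (ccb->ccb_h.target_id == CAM_TARGET_WILDCARD ||
        ccb->ccb_h.target_lun == CAM_LUN_WILDCARD) {
            /* wildcard (enable/disable) CCB from targbh: handle it as such */
    }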
Remove ISP2100_FABRIC defines- we always handle fabric now. Insert
isp_getmap helper function (for getting Loop Position map). Make
sure we (for our own benefit) mark req_state_flags with RQSF_GOT_SENSE
for Fibre Channel if we got sense data- the !*$)!*$)~*$)*$ Qlogic
f/w doesn't do so. Add ISPCTL_SCAN_FABRIC, ISPCTL_SCAN_LOOP, ISPCTL_SEND_LIP,
and ISPCTL_GET_POSMAP isp_control functions. Correctly send async notifications
upstream for changes in the name server, changes in the port database, and
f/w crashes. Correctly set topology when we get an ASYNC_PTPMODE event.
Major stuff:
Quite massively redo how we handle Loop events- we've now added several
intermediate states between LOOP_PDB_RCVD and LOOP_READY (see the sketch
below). This gives us much finer control over how we scan the fabric,
whether we go further than scanning the fabric, how we look at the local
loop, and whether we merge entries at this level or not. This is the next
to last step for moving the management of loop state out of the core module
entirely (whereupon loop && fabric events will simply freeze the command
queue and a thread will run to figure out what's changed and *it* will
re-enable the queue).
This finer-grained control also gets us closer to having an external
policy engine decide which fabric devices we really want to log into.
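As a hedged illustration of the kind of intermediate states meant here (names are illustrative, not the committed enum):

    enum {
            LOOP_PDB_RCVD,          /* port database received */
            LOOP_SCANNING_FABRIC,   /* fabric scan in progress */
            LOOP_FSCAN_DONE,        /* fabric scan finished */
            LOOP_SCANNING_LOOP,     /* local loop scan in progress */
            LOOP_LSCAN_DONE,        /* local loop scan finished */
            LOOP_SYNCING_PDB,       /* merging results into the local port database */
            LOOP_READY              /* f/w at FW_READY, commands may flow */
    };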
tracing in order to avoid duplication.
- Insert some tracepoints back into the mutex acq/rel code, thus ensuring
that we can trace all lock acq/rel's again.
- All CURPROC != NULL checks are MPASS()es (under MUTEX_DEBUG) because a
failure there signifies serious mutex corruption.
- Change up some KASSERT()s to MPASS()es, and vice-versa, depending on the
type of problem we're debugging (INVARIANTS is used here to check that
the API is being used properly whereas MUTEX_DEBUG is used to ensure that
something general isn't happening that will have a bad impact on mutex
locks).
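By way of a hedged example of the split (assertion bodies illustrative):

    MPASS(CURPROC != NULL);                         /* MUTEX_DEBUG: internal consistency */
    KASSERT(m->mtx_recurse == 0,
        ("mtx_destroy: mutex is still recursed"));  /* INVARIANTS: caller misused the API */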
Reminded by: jhb, jake, asmodai
genassym here, but what I've also noticed is that we're dorking
with a mutex directly at assembler level- I'm not sure that this
is wise at this stage in the SMP port- I think it's going to be much
safer for a while to do things in C until SMP wunderkind figure out
what works and slow down this 3 order differential...