Commit Graph

7837 Commits

Author SHA1 Message Date
peter
a87cdb15d7 Make the kernel actually compile and link under a.out, using
gcc -aout -mno-underscores.  The bioscall.s tweak is not really an a.out
requirement, but works around bugs in the antique version of
gas that is used for a.out.  Makefile hacks are all that is needed to
get an a.out kernel.  There is no telling if it will work though.
This is little more than an academic curiosity anyway since all it is
good for is situations where the boot code is hard-wired, e.g. ROM
bootstraps (such as the gnat box).

GENERIC:
...
size -aout kernel ; chmod 755 kernel
text    data    bss     dec     hex
3051520 368640  198688  3618848 373820
2001-02-25 07:44:39 +00:00
peter
6d85c89bd4 Always use the ELF naming after the demise of asnames.h. 2001-02-25 07:23:03 +00:00
jake
5915abc03a Remove the leading underscore from all symbols defined in x86 asm
and used in C or vice versa.  The elf compiler uses the same names
for both.  Remove asnames.h with great prejudice; it has served its
purpose.

Note that this does not affect the ability to generate an aout kernel
due to gcc's -mno-underscores option.

moral support from:	peter, jhb
2001-02-25 06:29:04 +00:00
peter
ef7bde7c6c Drop the 'count' from the aha device specs 2001-02-25 05:52:38 +00:00
jake
83c68690c5 - Rename the lcall system call handler from Xsyscall to Xlcall_syscall
to be more like Xint0x80_syscall and less like the C function syscall().
- Reduce code duplication between the int0x80 and lcall handlers by
  shuffling the eflags into the right place, saving the size of the
  instruction in tf_err and jumping into the common int0x80 code.

Reviewed by:	peter
2001-02-25 02:53:06 +00:00
obrien
1994d0af7f MFS: bring the consistent `compat_3_brand' support into -CURRENT
(the work was first done in the RELENG_4 branch near a release
	 during an MFC to make the code cleaner and more consistent)
2001-02-24 22:20:11 +00:00
jhb
ffbf99ac36 Add back in INVARIANT_SUPPORT and expand the comments in NOTES about it
to include the reasoning Eivind justifiably thwapped me over the head with.
2001-02-24 19:03:18 +00:00
bp
6b51a83f19 Introduce API for sequential reads/writes (build/dissect) of mbuf chains.
Reviewed by:	Ian Dowse <iedowse@maths.tcd.ie>,
		Bosko Milekic <bmilekic@technokratis.com>,
		Julian Elischer <julian@elischer.org> and arch@/net@
Obtained from:	smbfs
2001-02-24 15:44:30 +00:00
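A hypothetical usage sketch of the sequential mbuf-chain build/dissect interface introduced above. The struct mbchain/mdchain types and the mb_*/md_* calls are assumed names carried over from smbfs and may not match the committed interface exactly; treat this as an illustration, not the real API.

    /*
     * Hypothetical sketch: build a small message with the sequential
     * writer, then dissect it with the matching reader.  All names here
     * are assumptions borrowed from smbfs.
     */
    struct mbchain mb;
    struct mdchain md;
    u_int8_t tag;
    u_int16_t len;

    mb_init(&mb);                   /* start a fresh chain for writing */
    mb_put_uint8(&mb, 0x42);        /* append fields sequentially */
    mb_put_uint16le(&mb, 128);

    md_initm(&md, mb.mb_top);       /* hand the built chain to the reader */
    md_get_uint8(&md, &tag);        /* pull the fields back out in order */
    md_get_uint16le(&md, &len);
    md_done(&md);                   /* release the chain when finished */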
peter
2242ed4c70 Activate USER_LDT by default. The new thread libraries are going to
depend on this.  The Linux ABI emulator tries to use it for some Linux
binaries too.  VM86 had a bigger cost than this and it was made default
a while ago.

Reviewed by:	jhb, imp
2001-02-23 01:25:02 +00:00
jhb
7f992991dd Remove undefined and unreferenced doreti_syscall_ret globl. While I'm
here, adjust comment block above doreti.  We don't have the old MP lock
anymore.
2001-02-23 00:41:05 +00:00
jhb
e2224a9869 The p_md.md_regs member of proc is used in signal handling to reference
the original trapframe of the syscall, trap, or interrupt that entered
the kernel.  Before SMPng, ASTs were handled via a pseudo trap at the
end of doreti.  With the SMPng commit, ASTs were broken out into a
separate ast() function that was called from doreti to match the behavior
of other architectures.  Unfortunately, when this was done, the
p_md.md_regs member of curproc was not updated in ast(), thus when
signals are handled by userret() after an interrupt that returns to
userland, we end up using a stale trapframe that will result in the
registers from the old trapframe overwriting the real trapframe and
smashing all the registers right before we return to usermode.  The saved
%cs:%eip from where we were in usermode, for example, are stored in the
trapframe.
2001-02-22 19:35:20 +00:00
jhb
54e619a2fa - Change ast() to take a pointer to a trapframe like other architectures.
- Don't use an atomic operation to update cnt.v_soft in ast().  This is
  the only place the variable is written to, and sched_lock is always
  held when it is written, so it is already protected and the release
  of sched_lock acts as a memory barrier that ensures the value will be
  updated in a timely fashion.
2001-02-22 18:05:15 +00:00
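A minimal sketch of the point above, assuming the usual vmmeter counter: because every write to cnt.v_soft already happens with sched_lock held, a plain increment suffices and the lock release publishes the store.

    /*
     * Sketch only: the counter is only ever written under sched_lock, so
     * no atomic op is needed; releasing the lock acts as the memory
     * barrier that makes the update visible in a timely fashion.
     */
    mtx_lock_spin(&sched_lock);
    cnt.v_soft++;                   /* was: atomic_add_int(&cnt.v_soft, 1) */
    /* ... other AST bookkeeping done under the lock ... */
    mtx_unlock_spin(&sched_lock);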
jhb
762ce2e750 - Use TRAPF_PC() on the alpha to access the PC in the trap frame.
- Don't hold sched_lock around addupc_task() as this apparently breaks
  profiling badly due to sched_lock being held across copyin().

Reported by:	bde (2)
2001-02-22 16:23:12 +00:00
jhb
b9baaaf8c5 GC unused and now obsolete assertion macros. 2001-02-22 15:45:49 +00:00
jhb
04256ea24d Now that zerror() and SPLASSERT() have been laid to rest, INVARIANT_SUPPORT
is no longer needed.  R.I.P.
2001-02-22 10:03:05 +00:00
jhb
3a6cd51da8 - Add a new ithread_schedule() function to do the bulk of the work of
scheduling an interrupt thread to run when needed.  This has the side
  effect of enabling support for entropy gathering from interrupts on
  all architectures.
- Change the software interrupt and x86 and alpha hardware interrupt code
  to use ithread_schedule() for most of their processing when scheduling
  an interrupt to run.
- Remove the pesky Warning message about interrupt threads having entropy
  enabled.  I'm not sure why I put that in there in the first place.
- Add more error checking for parameters and change some cases that
  returned EINVAL to panic on failure instead via KASSERT().
- Instead of doing a documented evil hack of setting the P_NOLOAD flag
  on every interrupt thread whose pri was SWI_CLOCK, set the flag
  explicitly for clk_ithd's proc during start_softintr().
2001-02-20 10:25:29 +00:00
jhb
c48a983569 - Don't call clear_resched() in userret(); instead, clear the resched flag
in mi_switch() just before calling cpu_switch() so that the first switch
  after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of
  cpu_switch(), specifically set the p_oncpu and p_lastcpu members of
  proc in mi_switch(), and handle the sched_lock state change across a
  context switch in mi_switch().
- Since cpu_switch() no longer handles the sched_lock state change, we
  have to setup an initial state for sched_lock in fork_exit() before we
  release it.
2001-02-20 05:26:15 +00:00
bde
02b79b7edd Removed all traces of T_ASTFLT (except for gaps where it was). It became
unused except in dead code when ast() was split off from trap().
2001-02-19 15:47:38 +00:00
bde
1aba14bb3c Changed the aston() family to operate on a specified process instead of
always on curproc.  This is needed to implement signal delivery properly
(see a future log message for kern_sig.c).

Debogotified the definition of aston().  aston() was defined in terms
of signotify() (perhaps because only the latter already operated on
a specified process), but aston() is the primitive.

Similar changes are needed in the ia64 versions of cpu.h and trap.c.
I didn't make them because the ia64 is missing the prerequisite changes
to make astpending and need_resched per-process and those changes are
too large to make without testing.
2001-02-19 04:15:59 +00:00
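An illustrative sketch of the reworked primitives described above: aston() operating on an explicit process, with signotify() defined on top of it. PS_ASTPENDING and p_sflag follow the naming used elsewhere in this log; the real definitions in <machine/cpu.h> may differ in detail.

    /* Illustration only: aston() as the per-process primitive. */
    #define aston(p)        ((p)->p_sflag |= PS_ASTPENDING)   /* post an AST */
    #define astoff(p)       ((p)->p_sflag &= ~PS_ASTPENDING)
    #define signotify(p)    aston(p)     /* signal delivery just posts an AST */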
bde
b00a6a17ee Fixed style bugs in clock.c rev.1.164 and cpu.h rev.1.52-1.53 -- declare
tsc_present in the right places (together with other variables of the
same linkage), and don't use messy ifdefs just to avoid exporting it in
some cases.
2001-02-19 03:00:34 +00:00
markm
add9c4c77a Allow the superuser to prevent all interrupt harvesting on
her system.
2001-02-18 17:47:55 +00:00
asmodai
a667ae6c5c Preceed/preceeding are not English words. Use precede or preceding. 2001-02-18 10:25:42 +00:00
bde
3c1db4d8e4 Fixed disordering in previous commit. "Fixed" a null comment in previous
commit by removing it.
2001-02-17 03:57:38 +00:00
jlemon
c50375af66 Allow debugging output to be controlled on a per-syscall granularity.
Also clean up debugging output in a slightly more uniform fashion.

The default behavior remains the same (all debugging output is turned on).
2001-02-16 16:40:43 +00:00
jlemon
0116e39ef9 Regenerate auto-generated files. 2001-02-16 14:47:24 +00:00
jlemon
ff569e8974 Remove dummy stub functions. 2001-02-16 14:46:16 +00:00
jlemon
f14626c607 Add mount syscall to Linux emulation. Also improve emulation of reboot. 2001-02-16 14:42:11 +00:00
jlemon
00c3549c08 Extend kqueue down to the device layer.
Backwards compatible approach suggested by: peter
2001-02-15 16:34:11 +00:00
ume
eec882b4a9 Correct 2nd argument of getnameinfo(3) to socklen_t.
Reviewed by:	itojun
2001-02-15 10:35:55 +00:00
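For reference, a small userland example using the corrected prototype; salen is the argument that becomes socklen_t.

    /* Resolve a peer address to numeric host/port strings. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>

    static void
    print_peer(const struct sockaddr *sa, socklen_t salen)
    {
            char host[NI_MAXHOST], serv[NI_MAXSERV];

            if (getnameinfo(sa, salen, host, sizeof(host),
                serv, sizeof(serv), NI_NUMERICHOST | NI_NUMERICSERV) == 0)
                    printf("%s:%s\n", host, serv);
    }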
jake
45e3d6a42c Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
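An illustrative sketch of the unified indexing described above. RQ_NQS reflects the 64-queue choice from the commit; RQ_PPQ and the helper below are assumptions made for the example, not the committed runq.h contents.

    #define RQ_NQS  64              /* one unified array of run queues */
    #define RQ_PPQ  4               /* assumed priority levels per queue */

    /* Illustration only: map a priority level to a queue index. */
    static __inline int
    runq_index(int pri)
    {
            return ((pri / RQ_PPQ) % RQ_NQS);
    }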
markm
4251e00dad RIP <machine/lock.h>.
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by:	jhb
2001-02-11 10:44:09 +00:00
jake
16c2f9d1ea Clear the reschedule flag after finding it set in userret(). This
used to be in cpu_switch(), but I don't see any difference in
doing it here.
2001-02-10 20:33:35 +00:00
jhb
102dbb7a6c Re-enable preemption on interrupts. My last commit accidentally reverted
it as I was playing with some other ways of doing kernel preemption.
2001-02-10 02:41:50 +00:00
jhb
9a12d76ae0 - Make astpending and need_resched process attributes rather than CPU
attributes.  This is needed for ASTs to be properly posted in a preemptive
  kernel.  They are backed by two new flags in p_sflag: PS_ASTPENDING and
  PS_NEEDRESCHED.  They are still accessed by their old macros:
  aston(), astoff(), etc.  For completeness, an astpending() macro has been
  added to check for a pending AST, and clear_resched() has been added to
  clear need_resched().
- Rename syscall2() on the x86 back to syscall() to be consistent with
  other architectures.
2001-02-10 02:20:34 +00:00
jhb
600d223734 Add a macro mtx_intr_enable() to alter a spin lock such that interrupts
will be enabled when it is released.
2001-02-10 02:15:18 +00:00
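A hedged usage sketch of the new macro: the lock's saved interrupt state is altered so that the following release re-enables interrupts. sched_lock is used purely as an example lock here.

    mtx_lock_spin(&sched_lock);     /* spin lock held, interrupts disabled */
    mtx_intr_enable(&sched_lock);   /* arrange for interrupts on release */
    mtx_unlock_spin(&sched_lock);   /* released; interrupts are back on */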
jhb
414ce9fa66 Revert the spin mutex for the cy(4) driver.
Requested by:	bde
2001-02-09 22:37:24 +00:00
jhb
80ee63ad5a Catch up to the new swi API. 2001-02-09 18:35:53 +00:00
jhb
8aff61f0e6 - Use a spin mutex instead of COM_LOCK, since COM_LOCK is going away.
The same name from the sio(4) driver was used and an appropriate
  dictionary item added at the top to reduce diffs.
- Catch up to the new swi API.
2001-02-09 17:55:32 +00:00
jhb
156876757c Catch up to changes to inthand_add(). 2001-02-09 17:48:33 +00:00
jhb
53e39c3cee Use the MI ithread helper functions in the x86 interrupt code. 2001-02-09 17:47:44 +00:00
jhb
7b9787bcbc - Catch up to the new swi API changes:
- Use swi_* function names.
  - Use void * to hold cookies to handlers instead of struct intrhand *.
- In sio.c, use 'driver_name' instead of "sio" as the name of the driver
  lock to minimize diffs with cy(4).
2001-02-09 17:46:35 +00:00
jhb
28a4a5944c Move the initialization of the proc lock for proc0 very early into the MD
startup code.
2001-02-09 16:25:16 +00:00
jhb
d1f60507a5 Whoops, remove an obsolete reference to gd_cpu_lockid. 2001-02-09 16:13:57 +00:00
jhb
3a85b764af Remove unused forward_irq counters. 2001-02-09 14:30:03 +00:00
jhb
7fff705af1 Axe gd_cpu_lockid as it is no longer used. 2001-02-09 14:25:22 +00:00
bmilekic
e67bcfcaf3 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks,
and that the spin locks we do have and use heavily (i.e. sched_lock)
do recurse; therefore, in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
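A minimal before/after sketch of the renamed interface described above, assuming one hypothetical MTX_DEF lock (foo_mtx) and one MTX_SPIN lock (sched_lock).

    /* Before: a single entry point, with the lock type as an argument. */
    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);
    mtx_enter(&sched_lock, MTX_SPIN);
    mtx_exit(&sched_lock, MTX_SPIN);

    /* After: the call implies the type; flags move to the _flags wrappers. */
    mtx_lock(&foo_mtx);                         /* MTX_DEF-initialized lock */
    mtx_unlock(&foo_mtx);
    mtx_lock_spin(&sched_lock);                 /* MTX_SPIN-initialized lock */
    mtx_unlock_spin(&sched_lock);
    mtx_lock_flags(&foo_mtx, MTX_QUIET);        /* flag-taking wrapper */
    mtx_unlock_flags(&foo_mtx, MTX_QUIET);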
msmith
fdfd6ebf8b Free the memory we get from devclass_get_devices and device_get_children.
Submitted by:	wpaul
2001-02-08 20:44:49 +00:00
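A short sketch of the pattern this fix enforces: the array handed back by device_get_children() is allocated by newbus and must be freed by the caller (the device_get_children(9) man page directs free(list, M_TEMP)). The dev variable is an assumed device_t used only for illustration.

    device_t *children;
    int nchildren, i;

    if (device_get_children(dev, &children, &nchildren) == 0) {
            for (i = 0; i < nchildren; i++)
                    device_printf(children[i], "child %d\n", i);
            free(children, M_TEMP);     /* the leak being fixed */
    }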
jhb
eeffb1cfff Don't enable interrupts for a kernel breakpoint or trace trap; doing so
negates the explicit disabling of interrupts when entering the
debugger in Debugger().
2001-02-08 00:10:07 +00:00
jhb
910fdf814a When SMPng was first committed, we removed 'cpl' from the interrupt
frame.  Teach ddb about this as there is one less word for it to skip
over when finding a trapframe on the interrupt frame stack.
2001-02-07 22:41:47 +00:00
semenu
e32ff5cf8a Reflect recently added support for SMC9432FTX cards. 2001-02-07 20:18:54 +00:00