Commit Graph

158 Commits

Author SHA1 Message Date
Olivier Houchard
99cf590376 We want to ignore BUS_DMASYNC_POSTWRITE, not BUS_DMASYNC_POSTREAD.
Spotted by:	mux
Pointy hat to:	cognet
2004-10-21 11:59:33 +00:00
Olivier Houchard
9c2ac80375 Use a default MD_ROOT_SIZE of 65535. 2004-10-11 14:42:06 +00:00
Olivier Houchard
ed4dc69883 Use MD_ROOT_SIZE, instead of our own macro. 2004-10-11 14:41:38 +00:00
Olivier Houchard
74e9b5ed3b Add optimized versions of the bswap macros for constants if __OPTIMIZE__ is
defined.
2004-10-01 16:55:59 +00:00
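A minimal sketch of the constant-folding idea described in the commit above, assuming GCC's __builtin_constant_p; the names and exact expressions are illustrative, not the committed <machine/endian.h> code:

#include <stdint.h>

/* Runtime fallback (hypothetical name). */
static __inline uint16_t
__bswap16_var(uint16_t x)
{
	return ((uint16_t)((x >> 8) | (x << 8)));
}

/* Pure-expression form the compiler can fold when x is a constant. */
#define	__bswap16_const(x)						\
	((uint16_t)((((x) & 0xff00U) >> 8) | (((x) & 0x00ffU) << 8)))

/* With optimization, pick the constant form when the argument is known
 * at compile time; otherwise call the runtime version. */
#if defined(__OPTIMIZE__)
#define	bswap16(x)							\
	(__builtin_constant_p(x) ? __bswap16_const(x) : __bswap16_var(x))
#else
#define	bswap16(x)	__bswap16_var(x)
#endif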
Olivier Houchard
d60ea0a816 There's no need to turn on MALLOC_PROFILE by default. 2004-10-01 16:51:37 +00:00
Olivier Houchard
77ee40aac5 Calling fuword from fuword32 with bl and without returning afterwards is really
a bad idea.
Is there any way I can get a customized CVS template with "Pointy hat to:	cognet"
pre-filled?
2004-09-28 14:39:26 +00:00
Olivier Houchard
022fb84224 Always invalidate the whole data cache in pmap_enter() for now, even though
it should not be needed.
2004-09-28 14:38:14 +00:00
Olivier Houchard
e462e1ba03 Remove dead code. 2004-09-28 14:37:39 +00:00
Olivier Houchard
4b7d15c6dc Add the config file for the IQ31244 board. 2004-09-23 22:55:00 +00:00
Olivier Houchard
f67baa4d6b Use the new KERNVIRTADDR and PHYSADDR options.
Add KDB.
2004-09-23 22:53:50 +00:00
Olivier Houchard
6052fa47a9 Import partial support for the IQ31244 eval board (i80321 CPU). The IQ80321 might
work out of the box too, but I have no hardware to test it.
It works well enough to go multiuser. Network works, SATA does not, as I have
no drive to test with.
Thanks to Intel for sending such a board.

Obtained from:  NetBSD
2004-09-23 22:45:36 +00:00
Olivier Houchard
3ce77c124e Add Xscale common headers. 2004-09-23 22:36:13 +00:00
Olivier Houchard
906ce37658 Big cleanup: get rid of the whole spl level logic, as FreeBSD doesn't use
it anymore.
2004-09-23 22:33:38 +00:00
Olivier Houchard
8413603da8 Now that we have pmap_growkernel(), use more KVA. 2004-09-23 22:32:33 +00:00
Olivier Houchard
1f5f31b4ec Remove the empty definition of struct osigcontext, as it will never be used. 2004-09-23 22:31:49 +00:00
Olivier Houchard
7ea7271711 Remove the pcb32_cstate field of struct pcb. 2004-09-23 22:31:08 +00:00
Olivier Houchard
f04d49ad11 Declare sigcode and szsigcode. 2004-09-23 22:30:05 +00:00
Olivier Houchard
9f0f6bf453 Define VM_PROT_READ_IS_EXEC. 2004-09-23 22:29:43 +00:00
Olivier Houchard
ffa589bf15 Implement _mcount().
Obtained from:	NetBSD
2004-09-23 22:29:18 +00:00
Olivier Houchard
c038ee8196 Define STACKALIGNBYTES and STACKALIGN. 2004-09-23 22:27:42 +00:00
Olivier Houchard
a40d2bb653 We are using _mcount, not __mcount.
Remove the !__ELF__ case.
2004-09-23 22:26:29 +00:00
Olivier Houchard
8476fd9ff7 Use sf_bufs for uiomove_fromphys(). 2004-09-23 22:25:20 +00:00
Olivier Houchard
04aebdab36 On Xscale, use the minicache for the kernel stack. 2004-09-23 22:24:12 +00:00
Olivier Houchard
9979f39280 Make sure to call cred_update_thread() if needed.
Add partial support for KTRACE.
2004-09-23 22:22:33 +00:00
Olivier Houchard
8be9ab9730 Implement cpu_throw().
Obtained from:	NetBSD
2004-09-23 22:20:59 +00:00
Olivier Houchard
01997784aa Remove unused macros.
Add user, btrap, etrap, bintr and eintr in the GPROF case.
2004-09-23 22:18:56 +00:00
Olivier Houchard
0627741cbf Implement sigreturn(). 2004-09-23 22:12:28 +00:00
Olivier Houchard
f0c85e996a Add the hw.machine sysctl. 2004-09-23 22:11:43 +00:00
Olivier Houchard
a7e3e43349 Remove definitions related to the pmap cache state, and add TDF_NEEDRESCHED. 2004-09-23 22:11:06 +00:00
Olivier Houchard
7c320e5bfb Add new functions to find out which IRQs are pending, and to mask and unmask
interrupts, as these are CPU specific.
If the interrupt handler is not marked as INTR_FAST, don't unmask the
interrupt until it has been serviced.
2004-09-23 22:09:57 +00:00
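A rough sketch of the masking policy described in the commit above; the routine and structure names are hypothetical stand-ins, not the committed ARM interrupt code:

/* Hypothetical per-IRQ handler record. */
struct irq_handler {
	int	flags;			/* e.g. INTR_FAST */
	void	(*func)(void *);
	void	*arg;
};

extern void cpu_mask_irq(int);		/* CPU-specific mask routine */
extern void cpu_unmask_irq(int);	/* CPU-specific unmask routine */
extern void ithread_schedule(int);	/* defer to an interrupt thread */
#define	INTR_FAST	0x01		/* placeholder flag value */

void
irq_dispatch(int irq, struct irq_handler *ih)
{
	cpu_mask_irq(irq);
	if (ih->flags & INTR_FAST) {
		ih->func(ih->arg);	/* fast handler runs right here... */
		cpu_unmask_irq(irq);	/* ...so it can be unmasked at once */
	} else {
		/* Threaded handler: the source stays masked until the
		 * interrupt thread has actually serviced it, and only
		 * then does the thread call cpu_unmask_irq(irq). */
		ithread_schedule(irq);
	}
}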
Olivier Houchard
1e82631893 Rename macros, as we don't need to mess with alignment faults.
Call ast() if TDF_NEEDRESCHED is set too, not just TDF_ASTPENDING.
2004-09-23 22:05:40 +00:00
Olivier Houchard
289d61042d Use sigcode. 2004-09-23 22:03:25 +00:00
Olivier Houchard
282c3a6588 In db_stack_trace_cmd, remove the "pc" variable, we don't need it. 2004-09-23 22:02:59 +00:00
Olivier Houchard
3f0cbe0ef6 Use the right path for xscale files. 2004-09-23 21:59:43 +00:00
Olivier Houchard
a5bb1c8501 Remove bus_space_vaddr(), it does not exist in FreeBSD. 2004-09-23 21:59:14 +00:00
Olivier Houchard
4637f47217 Don't attempt to manage our own segment list, and just remember the buffers
provided.

Obtained from:	NetBSD
2004-09-23 21:57:47 +00:00
Olivier Houchard
f68fab42ef Use the right path for the bcopyinout_xscale.S file. 2004-09-23 21:56:36 +00:00
Olivier Houchard
371853e562 Add MD syscalls to sync the icache and to drain the write buffer.
Obtained from:	NetBSD
2004-09-23 21:56:01 +00:00
Olivier Houchard
8e90166a08 Implement pmap_growkernel() and pmap_extract_and_hold().
Remove the cache state logic: right now, it causes more problems than it
solves.
Add helper functions for mapping devices while bootstrapping.
Reorganize the code a bit, and remove dead code.

Obtained from:	NetBSD (partially)
2004-09-23 21:54:25 +00:00
Olivier Houchard
6f358f0045 Map the kernel very early if needed.
Implement sigcode.
2004-09-23 21:49:10 +00:00
John Baldwin
76764432e4 - Add support for "paging" in stack trace output. That is, when you do
  a stack trace from ddb, the output will pause with a '--More--' prompt
  every 18 lines.  If you hit Enter, it will print another line and prompt
  again.  If you hit space it will output another page and then prompt.
  If you hit 'q' or 'x' it will abort the rest of the stack trace.
- Fix the sparc64 userland stack trace to honor the total count of lines
  to print.  This is useful if your trace happens to walk back onto
  0xdeadc0de and gets stuck in an endless loop.

MFC after:	1 month
Tested on:	i386, alpha, sparc64
2004-09-20 19:05:32 +00:00
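A simplified sketch of the paging behaviour described in the commit above, with a hypothetical pager routine standing in for ddb's actual console code:

#include <stdio.h>

static int lines_left = 18;		/* lines remaining on the current "page" */

/* Call after printing each line of the trace; returns nonzero to abort. */
static int
pager_checkpoint(void)
{
	int c;

	if (--lines_left > 0)
		return (0);
	printf("--More--");
	c = getchar();			/* stands in for ddb's key input */
	if (c == 'q' || c == 'x')
		return (1);		/* abort the rest of the stack trace */
	if (c == ' ')
		lines_left = 18;	/* space: print another full page */
	else
		lines_left = 1;		/* Enter (or anything else): one more line */
	return (0);
}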
Scott Long
50736a153b Fix a problem with tag->boundary inheritance that has existed since day one
and was propagated to nearly every platform.  The boundary of the child needs
to consider the boundary of the parent and pick the minimum of the two, not
the maximum.  However, if either is 0 then pick the appropriate one.
This bug was exposed by a recent change to ATA, which should now be fixed by
this change.  The alignment and maxsegsz tag attributes likely also need
a similar review in the near future.

This is a MT5 candidate.

Reviewed by: marcel
Submitted by: sos (in part)
2004-09-08 04:54:19 +00:00
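The inheritance rule described in the commit above, as a small sketch; bus_size_t is stubbed out and the helper name is made up, so treat this as an illustration rather than the committed busdma diff:

#include <stdint.h>

typedef uint64_t bus_size_t;		/* placeholder for the real type */

/* A boundary of 0 means "no restriction", so only take the minimum when
 * both parent and child actually impose one. */
static bus_size_t
inherit_boundary(bus_size_t parent, bus_size_t child)
{
	if (parent == 0)
		return (child);
	if (child == 0)
		return (parent);
	return (parent < child ? parent : child);
}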
Julian Elischer
ed062c8d66 Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per thread scheduler
private data" structure. In order to not make the diffs too great
one is #defined as the other at this time.

The KSE (or td_sched) structure is  now allocated per thread and has no
allocation code of its own.

Concurrency for a KSEGRP is now kept track of via a simple pair of counters
rather than using KSE structures as tokens.

Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.

The fields in the ksegrp structure that have to do with the scheduler's
queueing mechanisms have been moved to the kg_sched structure
(the per-ksegrp scheduler private data structure). In other words, how the
scheduler queues and keeps track of threads is no one's business except
the scheduler's. This should allow people to write experimental
schedulers with completely different internal structuring.

A scheduler call sched_set_concurrency(kg, N) has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be allowed to be concurrently scheduled. This is also
used to enforce 'fairness' at this time, so that a ksegrp with
10000 threads cannot swamp the run queue and force out a process
with 1 thread, since the current code will not set the concurrency above
NCPU, and both schedulers will not allow more than that many
onto the system run queue at a time. Each scheduler should eventually develop
its own methods to do this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as
libkse for scope system threads. This has slightly hurt libthr's performance
but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly.
exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.
Reviewed by:	scottl, peter
MFC after:	1 week
2004-09-05 02:09:54 +00:00
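A rough sketch of the counter-based concurrency bookkeeping mentioned above; the structure and field names here are hypothetical, not the actual kg_sched layout:

/* Per-ksegrp scheduler bookkeeping: a limit and a count of unused slots
 * replace the old scheme of handing out KSE structures as tokens. */
struct kg_sched_sketch {
	int	skg_concurrency;	/* max threads scheduled concurrently */
	int	skg_avail;		/* slots not currently in use */
};

/* Raising or lowering the limit only adjusts the counters. */
static void
sched_set_concurrency_sketch(struct kg_sched_sketch *kg, int n)
{
	kg->skg_avail += n - kg->skg_concurrency;
	kg->skg_concurrency = n;
}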
Marcel Moolenaar
0f2fe153bc Move the kernel-specific logic to adjust frompc from MI to MD, for
two reasons:
1. On ia64 a function pointer does not hold the address of the first
   instruction of a function's implementation. It holds the address
   of a function descriptor. Hence the user(), btrap(), eintr() and
   bintr() prototypes are wrong for getting the actual code address.
2. The logic forces interrupt, trap and exception entry points to
   be laid out contiguously. This cannot be achieved on ia64 and is
   generally just bad programming.

The MCOUNT_FROMPC_USER macro is used to set the frompc argument to
some kernel address which represents any frompc that falls outside
the kernel text range. The macro can expand to ~0U to bail out in
that case.
The MCOUNT_FROMPC_INTR macro is used to set the frompc argument to
some kernel address to represent a call to a trap or interrupt
handler. This avoids making the trap or interrupt handler appear to
be called from everywhere in the call graph. The macro can expand
to ~0U to prevent adjusting frompc. Note that the argument is selfpc,
not frompc.

This commit defines the macros on all architectures equivalently to
the original code in sys/libkern/mcount.c. People can take it from
here...

Compile-tested on: alpha, amd64, i386, ia64 and sparc64
Boot-tested on: i386
2004-08-27 19:42:35 +00:00
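As a hedged illustration of the two hooks described above: the real definitions live in each architecture's <machine/profile.h>; the entry-point labels are the ones the commit mentions, while uintfptr_t is stubbed here, so this is only a sketch of the logic:

typedef unsigned long uintfptr_t;		/* placeholder for the real type */
extern char user[], btrap[], eintr[], bintr[];	/* entry-point labels */

/* mcount() calls this once frompc is known to be outside the kernel text:
 * attribute the call to user(); expanding to ~0U would bail out instead. */
#define	MCOUNT_FROMPC_USER(pc)	((void)(pc), (uintfptr_t)user)

/* mcount() passes selfpc here: a selfpc inside the trap/interrupt entry
 * block is attributed to one representative address, otherwise ~0U means
 * "leave frompc alone". */
#define	MCOUNT_FROMPC_INTR(pc)						\
	((pc) >= (uintfptr_t)btrap && (pc) < (uintfptr_t)eintr ?	\
	    (uintfptr_t)btrap : ~0U)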
Marcel Moolenaar
4da47b2fec Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine specific notes. On ia64 we use this
function to flush the dirty registers onto the backingstore before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
Alan Cox
6306df6b89 Add a comment describing pmap_extract_and_hold() noting that the protection
check still needs implementation on arm.
2004-08-10 21:43:40 +00:00
Olivier Houchard
0f1f0c5d75 Use the new prototype for the zone constructor. 2004-08-06 22:32:53 +00:00
Alan Cox
684a62b7bf - Push down the acquisition and release of Giant into pmap_enter_quick()
on those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant in vm_map_pmap_enter().
2004-08-04 22:03:16 +00:00
Maxime Henrion
9f1b87f106 Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
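A hedged sketch of what the macro amounts to; the i386/amd64 branch follows the commit's description (ia32_pause() emits the PAUSE instruction), the rest is illustrative:

#if defined(__i386__) || defined(__amd64__)
#define	cpu_spinwait()	ia32_pause()	/* CPU hint: we are busy-waiting */
#else
#define	cpu_spinwait()			/* expands to nothing elsewhere */
#endif

/* Typical use in a busy-wait loop: */
void
spin_until_set(volatile int *flag)
{
	while (*flag == 0)
		cpu_spinwait();
}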
Olivier Houchard
9b60f79d2a *blush*
Fix htonl and htons.
2004-08-02 12:24:18 +00:00