Commit Graph

753 Commits

Author SHA1 Message Date
Scott Long
288a05f9ee Fix another mistake in the bus_dmamem_alloc_size() thing
Submitted by:	tmm
2003-01-29 20:36:08 +00:00
Scott Long
c07f24ca0c Fix some more missing dt_ prefixes for dma tag fields. 2003-01-29 17:41:29 +00:00
Scott Long
2cbd991d46 Fix a typo in dt_maxsize from the last commit 2003-01-29 07:28:25 +00:00
Scott Long
5193a34646 Implement bus_dmamem_alloc_size() and bus_dmamem_free_size() as
counterparts to bus_dmamem_alloc() and bus_dmamem_free().  This allows
the caller to specify the size of the allocation instead of it defaulting
to the max_size field of the busdma tag.

This is intended to aid in converting drivers to busdma.  Lots of
hardware cannot understand scatter/gather lists, which forces the
driver to copy the i/o buffers to a single contiguous region
before sending it to the hardware.  Without these new methods, this
would require a new busdma tag for each operation, or a complex
internal allocator/cache for each driver.

Allocations greater than PAGE_SIZE are rounded up to the next
PAGE_SIZE by contigmalloc(), so this is not suitable for multiple
static allocations that would be better served by a single
fixed-length subdivided allocation.

Reviewed by:	jake (sparc64)
2003-01-29 07:25:27 +00:00
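
As an illustration, a minimal usage sketch for the new pair; the prototype is assumed to mirror bus_dmamem_alloc(9) with an extra size argument, which this log does not spell out:

    /* Hypothetical driver fragment; argument order is an assumption. */
    void *vaddr;
    bus_dmamap_t map;
    int error;

    error = bus_dmamem_alloc_size(dmat, &vaddr, BUS_DMA_NOWAIT, &map,
        dma_len);               /* explicit size, not the tag's maxsize */
    if (error != 0)
        return (error);
    /* ... copy the s/g-averse i/o buffer into vaddr, start the i/o ... */
    bus_dmamem_free_size(dmat, vaddr, map, dma_len);
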
Jake Burkholder
52e59d41d8 Enable device zs and device sab by default. 2003-01-27 05:05:52 +00:00
Jake Burkholder
e3c5d56ff4 Fix standard kse breakage of non-x86 platforms.
Pointy hat to:	davidxu
2003-01-26 23:52:45 +00:00
David Xu
0dbb100b9b Move the UPCALL-related data structures out of the kse, introducing a new
data structure called kse_upcall to manage UPCALLs. All KSE binding
and loaning code is gone.

A thread that owns an upcall can collect all completed syscall contexts in
its ksegrp, turn itself into UPCALL mode, and take those contexts back
to userland. Any thread without an upcall structure has to export its
context and exit at the user boundary.

Any thread running in user mode owns an upcall structure. When it enters
the kernel, if the kse mailbox's current thread pointer is not NULL, then
when the thread blocks in the kernel a new UPCALL thread is created and
the upcall structure is transferred to the new UPCALL thread. If the kse
mailbox's current thread pointer is NULL, no UPCALL thread is created when
a thread blocks in the kernel.

Each upcall always has an owner thread. Userland can remove an upcall by
calling kse_exit; when all upcalls in a ksegrp are removed, the group is
automatically shut down. An upcall owner thread also exits when the process
is in the exiting state; when an owner thread exits, the upcall it owns is
also removed.

A KSE is a pure scheduler entity; it represents a virtual CPU. When a
thread is running, it always has a KSE associated with it. The scheduler is
free to assign a KSE to a thread according to thread priority; if a thread's
priority is changed, its KSE can be moved from one thread to another.

When a ksegrp is created, N KSEs are created in the group, where N is the
number of physical CPUs in the current system. This makes it possible for
threads in the kernel to execute on different CPUs in parallel even when the
userland UTS is only single-CPU safe. Userland calls kse_create to add more
upcall structures to the ksegrp to increase concurrency in userland itself;
the kernel is not restricted by the number of upcalls userland provides.

The code hasn't been tested under SMP by the author due to lack of hardware.

Reviewed by: julian
2003-01-26 11:41:35 +00:00
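
To make the ownership rules concrete, a rough sketch of what such an upcall structure could look like; the field names are invented for illustration and are not taken from sys/proc.h:

    #include <sys/queue.h>

    /* Illustrative only: invented fields mirroring the text above. */
    struct kse_upcall {
        TAILQ_ENTRY(kse_upcall) ku_link; /* upcalls in the ksegrp */
        struct ksegrp  *ku_ksegrp;       /* owning group */
        struct thread  *ku_owner;        /* every upcall has an owner */
        void           *ku_mailbox;      /* userland mailbox address */
        int             ku_flags;        /* state bits */
    };
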
Jeff Roberson
c3384118a1 - Introduce the SCHED_ULE and SCHED_4BSD options for compile time selection
of the scheduler.
 - Add SCHED_4BSD as the scheduler for all kernel config files in cvs.
2003-01-26 05:29:12 +00:00
Jake Burkholder
21ecbe78a2 Merge some code paths back together so that we only instantiate 1 copy of
the user tlb fault handlers.
2003-01-26 03:38:30 +00:00
Jake Burkholder
4a3381caae Moved some (gas) macros up so they can be used in more places. 2003-01-24 23:47:46 +00:00
Thomas Moestl
a00f3148b6 Fixes for a number of problems in the IOMMU code:
1.) Fix an off-by-one in the DVMA space handling, which would make it
    possible to allocate one page beyond the end of the DVMA area.
    This page was aliased to the first page. Apparently, this bug was
    responsible for the trashed nvram/eeprom some people were reporting,
    in conjunction with a number of unfortunate coincidences.
2.) Fix broken boundary and lowaddr calculations.
3.) Fix a memory leak on an error path.
4.) Update an outdated comment to reflect the introduction of IOMMU_MAX_PRE,
    make the usage of IOMMU_MAX_PRE more consistent and KASSERT that the
    preallocation size is not 0.
5.) Fix a case where an error return was lost.
6.) When signalling an error to the caller by invoking the callback, do
    not use a segment pointer of NULL, for compatibility with existing
    drivers.

Also, increase the maximum segment number to 64; it is rather arbitrary,
with the exception of the stack space consumed by the segment
array.

Special thanks go to Harti Brandt <brandt@fokus.fraunhofer.de> for
spotting 4 and 5, and testing many iterations of patches.

Pointy hats to:	tmm
2003-01-21 18:22:26 +00:00
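
Item 1 is the classic inclusive-versus-exclusive bound mistake; a generic sketch of the bug class, with invented names (not the actual iommu code):

    /* With 'end' the exclusive end of the DVMA area: */
    fits = (addr >= start && addr + size <= end);              /* correct */
    /* Treating 'end' as inclusive admits one page beyond the area,
     * which then aliases the first page: */
    fits = (addr >= start && addr + size <= end + PAGE_SIZE);  /* bug */
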
Thomas Moestl
d2ccea1588 Fix iommu_dvmamap_sync(): it was still operating as if the BUS_DMASYNC_*
constants were flag bits (as in NetBSD), although they are consecutively
numbered in FreeBSD. This would cause unnecessary flushing in the
BUS_DMASYNC_POSTWRITE case, but was otherwise mostly harmless.
2003-01-21 17:08:22 +00:00
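
The distinction matters for how the op is tested; a short sketch (era-specific, since the constants later became flag bits in FreeBSD as well):

    /* NetBSD-style flag bits would be tested with '&':
     *     if (op & BUS_DMASYNC_POSTWRITE) ...
     * With consecutively numbered constants that test misfires, so the
     * op must be compared for equality instead: */
    if (op == BUS_DMASYNC_POSTWRITE)
        return;         /* nothing to flush in the POSTWRITE case */
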
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
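
A sketch of the resulting malloc(9) convention (struct foo is invented):

    /* After the merge: flags 0 sleeps until memory is available, while
     * M_NOWAIT may fail and therefore must be checked. */
    struct foo *p;

    p = malloc(sizeof(*p), M_DEVBUF, M_NOWAIT);
    if (p == NULL)
        return (ENOMEM);        /* only reachable with M_NOWAIT */
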
Jeff Roberson
04de47b0d3 - Add a VM_WAIT in the appropriate cases where vm_page_alloc() fails and the
   flags indicate that uma_small_alloc should not fail.  This code should be
   refactored so that there is not so much cross-arch duplication.

Reviewed by:	jake
Spotted by:	tmm
Tested on:	alpha, sparc64
Pointy hat to:	jeff and everyone who cut and pasted the bad code. :-)
2003-01-21 05:44:52 +00:00
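
The shape of such a fix is a retry loop; a hedged sketch rather than the committed code:

    /* If the caller may sleep, wait for free pages instead of failing;
     * M_NOWAIT callers still get NULL back immediately. */
    for (;;) {
        m = vm_page_alloc(NULL, 0, pflags);
        if (m != NULL)
            break;
        if (wait & M_NOWAIT)
            return (NULL);
        VM_WAIT;        /* sleep until the VM frees pages */
    }
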
Jake Burkholder
7251b4bf93 Resolve relative relocations in klds before trying to parse the module's
metadata.  This fixes module dependency resolution by the kernel linker on
sparc64, where the relocations for the metadata are different than on other
architectures; the relative offset is in the addend of an Elf_Rela record
instead of the original value of the location being patched.
Also fix printf formats in debug code.

Submitted by:	Hartmut Brandt <brandt@fokus.gmd.de>
PR:		46732
Tested on:	alpha (obrien), i386, sparc64
2003-01-21 02:42:44 +00:00
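
The underlying difference is the standard ELF Rel/Rela one; simplified, for a relative relocation:

    /* Rel-style (e.g. i386): the addend is implicit, read from the
     * location being patched. */
    *where += relocbase;
    /* Rela-style (sparc64): the addend is explicit in the record, so
     * the original contents of the location are never consulted. */
    *where = relocbase + rela->r_addend;
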
Matthew N. Dodd
060daf4376 The abs() function isn't defined locally; include a header file that
defines it.
2003-01-16 08:53:03 +00:00
Matthew Dillon
e3669cee72 Merge all the various copies of vm_fault_quick() into a single
portable copy.
2003-01-16 00:02:21 +00:00
Matthew Dillon
f597900329 Merge all the various copies of vmapbuf() and vunmapbuf() into a single
portable copy.  Note that pmap_extract() must be used instead of
pmap_kextract().

This is precursor work to a reorganization of vmapbuf() to close remaining
user/kernel races (which can lead to a panic).
2003-01-15 23:54:35 +00:00
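
The pmap_extract() requirement follows from vmapbuf() translating user addresses; a one-line sketch of the distinction:

    /* pmap_kextract(va) only understands kernel virtual addresses; a
     * user buffer must go through the owning process's pmap: */
    pa = pmap_extract(vmspace_pmap(curproc->p_vmspace), uva);
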
Matthew N. Dodd
6dc61b5ae5 - GC a few more hand-rolled 'abs' macros.
- GC a few hand-rolled min()/max() macros while I'm here.
2003-01-15 02:15:57 +00:00
David E. O'Brien
497c45b6e1 Enable rl(4). It is now fully working using busdma. 2003-01-13 04:06:38 +00:00
Jake Burkholder
4ee5222bac Don't allow a user process to set an invalid window state through sigreturn.
Spotted by:	tmm
2003-01-10 00:04:56 +00:00
Jake Burkholder
88a8ca8569 Implement bus_space_subregion. 2003-01-08 04:29:00 +00:00
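
For reference, the usual bus_space_subregion(9) pattern; it derives a child handle from an existing mapping rather than creating a new one (the offset and size below are made up):

    bus_space_handle_t sub;

    if (bus_space_subregion(tag, handle, 0x100, 0x20, &sub) == 0)
        reg = bus_space_read_4(tag, sub, 0);
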
Thomas Moestl
fe461235a2 Change the iommu code to be able to handle more than one DVMA area per
map. Use this new feature to implement iommu_dvmamap_load_mbuf() and
iommu_dvmamap_load_uio() functions in terms of a new helper function,
iommu_dvmamap_load_buffer(). Reimplement iommu_dvmamap_load() to
use it, too.
This requires some changes to the map format; in addition to that,
remove unused or redundant members.
Add SBus and Psycho wrappers for the new functions, and make them
available through the respective DMA tags.
2003-01-06 21:59:54 +00:00
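
The usual shape of such a decomposition is to walk the chain and feed each piece to the common loader; the argument list below is invented, since the log does not give the prototype:

    /* Sketch: load each non-empty mbuf through the shared helper. */
    for (m = m0; m != NULL && error == 0; m = m->m_next) {
        if (m->m_len > 0)
            error = iommu_dvmamap_load_buffer(dmat, map,
                m->m_data, m->m_len, flags, segs, &nsegs);
    }
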
Thomas Moestl
702f83a807 - remove the unused parent DMA tag argument from
_nexus_dmamap_load_buffer()
- implement nexus_dmamap_load() in terms of _nexus_dmamap_load_buffer().
  Note that this is untested, as this code is not currently used (but
  might be later for UPA devices).
- move BUS_DMAMAP_NSEGS to bus_private.h
- disable the ecache flushing in nexus_dmamap_sync(); it should not be
  needed, although the docs are not entirely clear on that.
2003-01-06 20:54:07 +00:00
Thomas Moestl
0415010cbd Bump the IOMMU TSB size to 32kB, to match the default size on PCI
systems.
2003-01-06 19:48:31 +00:00
Thomas Moestl
08707f05a0 Prefix the members of struct bus_space_tag and struct bus_dma_tag with
a uniquifier. No functional changes.
2003-01-06 19:43:10 +00:00
Thomas Moestl
7f349cb889 Style and comment fixes, no functional changes. 2003-01-06 17:35:40 +00:00
Thomas Moestl
6cf280f100 Look for the correct method in sparc64_dmamap_load_mbuf() and
sparc64_dmamap_load_uio().
2003-01-06 17:17:26 +00:00
Thomas Moestl
8286df3339 Initialize the cache line size register of all PCI devices in the
initial setup pass.
2003-01-06 17:12:23 +00:00
Thomas Moestl
5b398f68ee Some cleanup:
- move some constants into iommureg.h
- correct some comments
- use KASSERT() in one place instead of rolling our own
- take a sanity check out of #ifdef DIAGNOSTIC
- fix a syntax error in normally #ifdef'ed out debug code
2003-01-06 17:10:07 +00:00
Thomas Moestl
3a68043d39 - remove some outdated comments
- tweak the announce message a bit
- remove '\n's from a few panic() calls
- don't use the DVMA base address the firmware reports; instead, figure
  it out from the appropriate register on Sabres and let the IOMMU code
  choose it on Psychos. This also makes the IOMMU TSB size freely
  selectable.
2003-01-06 16:51:06 +00:00
Thomas Moestl
7bee014426 1.) fix a copy-and-paste-o in a panic() message
2.) pass the requesting child device (instead of the bus one) up when
    handling interrupt resources
3.) remember to mark the resource list entry as unused in
    sbus_release_resource().

Reported by:	scottl (3)
2003-01-06 16:36:05 +00:00
Jake Burkholder
e8237e53da - Reorganize PMAP_STATS to scale a little better.
- Add some more stats for things that are now considered interesting.
2003-01-05 05:30:40 +00:00
Jake Burkholder
3afb575773 Make imgact_elf32.c compile on sparc64.
Obtained from:	ia64
2003-01-05 03:48:55 +00:00
Jake Burkholder
75db2b5e27 Add a driver for the Zilog 8530 dual uart found in Ultra 1s and Ultra 2s.
With a 1 byte transmit fifo, 3 byte receive fifo, and weird multiplexed I/O
designed for a Z80 cpu, this chip redefines suckage.

Based on the openbsd and netbsd drivers.  Only really works as a console,
modem support is not complete since I can't test it.
2003-01-01 19:49:30 +00:00
Jens Schweikhardt
9d5abbddbf Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
Jens Schweikhardt
d64ada501a Fix typos, mostly s/ an / a / where appropriate, and a few s/an/and/.
Add FreeBSD Id tag where missing.
2002-12-30 21:18:15 +00:00
Jake Burkholder
07a312f6b6 Use memset instead of __builtin_memset. Apparently there's an inline
memset in libkern which causes problems; why that's there is beyond me.
2002-12-29 08:37:11 +00:00
Jake Burkholder
bc4ede2030 Use the meaningful mnemonics for ancillary state registers now that gas
is invoked properly to understand them.

	%asr19 -> %gsr
	%asr20 -> %set_softint
	%asr21 -> %clear_softint
2002-12-29 00:23:48 +00:00
Jake Burkholder
418a8ca992 Forgot this file in previous commit. 2002-12-28 23:58:18 +00:00
Jake Burkholder
af13cb9f11 - Defer storing %g1-%g5 in the trapframe until after interrupts are enabled.
- Restore %g6 and %g7 for kernel traps if we are returning to prom code.
  This allows complex traps (ones that call into C code) to be handled from
  the prom.
2002-12-28 23:57:52 +00:00
Jake Burkholder
63100290f3 Pass 0 in %o1 to tl0_trap for all non-interrupt traps. This will be used
to pass the pil when tl0_trap also handles interrupts.
2002-12-28 23:34:21 +00:00
Alan Cox
472c3b6f4f Hold the page queues lock around calls to vm_page_flag_clear() and
vm_page_wakeup().
2002-12-28 21:14:44 +00:00
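
The resulting pattern looks roughly like this (the cleared flag is illustrative):

    vm_page_lock_queues();                 /* serialize flag updates */
    vm_page_flag_clear(m, PG_REFERENCED);  /* flag name illustrative */
    vm_page_wakeup(m);                     /* clears busy, wakes sleepers */
    vm_page_unlock_queues();
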
Alan Cox
fca5d6baba Use VM_ALLOC_WIRED in pmap_pinit(). 2002-12-28 08:10:29 +00:00
Jake Burkholder
7b666b648f Define UMA_MD_SMALL_ALLOC so that uma_small_alloc and uma_small_free will
be used for zones that allocate objects of less than 1 page.  The biggest advantage
of this is that all of a sudden the majority of kernel malloc-ed data doesn't
need kva allocated for it.  Besides microbenchmarks, I haven't seen a measurable
performance improvement from doing this.
2002-12-27 19:31:26 +00:00
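
The reason no kva is needed: with a direct map, a freshly allocated page's physical address converts straight into a usable virtual address. A sketch under that assumption (the conversion macro name is borrowed from the later amd64 direct map, not from sparc64):

    /* Allocate a page and return its direct-mapped address; no kernel
     * virtual address space is consumed. */
    m = vm_page_alloc(NULL, 0, pflags);
    if (m == NULL)
        return (NULL);
    return ((void *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
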
Jake Burkholder
66a6d547ec Teach /dev/kmem about direct mapped addresses.
Note that a better solution for how to make kernacc work for direct mapped
addresses is needed for all platforms that use them.
2002-12-27 19:18:04 +00:00
Jake Burkholder
9ff9f0fb80 Implement uma_small_alloc and uma_small_free. Not yet used. 2002-12-27 03:11:29 +00:00
Jake Burkholder
885f001d98 - Use direct mapped addresses for the message buffer, for the crash dump
mappings, and for pmap_map, which is used to map the vm_page structures.
- Don't allocate kva space for any of the above.
2002-12-27 01:50:29 +00:00
Jake Burkholder
a106c6930a - Change the way the direct mapped region is implemented to be generally
useful for accessing more than 1 page of contiguous physical memory, and
  to use 4mb tlb entries instead of 8k.  This requires that the system only
  use the direct mapped addresses when they have the same virtual colour as
  all other mappings of the same page, instead of being able to choose the
  colour and cacheability of the mapping.
- Adapt the physical page copying and zeroing functions to account for not
  being able to choose the colour or cacheability of the direct mapped
  address.  This adds a lot more cases to handle.  Basically when a page has
  a different colour than its direct mapped address we have a choice between
  bypassing the data cache and using physical addresses directly, which
  requires a cache flush, or mapping it at the right colour, which requires
  a tlb flush.  For now we choose to map the page and do the tlb flush.

This will allow the direct mapped addresses to be used for more things
that don't require normal pmap handling, including mapping the vm_page
structures, the message buffer, and temporary mappings for crash dumps; it
will also provide greater benefit for implementing uma_small_alloc, due to
the much greater tlb coverage.
2002-12-23 23:39:57 +00:00
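
A sketch of the colour constraint described above, with invented names: a virtual address has the same colour as a page when it indexes the same set of cache lines, which reduces to page-number bits modulo the number of colours.

    #define DCACHE_COLORS   2       /* illustrative; machine dependent */

    /* A direct mapped address is only safe to use when its colour
     * matches the colour of the page's other mappings: */
    static int
    same_colour(vm_offset_t va, int page_colour)
    {
        return (((va >> PAGE_SHIFT) % DCACHE_COLORS) == page_colour);
    }
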
Jake Burkholder
dcedf26597 - Fix a bug where the faulting address for an mmu miss could sometimes be
clobbered due to some debug code.  This was harmless and just caused
  superfluous soft faults.
- Update some comments.
2002-12-23 02:18:45 +00:00