Commit Graph

380 Commits

Alexander Kabaev
0eda4c08a5 Style fixes.
Remove DBL_DIG, DBL_MIN, DBL_MAX and their FLT_ counterparts; they
have been marked for deprecation since at least SUSv1.
Only define ULLONG_MIN/MAX and LLONG_MAX if long long type is
supported.
Restore a lost comment in MI _limits.h file and remove it from
sys/limits.h where it does not belong.
2003-05-04 22:13:04 +00:00
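
A minimal sketch of the conditional definitions described above; the
__LONG_LONG_SUPPORTED guard and the __LLONG_*/__ULLONG_* spellings from
machine/_limits.h are assumptions for illustration, not quotes of the commit:

#ifdef __LONG_LONG_SUPPORTED
#define LLONG_MAX   __LLONG_MAX     /* max value for a long long */
#define LLONG_MIN   __LLONG_MIN     /* min value for a long long */
#define ULLONG_MAX  __ULLONG_MAX    /* max value for an unsigned long long */
#endif
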
Peter Wemm
7f47668191 Slight reorg and added AMD64 support. A couple of the MODINFOMD_* values
that were added to sparc64 and later powerpc really should have been in
the MI area.  But changing that now with insufficient preparation will
just cause too much pain.

Move MD_FETCH() to the MI sys/linker.h file to avoid another two copies
of it.
2003-05-01 03:31:18 +00:00
Jake Burkholder
2d5d213f82 Allow fast instruction and data access mmu miss traps to be handled by
user trap handlers.
2003-04-29 21:30:59 +00:00
Alexander Kabaev
104a9b7e3e Deprecate machine/limits.h in favor of new sys/limits.h.
Change all in-tree consumers to include <sys/limits.h>

Discussed on:	standards@
Partially submitted by: Craig Rodrigues <rodrigc@attbi.com>
2003-04-29 13:36:06 +00:00
Jake Burkholder
9283d2bf2a - Fix placement of cvs ids in previous commit to match .S files in libc.
- gcc uses 32 byte alignment for functions regardless of profiling, so
  follow suit.
2003-04-29 00:37:41 +00:00
David E. O'Brien
4bee32ae9c I was wrong: the ENTRY bits in asm.h did have a purpose -- for userland.
Restore the bits and remove them from asmacros.h.  *.S will now be asm.h
consumers.

Approved by:	jake
2003-04-26 20:54:45 +00:00
David E. O'Brien
ef7b1f2f0f The ENTRY bits were in two places. Remove the one not used (asm.h), but
preserve the nice comment by adding it to asmacros.h.
2003-04-26 17:17:45 +00:00
David E. O'Brien
9cd8976ea9 Two tokens that don't together form a valid preprocessor token cannot be
pasted together using ANSI-C token concatenation.  GCC's cpp, at least,
produces the desired result w/o using "##".
2003-04-26 17:00:10 +00:00
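
A small standalone illustration of the rule this commit relies on: "##" may
only paste two tokens that form a single valid preprocessing token; otherwise
plain adjacency is the portable spelling (the macro name here is made up):

#include <stdio.h>

/* Valid: "x" pasted with "1" forms the single identifier x1. */
#define MAKE_VAR(n) x ## n

/*
 * By contrast, pasting something like "." with an identifier does not form
 * one valid preprocessing token, so the behaviour is undefined; writing the
 * tokens next to each other without "##" gives the intended output.
 */

int
main(void)
{
    int MAKE_VAR(1) = 42;   /* expands to: int x1 = 42; */

    printf("%d\n", x1);
    return (0);
}
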
Alexander Kabaev
6fd839f9c7 Add a new sys/limits.h file which in turn depends on machine/_limits.h
to get actual constant values. This is in preparation for machine/limits.h
retirement.

Discussed on:	standards@
Submitted by:	Craig Rodrigues <rodrigc@attbi.com>  (*)
Modified by:	kan
2003-04-23 21:41:59 +00:00
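
A sketch of the layering being introduced here; the exact constant names in
machine/_limits.h are assumptions for illustration:

/* machine/_limits.h: MD values, kept in the implementation namespace. */
#define __INT_MAX   0x7fffffff              /* max value for an int */
#define __LONG_MAX  0x7fffffffffffffffL     /* max for a 64-bit long */

/* sys/limits.h: MI names defined in terms of the MD constants. */
#include <machine/_limits.h>

#define INT_MAX     __INT_MAX
#define LONG_MAX    __LONG_MAX
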
Jake Burkholder
50e24eb628 - Move the routine for flushing all user mappings from the tlb from pmap to
the cpu dependent files.  It will need to be done differently for USIII.
- Simplify the logic for detecting context rollovers.  Instead of dealing
  with it when the next context switch would cause the context numbers to
  rollover, deal with it when they actually do rollover.
- Move some things around in cpu_switch so that we only do 1 membar #Sync
  when switching address space, instead of 2.
- Detect kernel threads by comparing the new vm space to vmspace0, instead
  of checking if the tlb context is 0.
- Removed some debug code.
2003-04-13 21:54:58 +00:00
Maxime Henrion
7a648f56cf I deserve a big pointy hat for having missed all those references
to bus_dmasync_op_t in my last commit.
2003-04-10 23:50:06 +00:00
Maxime Henrion
141bacb048 Change the operation parameter of bus_dmamap_sync() from an
enum to an int and redefine the BUS_DMASYNC_* constants as
flags.  This allows us to specify several operations in one
call to bus_dmamap_sync() as in NetBSD.
2003-04-10 23:03:33 +00:00
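
Since the BUS_DMASYNC_* constants are now flags, a driver can request several
operations in one call, roughly like this (the softc and function are
hypothetical):

#include <sys/param.h>
#include <machine/bus.h>

/* A hypothetical driver softc holding its DMA tag and map. */
struct example_softc {
    bus_dma_tag_t   sc_dtag;
    bus_dmamap_t    sc_dmap;
};

static void
example_sync_before_io(struct example_softc *sc)
{
    /* One call covers a buffer the device will both read and write. */
    bus_dmamap_sync(sc->sc_dtag, sc->sc_dmap,
        BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
}
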
Jake Burkholder
58d7ebfa7c Use vm_paddr_t for physical addresses. 2003-04-08 06:35:09 +00:00
Jake Burkholder
c81c0cf196 Make the pmap stats writeable. It can be useful to clear them. 2003-04-06 18:17:31 +00:00
Jake Burkholder
92fed30a07 Use the vis block copy/zero functions for pmap_copy_page and pmap_zero_page.
These are called through function pointers so that different implementations
can be provided for cheetah, where the block load instructions may or may
not be a win, and so they can be disabled with the machdep.use_vis tunable.
In terms of raw bandwidth the integer versions are faster, but not allocating
lines in the L2 cache for useless data gives a measurable improvement in user
time for the benchmarks I tested (mostly buildworld with -j8).

As far as I can tell the instructions used are implemented on everything
back to UltraSPARC I, so there should not be a problem with different cpu
types.
2003-04-06 17:05:26 +00:00
Jake Burkholder
6412c65cf0 Add optimized block copy and zero functions using vis instructions, which
can do 64 bytes at a time and don't allocate lines in the L2 cache.  These
assume that everything is 64 byte aligned, and that there's more than 128
bytes of data (best for whole pages).  The block load and store instructions
don't follow normal memory ordering rules and require either a memory barrier
or move between registers before the data can actually be used.  This
implementation correctly shuffles around 3 out of the 4 sets of registers
in order to avoid memory barriers except for the last 2 blocks.
2003-04-03 18:43:40 +00:00
Jake Burkholder
7dafcb6914 - Add space for kernel floating point registers to the pcb. These will be
used to support block copy and zero operations in the kernel which use the
  floating point registers.
- While I'm changing the size, improve the layout of struct pcb, sort by size,
  then alphabetical etc.
- Add some assertions to validate assumptions made about how the pcb is
  allocated.
2003-04-03 18:28:03 +00:00
Jake Burkholder
73adf5691f - Add a flags field to struct pcb. Use this to keep track of whether or
not the pcb has floating point registers saved in it.
- Implement get_mcontext and set_mcontext.
2003-04-01 04:58:50 +00:00
Jake Burkholder
f217a77ce4 - Rename pcb_fpstate to pcb_ufp (user floating point), and change it to
a simple array of 64 ints.
- Use a critical section when saving floating point state in cpu_fork
  instead of sched_lock.
2003-04-01 04:02:45 +00:00
Jake Burkholder
e50173aeaa Rename pcb_fp to pcb_sp, so as to not be confused with floating point
state.
2003-04-01 03:05:46 +00:00
Jake Burkholder
0f0dfee4d5 Handle the fictitious pages created by the device pager. For fictitious
pages which represent actual physical memory we must strip off the fake
page in order to allow illegal aliases to be detected.  Otherwise we map it
uncacheable in the virtual and physical caches and set the side effect bit,
as is required for mapping device memory.

This fixes gstat on sparc64, which wants to mmap kernel memory through a
character device.
2003-03-27 02:16:31 +00:00
Jake Burkholder
227f9a1c58 - Add vm_paddr_t, a physical address type. This is required for systems
where physical addresses are larger than virtual addresses, such as i386s
  with PAE.
- Use this to represent physical addresses in the MI vm system and in the
  i386 pmap code.  This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
  detection code, and due to kvtop returning vm_paddr_t instead of u_long.

Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.

Sponsored by:	DARPA, Network Associates Laboratories
Discussed with:	re, phk (cdevsw change)
2003-03-25 00:07:06 +00:00
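
One of the printf format fixes mentioned above amounts to casting through
uintmax_t so addresses above 4G print correctly regardless of how wide
vm_paddr_t is on a given platform (the function and variable names are
illustrative):

#include <sys/param.h>
#include <sys/systm.h>

static void
report_phys_range(vm_paddr_t start, vm_paddr_t end)
{
    /* vm_paddr_t may be wider than u_long, so avoid assuming %lx fits. */
    printf("physical memory: 0x%jx - 0x%jx\n",
        (uintmax_t)start, (uintmax_t)end);
}
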
Jake Burkholder
00aabd830d - Remove unused cache flushing routines. These will not necessarily work
on future UltraSPARC cpus for which the data cache is not direct mapped.
- Move UltraSPARC I and II (spitfire, blackbird, sapphire, sabre) specific
  functions to spitfire.c, and add cheetah.c for UltraSPARC III specific
  functions.  Initially just cache flushing, but there are a few other
  functions that will need to move here.
- Add an ipi handler for data cache flushing on UltraSPARC III.
- Use function pointers to select the right cache flushing functions based
  on cpu_impl.

With this it is possible to boot single user from an mfs root on UltraSPARC
III systems, including spinning up secondary processors.  There is currently
no support for the host to pci bridge, and no documentation for it is
publicly available.

Thanks to Oleg Derevenetz for providing access to a system with UltraSPARC
III+ cpus.
2003-03-19 06:55:37 +00:00
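
A sketch of the function-pointer selection this commit describes; the symbol
names and the implementation-number test are assumptions modelled on the
description, not the actual code:

#include <sys/param.h>

/* Illustrative value; the real number comes from the version register. */
#define CPU_IMPL_ULTRASPARCIII  0x14

typedef void cache_flush_t(vm_paddr_t pa);

/* Per-implementation flush routines (declarations only, names assumed). */
cache_flush_t spitfire_dcache_page_inval;
cache_flush_t cheetah_dcache_page_inval;

/* Chosen once at boot; all callers go through the pointer. */
static cache_flush_t *dcache_page_inval;

static void
cache_init(u_int impl)
{
    if (impl >= CPU_IMPL_ULTRASPARCIII)
        dcache_page_inval = cheetah_dcache_page_inval;
    else
        dcache_page_inval = spitfire_dcache_page_inval;
}
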
Jake Burkholder
eb51ffcb6d Remove unused fields. 2003-03-18 08:15:24 +00:00
Jake Burkholder
5501d40bb9 Made the prototypes for pmap_kenter and pmap_kremove MD. These functions
are machine dependent because they are not required to update the tlb when
mappings are added or removed, and doing so is machine dependent.
In addition, an implementation may require that pages mapped with pmap_kenter
have a backing vm_page_t, which is not necessarily true of all physical
pages, and so may choose to pass the vm_page_t to pmap_kenter instead of the
physical address in order to make this requirement clear.
2003-03-16 04:16:03 +00:00
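
The two prototype shapes the commit contrasts, shown as excerpts from two
different hypothetical machine/pmap.h files:

/* A port that takes a raw physical address: */
void    pmap_kenter(vm_offset_t va, vm_paddr_t pa);
void    pmap_kremove(vm_offset_t va);

/* A port that instead requires a backing vm_page_t, as described above: */
void    pmap_kenter(vm_offset_t va, vm_page_t m);
void    pmap_kremove(vm_offset_t va);
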
Maxime Henrion
f6c912dd0c Correctly set BUS_SPACE_MAXSIZE in all the busdma backends.
It was bogusly set to 64 * 1024 or 128 * 1024 because it was
bogusly reused in the BUS_DMAMAP_NSEGS definition.
2003-02-26 02:16:06 +00:00
David E. O'Brien
0f7d7a85f0 Make the 'a' parameter of bus_space_write_multi_stream_*() a const pointer. 2003-02-24 00:11:15 +00:00
David E. O'Brien
437ce69ead The rest of our platforms make bus_space_write_multi_stream_2's 'a' a
const pointer.
2003-02-23 20:42:32 +00:00
David E. O'Brien
2ae86927a6 Add an empty bus_space_unmap() like Alpha has. puc(4) uses it. 2003-02-23 19:54:16 +00:00
Mike Barcroft
8cf5ed5125 Implement fpclassify():
o Add a MD header private to libc called _fpmath.h; this header
  contains bitfield layouts of MD floating-point types.
o Add a MI header private to libc called fpmath.h; this header
  contains bitfield layouts of MI floating-point types.
o Add private libc variables to lib/libc/$arch/gen/infinity.c for
  storing NaN values.
o Add __double_t and __float_t to <machine/_types.h>, and provide
  double_t and float_t typedefs in <math.h>.
o Add some C99 manifest constants (FP_ILOGB0, FP_ILOGBNAN, HUGE_VALF,
  HUGE_VALL, INFINITY, NAN, and return values for fpclassify()) to
  <math.h> and others (FLT_EVAL_METHOD, DECIMAL_DIG) to <float.h> via
  <machine/float.h>.
o Add C99 macro fpclassify() which calls __fpclassify{d,f,l}() based
  on the size of its argument.  __fpclassifyl() is never called on
  alpha because (sizeof(long double) == sizeof(double)), which is good
  since __fpclassifyl() can't deal with such a small `long double'.

This was developed by David Schultz and myself with input from bde and
fenner.

PR:		23103
Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
		(significant portions)
Reviewed by:	bde, fenner (earlier versions)
2003-02-08 20:37:55 +00:00
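
The size-based dispatch in the last bullet above looks roughly like this
sketch of the <math.h> macro (not a verbatim quote of the header):

#define fpclassify(x) \
    ((sizeof (x) == sizeof (float)) ? __fpclassifyf(x) \
    : (sizeof (x) == sizeof (double)) ? __fpclassifyd(x) \
    : __fpclassifyl(x))

/* Typical use: if (fpclassify(val) == FP_NAN) ... */
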
Scott Long
288a05f9ee Fix another mistake in the bus_dmamem_alloc_size() thing
Submitted by:	tmm
2003-01-29 20:36:08 +00:00
Scott Long
c07f24ca0c Fix some more missing dt_ prefixes for dma tag fields. 2003-01-29 17:41:29 +00:00
Scott Long
5193a34646 Implement bus_dmamem_alloc_size() and bus_dmamem_free_size() as
counterparts to bus_dmamem_alloc() and bus_dmamem_free().  This allows
the caller to specify the size of the allocation instead of it defaulting
to the max_size field of the busdma tag.

This is intended to aid in converting drivers to busdma.  Lots of
hardware cannot understand scatter/gather lists, which forces the
driver to copy the i/o buffers to a single contiguous region
before sending it to the hardware.  Without these new methods, this
would require a new busdma tag for each operation, or a complex
internal allocator/cache for each driver.

Allocations greater than PAGE_SIZE are rounded up to the next
PAGE_SIZE by contigmalloc(), so this is not suitable for multiple
static allocations that would be better served by a single
fixed-length subdivided allocation.

Reviewed by:	jake (sparc64)
2003-01-29 07:25:27 +00:00
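
A rough usage sketch of the intent described above; the exact argument order
of bus_dmamem_alloc_size() and its free counterpart is an assumption here,
the point being only that the size is supplied per call instead of coming
from the tag's maxsize:

#include <sys/param.h>
#include <machine/bus.h>

static int
alloc_bounce_buffer(bus_dma_tag_t dtag, bus_size_t size, void **vaddrp,
    bus_dmamap_t *mapp)
{
    int error;

    /* Hypothetical call shape: bus_dmamem_alloc() plus an explicit size. */
    error = bus_dmamem_alloc_size(dtag, vaddrp, BUS_DMA_NOWAIT, mapp, size);
    if (error != 0)
        return (error);
    /* ... copy the i/o buffer into *vaddrp and bus_dmamap_load() it ... */
    return (0);
}
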
Thomas Moestl
a00f3148b6 Fixes for a number of problems in the IOMMU code:
1.) Fix an off-by-one in the DVMA space handling, which would make it
    possible to allocate one page beyond the end of the DVMA area.
    This page was aliased to the first page. Apparently, this bug was
    responsible for the trashed nvram/eeprom some people were reporting,
    in conjunction with a number of unfortunate coincidences.
2.) Fix broken boundary and lowaddr calculations.
3.) Fix a memory leak on an error path.
4.) Update an outdated comment to reflect the introduction of IOMMU_MAX_PRE,
    make the usage of IOMMU_MAX_PRE more consistent and KASSERT that the
    preallocation size is not 0.
5.) Fix a case where an error return was lost.
6.) When signalling an error to the caller by invoking the callback, do
    not use a segment pointer of NULL for compatibility with existing
    drivers.

Also, increase the maximum segment number to 64; it is rather arbitrary,
with the exception of the stack space consumed by the segment
array.

Special thanks go to Harti Brandt <brandt@fokus.fraunhofer.de> for
spotting 4 and 5, and testing many iterations of patches.

Pointy hats to:	tmm
2003-01-21 18:22:26 +00:00
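
Point 6 amounts to handing the callback a usable (if empty) segment array
together with the error, rather than a NULL pointer; a minimal sketch, with
the wrapper function invented for illustration:

#include <sys/param.h>
#include <machine/bus.h>

static void
report_load_error(bus_dmamap_callback_t *callback, void *callback_arg,
    int error)
{
    bus_dma_segment_t seg;  /* dummy storage, never dereferenced */

    (*callback)(callback_arg, &seg, 0, error);
}
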
Jake Burkholder
4ee5222bac Don't allow a user process to set an invalid window state through sigreturn.
Spotted by:	tmm
2003-01-10 00:04:56 +00:00
Jake Burkholder
88a8ca8569 Implement bus_space_subregion. 2003-01-08 04:29:00 +00:00
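
Typical use of the new routine: derive a handle for a small register block
inside an already mapped range (the offset and size are illustrative):

#include <sys/param.h>
#include <machine/bus.h>

static int
map_sub_block(bus_space_tag_t bt, bus_space_handle_t bh,
    bus_space_handle_t *subp)
{
    return (bus_space_subregion(bt, bh, 0x100, 0x20, subp));
}
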
Thomas Moestl
fe461235a2 Change the iommu code to be able to handle more than one DVMA area per
map. Use this new feature to implement iommu_dvmamap_load_mbuf() and
iommu_dvmamap_load_uio() functions in terms of a new helper function,
iommu_dvmamap_load_buffer(). Reimplement iommu_dvmamap_load()
to use it, too.
This requires some changes to the map format; in addition to that,
remove unused or redundant members.
Add SBus and Psycho wrappers for the new functions, and make them
available through the respective DMA tags.
2003-01-06 21:59:54 +00:00
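
The general shape of a *_dvmamap_load_mbuf() built on a shared load_buffer
helper is a walk over the chain with one call per non-empty mbuf; the
helper's name and argument list below are assumptions:

#include <sys/param.h>
#include <sys/mbuf.h>
#include <machine/bus.h>

/* Hypothetical shared helper, defined elsewhere. */
int example_dvmamap_load_buffer(void *buf, bus_size_t len, int flags);

static int
example_dvmamap_load_mbuf(struct mbuf *m0, int flags)
{
    struct mbuf *m;
    int error;

    error = 0;
    for (m = m0; m != NULL && error == 0; m = m->m_next) {
        if (m->m_len == 0)
            continue;
        error = example_dvmamap_load_buffer(mtod(m, void *),
            m->m_len, flags);
    }
    return (error);
}
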
Thomas Moestl
702f83a807 - remove the unused parent DMA tag argument from
_nexus_dmamap_load_buffer()
- implement nexus_dmamap_load() in terms of _nexus_dmamap_load_buffer().
  Note that this is untested, as this code is not currently used (but
  might be later for UPA devices).
- move BUS_DMAMAP_NSEGS to bus_private.h
- disable the ecache flushing in nexus_dmamap_sync(); it should not be
  needed, although the docs are not entirely clear on that.
2003-01-06 20:54:07 +00:00
Thomas Moestl
08707f05a0 Prefix the members of struct bus_space_tag and struct bus_dma_tag with
a uniquifier. No functional changes.
2003-01-06 19:43:10 +00:00
Thomas Moestl
6cf280f100 Look for the correct method in sparc64_dmamap_load_mbuf() and
sparc64_dmamap_load_uio().
2003-01-06 17:17:26 +00:00
Thomas Moestl
5b398f68ee Some cleanup:
- move some constants into iommureg.h
- correct some comments
- use KASSERT() in one place instead of rolling our own
- take a sanity check out of #ifdef DIAGNOSTIC
- fix a syntax error in normally #ifdef'ed out debug code
2003-01-06 17:10:07 +00:00
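
The KASSERT() item boils down to replacing an open-coded check with the
standard macro, which disappears in kernels built without INVARIANTS; a
sketch with a made-up structure, showing the two forms side by side only for
comparison:

#include <sys/param.h>
#include <sys/systm.h>

struct example_map {
    u_long  dvma_base;      /* hypothetical field being checked */
};

static void
example_check(struct example_map *map)
{
    /* Hand-rolled version: */
    if (map->dvma_base == 0)
        panic("example_check: map has no DVMA allocation");

    /* Equivalent KASSERT(), compiled out without INVARIANTS: */
    KASSERT(map->dvma_base != 0,
        ("example_check: map has no DVMA allocation"));
}
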
Jake Burkholder
e8237e53da - Reorganize PMAP_STATS to scale a little better.
- Add some more stats for things that are now considered interesting.
2003-01-05 05:30:40 +00:00
Jake Burkholder
3afb575773 Make imgact_elf32.c compile on sparc64.
Obtained from:	ia64
2003-01-05 03:48:55 +00:00
Jens Schweikhardt
9d5abbddbf Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
Jake Burkholder
07a312f6b6 Use memset instead of __builtin_memset. Apparently there's an inline
memset in libkern which causes problems; why that's there is beyond me.
2002-12-29 08:37:11 +00:00
Jake Burkholder
7b666b648f Define UMA_MD_SMALL_ALLOC so that uma_small_alloc and uma_small_free will
be used for zones that allocate objects of less than 1 page.  The biggest advantage
of this is that all of a sudden the majority of kernel malloc-ed data doesn't
need kva allocated for it.  Besides microbenchmarks I haven't seen a measurable
performance improvement from doing this.
2002-12-27 19:31:26 +00:00
Jake Burkholder
a106c6930a - Change the way the direct mapped region is implemented to be generally
useful for accessing more than 1 page of contiguous physical memory, and
  to use 4mb tlb entries instead of 8k.  This requires that the system only
  use the direct mapped addresses when they have the same virtual colour as
  all other mappings of the same page, instead of being able to choose the
  colour and cachability of the mapping.
- Adapt the physical page copying and zeroing functions to account for not
  being able to choose the colour or cachability of the direct mapped
  address.  This adds a lot more cases to handle.  Basically when a page has
  a different colour than its direct mapped address we have a choice between
  bypassing the data cache and using physical addresses directly, which
  requires a cache flush, or mapping it at the right colour, which requires
  a tlb flush.  For now we choose to map the page and do the tlb flush.

This will allow the direct mapped addresses to be used for more things
that don't require normal pmap handling, including mapping the vm_page
structures, the message buffer, temporary mappings for crash dumps, and will
provide greater benefit for implementing uma_small_alloc, due to the much
greater tlb coverage.
2002-12-23 23:39:57 +00:00
Jake Burkholder
c3c2862df4 - Add a spin lock to single thread cache invalidation and tlb flush ipis,
which allows ipis to be sent outside of Giant.
- Remove the ap boot mutex, which is unused.
2002-12-22 20:50:23 +00:00
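
A sketch of the single-threading described above: a spin mutex serializes IPI
dispatch and, unlike Giant, can be taken with interrupts disabled and outside
Giant; the names are assumptions:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx ipi_mtx;

static void
ipi_init(void)
{
    mtx_init(&ipi_mtx, "ipi", NULL, MTX_SPIN);
}

static void
ipi_send_example(u_int cpus)
{
    mtx_lock_spin(&ipi_mtx);
    /* ... write the interrupt dispatch registers and wait for completion ... */
    mtx_unlock_spin(&ipi_mtx);
}
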
Tim J. Robbins
b30a7779d4 MB_LEN_MAX is not MD, move it to the MI limits.h. 2002-12-22 06:38:45 +00:00
Jake Burkholder
b8eb0267c0 - Add a pmap pointer to struct md_page, and use this to find the pmap that
a mapping belongs to by setting it in the vm_page_t structure that backs
  the tsb page that the tte for a mapping is in.  This allows the pmap to
  which a mapping belongs to be found without keeping a pointer to it in the
  tte itself.
- Remove the pmap pointer from struct tte and use the space to make the
  tte pv lists doubly linked (TAILQs), like on other architectures.  This
  makes entering or removing a mapping O(1) instead of O(n) where n is the
  number of pmaps a page is mapped by (including kernel_pmap).
- Use atomic ops for setting and clearing bits in the ttes, now that they
  return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will
  inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to their
  physical colour.  This will be more useful once uma_small_alloc is
  implemented, but basically pages with virtual colour equal to physical
  colour are easier to handle at the pmap level because they can be safely
  accessed through cachable direct virtual to physical mappings with that
  colour, without fear of causing illegal dcache aliases.

In total these changes give a minor performance improvement, about 1%
reduction in system time during buildworld.
2002-12-21 22:43:19 +00:00
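
A schematic of the doubly-linked pv list change in the second bullet above:
with TAILQ links embedded in the tte, insertion and removal are O(1). The
structures are heavily simplified for illustration:

#include <sys/param.h>
#include <sys/queue.h>

struct tte;

struct md_page {
    TAILQ_HEAD(, tte) tte_list;     /* all mappings of this page */
};

struct tte {
    TAILQ_ENTRY(tte) tte_link;      /* linkage on the page's pv list */
};

static void
pv_insert(struct md_page *mdp, struct tte *tp)
{
    TAILQ_INSERT_TAIL(&mdp->tte_list, tp, tte_link);    /* O(1) */
}

static void
pv_remove(struct md_page *mdp, struct tte *tp)
{
    TAILQ_REMOVE(&mdp->tte_list, tp, tte_link);         /* O(1) */
}
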