Commit Graph

2119 Commits

das
28e8dea258 People porting FreeBSD to new architectures ought not have to
implement a deprecated FPU control interface in addition to the
standard one.  To make this clearer, further deprecate ieeefp.h
by not declaring the function prototypes except on architectures
that implement them already.

Currently i386 and amd64 implement the ieeefp.h interface for
compatibility, and for fp[gs]etprec(), which doesn't exist on
most other hardware.  Powerpc, sparc64, and ia64 partially implement
it and probably shouldn't, and other architectures don't implement it
at all.
2011-10-21 06:41:46 +00:00
kib
1134edae2b Remove unused define.
MFC after:	1 month
2011-10-07 16:09:44 +00:00
attilio
a73e834ebb Add the possibility to specify the MAXCPU value from kernel configs.
This patch is going to help in cases like the mips flavours, where you
want more granular control over MAXCPU.

No MFC is planned for this patch.

Tested by:	pluknet
Approved by:	re (kib)
2011-07-19 00:37:24 +00:00
jkim
52539f62b4 Correct cpu_monitor() and cpu_mwait() for amd64. These instructions take
%rcx as "extensions" in long mode.  If any unused bit is set in %rcx, these
instructions cause general protection fault.  Fix style nits and synchronize
i386 with amd64.
2011-07-05 18:42:10 +00:00
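A minimal sketch of the corrected amd64 wrappers, assuming the usual
<machine/cpufunc.h> style; the essential point is that "extensions" is a
full-width u_long, so %rcx is specified completely and no stale upper bits
can trigger a fault:

static __inline void
cpu_monitor(const void *addr, u_long extensions, u_int hints)
{
        /* MONITOR: %rax = address, %rcx = extensions, %rdx = hints. */
        __asm __volatile("monitor"
            : : "a" (addr), "c" (extensions), "d" (hints));
}

static __inline void
cpu_mwait(u_long extensions, u_int hints)
{
        /* MWAIT: %rcx = extensions, %rax = hints; unused %rcx bits stay 0. */
        __asm __volatile("mwait" : : "a" (hints), "c" (extensions));
}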
jhb
83fca1d193 Move {amd64,i386}/pci/pci_bus.c and {amd64,i386}/include/pci_cfgreg.h to
the x86 tree.  The $PIR code is still only enabled on i386 and not amd64.
While here, make the qpi(4) driver conditional on 'device pci'.
2011-06-22 21:04:13 +00:00
jhb
3fa22c485f Oops, missed these in 223424.
Reported by:	jkim
2011-06-22 18:48:07 +00:00
avg
74204e61b2 remove code for dynamic offlining/onlining of CPUs on x86
The code has definitely been broken for SCHED_ULE, which is the default
scheduler.  It may have been broken for SCHED_4BSD in more subtle ways,
e.g. with manually configured CPU affinities and for interrupt delivery
purposes.
We still provide a way to disable individual CPUs or all hyperthreading
"twin" CPUs before SMP startup.  See the UPDATING entry for details.

Interaction between building CPU topology and disabling CPUs still
remains fuzzy: topology is first built using all available CPUs and then
the disabled CPUs should be "subtracted" from it.  That doesn't work
well if the resulting topology becomes non-uniform.

This work is done in cooperation with Attilio Rao who in addition to
reviewing also provided parts of code.

PR:		kern/145385
Discussed with:	gcooper, ambrisko, mdf, sbruno
Reviewed by:	attilio
Tested by:	pho, pluknet
X-MFC after:	never
2011-06-08 08:12:15 +00:00
attilio
ccbb37970b Reintroduce the lazypmap infrastructure and convert it to use
cpuset_t.

Requested by:	alc
2011-05-20 14:53:16 +00:00
attilio
6a2b7fdc52 MFC 2011-05-18 16:01:29 +00:00
attilio
9ff3491e67 MFC 2011-05-13 20:58:48 +00:00
attilio
d7cb9e4814 MFC 2011-05-09 18:53:13 +00:00
attilio
a0b51ba62f MFC 2011-05-06 22:45:33 +00:00
attilio
fe4de567b5 Commit the support for removing cpumask_t and replacing it directly with
cpuset_t objects.
That is going to offer the underlying support for a simple bump of
MAXCPU and, from there, support for a number of CPUs > 32 (the current limit).

Right now, cpumask_t is an int, 32 bits on all our supported architectures.
cpuset_t, on the other hand, is implemented as an array of longs, and is
easily extensible by definition.

The architectures touched by this commit are the following:
- amd64
- i386
- pc98
- arm
- ia64
- XEN

while the others are still missing.
Userland is believed to be fully converted with the changes contained
here.

Some technical notes:
- This commit may be considered an ABI nop for all the architectures
  different from amd64 and ia64 (and sparc64 in the future)
- per-cpu members, which are now converted to cpuset_t, need to be
  accessed while avoiding migration, because the size of cpuset_t should be
  considered unknown
- the size of cpuset_t objects differs between kernel and userland (this is
  primarily done in order to leave some more space in userland to cope
  with KBI extensions). If you need to access a kernel cpuset_t from
  userland please refer to the example in this patch on how to do that
  correctly (kgdb may be a good source, for example).
- Support for other architectures is going to be added soon
- Only MAXCPU for amd64 is bumped now

The patch has been tested by sbruno and Nicholas Esborn on a
4 x 12-core Opteron box. More testing on big SMP systems is expected to
come soon. pluknet tested the patch with his 8-way machines on both amd64
and i386.

Tested by:	pluknet, sbruno, gianni, Nicholas Esborn
Reviewed by:	jeff, jhb, sbruno
2011-05-05 14:39:14 +00:00
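As a rough illustration of the layout change (not the exact sys/_cpuset.h),
a cpuset_t can be built as an array of longs so the supported CPU count is
no longer tied to the 32 bits of an int; CPU_SETSIZE below is a hypothetical
stand-in for MAXCPU:

#define CPU_SETSIZE     128                     /* hypothetical; normally MAXCPU */
#define _NCPUBITS       (sizeof(long) * 8)      /* bits per mask word */
#define _NCPUWORDS      ((CPU_SETSIZE + _NCPUBITS - 1) / _NCPUBITS)

typedef struct _cpuset {
        long    __bits[_NCPUWORDS];             /* grows with CPU_SETSIZE */
} cpuset_t;

#define CPU_SET(n, p)   \
        ((p)->__bits[(n) / _NCPUBITS] |= 1L << ((n) % _NCPUBITS))
#define CPU_ISSET(n, p) \
        (((p)->__bits[(n) / _NCPUBITS] & (1L << ((n) % _NCPUBITS))) != 0)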
attilio
f756d5bed6 Revert md_assert_preempt() introduction.
Discussed with:	jeff, jhb
2011-05-04 20:29:40 +00:00
attilio
b55fd3d196 Remove unused typedef. 2011-05-01 00:08:13 +00:00
attilio
1ce93775ec Add the function md_assert_nopreempt(), which asserts, in a
machine-dependent way, that the current thread cannot be preempted.

As this function is very tied to x86 (interrupts-disabled checks)
it is not intended to be used in MI code.
2011-04-30 23:12:37 +00:00
attilio
05a159a130 Remove the support for lazy cr3 switching from i386.
amd64 already had this micro-optimization removed.

Submitted by:	kib
2011-04-30 23:02:17 +00:00
jkim
369bfa0af2 Define "Hypervisor Present" bit. This bit is used by several hypervisors to
identify CPUs running under emulation.  Currently QEMU-KVM, Xen-HVM, VMware,
and MS Hyper-V are known to set this bit.

MFC after:	3 days
2011-04-28 22:23:39 +00:00
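The bit in question is bit 31 of the %ecx output of CPUID leaf 1. A hedged
sketch of a check, assuming the constant follows the usual CPUID2_* naming
and using the existing do_cpuid() helper:

#define CPUID2_HV       0x80000000      /* leaf 1, %ecx bit 31: hypervisor present */

static void
detect_hypervisor(void)
{
        u_int regs[4];

        do_cpuid(1, regs);              /* regs[2] holds %ecx */
        if (regs[2] & CPUID2_HV)
                printf("CPU reports it is running under a hypervisor\n");
}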
kib
01863c3790 Make pmap_invalidate_cache_range() available for consumption on amd64.
Add a pmap_invalidate_cache_pages() method on x86. It flushes the CPU
cache for a set of pages, which are not necessarily mapped. Since its
intended use is to prepare moving ownership of the pages to a device
that does not snoop all CPU accesses to main memory (read: the GPU in a
GMCH), do not rely on the CPU self-snoop feature.

amd64 implementation takes advantage of the direct map. On i386,
extract the helper pmap_flush_page() from pmap_page_set_memattr(), and
use it to make a temporary mapping of the flushed page.

Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	3 weeks
2011-04-18 21:24:42 +00:00
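A simplified sketch of the amd64 side of this: each page is flushed through
the direct map with CLFLUSH instead of relying on self-snoop. The threshold
and fencing below are approximations, not a verbatim copy of the committed
code:

void
pmap_invalidate_cache_pages(vm_page_t *pages, int count)
{
        vm_offset_t daddr, eva;
        int i;

        if (count * PAGE_SIZE > 2 * 1024 * 1024 ||      /* large set: flush everything */
            (cpu_feature & CPUID_CLFSH) == 0) {
                pmap_invalidate_cache();
                return;
        }
        mfence();                       /* order CLFLUSH against other stores */
        for (i = 0; i < count; i++) {
                /* Reach the page through the direct map; no temporary mapping. */
                daddr = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(pages[i]));
                for (eva = daddr + PAGE_SIZE; daddr < eva;
                    daddr += cpu_clflush_line_size)
                        clflush(daddr);
        }
        mfence();
}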
jkim
7d94642dc2 Add a function rdtsc32() to read the lower 32 bits of the TSC and discard
the upper 32 bits.  Sometimes the compiler inserts unnecessary instructions to
preserve the unused upper 32 bits even when the result is cast to a 32-bit
value.  This avoids such wasted instructions where every cycle counts.
2011-04-14 16:53:32 +00:00
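A minimal sketch of such a helper: RDTSC leaves the low 32 bits in %eax, so
only %eax is taken as an output and %edx is declared clobbered, which stops
the compiler from emitting code to preserve the upper half:

static __inline uint32_t
rdtsc32(void)
{
        uint32_t rv;

        /* Low 32 bits land in %eax; the %edx half is explicitly discarded. */
        __asm __volatile("rdtsc" : "=a" (rv) : : "edx");
        return (rv);
}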
jkim
8ada5a0bae Consistently use __volatile, as in the rest of this file. 2011-04-14 16:19:41 +00:00
jkim
9b08b2f085 Consistently use C99 standard integer types, as in the rest of this file. 2011-04-14 16:02:52 +00:00
jkim
69382ad692 Add forgotten declarations for tsc_perf_stat from the previous commit. 2011-04-12 22:22:01 +00:00
jkim
df8e7b4e4c Add definitions for CPUID instruction 6, ECX information. 2011-04-12 22:12:23 +00:00
jkim
0c7a0c810c Implement atomic_load_acq_64(9) and atomic_store_rel_64(9) for i386. These
functions are implemented with the CMPXCHG8B instruction where it is available,
i.e., on all Pentium-class and later processors.  Note this instruction is
also used for atomic_store_rel_64() because, unfortunately, no simple XCHG-like
instruction for 64-bit memory access exists.  If the processor lacks the
instruction, i.e., on 80486-class CPUs, two 32-bit loads/stores are performed
with interrupts temporarily disabled, assuming such a CPU does not support SMP.
Although this assumption may be a little naive, it is true in reality.
This implementation is inspired by Linux.
2011-04-06 23:59:59 +00:00
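A hedged sketch of the CMPXCHG8B-based pair for capable CPUs (constraints
approximate the real <machine/atomic.h>): the load is expressed as a
compare-and-exchange that writes back the value it finds, and the store
loops until the exchange succeeds:

static __inline uint64_t
atomic_load_acq_64_i586(volatile uint64_t *p)
{
        uint64_t res;

        /*
         * cmpxchg8b compares %edx:%eax with *p; seeding %eax/%edx from
         * %ebx/%ecx makes a successful "swap" store back the same value,
         * so the net effect is an atomic 64-bit load.
         */
        __asm __volatile(
        "       movl    %%ebx,%%eax ;   "
        "       movl    %%ecx,%%edx ;   "
        "       lock; cmpxchg8b %1 ;    "
        : "=&A" (res), "+m" (*p)
        : : "memory", "cc");
        return (res);
}

static __inline void
atomic_store_rel_64_i586(volatile uint64_t *p, uint64_t v)
{

        /* Retry until the 8-byte exchange of *p with %ecx:%ebx (= v) succeeds. */
        __asm __volatile(
        "       movl    %%eax,%%ebx ;   "
        "       movl    %%edx,%%ecx ;   "
        "1:     lock; cmpxchg8b %0 ;    "
        "       jne     1b ;            "
        : "+m" (*p), "+A" (v)
        : : "ebx", "ecx", "memory", "cc");
}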
jkim
9ce8e5e965 Use cpu_ticks() for get_cyclecount(9) rather than checking for the existence
of the TSC at run-time on i386.  cpu_ticks() is set to use RDTSC early enough
on i386 where it is available.  Otherwise, cpu_ticks() is driven by the current
timecounter hardware, as binuptime(9) is.  This also avoids unnecessary
namespace pollution from <machine/cputypes.h>.
2011-04-04 22:56:33 +00:00
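A minimal sketch of the resulting i386 definition, assuming the cpu_ticks()
hook declared in <sys/systm.h> (it points at an RDTSC-backed routine when a
usable TSC is found at boot, and at a timecounter-backed one otherwise):

static __inline uint64_t
get_cyclecount(void)
{

        /* cpu_ticks is a function pointer selected early at boot. */
        return (cpu_ticks());
}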
alc
c84b8f6e0c Modestly increase the maximum allowed size of the kmem map on i386.
Also, express this new maximum as a fraction of the kernel's address
space size rather than a constant so that increasing KVA_PAGES will
automatically increase this maximum.  As a side-effect of this change,
kern.maxvnodes will automatically increase by a proportional amount.

While I'm here, ensure that this change doesn't result in an unintended
increase in maxpipekva on i386.  Calculate maxpipekva based upon the
size of the kernel address space and the amount of physical memory
instead of the size of the kmem map.  The memory backing pipes is not
allocated from the kmem map.  It is allocated from its own submap of
the kernel map.  In short, it has no real connection to the kmem map.
(In fact, the commit messages for the maxpipekva auto-sizing talk
about using the kernel map size, cf. r117325 and r117391, even though
the implementation actually used the kmem map size.)  Although the
calculation is now done differently, the resulting value for
maxpipekva should remain almost the same on i386.  However, on amd64,
the value will be reduced by 2/3.  This is intentional.  The recent
change to VM_KMEM_SIZE_SCALE on amd64 for the benefit of ZFS also had
the unnecessary side-effect of increasing maxpipekva.  This change is
effectively restoring maxpipekva on amd64 to its prior value.

Eliminate init_param3() since it is no longer used.
2011-03-23 16:38:29 +00:00
jeff
2d7d8c05e7 - Merge changes to the base system to support OFED. These include
a wider arg2 for sysctl, updates to vlan code, IFT_INFINIBAND,
   and other miscellaneous small features.
2011-03-21 09:40:01 +00:00
jkim
881681406b Rework r219679. Always check CPU class at run-time to make it predictable.
Unfortunately, it pulls in <machine/cputypes.h> but it is small enough and
namespace pollution is minimal, I hope.

Pointed out by:	bde
Pointy hat:	jkim
2011-03-16 16:09:08 +00:00
jkim
8060d27e7b Partially revert r219672. After r198295, the kernel needs to seed randomness
as soon as possible for the stack protector.  However, the dummy timecounter
does not have enough entropy, and we don't need to sacrifice Pentium-class and
later CPUs.

Pointed out by:	Maxim Dounin (mdounin at mdounin dot ru)
2011-03-15 21:45:10 +00:00
jkim
2108e6a856 Remove tsc_present from this file, really. 2011-03-15 18:09:29 +00:00
jkim
d3440080b0 Unconditionally use binuptime(9) for get_cyclecount(9) on i386. Since this
function is almost exclusively used for random harvesting, there is no need
for micro-optimization.  Adjust the manual page accordingly.
2011-03-15 17:14:26 +00:00
jkim
193712e7cf Make get_cyclecount(9) a little bit more useful where binuptime(9) is used. 2011-03-14 23:30:14 +00:00
jkim
98d68ca741 Deprecate the rarely used tsc_is_broken. Instead, we zero out tsc_freq,
because tsc_is_broken is almost always checked together with tsc_freq anyway.
2011-03-10 20:02:58 +00:00
jhb
e9ec5c3def Fix whitespace nit. 2011-02-22 14:58:14 +00:00
alc
2f4da8e71e Remove pmap fields that are either unused or not fully implemented.
Discussed with:	kib
2011-02-17 15:36:29 +00:00
dchagin
f09038c073 To avoid excessive code duplication, create a wrapper that fills regs
from a stack frame. Change the trap() code to use the newly created function
instead of explicit register assignments.
2011-02-16 17:50:21 +00:00
jkim
ea861abf2a Add reader/writer lock around mem_range_attr_get() and mem_range_attr_set().
Compile sys/dev/mem/memutil.c for all supported platforms and remove now
unnecessary dev_mem_md_init().  Consistently define mem_range_softc from
mem.c for all platforms.  Add missing #include guards for machine/memdev.h
and sys/memrange.h.  Clean up some nearby style(9) nits.

MFC after:	1 month
2011-01-17 22:58:28 +00:00
kib
4f8260e700 Move repeated MAXSLP definition from machine/vmparam.h to sys/vmmeter.h.
Update the outdated comments describing MAXSLP and the process
selection algorithm for swap out.

Comments wording and reviewed by:	alc
2011-01-09 12:50:44 +00:00
tijl
75b3c29fb3 Copy powerpc/include/_inttypes.h to x86 and replace i386/amd64/pc98
headers with stubs.

Approved by:	kib (mentor)
2011-01-08 18:09:48 +00:00
tijl
89281909e1 On mixed 32/64 bit architectures (mips, powerpc) use __LP64__ rather than
architecture macros (__mips_n64, __powerpc64__) when 64 bit types (and
corresponding macros) are different from 32 bit. [1]

Correct the type of INT64_MIN, INT64_MAX and UINT64_MAX.

Define (U)INTMAX_C as an alias for (U)INT64_C matching the type definition
for (u)intmax_t. Do this on all architectures for consistency.

Suggested by:	bde [1]
Approved by:	kib (mentor)
2011-01-08 12:43:05 +00:00
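A hedged sketch of the macro relationship described above (the exact suffix
choice lives in the machine-dependent headers):

#ifdef __LP64__
#define INT64_C(c)      (c ## L)
#define UINT64_C(c)     (c ## UL)
#else
#define INT64_C(c)      (c ## LL)
#define UINT64_C(c)     (c ## ULL)
#endif

/* (u)intmax_t is 64-bit, so its constant macros simply alias the 64-bit ones. */
#define INTMAX_C(c)     INT64_C(c)
#define UINTMAX_C(c)    UINT64_C(c)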
tijl
61d89c0b21 On 32 bit architectures define (u)int64_t as (unsigned) long long instead
of (unsigned) int __attribute__((__mode__(__DI__))). This aligns better
with macros such as (U)INT64_C, (U)INT64_MAX, etc. which assume (u)int64_t
has type (unsigned) long long.

The mode attribute was used because long long wasn't standardised until
C99. Nowadays compilers should support long long and use of the mode
attribute is discouraged according to GCC Internals documentation.

The type definition has to be marked with __extension__ to support
compilation with "-std=c89 -pedantic".

Discussed with:	bde
Approved by:	kib (mentor)
2011-01-08 11:47:55 +00:00
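A hedged sketch of the resulting 32-bit definitions (the real typedefs live
in the machine-dependent _types.h); __extension__ keeps builds with
"-std=c89 -pedantic" quiet about long long:

#ifndef __LP64__
/* long long matches the type assumed by (U)INT64_C, (U)INT64_MAX, etc. */
__extension__ typedef long long                 __int64_t;
__extension__ typedef unsigned long long        __uint64_t;
#else
typedef long                                    __int64_t;
typedef unsigned long                           __uint64_t;
#endif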
tijl
af03e997ba Fix types of some values in machine/_limits.h.
On some architectures UCHAR_MAX and USHRT_MAX had type unsigned int.
However, lacking integer suffixes for types smaller than int, their type
should correspond to that of an object of type unsigned char (or short)
when used in an expression with objects of type int. In that case unsigned
char (short) are promoted to int (i.e. signed) so the type of UCHAR_MAX and
USHRT_MAX should also be int.

Where MIN/MAX constants implicitly have the correct type the suffix has
been removed.

While here, correct some comments.

Reviewed by:	bde
Approved by:	kib (mentor)
2011-01-08 11:13:34 +00:00
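Illustrative values only, not copied from the header: after integer
promotion an unsigned char or unsigned short behaves as an int in
expressions, so the corresponding MAX constants carry no suffix and
therefore have type int:

/* Written without a U suffix: after promotion these compare as int. */
#define __UCHAR_MAX     0xff            /* max of unsigned char, but type int */
#define __USHRT_MAX     0xffff          /* max of unsigned short, but type int */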
tijl
69f9492737 Remove unused support for 64 bit long on 32 bit architectures.
It was used mainly to discover and fix some 64-bit portability problems
before 64-bit arches were widely available.

Discussed with:	bde
Approved by:	kib (mentor)
2011-01-07 22:57:31 +00:00
kib
ed862725de Add AT_STACKPROT elf aux vector. Will be used to inform rtld about the
initial stack protection set by the kernel image activator.
2011-01-07 14:22:34 +00:00
rwatson
b5469e8b58 Make "options XENHVM" compile for i386, not just amd64 -- a largely
mechanical change.  This opens the door for using PV device drivers
under Xen HVM on i386, as well as more general harmonisation of i386
and amd64 Xen support in FreeBSD.

Reviewed by:    cperciva
MFC after:      3 weeks
2011-01-04 14:49:54 +00:00
cperciva
0440193beb Make i386_set_ldt work on i386/XEN, step 5/5.
When cleaning up a thread, reset its LDT to the default LDT.

Note: Casting the LDT pointer to an int and storing it in pc_currentldt is
wildly bogus, but is harmless since pc_currentldt is a write-only variable.

MFC after:	3 days
2010-12-31 17:42:25 +00:00
cperciva
a7dfcf0362 Make i386_set_ldt work on i386/XEN, step 2/5.
Don't map physical to machine page numbers in pte_load_store, since it uses
PT_SET_VA (which takes a physical page number and converts it to a machine
page number).

MFC after:	3 days
2010-12-31 17:39:58 +00:00
tijl
0f810ef0a2 Merge amd64 and i386 bus.h and move the resulting header to x86. Replace
the original amd64 and i386 headers with stubs.

Rename (AMD64|I386)_BUS_SPACE_* to X86_BUS_SPACE_* everywhere.

Reviewed by:	imp (previous version), jhb
Approved by:	kib (mentor)
2010-12-20 16:39:43 +00:00
kib
43f2291188 Inform the compiler which asm statements in the x86 implementation of
atomics change eflags.

Reviewed by:	jhb
MFC after:	2 weeks
2010-12-18 16:41:11 +00:00
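The change amounts to adding "cc" to the clobber list of each flag-modifying
asm statement; a hedged sketch modelled on the compare-and-set helper (names
and constraints only approximate the real <machine/atomic.h>):

static __inline int
atomic_cmpset_int(volatile u_int *dst, u_int expect, u_int src)
{
        u_char res;

        __asm __volatile(
        "       lock; cmpxchgl %3,%1 ;  "
        "       sete    %0 ;            "
        : "=q" (res),                   /* result of the comparison */
          "+m" (*dst),                  /* memory operand */
          "+a" (expect)                 /* %eax holds the expected value */
        : "r" (src)
        : "memory", "cc");              /* "cc": this asm changes eflags */
        return (res);
}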