Looking at our source code history, it seems the uname(),
getdomainname() and setdomainname() system calls got deprecated
somewhere after FreeBSD 1.1, but they have never been phased out
properly. Because we don't have a COMPAT_FREEBSD1, just use
COMPAT_FREEBSD4.
Also fix the Linuxolator to build without the setdomainname() routine by
making it call userland_sysctl() on kern.domainname. Replace
setdomainname()'s own implementation with the same approach, since it
duplicated code with sysctl_domainname().
I wasn't able to keep these three routines working in our
COMPAT_FREEBSD32, because that would require yet another keyword for
syscalls.master (COMPAT4+NOPROTO). Because these routines are probably
unused already, this won't be a problem in practice. If it turns out to
be a problem, we'll just restore this functionality.
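For illustration, a minimal kernel-side sketch of the approach. The helper
and its argument handling are hypothetical; only userland_sysctl() and the
kern.domainname MIB come from the description above, and the exact
userland_sysctl() signature is assumed:

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sysctl.h>

/* Hypothetical helper, not the actual commit's code. */
static int
set_domainname_via_sysctl(struct thread *td, char *udomainname, int len)
{
	int name[] = { CTL_KERN, KERN_NISDOMAINNAME };	/* kern.domainname */

	if (len < 0 || len > MAXHOSTNAMELEN)
		return (EINVAL);
	/* Write the new value from user space; nothing is read back. */
	return (userland_sysctl(td, name, 2, NULL, NULL, 0,
	    udomainname, len, NULL, 0));
}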
Reviewed by: rdivacky, kib
instead of 32+32+15+1) on all arches that have such long doubles (amd64,
ia64 and i386). Large objects should be accessed in large units,
and the 32+32+15+1[+padding] decomposition asks for almost the opposite
of that, sometimes resulting in very slow accesses depending on how
well the compiler ignores what we ask for and converts to the best
units for the given machine. E.g., on Athlons, there is a 10-20 cycle
penalty for accessing the middle 32-bit word immediately after an
80-bit store.
Whether actually using the alternative view is better is very machine-
dependent. A 32+32+16 view is probably best with old 32-bit systems
and gcc through 4.2.1. The compiler should mostly avoid the view and
generate best accesses, but gcc-4.2.1 is far from doing that. I think
64+16 is best for now. Similarly for doubles -- they should be using
64+0 especially on 64-bit machines, but fdlibm uses 32+32 extensively
for them. Fortunately, in 64-bit mode for doubles, gcc already ignores
the 32+32-bit view and generates best accesses in many cases.
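For reference, an illustrative union showing the two decompositions
discussed above (member names are hypothetical, loosely modeled on the
libm-internal headers, for a little-endian x87 80-bit long double):

#include <stdint.h>

union ldbl_views {
	long double	e;
	struct {			/* old 32+32+15+1 view */
		unsigned int	manl:32;	/* low 32 bits of mantissa */
		unsigned int	manh:32;	/* high 32 bits of mantissa */
		unsigned int	exp:15;		/* biased exponent */
		unsigned int	sign:1;		/* sign bit */
	} bits;
	struct {			/* new 64+16 view */
		uint64_t	man;		/* whole 64-bit mantissa */
		uint16_t	expsign;	/* exponent and sign together */
	} xbits;
};

Loading the mantissa through the 64-bit member lets the compiler use a
single 8-byte access, which is what avoids the penalty described above.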
my original implementation made both use the same code. Unfortunately,
this meant libm depended on a vendor header at compile time and previously-
unexposed vendor bits in libc at runtime.
Hence, I just wrote my own version of the relevant vendor routine. As it
turns out, mine has a factor of 8 fewer lines of code, and is a bit more
readable anyway. The strtod() and *scanf() routines still use vendor code.
Reviewed by: bde
syscalls, unless WITHOUT_SYSCALL_COMPAT is defined. The default case
will have the .c wrappers still. If you define WITHOUT_SYSCALL_COMPAT,
the .c wrappers will go away and libc will make direct syscalls.
After 7-stable starts, the direct syscall method will be the default.
Approved by: re (kensmith)
particular:
SYSCALL() makes a syscall, with errno handling, and continues execution
directly after the macro in the non-error case.
RSYSCALL() is just like SYSCALL(), but returns after success.
Both SYSCALL(name) and RSYSCALL(name) export "__sys_name" as a strong
symbol, with "_name" and "name" as weak aliases.
PSEUDO() is just like RSYSCALL(), but skipping the "name" weak alias. It
still does "__sys_name" and "_name".
Change i386 to add errno handling to PSEUDO. The same for amd64 and
sparc64, which appear to have copied the behavior.
ia64 was correct (as was alpha). Just remove some apparently unused
variants of the macros. (untested!)
I believe powerpc is correct.
Fix arm to not export "name" from the PSEUDO case. Remove apparently
extra unused variants. (untested!)
The errno problem manifested on i386/amd64/sparc64 by having "PSEUDO"
classified syscalls return without setting errno. E.g., "addr = mmap()"
could return with "addr" = 22 instead of setting errno to 22 and
returning -1.
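As a conceptual C rendering (the real wrappers are per-architecture
assembler macros; the helper name here is made up), the required error
handling amounts to:

#include <errno.h>

/* Hypothetical illustration: what the error ("cerror") path of a syscall
 * wrapper must do when the kernel reports failure. */
static long
syscall_return_sketch(long value, int error_occurred)
{
	if (error_occurred) {
		errno = (int)value;	/* kernel returned the error number */
		return (-1);		/* caller sees -1 and checks errno */
	}
	return (value);			/* success: pass the value through */
}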
Approved by: re (kensmith)
net: endhostdnsent is named _endhostdnsent and is
private to the netdb family of functions.
posix1e: acl_size.c has never been compiled in,
so there's no "acl_size".
rpc: "getnetid" is a static function.
stdtime: "gtime" is #ifdef'ed out in the source.
some symbols are specific only to some architectures,
e.g., ___tls_get_addr is only defined on i386.
__htonl, __htons, __ntohl and __ntohs are no longer
functions, they are now (internal) defines in
<machine/endian.h>.
Submitted by: ru
bit in a long double. For architectures that don't have such a bit,
LDBL_NBIT is 0. This makes it possible to say `mantissa & ~LDBL_NBIT'
in places that previously used an #ifdef to select the right expression.
The optimizer should dispense with the extra arithmetic when LDBL_NBIT
is 0 anyway.
- Add an XXX comment for the big endian case.
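A small illustrative helper (assuming LDBL_NBIT is visible via the internal
fpmath headers; the function itself is hypothetical) shows the idiom this
enables:

#include <stdint.h>

/* LDBL_NBIT comes from the libc/libm-internal fpmath headers. */
static inline uint32_t
strip_nbit(uint32_t manh)
{
	/* On architectures without an explicit integer bit, LDBL_NBIT is 0
	 * and this mask is a no-op that the optimizer removes. */
	return (manh & ~LDBL_NBIT);
}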
scalbn() implementation from libm. (The two functions are defined to
be identical, but ldexp() lives in libc for backwards compatibility.)
The old ldexp() implementation...
- was more complicated than this one
- set errno instead of raising FP exceptions
- got some corner cases wrong
(e.g. ldexp(1.0, 2000) in round-to-zero mode)
The new implementation lives in libc/gen instead of
libc/$MACHINE_ARCH/gen, since we don't need N copies of a
machine-independent file. The amd64 and i386 platforms
retain their fast and correct MD implementations and
override this one.
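A usage sketch of the corner case mentioned above: ldexp(1.0, 2000)
overflows, and under round-toward-zero IEEE overflow must deliver the
largest finite value rather than infinity.

#include <fenv.h>
#include <float.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
	fesetround(FE_TOWARDZERO);
	double d = ldexp(1.0, 2000);
	/* Expect DBL_MAX (not +Inf) from a correct implementation. */
	printf("%a == %a? %d\n", d, DBL_MAX, d == DBL_MAX);
	return (0);
}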
By using r8 instead of r14 to do the swap, we put the dst argument
in the return register. Since bcopy() doesn't clobber r8, we don't
have to do anything else. This fixes ports/textproc/aspell.
_mcount() stub when profiling is enabled. Emit this code sequence
for assembly routines as well (MCOUNT definition in <machine/asm.h>).
We do not pass the GOT entry as the 4th argument, however, because it's
not used. The _mcount() stub calls __mcount(), which does the actual
work. Define _MCOUNT_DECL to define __mcount. We do not have an
implementation of mcount(), so we define MCOUNT as empty, but have a
weak alias to _mcount() in _mcount.S.
Note that the _mcount() stub in the kernel is slightly different from
the stub in userland. This is because we do not have to worry about
nested routines in the kernel.
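A simplified sketch of how these pieces fit together, based on the
description above; the macro bodies shown are illustrative, not the actual
<machine/profile.h> contents:

/* _MCOUNT_DECL expands to the definition of __mcount(), the C routine
 * doing the actual work; MCOUNT would normally emit the mcount() stub,
 * but here it is empty because the _mcount() stub lives in _mcount.S
 * with a weak alias to mcount(). */
#define	_MCOUNT_DECL	static void __mcount
#define	MCOUNT		/* empty: stub provided by _mcount.S */

_MCOUNT_DECL(unsigned long frompc, unsigned long selfpc)
{
	/* record the frompc -> selfpc call arc in the profiling buffers */
	(void)frompc;
	(void)selfpc;
}
MCOUNT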
isnormal() the hard way, rather than relying on fpclassify(). This is
a lose in the sense that we need a total of 12 functions, but it is
necessary for binary compatibility because we have never bumped libm's
major version number. In particular, isinf(), isnan(), and isnanf()
were BSD libc functions before they were C99 macros, so we can't
reimplement them in terms of fpclassify() without adding a dependency
on libc.so.5. I have tried to arrange things so that programs that
could be compiled in FreeBSD 4.X will generate the same external
references when compiled in 5.X. At the same time, the new macros
should remain C99-compliant.
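A sketch of the resulting layering (illustrative, not the verbatim <math.h>
text): each macro dispatches on the argument size to one of three
underlying functions, hence 4 macros times 3 types = 12 functions.

/* Underlying functions, one per floating-point type. */
int	__isnormalf(float);
int	__isnormal(double);
int	__isnormall(long double);

/* Type-generic macro selecting the right one by argument size. */
#define	isnormal_sketch(x)					\
	((sizeof (x) == sizeof (float)) ? __isnormalf(x)	\
	: (sizeof (x) == sizeof (double)) ? __isnormal(x)	\
	: __isnormall(x))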
The isinf() and isnan() functions remain in libc for historical
reasons; however, I have moved the functions that implement the macros
isfinite() and isnormal() to libm where they belong. Moreover,
half a dozen MD versions of isinf() and isnan() have been replaced
with MI versions that work equally well.
Prodded by: kris
These files had tags after the copyright notice,
inside the comment block (incorrect, removed),
and outside the comment block (correct).
Approved by: rwatson (mentor)
the denormal/unnormal trap, is not a standard IEEE trap. We did
not exclude it from being returned by fpgetmask(), nor did we make
sure that fpsetmask() didn't clobber it. Since the non-IEEE trap
is not part of fp_except_t, users of fpgetmask()/fpsetmask() would
be confronted with unexpected behaviour, one of which is a SIGFPE
for denormal/unnormal FP results.
This commit makes sure that we don't leak the denormal/unnormal mask
bit in fp_except_t and also that we don't clobber it.
- All those diffs to syscalls.master for each architecture *are*
necessary. This needed clarification; the stub code generation for
mlockall() was disabled, which would prevent applications from
linking to this API (suggested by mux)
- Giant has been quashed. It is no longer held by the code, as
the required locking has been pushed down within vm_map.c.
- Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES
to express their intention explicitly.
- Inspected at the vmstat, top and vm pager sysctl stats level.
Paging-in activity is occurring correctly, using a test harness.
- The RES size for a process may appear to be greater than its SIZE.
This is believed to be due to mappings of the same shared library
page being wired twice. Further exploration is needed.
- Believed to back out of allocations and locks correctly
(tested with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).
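A minimal userland usage sketch of the interface this enables (error
handling kept short):

#include <sys/mman.h>
#include <err.h>

int
main(void)
{
	/* Wire all current and future mappings of the process. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
		err(1, "mlockall");
	/* ... do latency-sensitive work with no page faults ... */
	munlockall();
	return (0);
}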
PR: kern/43426, standards/54223
Reviewed by: jake, alc
Approved by: jake (mentor)
MFC after: 2 weeks
didn't provide a constant for one of them (the non-IEEE denormal trap),
probably in an attempt to not support it. Either way, we are left with
the lower 5 bits.
o Properly mask the passed or returned fp_except_t. Not doing so
causes instant core dumps by trying to write an invalid value to
ar.fpsr. Now that we're masking, stop using exclusive-or to invert
bits.
This fixes the illegal instruction fault encountered when building
mozilla.
that we can flush the register stack prior to entering the kernel.
This avoids having dirty registers and saves us from having to
manually write them to the backing store from within the kernel.
In that respect, flushing the RSE is both functionally required and
optimal for performance.
On average we had 18 dirty registers when getcontext(2) was called
from libthr. Since libthr does not switch back to a context created
by getcontext(2), not having dealt with the dirty registers was
harmless.
on the corresponding .proc directive, or the .endp must not have a
name at all.
While here, remove an artificial dependency in Ovfork.S by performing
manual register renaming.
switching anymore, so there's no need to save and restore GP. This
change breaks threaded applications linked against libc_r. Pull the
tier 2 card again: relink. This will link against libthr instead.
o Make sure the arguments to ctx_wrapper() are loaded from the
backing store by forcing an underflow. Do this by making all
registers in the register frame local.
o Up to 8 arguments are allowed. This is the number of arguments
passed in registers. Subsequent arguments are passed on the stack.
Trying to deal with this is not easy in C and likely forces us to
use assembly code. Let's avoid that for now. There's no indication
that more than 8 arguments is a strong requirement (Linux also has
an 8 argument limit).
o We expect that the stack base is 16-byte aligned and the stack
size is a multiple of 16 bytes. We bomb out if this is not the case.
We probably want to be less strict by enforcing it ourselves. For
now it's better to not hide gross alignment bogons by silently
correcting it.
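A minimal usage sketch honoring the constraints listed above (stack size
and alignment values are illustrative):

#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, work_ctx;

static void
worker(int a, int b)
{
	/* at most 8 such integer arguments are supported */
	(void)a;
	(void)b;
}

int
main(void)
{
	void *stk;
	size_t stksz = 64 * 1024;		/* a multiple of 16 */

	if (posix_memalign(&stk, 16, stksz) != 0)	/* 16-byte aligned base */
		return (1);
	getcontext(&work_ctx);
	work_ctx.uc_stack.ss_sp = stk;
	work_ctx.uc_stack.ss_size = stksz;
	work_ctx.uc_link = &main_ctx;
	makecontext(&work_ctx, (void (*)(void))worker, 2, 10, 20);
	swapcontext(&main_ctx, &work_ctx);	/* runs worker(10, 20) */
	free(stk);
	return (0);
}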
prime objectives are:
o Implement a syscall path based on the epc instruction (see
sys/ia64/ia64/syscall.s).
o Revisit the places where we need to save and restore registers
and define those contexts in terms of the register sets (see
sys/ia64/include/_regset.h).
Secondary objectives:
o Remove the requirement to use contigmalloc for kernel stacks.
o Better handling of the high FP registers for SMP systems.
o Switch to the new cpu_switch() and cpu_throw() semantics.
o Add a good unwinder to reconstruct contexts for the rare
cases we need to (see sys/contrib/ia64/libuwx).
Many files are affected by this change. Functionally it boils
down to:
o The EPC syscall doesn't preserve registers it does not need
to preserve and places the arguments differently on the stack.
This affects libc and truss.
o The address of the kernel page directory (kptdir) had to
be unstaticized for use by the nested TLB fault handler.
The name has been changed to ia64_kptdir to avoid conflicts.
The renaming affects libkvm.
o The trapframe only contains the special registers and the
scratch registers. For syscalls using the EPC syscall path
no scratch registers are saved. This affects all places where
the trapframe is accessed. Most notably the unaligned access
handler, the signal delivery code and the debugger.
o Context switching only partly saves the special registers
and the preserved registers. This affects cpu_switch() and
triggered the move to the new semantics, which additionally
affects cpu_throw().
o The high FP registers are either in the PCB or on some
CPU; context switching for them is done lazily. This affects
trap().
o The mcontext has room for all registers, but not all of them
have to be defined in all cases. This mostly affects signal
delivery code now. The *context syscalls are as yet still
unimplemented.
Many details went into the removal of the requirement to use
contigmalloc for kernel stacks. The details are mostly CPU
specific and limited to exception_save() and exception_restore().
The few places where we create, destroy or switch stacks were
mostly simplified by not having to construct physical addresses
and additionally saving the virtual addresses for later use.
Besides more efficient context saving and restoring, which of
course yields a noticeable speedup, this also fixes the dreaded
SMP bootup problem as a side-effect. The details of which are
still not fully understood.
This change includes all the necessary backward compatibility
code to have it handle older userland binaries that use the
break instruction for syscalls. Support for break-based syscalls
has been pessimized in favor of a clean implementation. Due to
the overall better performance of the kernel, this will still
be noticed as an improvement if it's noticed at all.
Approved by: re@ (jhb)
package, a more recent, generalized set of routines. Among the
changes:
- Declare strtof() and strtold() in stdlib.h.
- Add glue to libc to support these routines for all kinds
of ``long double''.
- Update printf() to reflect the fact that dtoa works slightly
differently now.
As soon as I see that nothing has blown up, I will kill
src/lib/libc/stdlib/strtod.c. Soon printf() will be able
to use the new routines to output long doubles without loss
of precision, but numerous bugs in the existing code must
be addressed first.
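For illustration, a small usage sketch of the newly declared interfaces:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	const char *s = "3.14159265358979323846264338327950288";
	char *end;
	long double ld = strtold(s, &end);
	float f = strtof(s, NULL);

	/* strtold() keeps full long double precision; strtof() rounds
	 * once to float. */
	printf("%.21Lg (%td chars consumed)\n%.9g\n", ld, end - s, f);
	return (0);
}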
Reviewed by: bde (briefly), mike (mentor), obrien
not save (restore) the global pointer (GP) in the jmpbuf in setjmp
(longjmp) because it's not needed in general. GP is considered a
scratch register at callsites and hence is always restored after a
call (when it's possible that the call resolves to a symbol in a
different loadmodule; otherwise GP does not have to be saved and
restored at all), including calls to setjmp/longjmp. There's just
one problem with this now that we use setjmp/longjmp for context
switching: A new context must have GP defined properly for the
thread's entry point. This means that we need to put GP in the
jmpbuf and consequently that we have to restore it in longjmp.
This automatically requires us to save it as well.
When setjmp/longjmp isn't used for context switching, this can be
reverted again.
the J_SIG0 field. While here, rename J_SIG0 to J_SIGSET and
remove J_SIG1. The main reason for this change is that the
128-bit sigset_t is now aligned on a 16-byte boundary, which
allows us to use 16-byte atomic loads and stores on CPUs that
support it. The removal of J_SIG1 is done to avoid confusion:
it is never accessed and should not be. Renaming J_SIG0 to
J_SIGSET is the icing on the cake that's better done now than
later.