freebsd-nq/module/spl/spl-generic.c

/*****************************************************************************\
* Copyright (C) 2007-2010 Lawrence Livermore National Security, LLC.
* Copyright (C) 2007 The Regents of the University of California.
* Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
* Written by Brian Behlendorf <behlendorf1@llnl.gov>.
* UCRL-CODE-235197
*
* This file is part of the SPL, Solaris Porting Layer.
* For details, see <http://zfsonlinux.org/>.
*
* The SPL is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
* The SPL is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
* You should have received a copy of the GNU General Public License along
* with the SPL. If not, see <http://www.gnu.org/licenses/>.
*****************************************************************************
* Solaris Porting Layer (SPL) Generic Implementation.
\*****************************************************************************/
#include <sys/sysmacros.h>
#include <sys/systeminfo.h>
#include <sys/vmsystm.h>
#include <sys/kobj.h>
#include <sys/kmem.h>
#include <sys/mutex.h>
#include <sys/rwlock.h>
#include <sys/taskq.h>
#include <sys/tsd.h>
#include <sys/zmod.h>
#include <sys/debug.h>
#include <sys/proc.h>
#include <sys/kstat.h>
#include <sys/utsname.h>
#include <sys/file.h>
#include <linux/kmod.h>
#include <linux/proc_compat.h>
#include <spl-debug.h>
#ifdef SS_DEBUG_SUBSYS
#undef SS_DEBUG_SUBSYS
#endif
#define SS_DEBUG_SUBSYS SS_GENERIC
char spl_version[32] = "SPL v" SPL_META_VERSION "-" SPL_META_RELEASE;
EXPORT_SYMBOL(spl_version);
unsigned long spl_hostid = HW_INVALID_HOSTID;
EXPORT_SYMBOL(spl_hostid);
module_param(spl_hostid, ulong, 0644);
MODULE_PARM_DESC(spl_hostid, "The system hostid.");
char hw_serial[HW_HOSTID_LEN] = "<none>";
EXPORT_SYMBOL(hw_serial);
proc_t p0 = { 0 };
EXPORT_SYMBOL(p0);
#ifndef HAVE_KALLSYMS_LOOKUP_NAME
DECLARE_WAIT_QUEUE_HEAD(spl_kallsyms_lookup_name_waitq);
kallsyms_lookup_name_t spl_kallsyms_lookup_name_fn = SYMBOL_POISON;
#endif
int
highbit(unsigned long i)
{
register int h = 1;
SENTRY;
if (i == 0)
SRETURN(0);
#if BITS_PER_LONG == 64
if (i & 0xffffffff00000000ul) {
h += 32; i >>= 32;
}
#endif
if (i & 0xffff0000) {
h += 16; i >>= 16;
}
if (i & 0xff00) {
h += 8; i >>= 8;
}
if (i & 0xf0) {
h += 4; i >>= 4;
}
if (i & 0xc) {
h += 2; i >>= 2;
}
if (i & 0x2) {
h += 1;
}
SRETURN(h);
}
EXPORT_SYMBOL(highbit);
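/*
* Worked example of the bit-halving search above: for i = 0x10 the
* 0xffff0000 and 0xff00 tests fail, the 0xf0 test succeeds (h += 4 and
* i >>= 4 leave i == 1), the 0xc and 0x2 tests fail, and highbit()
* returns 5, the 1-based index of the most significant set bit.
*/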
#if BITS_PER_LONG == 32
/*
* Support 64/64 => 64 division on a 32-bit platform. While the kernel
* provides a div64_u64() function for this we do not use it because the
* implementation is flawed. There are cases which return incorrect
* results as late as linux-2.6.35. Until this is fixed upstream the
* spl must provide its own implementation.
*
* This implementation is a slightly modified version of the algorithm
* proposed by the book 'Hacker's Delight'. The original source can be
* found here and is available for use without restriction.
*
* http://www.hackersdelight.org/HDcode/newCode/divDouble.c
*/
/*
* Calculate the number of leading zeros for a 64-bit value.
*/
static int
nlz64(uint64_t x) {
register int n = 0;
if (x == 0)
return 64;
if (x <= 0x00000000FFFFFFFFULL) {n = n + 32; x = x << 32;}
if (x <= 0x0000FFFFFFFFFFFFULL) {n = n + 16; x = x << 16;}
if (x <= 0x00FFFFFFFFFFFFFFULL) {n = n + 8; x = x << 8;}
if (x <= 0x0FFFFFFFFFFFFFFFULL) {n = n + 4; x = x << 4;}
if (x <= 0x3FFFFFFFFFFFFFFFULL) {n = n + 2; x = x << 2;}
if (x <= 0x7FFFFFFFFFFFFFFFULL) {n = n + 1;}
return n;
}
/*
* Newer kernels have a div_u64() function but we define our own
* to simplify portability between kernel versions.
*/
static inline uint64_t
__div_u64(uint64_t u, uint32_t v)
{
(void) do_div(u, v);
return u;
}
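/*
* Note that the kernel's do_div(n, base) macro divides in place: the
* quotient is written back into its first argument and the remainder
* is returned. The wrapper above discards the remainder and returns
* the quotient.
*/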
/*
* Implementation of 64-bit unsigned division for 32-bit machines.
*
* First the procedure takes care of the case in which the divisor is a
* 32-bit quantity. There are two subcases: (1) If the left half of the
* dividend is less than the divisor, one execution of do_div() is all that
* is required (overflow is not possible). (2) Otherwise it does two
* divisions, using the grade school method.
*/
uint64_t
__udivdi3(uint64_t u, uint64_t v)
{
uint64_t u0, u1, v1, q0, q1, k;
int n;
if (v >> 32 == 0) { // If v < 2**32:
if (u >> 32 < v) { // If u/v cannot overflow,
return __div_u64(u, v); // just do one division.
} else { // If u/v would overflow:
u1 = u >> 32; // Break u into two halves.
u0 = u & 0xFFFFFFFF;
q1 = __div_u64(u1, v); // First quotient digit.
k = u1 - q1 * v; // First remainder, < v.
u0 += (k << 32);
q0 = __div_u64(u0, v); // Second quotient digit.
return (q1 << 32) + q0;
}
} else { // If v >= 2**32:
n = nlz64(v); // 0 <= n <= 31.
v1 = (v << n) >> 32; // Normalize divisor, MSB is 1.
u1 = u >> 1; // To ensure no overflow.
q1 = __div_u64(u1, v1); // Get quotient from
q0 = (q1 << n) >> 31; // Undo normalization and
// division of u by 2.
if (q0 != 0) // Make q0 correct or
q0 = q0 - 1; // too small by 1.
if ((u - q0 * v) >= v)
q0 = q0 + 1; // Now q0 is correct.
return q0;
}
}
EXPORT_SYMBOL(__udivdi3);
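/*
* Tracing the grade-school path above with u = 2^40 and v = 3:
* u1 = 0x100, q1 = 0x55, k = 1, so u0 becomes 0x100000000 and
* q0 = 0x55555555. The result (0x55 << 32) + 0x55555555 is
* 0x5555555555, which is indeed 2^40 / 3.
*/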
/*
* Implementation of 64-bit signed division for 32-bit machines.
*/
int64_t
__divdi3(int64_t u, int64_t v)
{
int64_t q, t;
q = __udivdi3(abs64(u), abs64(v));
t = (u ^ v) >> 63; // If u, v have different
return (q ^ t) - t; // signs, negate q.
}
EXPORT_SYMBOL(__divdi3);
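/*
* The sign fix-up above is branch-free: t = (u ^ v) >> 63 is an
* arithmetic shift, so t is 0 when u and v share a sign and -1 (all
* ones) when they differ. Then (q ^ 0) - 0 == q, while
* (q ^ -1) - (-1) == ~q + 1 == -q in two's complement. For example,
* u = -6 and v = 3 give q = 2, t = -1, and (2 ^ -1) - (-1) == -2.
*/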
/*
* Implementation of 64-bit unsigned modulo for 32-bit machines.
*/
uint64_t
__umoddi3(uint64_t dividend, uint64_t divisor)
{
return (dividend - (divisor * __udivdi3(dividend, divisor)));
}
EXPORT_SYMBOL(__umoddi3);
#if defined(__arm) || defined(__arm__)
/*
* Implementation of 64-bit (un)signed division for 32-bit arm machines.
*
* Run-time ABI for the ARM Architecture (page 20). A pair of (unsigned)
* long longs is returned in {{r0, r1}, {r2,r3}}, the quotient in {r0, r1},
* and the remainder in {r2, r3}. The return type is specifically left
* set to 'void' to ensure the compiler does not overwrite these registers
* during the return. All results are in registers as per the ABI.
*/
void
__aeabi_uldivmod(uint64_t u, uint64_t v)
{
uint64_t res;
uint64_t mod;
res = __udivdi3(u, v);
mod = __umoddi3(u, v);
{
register uint32_t r0 asm("r0") = (res & 0xFFFFFFFF);
register uint32_t r1 asm("r1") = (res >> 32);
register uint32_t r2 asm("r2") = (mod & 0xFFFFFFFF);
register uint32_t r3 asm("r3") = (mod >> 32);
asm volatile(""
: "+r"(r0), "+r"(r1), "+r"(r2),"+r"(r3) /* output */
: "r"(r0), "r"(r1), "r"(r2), "r"(r3)); /* input */
return; /* r0; */
}
}
EXPORT_SYMBOL(__aeabi_uldivmod);
void
__aeabi_ldivmod(int64_t u, int64_t v)
{
int64_t res;
uint64_t mod;
res = __divdi3(u, v);
mod = __umoddi3(u, v);
{
register uint32_t r0 asm("r0") = (res & 0xFFFFFFFF);
register uint32_t r1 asm("r1") = (res >> 32);
register uint32_t r2 asm("r2") = (mod & 0xFFFFFFFF);
register uint32_t r3 asm("r3") = (mod >> 32);
asm volatile(""
: "+r"(r0), "+r"(r1), "+r"(r2),"+r"(r3) /* output */
: "r"(r0), "r"(r1), "r"(r2), "r"(r3)); /* input */
return; /* r0; */
}
}
EXPORT_SYMBOL(__aeabi_ldivmod);
#endif /* __arm || __arm__ */
#endif /* BITS_PER_LONG */
/* NOTE: The strtoxx behavior is solely based on my reading of the Solaris
* ddi_strtol(9F) man page. I have not verified the behavior of these
* functions against their Solaris counterparts. It is possible that I
* may have misinterpreted the man page or the man page is incorrect.
*/
int ddi_strtoul(const char *, char **, int, unsigned long *);
int ddi_strtol(const char *, char **, int, long *);
int ddi_strtoull(const char *, char **, int, unsigned long long *);
int ddi_strtoll(const char *, char **, int, long long *);
#define define_ddi_strtoux(type, valtype) \
int ddi_strtou##type(const char *str, char **endptr, \
int base, valtype *result) \
{ \
valtype last_value, value = 0; \
char *ptr = (char *)str; \
int flag = 1, digit; \
\
if (strlen(ptr) == 0) \
return EINVAL; \
\
/* Auto-detect base based on prefix */ \
if (!base) { \
if (str[0] == '0') { \
if (tolower(str[1]) == 'x' && isxdigit(str[2])) { \
base = 16; /* hex */ \
ptr += 2; \
} else if (str[1] >= '0' && str[1] < '8') { \
base = 8; /* octal */ \
ptr += 1; \
} else { \
return EINVAL; \
} \
} else { \
base = 10; /* decimal */ \
} \
} \
\
while (1) { \
if (isdigit(*ptr)) \
digit = *ptr - '0'; \
else if (isalpha(*ptr)) \
digit = tolower(*ptr) - 'a' + 10; \
else \
break; \
\
if (digit >= base) \
break; \
\
last_value = value; \
value = value * base + digit; \
if (last_value > value) /* Overflow */ \
return ERANGE; \
\
flag = 1; \
ptr++; \
} \
\
if (flag) \
*result = value; \
\
if (endptr) \
*endptr = (char *)(flag ? ptr : str); \
\
return 0; \
}
#define define_ddi_strtox(type, valtype) \
int ddi_strto##type(const char *str, char **endptr, \
int base, valtype *result) \
{ \
int rc; \
\
if (*str == '-') { \
rc = ddi_strtou##type(str + 1, endptr, base, result); \
if (!rc) { \
if (*endptr == str + 1) \
*endptr = (char *)str; \
else \
*result = -*result; \
} \
} else { \
rc = ddi_strtou##type(str, endptr, base, result); \
} \
\
return rc; \
}
define_ddi_strtoux(l, unsigned long)
define_ddi_strtox(l, long)
define_ddi_strtoux(ll, unsigned long long)
define_ddi_strtox(ll, long long)
EXPORT_SYMBOL(ddi_strtoul);
EXPORT_SYMBOL(ddi_strtol);
EXPORT_SYMBOL(ddi_strtoll);
EXPORT_SYMBOL(ddi_strtoull);
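/*
* Example of the base auto-detection above: with base == 0,
* ddi_strtoul("0x1f", &end, 0, &val) selects base 16 from the "0x"
* prefix and yields val == 31, ddi_strtoul("017", &end, 0, &val)
* selects base 8 and yields val == 15, and a string with no leading
* '0' is parsed as decimal.
*/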
int
ddi_copyin(const void *from, void *to, size_t len, int flags)
{
/* Fake ioctl() issued by kernel, 'from' is a kernel address */
if (flags & FKIOCTL) {
memcpy(to, from, len);
return 0;
}
return copyin(from, to, len);
}
EXPORT_SYMBOL(ddi_copyin);
int
ddi_copyout(const void *from, void *to, size_t len, int flags)
{
/* Fake ioctl() issued by kernel, 'from' is a kernel address */
if (flags & FKIOCTL) {
memcpy(to, from, len);
return 0;
}
return copyout(from, to, len);
}
EXPORT_SYMBOL(ddi_copyout);
#ifndef HAVE_PUT_TASK_STRUCT
/*
* This is only a stub function which should never be used. The SPL should
* never be putting away the last reference on a task structure, so this will
* not be called. However, we still need to define it so the module does not
* have an undefined symbol at load time. That all said, if this impossible
* thing does somehow happen, PANIC immediately so we know about it.
*/
void
__put_task_struct(struct task_struct *t)
{
PANIC("Unexpectedly put last reference on task %d\n", (int)t->pid);
}
EXPORT_SYMBOL(__put_task_struct);
#endif /* HAVE_PUT_TASK_STRUCT */
struct new_utsname *__utsname(void)
{
#ifdef HAVE_INIT_UTSNAME
return init_utsname();
#else
return &system_utsname;
#endif
}
EXPORT_SYMBOL(__utsname);
/*
* Read the unique system identifier from the /etc/hostid file.
*
* The behavior of /usr/bin/hostid on Linux systems with the
* regular eglibc and coreutils is:
*
* 1. Generate the value if the /etc/hostid file does not exist
* or if the /etc/hostid file is less than four bytes in size.
*
* 2. If the /etc/hostid file is at least 4 bytes, then return
* the first four bytes [0..3] in native endian order.
*
* 3. Always ignore bytes [4..] if they exist in the file.
*
* Only the first four bytes are significant, even on systems that
* have a 64-bit word size.
*
* See:
*
* eglibc: sysdeps/unix/sysv/linux/gethostid.c
* coreutils: src/hostid.c
*
* Notes:
*
* The /etc/hostid file on Solaris is a text file that often reads:
*
* # DO NOT EDIT
* "0123456789"
*
* Directly copying this file to Linux results in a constant
* hostid of 4f442023 because the default comment constitutes
* the first four bytes of the file.
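* On a little-endian machine this is because the bytes '#', ' ', 'D',
* and 'O' are 0x23 0x20 0x44 0x4f, which read as a native 32-bit word
* give 0x4f442023.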
*
*/
char *spl_hostid_path = HW_HOSTID_PATH;
module_param(spl_hostid_path, charp, 0444);
MODULE_PARM_DESC(spl_hostid_path, "The system hostid file (/etc/hostid)");
static int
hostid_read(void)
{
int result;
uint64_t size;
struct _buf *file;
unsigned long hostid = 0;
file = kobj_open_file(spl_hostid_path);
if (file == (struct _buf *)-1)
return -1;
result = kobj_get_filesize(file, &size);
if (result != 0) {
printk(KERN_WARNING
"SPL: kobj_get_filesize returned %i on %s\n",
result, spl_hostid_path);
kobj_close_file(file);
return -2;
}
if (size < sizeof(HW_HOSTID_MASK)) {
printk(KERN_WARNING
"SPL: Ignoring the %s file because it is %llu bytes; "
"expecting %lu bytes instead.\n", spl_hostid_path,
size, (unsigned long)sizeof(HW_HOSTID_MASK));
kobj_close_file(file);
return -3;
}
/* Read directly into the variable like eglibc does. */
/* Short reads are okay; native behavior is preserved. */
result = kobj_read_file(file, (char *)&hostid, sizeof(hostid), 0);
if (result < 0) {
printk(KERN_WARNING
"SPL: kobj_read_file returned %i on %s\n",
result, spl_hostid_path);
kobj_close_file(file);
return -4;
}
/* Mask down to 32 bits like coreutils does. */
spl_hostid = hostid & HW_HOSTID_MASK;
kobj_close_file(file);
return 0;
}
#define GET_HOSTID_CMD \
"exec 0</dev/null " \
" 1>/proc/sys/kernel/spl/hostid " \
" 2>/dev/null; " \
"hostid"
static int
hostid_exec(void)
{
char *argv[] = { "/bin/sh",
"-c",
GET_HOSTID_CMD,
NULL };
char *envp[] = { "HOME=/",
"TERM=linux",
"PATH=/sbin:/usr/sbin:/bin:/usr/bin",
NULL };
int rc;
/* Doing address resolution in the kernel is tricky and just
* not a good idea in general. So to set the proper 'hw_serial'
* use the usermodehelper support to ask '/bin/sh' to run
* '/usr/bin/hostid' and redirect the result to /proc/sys/kernel/spl/hostid
* for us to use. It's a horrific solution but it will do for now.
*/
rc = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
if (rc)
printk("SPL: Failed user helper '%s %s %s', rc = %d\n",
argv[0], argv[1], argv[2], rc);
return rc;
}
uint32_t
zone_get_hostid(void *zone)
{
static int first = 1;
unsigned long hostid;
int rc;
/* Only the global zone is supported */
ASSERT(zone == NULL);
if (first) {
first = 0;
/*
* Get the hostid if it was not passed as a module parameter.
* Try reading the /etc/hostid file directly, and then fall
* back to calling the /usr/bin/hostid utility.
*/
if ((spl_hostid == HW_INVALID_HOSTID) &&
(rc = hostid_read()) && (rc = hostid_exec()))
return HW_INVALID_HOSTID;
printk(KERN_NOTICE "SPL: using hostid 0x%08x\n",
(unsigned int) spl_hostid);
}
if (ddi_strtoul(hw_serial, NULL, HW_HOSTID_LEN-1, &hostid) != 0)
return HW_INVALID_HOSTID;
return (uint32_t)hostid;
}
EXPORT_SYMBOL(zone_get_hostid);
#ifndef HAVE_KALLSYMS_LOOKUP_NAME
/*
* The kallsyms_lookup_name() kernel function is not an exported symbol in
* Linux 2.6.19 through 2.6.32 inclusive.
*
* This function replaces the functionality by performing an upcall to user
* space where /proc/kallsyms is consulted for the requested address.
*
*/
#define GET_KALLSYMS_ADDR_CMD \
"exec 0</dev/null " \
" 1>/proc/sys/kernel/spl/kallsyms_lookup_name " \
" 2>/dev/null; " \
"awk '{ if ( $3 == \"kallsyms_lookup_name\" ) { print $1 } }' " \
" /proc/kallsyms "
static int
set_kallsyms_lookup_name(void)
{
char *argv[] = { "/bin/sh",
"-c",
GET_KALLSYMS_ADDR_CMD,
NULL };
char *envp[] = { "HOME=/",
"TERM=linux",
"PATH=/sbin:/usr/sbin:/bin:/usr/bin",
NULL };
int rc;
rc = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
/*
* Due to I/O buffering the helper may return successfully before
* the proc handler has a chance to execute. To catch this case,
* wait up to 1 second to verify spl_kallsyms_lookup_name_fn was
* updated to a non-SYMBOL_POISON value.
*/
if (rc == 0) {
rc = wait_event_timeout(spl_kallsyms_lookup_name_waitq,
spl_kallsyms_lookup_name_fn != SYMBOL_POISON, HZ);
if (rc == 0)
rc = -ETIMEDOUT;
else if (spl_kallsyms_lookup_name_fn == SYMBOL_POISON)
rc = -EFAULT;
else
rc = 0;
}
if (rc)
printk("SPL: Failed user helper '%s %s %s', rc = %d\n",
argv[0], argv[1], argv[2], rc);
return rc;
}
#endif
static int
__init spl_init(void)
{
int rc = 0;
if ((rc = spl_debug_init()))
return rc;
if ((rc = spl_kmem_init()))
SGOTO(out1, rc);
if ((rc = spl_mutex_init()))
SGOTO(out2, rc);
if ((rc = spl_rw_init()))
SGOTO(out3, rc);
if ((rc = spl_taskq_init()))
SGOTO(out4, rc);
if ((rc = spl_vn_init()))
SGOTO(out5, rc);
if ((rc = spl_proc_init()))
SGOTO(out6, rc);
if ((rc = spl_kstat_init()))
SGOTO(out7, rc);
if ((rc = spl_tsd_init()))
SGOTO(out8, rc);
if ((rc = spl_zlib_init()))
SGOTO(out9, rc);
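/*
 * If the kernel does not export kallsyms_lookup_name() its address
 * must be resolved at runtime; the *_kallsyms_lookup() calls below
 * depend on it.
 */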
#ifndef HAVE_KALLSYMS_LOOKUP_NAME
if ((rc = set_kallsyms_lookup_name()))
SGOTO(out10, rc = -EADDRNOTAVAIL);
#endif /* HAVE_KALLSYMS_LOOKUP_NAME */
if ((rc = spl_kmem_init_kallsyms_lookup()))
SGOTO(out10, rc);
if ((rc = spl_vn_init_kallsyms_lookup()))
SGOTO(out10, rc);
printk(KERN_NOTICE "SPL: Loaded module v%s-%s%s\n", SPL_META_VERSION,
SPL_META_RELEASE, SPL_DEBUG_STR);
SRETURN(rc);
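/*
 * Error unwinding: tear down each subsystem which was successfully
 * initialized, in the reverse order of initialization.
 */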
out10:
spl_zlib_fini();
out9:
spl_tsd_fini();
out8:
spl_kstat_fini();
out7:
spl_proc_fini();
out6:
spl_vn_fini();
out5:
spl_taskq_fini();
out4:
spl_rw_fini();
out3:
spl_mutex_fini();
out2:
spl_kmem_fini();
out1:
spl_debug_fini();
printk(KERN_NOTICE "SPL: Failed to Load Solaris Porting Layer "
"v%s-%s%s, rc = %d\n", SPL_META_VERSION, SPL_META_RELEASE,
SPL_DEBUG_STR, rc);
return rc;
}
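/* Release all subsystems in the reverse order of spl_init(). */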
static void
spl_fini(void)
{
SENTRY;
printk(KERN_NOTICE "SPL: Unloaded module v%s-%s%s\n",
SPL_META_VERSION, SPL_META_RELEASE, SPL_DEBUG_STR);
spl_zlib_fini();
spl_tsd_fini();
spl_kstat_fini();
spl_proc_fini();
spl_vn_fini();
spl_taskq_fini();
spl_rw_fini();
spl_mutex_fini();
spl_kmem_fini();
spl_debug_fini();
}
/* Called when a dependent module is loaded */
void
spl_setup(void)
{
int rc;
/*
* At module load time the pwd is set to '/' on a Solaris system.
 * On a Linux system it will be set to whatever directory the caller
* was in when executing insmod/modprobe.
*/
rc = vn_set_pwd("/");
if (rc)
printk("SPL: Warning unable to set pwd to '/': %d\n", rc);
}
EXPORT_SYMBOL(spl_setup);
/* Called when a dependent module is unloaded */
void
spl_cleanup(void)
{
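/* Currently a no-op; provided so dependent modules have a cleanup hook. */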
}
EXPORT_SYMBOL(spl_cleanup);
module_init(spl_init);
module_exit(spl_fini);
MODULE_AUTHOR("Lawrence Livermore National Labs");
MODULE_DESCRIPTION("Solaris Porting Layer");
MODULE_LICENSE("GPL");