configurations and make it opt-in for those who want it. LINT will
still build it.
While it may be a clear win in some scenarios, it still troubles users
(see PRs) in general cases. In addition, we still allocate resources
even when it is disabled by sysctl, and we still leak ARP/ND6 entries
when an interface is destroyed.
Discussed with: qingli (2010-11-24, just never executed)
Discussed with: juli (OCTEON1)
PR: kern/148018, kern/155604, kern/144917, kern/146792
MFC after: 2 weeks
explicit processing in the fork trampoline path instead of an
eventhandler(schedtail) invocation for each child process.
Remove the eventhandler(schedtail) code and change the Linux ABI to use the
newly added sysvec method.
While here, replace the explicit comparison of the module sysentvec structure
with the newly created process's sysentvec, which was used to detect the
Linux ABI.
Discussed with: kib
MFC after: 2 weeks
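For illustration only, the dispatch in the fork trampoline path could look
roughly like the sketch below; the method name sv_schedtail and the helper
are assumptions, not a quote of the committed code.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/sysent.h>

    /*
     * Sketch only: call an ABI-specific schedtail hook for the new child
     * instead of firing an eventhandler on every fork.  Only ABIs that
     * register a hook (e.g. the Linux ABI) pay any cost.
     */
    static void
    schedtail_hook(struct thread *td)
    {
        struct proc *p = td->td_proc;

        if (p->p_sysent->sv_schedtail != NULL)
            p->p_sysent->sv_schedtail(td);
    }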
- Use vm_paddr_t for pa in pmap_steal_memory()
- Use uintmax_t and %jx to ensure that physical addresses are printed
correctly in cpu_startup() and pmap_bootstrap() (see the sketch below)
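The printing pattern referred to above, as a small hedged example (function
and variable names are illustrative):

    #include <sys/param.h>
    #include <sys/systm.h>

    /*
     * Example only: widen to uintmax_t and use the 'j' length modifier so
     * a vm_paddr_t prints correctly whether it is 32 or 64 bits wide.
     */
    static void
    print_phys_range(vm_paddr_t start, vm_paddr_t end)
    {
        printf("physical memory: %#jx - %#jx\n",
            (uintmax_t)start, (uintmax_t)end);
    }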
a number of cores, this allows for a sparse set of CPUs. Implement support
for sparse core masks on Octeon.
XXX jeff@ suggests that all_cpus should include cores that are offline or
running other applications/OSes, so the platform API should be further
extended to allow us to set all_cpus to include all cores that are
physically-present as opposed to only those that are running FreeBSD.
Submitted by: Bhanu Prakash (with modifications)
Reviewed by: jchandra
Glanced at by: kib, jeff, jhb
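As a sketch of the general idea only: the function name, return type and the
use of the Simple Executive's cvmx_sysinfo_get() below are assumptions, not a
description of the actual patch.

    #include <sys/types.h>

    #include <contrib/octeon-sdk/cvmx.h>    /* Simple Executive; path assumed */

    /*
     * Sketch: report the sparse mask of cores the bootloader handed us
     * rather than a plain count, so CPU ids need not be contiguous.
     */
    uint32_t
    platform_cpu_mask(void)
    {
        return (cvmx_sysinfo_get()->core_mask);
    }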
o) Have mips_wbflush just do syncw, not sync, on Cavium Octeon (see the sketch below).
o) Add support for reading and writing some Octeon-specific registers.
NB: Some of these are not entirely Octeon-specific.
Submitted by: Bhanu Prakash
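A minimal sketch of the mips_wbflush() change, assuming the Octeon case is
keyed on the CPU_CNMIPS option (the guard and surrounding details are
illustrative):

    #include <sys/cdefs.h>

    /*
     * Sketch: on Cavium Octeon, draining the write buffer only needs
     * "syncw"; other MIPS CPUs keep using the full "sync".
     */
    static __inline void
    mips_wbflush(void)
    {
    #if defined(CPU_CNMIPS)
        __asm __volatile("syncw" : : : "memory");
    #else
        __asm __volatile("sync" : : : "memory");
    #endif
    }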
should_yield(). Use this in various places. Encapsulate the common
case of check-and-yield into a new function maybe_yield().
Change several checks for a magic number of iterations to use
should_yield() instead.
MFC after: 1 week
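A hedged sketch of the new calling pattern; process_one_page() and the header
locations are hypothetical:

    #include <sys/param.h>
    #include <sys/proc.h>    /* assumed home of should_yield()/maybe_yield() */

    static void
    process_one_page(int i)
    {
        /* hypothetical per-page work */
        (void)i;
    }

    /*
     * Sketch of the calling pattern: instead of yielding after a magic
     * number of iterations, let should_yield()/maybe_yield() decide.
     */
    static void
    process_pages(int npages)
    {
        int i;

        for (i = 0; i < npages; i++) {
            process_one_page(i);
            maybe_yield();
        }
    }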
- Provide trivial implementations of sf_buf_alloc(), sf_buf_free(),
sf_buf_kva() and sf_buf_page() using the direct map for n64 (see the
sketch after this entry).
- uio_machdep.c - use macros so that the direct map will be used in
case of n64.
Reviewed by: imp (earlier version)
Obtained from: jmallett (user/jmallett/octeon)
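For the sf_buf item above, a sketch of how the direct map makes these routines
trivial; the cast-based representation and the macro's header location are
assumptions:

    #include <sys/types.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <machine/vmparam.h>    /* MIPS_PHYS_TO_DIRECT(); location assumed */

    struct sf_buf;

    /*
     * Sketch: with a direct map (n64) an sf_buf can simply stand for the
     * page itself, so no KVA allocation or tracking is needed.
     */
    static __inline vm_offset_t
    sf_buf_kva(struct sf_buf *sf)
    {
        return (MIPS_PHYS_TO_DIRECT(VM_PAGE_TO_PHYS((vm_page_t)sf)));
    }

    static __inline vm_page_t
    sf_buf_page(struct sf_buf *sf)
    {
        return ((vm_page_t)sf);
    }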
- Remove sys/conf/ldscript.mips.64 and sys/conf/ldscript.mips.n32 and use
ldscript.mips for all ABIs. The default OUTPUT_FORMAT of the toolchain
is correct.
- Remove LDSCRIPT_NAME entries from XLR n32 and n64 conf files.
- Remove TARGET_BIG_ENDIAN from XLR conf files.
- Fix machine entry in XLRN32
sf buf allocation, use wakeup() instead of wakeup_one() to notify sf
buffer waiters about a free buffer.
sf_buf_alloc() calls msleep(PCATCH) when the SFB_CATCH flag was given,
and on simultaneous wakeup and signal delivery msleep() returns
EINTR/ERESTART even though the thread was selected by wakeup_one(). As
a result, we lose a wakeup, and some other waiter will not be woken up.
Reported and tested by: az
Reviewed by: alc, jhb
MFC after: 1 week
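A sketch of the fixed free path; the structure and variable names approximate
the usual per-arch sf_buf allocator and are not the exact code:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct sf_buf {
        TAILQ_ENTRY(sf_buf) free_entry;
        int                 ref_count;
    };

    static TAILQ_HEAD(, sf_buf) sf_buf_freelist =
        TAILQ_HEAD_INITIALIZER(sf_buf_freelist);
    static struct mtx sf_buf_lock;        /* mtx_init() elided */
    static u_int sf_buf_alloc_want;

    /*
     * Wake all waiters when a buffer is freed.  With wakeup_one(), a
     * waiter that takes a signal at the same moment returns EINTR and
     * the single wakeup is lost, leaving another waiter asleep.
     */
    void
    sf_buf_free(struct sf_buf *sf)
    {
        mtx_lock(&sf_buf_lock);
        if (--sf->ref_count == 0) {
            TAILQ_INSERT_TAIL(&sf_buf_freelist, sf, free_entry);
            if (sf_buf_alloc_want > 0)
                wakeup(&sf_buf_freelist);    /* was wakeup_one() */
        }
        mtx_unlock(&sf_buf_lock);
    }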
Compile sys/dev/mem/memutil.c for all supported platforms and remove the
now-unnecessary dev_mem_md_init(). Consistently define mem_range_softc from
mem.c for all platforms. Add missing #include guards for machine/memdev.h
and sys/memrange.h. Clean up some nearby style(9) nits.
MFC after: 1 month
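The missing include-guard pattern, for illustration (the guard macro name is a
guess at the convention):

    #ifndef _MACHINE_MEMDEV_H_
    #define _MACHINE_MEMDEV_H_

    /* ... existing contents of machine/memdev.h ... */

    #endif /* !_MACHINE_MEMDEV_H_ */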
In n32 and n64, add support for physical addresses above 4GB by having
64 bit page table entries and physical addresses (a short sketch of the
widened types follows this entry). Major changes are:
- param.h: update PTE sizes, masks and shift values to support 64 bit PTEs.
- param.h: remove DELAY(), mips_btop(same as atop), mips_ptob (same as
ptoa), and reformat.
- param.h: remove casting to unsigned long in trunc_page and round_page
since this will be used on physical addresses.
- _types.h: have 64 bit __vm_paddr_t for n32.
- pte.h: update TLB LO0/1 access macros to support 64 bit PTE
- pte.h: assembly macros for PTE operations.
- proc.h: md_upte is now 64 bit for n32 and n64.
- exception.S and swtch.S: use the new PTE macros for PTE operations.
- cpufunc.h: TLB_LO0/1 registers are 64 bit for n32 and n64.
- xlr_machdep.c: Add memory segments above 4GB to phys_avail[] as they are
supported now.
Reviewed by: jmallett (earlier version)
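A short, self-contained sketch of the widened types and the cast-free page
macros from the entry above (approximations, not the tree's exact
definitions):

    #include <sys/types.h>

    typedef uint64_t pt_entry_t;        /* 64 bit PTEs for n32 and n64 */

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1 << PAGE_SHIFT)
    #define PAGE_MASK       (PAGE_SIZE - 1)

    /*
     * trunc_page()/round_page() without a cast to unsigned long, so they
     * can also be applied to 64 bit physical addresses on n32.
     */
    #define trunc_page(x)   ((x) & ~PAGE_MASK)
    #define round_page(x)   (((x) + PAGE_MASK) & ~PAGE_MASK)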
1. Use vm_paddr_t for physical addresses.
There are a few places in the MIPS platform code where vm_offset_t is
used for physical addresses; change these to use vm_paddr_t:
- phys_avail[], physmem_desc[] arrays
- pmap_mapdev(), page_is_managed(), is_cacheable_mem(), pmap_map() args (see the sketch after this entry)
- local variables of various pmap functions
2. Change init_pte_prot() return from int to pt_entry_t, as this can be
64 bit when using 64 bit TLB entries.
3. Update printing of pt_entry_t and of vm_paddr_t to use the 'j' format with
uintmax_t. This will be useful later if we plan to use 64 bit physical
addresses with the 32 bit n32 compilation.
Reviewed by: imp
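For item 1, the prototype shapes after the change look roughly as below; the
exact argument lists are assumptions:

    #include <sys/types.h>

    void        *pmap_mapdev(vm_paddr_t pa, vm_size_t size);
    int          page_is_managed(vm_paddr_t pa);
    int          is_cacheable_mem(vm_paddr_t pa);
    vm_offset_t  pmap_map(vm_offset_t *virt, vm_paddr_t start, vm_paddr_t end,
                     int prot);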
the ones which run the message ring handler.
Some bits of the interrupt mask are part of the status register which is
saved with the process context, and these bits are initialized from the
cpu on which the process is created. This means that all the processes
should have the same value for these interrupt mask bits, so that the
interrupt mask remains the same regardless of what thread is scheduled
on the cpu.
Submitted by: Sriram Gorti (srgorti at netlogicmicro dot com)
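A sketch of the invariant described above: the interrupt-mask (IM) bits live
in the saved status register, so every context should take them from one
system-wide value. The helper and mask macro below are hypothetical:

    #include <sys/types.h>

    #define STATUS_IM_BITS  0x0000ff00    /* IM0..IM7 in the CP0 status register */

    /*
     * Build a saved status-register value whose interrupt-mask bits come
     * from a single system-wide mask rather than from whichever CPU
     * created the context.
     */
    static register_t
    context_status(register_t saved_sr, register_t system_int_mask)
    {
        return ((saved_sr & ~STATUS_IM_BITS) |
            (system_int_mask & STATUS_IM_BITS));
    }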
o) Clear/acknowledge receive interrupt at end of thread. This gives the
management interfaces performance on the order of 100Mbps rather than
the previous level of 10Mbps on my MR-730.
o) Add 'octm', a trivial driver for the 10/100 management ports found on some
Octeon systems.
o) Make the Simple Executive's management port helper routines compile on
FreeBSD (namely by not doing math on void pointers; see the sketch after
this log entry).
o) Add a cvmx_mgmt_port_sendm routine to the Simple Executive to send an mbuf
so there is only one copy in the transmit path, rather than having to first
copy the mbuf to an intermediate buffer and then copy that to the Simple
Executive's transmit ring.
o) Properly work out MII addresses of management ports on the Lanner MR-730.
XXX The MR-730 also needs some patches to the MII read/write routines, but
this is sufficient for now. Media detection will be fixed in the future
when I can spend more time reading the vendor-supplied patches.
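The void-pointer point from the Simple Executive item above, illustrated with
a trivial conforming form:

    #include <stddef.h>

    /*
     * Arithmetic on void * is a GNU extension; portable code casts to a
     * byte pointer first.
     */
    static void *
    advance(void *base, size_t off)
    {
        return ((char *)base + off);    /* rather than: base + off */
    }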
quite awful, because e.g. 4 packets will come in and get processed on 4
different cores at the same time, really battling with the TCP stack quite
painfully. For now, just run one task at a time.
In most cases this gets performance back up to where it was before the
correctness fixes that made interrupts run on all cores (except in high-load
TCP transmit cases, where all we're handling receive for is ACKs), and in some
cases it is better now. What would be ideal would be a more advanced interrupt
mitigation strategy, and possibly different workqueue groups per port for
multi-port systems, and so on, but this is a fine stopgap.