Commit Graph

34 Commits

Author SHA1 Message Date
Justin Hibbits
cf33fa7e80 powerpc64: Don't guard ISA 3.0 partition table setup with hw_direct_map
PowerISA 3.0 eliminated the 64-bit bridge mode which allowed 32-bit kernels
to run on 64-bit AIM/Book-S hardware.  Therefore only a 64-bit kernel can run
on this hardware, and since 64-bit native always has the direct map, there is
no need to guard it.
2019-11-13 02:22:00 +00:00
Leandro Lupori
95ca4720f0 [PPC64] Add minidump support to PowerNV
Implementation of PowerNV specific minidump code.

Reviewed by:	jhibbits
Differential Revision:	https://reviews.freebsd.org/D21643
2019-10-21 11:56:57 +00:00
Justin Hibbits
197a7e48c9 powerpc64/pmap: Simplify the code path for moea64_pte_replace_native()
Summary:
MOEA64_PTE_REPLACE() is called often with the pmap lock held, and
sometimes with the page pv lock held.  The less work done while holding
a lock, the better.  Since we are intending to replace the same PTE
(same hash index), we don't need to recalculate anything, just flat
replace the PTE.  This cuts more than 200 instructions off the
invalidating code path.  In addition, we don't need to replace a PTE
that's not occupied by this PVO.

Reviewed by:	luporl
Differential Revision:	https://reviews.freebsd.org/D21515
2019-09-06 02:45:46 +00:00
Justin Hibbits
7c382eea30 powerpc/pmap64: Make moea64 statistics optional
Summary:
It turns out statistics accounting is very expensive in the pmap driver,
and doesn't seem necessary in the common case.  Make this optional
behind a MOEA64_STATS #define, which one can set if they really need
statistics.

This saves ~7-8% on buildworld time on a POWER9.

Found by bdragon.

Reviewed by:	luporl
Differential Revision: https://reviews.freebsd.org/D20903
2019-07-25 03:47:27 +00:00
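
As a rough illustration of the approach described above, statistics updates can
be wrapped in a macro that compiles to nothing unless MOEA64_STATS is defined.
This is a minimal sketch; the wrapper name and the counter shown are
illustrative, not the exact source:

	/* Compile statistics accounting out of the hot path by default. */
	#ifdef MOEA64_STATS
	#define	STAT_MOEA64(x)	x
	#else
	#define	STAT_MOEA64(x)	do { } while (0)
	#endif

	/* Usage: the increment disappears entirely in the default build. */
	STAT_MOEA64(pmap->pm_stats.resident_count++);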
Justin Hibbits
4420fc895f powerpc/moea: Fix moea64 native VA invalidation
Summary:
moea64_insert_pteg_native()'s invalidation only works by happenstance.
The purpose of the shifts and XORs is to extract the VSID in order to
reverse-engineer the lower bits of the VPN.  Currently a segment is 256MB
(2**28), and ADDR_API_SHFT64 is 16, so shifting by ADDR_PIDX_SHIFT happens to
give an equivalent result.  However, it's semantically incorrect, in that we
don't want to shift by the page shift size, we want to shift to get to the
VSID.

Tested by:	bdragon
Differential Revision: https://reviews.freebsd.org/D20467
2019-06-01 01:40:14 +00:00
Justin Hibbits
8cd3016c00 powerpc64/pmap: Reapply r334235 to OEA64 pmap, clearing HID0_RADIX
This was lost in the re-merger of ISA3 MMU into moea64_native.
2019-05-25 04:56:06 +00:00
Justin Hibbits
9f1a007da7 powerpc64: Micro-optimize moea64 native pmap tlbie
* Cache moea64_need_lock in a local variable; gcc generates slightly better
  code this way, since it doesn't need to reload the value from memory on each
  read.
* VPN cropping is only needed on PowerPC ISA 2.02 and older cores, a subset
  of those that need serialization, so move it under the need_lock check;
  cores that don't need the lock then don't need to check this at all.
2019-03-26 02:53:35 +00:00
Justin Hibbits
bc94b70098 powerpc: Re-merge isa3 HPT with moea64 native HPT
r345402 fixed the bug that led to the split of the ISA 3.0 HPT handling from
the existing manager.  The cause of the bug was gcc moving the register
holding VPN to a different register (not r0), which triggered bizarre
behaviors.  With the fix, things work, so they can be re-merged.  No
performance lost with the merge.
2019-03-22 22:14:14 +00:00
Justin Hibbits
091a23cbf8 powerpc64: Handle the modern (2.05+) implementation of tlbie
By happenstance gcc4 puts 'vpn' into r0 in all uses of TLBIE(), but modern
gcc does not.  Also, the single-argument form of tlbie zeros all unused
arguments, making the modern tlbie instruction use r0 as the RS field
(LPID).

The vpn argument has the bottom 12 bits cleared (the input having been
left-shifted by 12 bits), which just so happens, on the POWER9 and previous
incarnations, to be the number of LPID bits supported.  With those bits
being zero, the instruction:

	tlbie r0, r0

will invalidate the VPN in r0, in LPAR 0 (ignoring the upper bits of r0 for
the RS field).  One build with gcc8 yields:

	tlbie r9, r0

with r0 having arbitrary contents, not equal to r9.  This leads to strange
crashes, behaviors, and panics, due to the requested TLB entry not actually
being invalidated.

As moea64_native must work with both old and new compilers, we explicitly
zero out r0 so that the single-argument form works whether built with base
gcc or modern gcc.  isa3_hashtb takes a different approach, encoding the
two-argument form so as not to explicitly clobber r0, instead letting the
compiler decide.

Reported by:	Brandon Bergren
Tested by:	Brandon Bergren
MFC after:	1 week
2019-03-22 01:43:31 +00:00
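
A hedged sketch of the moea64_native workaround described above, written as a
GCC-style inline-asm wrapper; the function name is illustrative and the
surrounding synchronization is abbreviated:

	/*
	 * With the single-operand form the assembler encodes r0 as the RS
	 * (LPID) operand, so zero r0 explicitly and tell the compiler it is
	 * clobbered, forcing vpn into some other register.
	 */
	static __inline void
	tlbie_vpn(uint64_t vpn)
	{
		__asm __volatile(
		    "li 0, 0;"			/* RS = 0: invalidate in LPAR 0 */
		    "ptesync; tlbie %0; eieio; tlbsync; ptesync"
		    :: "r"(vpn) : "r0", "memory");
	}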
Justin Hibbits
ebf95d96d9 Split the PowerISA 3.0 HPT implementation from the historic one
PowerISA 3.0 makes several changes to not only the format of the HPT but
also the behavior surrounding it.  For instance, TLBIE no longer requires
serialization.  Removing this lock cuts buildworld time in half on an
18-core/72-thread POWER9 system, demonstrating that this lock is highly
contended on such a system.

There was odd behavior observed trying to make this change in a
backwards-compatible manner in moea64_native.c, so the best option was to
fully split it, and largely revert the original changes adding POWER9
support to the original file.

Suggested by:	nwhitehorn
2018-06-14 17:23:51 +00:00
Justin Hibbits
402c7806cb Fix CTR formatting for moea64_native bootstrap
On very large memory systems 'size' can become 2GB or larger, resulting in a
negative value being formatted.  Also, moea64_pteg_count is already a long, so
format it as such.
2018-06-14 16:01:11 +00:00
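
A small, hedged illustration of the formatting fix (the actual trace strings in
the source differ); with a value that can reach or exceed 2GB, the size wants a
long/unsigned conversion rather than %d:

	/* 'size' may be >= 2 GB, so %d would print a negative number. */
	CTR2(KTR_PMAP, "moea64_bootstrap: HTAB size %#lx bytes, %ld PTEGs",
	    (u_long)size, moea64_pteg_count);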
Justin Hibbits
5ab39b6552 On POWER9 clear the HID0_RADIX before enabling the page tables
POWER9 supports Radix page tables in addition to Hashed page tables.  When
Radix page tables are in use, the TLB is cut in half, so that half of the
TLB is used for the page walk cache.  This is the default behavior, however
FreeBSD currently does not support Radix tables.  Clear this bit so that we
can use the full TLB.  Do this in the MMU logic so that configuration can be
localized to the specific translation format.  Once we do support Radix
tables, the setup for that will be localized to the Radix MMU kobj.
2018-05-26 04:33:19 +00:00
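
A minimal sketch of the HID0 adjustment described above, assuming the usual
mfspr/mtspr helpers and a HID0_RADIX bit definition (treat the names as
assumptions):

	/*
	 * With HID0[RADIX] clear, the POWER9 devotes the whole TLB to HPT
	 * translations instead of reserving half for the radix page-walk
	 * cache.  HID0 updates need context synchronization afterwards.
	 */
	register_t hid0;

	hid0 = mfspr(SPR_HID0);
	hid0 &= ~HID0_RADIX;
	mtspr(SPR_HID0, hid0);
	isync();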
Justin Hibbits
204d74320d Only crop the VPN on POWER4 and derivatives for TLBIE operations
Summary:
PowerISA 2.03 and later require bits 14:65 in the RB register argument,
which is the full value of the vpn argument post-shift.  Only POWER4, POWER4+,
and PPC970* need the upper 16 bits cropped.

With this change FreeBSD can boot to multi-user on POWER9.

Reviewed by:	nwhitehorn
Differential Revision: https://reviews.freebsd.org/D15581
2018-05-26 00:41:50 +00:00
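
A hedged sketch of the conditional cropping; the feature flag is illustrative,
and the shift/mask values follow the description above:

	/*
	 * Only pre-2.03 cores (POWER4, POWER4+, PPC970*) want the upper 16
	 * bits of the VPN cleared before tlbie; 2.03 and later expect bits
	 * 14:65 of RB intact.  tlbie_needs_crop is a hypothetical flag set
	 * from the CPU features at boot.
	 */
	vpn <<= ADDR_PIDX_SHFT;		/* the 12-bit page-index shift */
	if (tlbie_needs_crop)
		vpn &= ~(0xffffULL << 48);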
Nathan Whitehorn
b00df92b1f Final fix for alignment issues with the page table first patched with
r333273 and partially reverted with r333594.

Older CPUs implement addition of offsets into the page table by a
bitwise OR rather than actual addition, which only works if the table is
aligned at a multiple of its own size (they also require it to be aligned
at a multiple of 256KB). Newer ones do not have that requirement, but it
is harmless to enforce it anyway.

The original code was failing on newer systems with huge amounts of RAM
(> 512 GB), in which the page table was 4 GB in size. Because the
bootstrap memory allocator took its alignment parameter as an int, this
turned into a 0, removing any alignment constraint at all and making
the MMU fail. The first round of this patch (r333273) fixed this case by
aligning it at 256 KB, which broke older CPUs. Fix this instead by widening
the alignment parameter.
2018-05-14 04:00:52 +00:00
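
A small, self-contained illustration of the overflow described above: a 4 GB
alignment passed through an int parameter truncates to zero, which removes the
constraint entirely:

	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint64_t table_size = 4ULL * 1024 * 1024 * 1024; /* 4 GB table */
		int align_as_int = (int)table_size;		 /* truncates to 0 */

		printf("alignment requested: %#lx, after int truncation: %d\n",
		    (unsigned long)table_size, align_as_int);
		return (0);
	}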
Nathan Whitehorn
b9ff14e6e9 Revert changes to hash table alignment in r333273, which broke
booting on all G5 systems, pending further analysis.
2018-05-13 23:56:43 +00:00
Justin Hibbits
10d0cdfc6e Add support for powernv POWER9 MMU initialization
The POWER9 MMU (PowerISA 3.0) is slightly different from current
configurations, using a partition table even for hypervisor mode, and
dropping the SDR1 register.  Key off the newly early-enabled CPU features
flags for the new architecture, and configure the MMU appropriately.

The POWER9 MMU ignores the "PSIZ" field in the PTCR, and expects a 64kB
table.  As we are enabled for powernv (hypervisor mode, no VMs), only
initialize partition table entry 0, and zero out the rest.  The actual
contents of the register are identical to SDR1 from previous architectures.

Along with this, fix a bug in the page table allocation with very large
memory.  The table can be allocated on any 256k boundary.  The
bootstrap_alloc alignment argument is an int, and with large amounts of
memory passing the size of the table as the alignment will overflow an
integer.  Hard-code the alignment at 256k as wider alignment is not
necessary.

Reviewed by:	nwhitehorn
Tested by:	Breno Leitao
Relnotes:	Yes
2018-05-05 16:00:02 +00:00
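
A heavily hedged sketch of the partition-table setup described above; apart
from SPR_PTCR, the names are illustrative, and byte-order and field-encoding
details are omitted:

	#define	PART_TABLE_SIZE	(64 * 1024)	/* ISA 3.0 expects a 64 KB table */

	static void
	isa3_setup_partition_table(uint64_t htab_phys, uint64_t htab_size_enc)
	{
		uint64_t *part_table;

		/* Hypothetical early allocator; the table must be 64 KB aligned. */
		part_table = early_alloc(PART_TABLE_SIZE, PART_TABLE_SIZE);
		memset(part_table, 0, PART_TABLE_SIZE);

		/* Entry 0, dword 0: HTAB origin and size, the encoding SDR1 used. */
		part_table[0] = htab_phys | htab_size_enc;

		/* PSIZ in PTCR is ignored on POWER9. */
		mtspr(SPR_PTCR, (uint64_t)virt_to_phys(part_table)); /* hypothetical VA->PA helper */
	}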
Nathan Whitehorn
f9edb09d70 Move the powerpc64 direct map base address from zero to high memory. This
accomplishes a few things:
- Makes NULL an invalid address in the kernel, which is useful for catching
  bugs.
- Lays groundwork for radix-tree translation on POWER9, which requires the
  direct map be at high memory.
- Similarly lays groundwork for a direct map on 64-bit Book-E.

The new base address is chosen as the base of the fourth radix quadrant
(the minimum kernel address in this translation mode) and because all
supported CPUs ignore at least the first two bits of addresses in real
mode, allowing direct-map addresses to be used in real-mode handlers.
This is required by Linux and is part of the architecture standard
starting in POWER ISA 3, so can be relied upon.

Reviewed by:	jhibbits, Breno Leitao
Differential Revision:	D14499
2018-03-07 17:08:07 +00:00
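
A hedged sketch of what a high-memory direct map looks like; the base value
follows the fourth-quadrant description above and the helper names mirror the
usual DMAP macros, but treat the snippet as illustrative:

	/*
	 * The top two address bits select the radix quadrant, so the fourth
	 * quadrant starts at 0xc000000000000000.  Basing the direct map
	 * there leaves NULL unmapped and, because real mode ignores the top
	 * address bits, the addresses remain usable in real-mode handlers.
	 */
	#define	DMAP_BASE_ADDRESS	0xc000000000000000UL

	#define	PHYS_TO_DMAP(pa)	((pa) | DMAP_BASE_ADDRESS)
	#define	DMAP_TO_PHYS(va)	((va) & ~DMAP_BASE_ADDRESS)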
Justin Hibbits
bce6d88bc1 Merge AIM and Book-E PCPU fields
This is part of a long-term goal of merging Book-E and AIM into a single GENERIC
kernel.  As more work is done, the struct may be optimized further.

Reviewed by:	nwhitehorn
2018-02-17 20:59:12 +00:00
Pedro F. Giffuni
71e3c3083b sys/powerpc: further adoption of SPDX licensing ID tags.
Mainly focus on files that use the BSD 2-Clause license; however, the tool I
was using misidentified many licenses, so this was mostly a manual (and error
prone) task.

The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
2017-11-27 15:09:59 +00:00
Nathan Whitehorn
312fb3d8dd Invalidate TLB at boot using the correct IS settings on newer-than-POWER5
CPUs.

MFC after:	3 weeks
2017-11-25 22:10:10 +00:00
Nathan Whitehorn
4a38fe54fa Make native page table access endian-safe. Even on CPUs running in
little-endian mode, the hardware page table is big-endian. This is a
no-op on all currently supported systems.

MFC after:	1 month
2015-11-17 16:09:26 +00:00
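
A hedged fragment showing the kind of wrapping this implies (struct lpte,
LPTE_VALID and powerpc_sync() come from the pmap and machine headers; the
store ordering shown is a simplification):

	/*
	 * The hardware HPT is always big-endian, so convert on every load
	 * and store; on a big-endian kernel htobe64()/be64toh() compile away.
	 */
	static void
	pte_store(volatile struct lpte *pt, const struct lpte *pvo_pt)
	{
		pt->pte_lo = htobe64(pvo_pt->pte_lo);
		powerpc_sync();		/* order the low word before marking valid */
		pt->pte_hi = htobe64(pvo_pt->pte_hi | LPTE_VALID);
	}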
Nathan Whitehorn
d4eb568e07 Fix uninitialized variable. 2015-02-27 20:32:09 +00:00
Nathan Whitehorn
827cc9b981 New pmap implementation for 64-bit PowerPC processors. The main focus of
this change is to improve concurrency:
- Drop global state stored in the shadow overflow page table (and all other
  global state)
- Remove all global locks
- Use per-PTE lock bits to allow parallel page insertion
- Reconstruct state when requested for evicted PTEs instead of buffering
  it during overflow

This drops total wall time for make buildworld on a 32-thread POWER8 system
by a factor of two and system time by a factor of three, providing performance
20% better than similarly clocked Core i7 Xeons per-core. Performance on
smaller SMP systems, where PMAP lock contention was not as much of an issue,
is nearly unchanged.

Tested on:	POWER8, POWER5+, G5 UP, G5 SMP (64-bit and 32-bit kernels)
Merged from:	user/nwhitehorn/ppc64-pmap-rework
Looked over by:	jhibbits, andreast
MFC after:	3 months
Relnotes:	yes
Sponsored by:	FreeBSD Foundation
2015-02-24 21:37:20 +00:00
Ed Maste
0fcefb433d Update NetBSD Foundation copyrights to 2-clause BSD
The NetBSD Foundation states "Third parties are encouraged to change the
license on any files which have a 4-clause license contributed to the
NetBSD Foundation to a 2-clause license."

This change removes clauses 3 and 4 from copyright / license blocks that
list The NetBSD Foundation as the only copyright holder.

Sponsored by:	The FreeBSD Foundation
2014-03-18 01:40:25 +00:00
Attilio Rao
590f9303e5 Merge from vmobj-rwlock branch:
Remove unused inclusion of vm/vm_pager.h and vm/vnode_pager.h.

Sponsored by:	EMC / Isilon storage division
Tested by:	pho
Reviewed by:	alc
2013-02-26 01:00:11 +00:00
Nathan Whitehorn
284ea61312 Fix build on 32-bit systems. 2012-04-28 14:42:49 +00:00
Nathan Whitehorn
50e13823c8 After switching mutexes to use lwsync, they no longer provide sufficient
guarantees on acquire for the tlbie mutex. Conversely, the TLB invalidation
sequence provides guarantees that do not need to be redundantly applied on
release. Roll a small custom lock that is just right. Simultaneously,
convert the SLB tree changes back to lwsync, as changing them to sync
was a misdiagnosis of the tlbie barrier problem this commit actually fixes.
2012-04-28 00:12:23 +00:00
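
A hedged sketch of such a minimal lock around the invalidation sequence, using
the standard FreeBSD atomic_cmpset_int() and isync() primitives; the exact
placement of the barriers is simplified:

	static volatile u_int tlbie_lock;

	static void
	tlbie_locked(uint64_t vpn)
	{
		/* Acquire: a bare spin loop, then flush the instruction queue. */
		while (!atomic_cmpset_int(&tlbie_lock, 0, 1))
			;
		isync();

		__asm __volatile("ptesync; tlbie %0; eieio; tlbsync; ptesync"
		    :: "r"(vpn) : "memory");

		/* Release: the trailing ptesync already orders the store. */
		tlbie_lock = 0;
	}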
Nathan Whitehorn
b7d0d1fabf Execute an initial ptesync if and only if the PTE is actually being
invalidated, as opposed to a ref/changed bit update.
2012-04-06 22:33:13 +00:00
Nathan Whitehorn
7e55df27cb More PMAP performance improvements: skip 256 MB segments entirely if they
are not mapped during ranged operations and reduce the scope of the
tlbie lock only to the actual tlbie instruction instead of the entire
sequence. There are a few more optimization possibilities here as well.
2012-03-28 17:25:29 +00:00
Nathan Whitehorn
e71dfa7b84 More PMAP performance improvements: on powerpc64, when TLBIE can be run
with exceptions enabled, leave them enabled and use a regular mutex to
guard TLB invalidations instead of a spinlock.
2012-03-25 06:01:34 +00:00
Nathan Whitehorn
97f7cde42c Remove some dead code: unnecessary isyncs and memory sorting, which are
handled in mtmsr() and mem_regions(), respectively.
2011-06-02 14:15:44 +00:00
Nathan Whitehorn
52a190480f Only keep track of PTE validity statistics for pages not locked in the
table. The 'locked' attribute is used to circumvent the regular page table
locking for some special pages, with the result that including locked pages
here causes races when updating the stats.
2010-12-28 17:02:15 +00:00
Nathan Whitehorn
41f15bbbd9 Add some isync()s related to the 64-bit MMU scratch page to avoid race
conditions on its invalidation.
2010-12-11 20:29:52 +00:00
Nathan Whitehorn
bef5da7f98 Add an abstraction layer to the 64-bit AIM MMU's page table manipulation
logic to support modifying the page table through a hypervisor. This
uses KOBJ inheritance to provide subclasses of the base 64-bit AIM MMU
class with additional methods for page table manipulation.

Many thanks to Peter Grehan for suggesting this design and implementing
the MMU KOBJ inheritance mechanism.
2010-12-04 02:42:52 +00:00