/*-
 * Copyright (c) 2001 Jake Burkholder.
 * Copyright (c) 2008, 2010 Marius Strobl <marius@FreeBSD.org>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 * $FreeBSD$
 */

#ifndef	_MACHINE_TLB_H_
#define	_MACHINE_TLB_H_

/*
 * The direct mapped region uses 4MB TLB entries rather than 8KB ones so
 * that it is generally useful for accessing more than one page of
 * contiguous physical memory.  As a consequence, the system may only use
 * a direct mapped address when it has the same virtual colour as all
 * other mappings of the same page; the colour and cachability of the
 * mapping cannot be chosen freely.
 */

#define	TLB_DIRECT_ADDRESS_BITS		(43)
#define	TLB_DIRECT_PAGE_BITS		(PAGE_SHIFT_4M)

#define	TLB_DIRECT_ADDRESS_MASK		((1UL << TLB_DIRECT_ADDRESS_BITS) - 1)
#define	TLB_DIRECT_PAGE_MASK		((1UL << TLB_DIRECT_PAGE_BITS) - 1)

#define TLB_PHYS_TO_DIRECT(pa) \
((pa) | VM_MIN_DIRECT_ADDRESS)
#define TLB_DIRECT_TO_PHYS(va) \
((va) & TLB_DIRECT_ADDRESS_MASK)
#define TLB_DIRECT_TO_TTE_MASK \
(TD_V | TD_4M | (TLB_DIRECT_ADDRESS_MASK - TLB_DIRECT_PAGE_MASK))
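
/*
 * Illustrative sketch (not part of the original header): assuming
 * VM_MIN_DIRECT_ADDRESS only sets bits above TLB_DIRECT_ADDRESS_BITS,
 * the two conversion macros invert each other for any valid physical
 * address:
 *
 *	vm_offset_t va = TLB_PHYS_TO_DIRECT(pa);
 *	vm_paddr_t pa2 = TLB_DIRECT_TO_PHYS(va);	(pa2 == pa)
 */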

#define	TLB_DAR_SLOT_SHIFT	(3)
#define	TLB_DAR_TLB_SHIFT	(16)
#define	TLB_DAR_SLOT(tlb, slot) \
	((tlb) << TLB_DAR_TLB_SHIFT | (slot) << TLB_DAR_SLOT_SHIFT)
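/*
 * Illustrative example (not in the original): a diagnostic access to
 * slot 5 of the first 512-entry data TLB encodes as
 * TLB_DAR_SLOT(TLB_DAR_DT512_0, 5) == (2 << 16) | (5 << 3) == 0x20028.
 */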
#define	TLB_DAR_T16		(0)	/* US-III{,i,+}, IV{,+} */
#define	TLB_DAR_T32		(0)	/* US-I, II{,e,i} */
#define	TLB_DAR_DT512_0		(2)	/* US-III{,i,+}, IV{,+} */
#define	TLB_DAR_DT512_1		(3)	/* US-III{,i,+}, IV{,+} */
#define	TLB_DAR_IT128		(2)	/* US-III{,i,+}, IV */
#define	TLB_DAR_IT512		(2)	/* US-IV+ */
#define	TLB_DAR_FTLB		(0)	/* SPARC64 V, VI, VII, VIIIfx */
#define	TLB_DAR_STLB		(2)	/* SPARC64 V, VI, VII, VIIIfx */

#define	TAR_VPN_SHIFT	(13)
#define	TAR_CTX_MASK	((1 << TAR_VPN_SHIFT) - 1)

#define	TLB_TAR_VA(va)		((va) & ~TAR_CTX_MASK)
#define	TLB_TAR_CTX(ctx)	((ctx) & TAR_CTX_MASK)
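/*
 * Illustrative sketch (not in the original header): a TLB tag access
 * value combines a page-aligned VA with a 13-bit context number:
 *	TLB_TAR_VA(va) | TLB_TAR_CTX(ctx)
 */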

#define	TLB_CXR_CTX_BITS	(13)
#define	TLB_CXR_CTX_MASK \
	(((1UL << TLB_CXR_CTX_BITS) - 1) << TLB_CXR_CTX_SHIFT)
#define	TLB_CXR_CTX_SHIFT	(0)
#define	TLB_CXR_PGSZ_BITS	(3)
#define	TLB_CXR_PGSZ_MASK	(~TLB_CXR_CTX_MASK)
#define	TLB_PCXR_N_IPGSZ0_SHIFT	(53)	/* SPARC64 VI, VII, VIIIfx */
#define	TLB_PCXR_N_IPGSZ1_SHIFT	(50)	/* SPARC64 VI, VII, VIIIfx */
#define	TLB_PCXR_N_PGSZ0_SHIFT	(61)
#define	TLB_PCXR_N_PGSZ1_SHIFT	(58)
#define	TLB_PCXR_N_PGSZ_I_SHIFT	(55)	/* US-IV+ */
#define	TLB_PCXR_P_IPGSZ0_SHIFT	(24)	/* SPARC64 VI, VII, VIIIfx */
#define	TLB_PCXR_P_IPGSZ1_SHIFT	(27)	/* SPARC64 VI, VII, VIIIfx */
#define	TLB_PCXR_P_PGSZ0_SHIFT	(16)
#define	TLB_PCXR_P_PGSZ1_SHIFT	(19)
/*
 * Note that the US-IV+ documentation appears to have TLB_PCXR_P_PGSZ_I_SHIFT
 * and TLB_PCXR_P_PGSZ0_SHIFT erroneously inverted.
 */
#define	TLB_PCXR_P_PGSZ_I_SHIFT	(22)	/* US-IV+ */
#define	TLB_SCXR_S_PGSZ1_SHIFT	(19)
#define	TLB_SCXR_S_PGSZ0_SHIFT	(16)

#define	TLB_TAE_PGSZ_BITS	(3)
#define	TLB_TAE_PGSZ0_MASK \
	(((1UL << TLB_TAE_PGSZ_BITS) - 1) << TLB_TAE_PGSZ0_SHIFT)
#define	TLB_TAE_PGSZ1_MASK \
	(((1UL << TLB_TAE_PGSZ_BITS) - 1) << TLB_TAE_PGSZ1_SHIFT)
#define	TLB_TAE_PGSZ0_SHIFT	(16)
#define	TLB_TAE_PGSZ1_SHIFT	(19)

#define	TLB_DEMAP_ID_SHIFT	(4)
#define	TLB_DEMAP_ID_PRIMARY	(0)
#define	TLB_DEMAP_ID_SECONDARY	(1)
#define	TLB_DEMAP_ID_NUCLEUS	(2)

#define	TLB_DEMAP_TYPE_SHIFT	(6)
#define	TLB_DEMAP_TYPE_PAGE	(0)
#define	TLB_DEMAP_TYPE_CONTEXT	(1)
#define	TLB_DEMAP_TYPE_ALL	(2)	/* US-III and beyond only */

#define	TLB_DEMAP_VA(va)	((va) & ~PAGE_MASK)
#define	TLB_DEMAP_ID(id)	((id) << TLB_DEMAP_ID_SHIFT)
#define	TLB_DEMAP_TYPE(type)	((type) << TLB_DEMAP_TYPE_SHIFT)

#define	TLB_DEMAP_PAGE		(TLB_DEMAP_TYPE(TLB_DEMAP_TYPE_PAGE))
#define	TLB_DEMAP_CONTEXT	(TLB_DEMAP_TYPE(TLB_DEMAP_TYPE_CONTEXT))
#define	TLB_DEMAP_ALL		(TLB_DEMAP_TYPE(TLB_DEMAP_TYPE_ALL))

#define	TLB_DEMAP_PRIMARY	(TLB_DEMAP_ID(TLB_DEMAP_ID_PRIMARY))
#define	TLB_DEMAP_SECONDARY	(TLB_DEMAP_ID(TLB_DEMAP_ID_SECONDARY))
#define	TLB_DEMAP_NUCLEUS	(TLB_DEMAP_ID(TLB_DEMAP_ID_NUCLEUS))
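
/*
 * Illustrative sketch (not in the original header): the address written
 * to the demap register to flush a single page in the primary context
 * composes as
 *	TLB_DEMAP_VA(va) | TLB_DEMAP_PRIMARY | TLB_DEMAP_PAGE
 * i.e. the page-aligned VA with the type and context ID in the low bits.
 */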

#define	TLB_CTX_KERNEL		(0)
#define	TLB_CTX_USER_MIN	(1)
#define	TLB_CTX_USER_MAX	(8192)

#define	MMU_SFSR_ASI_SHIFT	(16)
#define	MMU_SFSR_FT_SHIFT	(7)
#define	MMU_SFSR_E_SHIFT	(6)
#define	MMU_SFSR_CT_SHIFT	(4)
#define	MMU_SFSR_PR_SHIFT	(3)
#define	MMU_SFSR_W_SHIFT	(2)
#define	MMU_SFSR_OW_SHIFT	(1)
#define	MMU_SFSR_FV_SHIFT	(0)

#define	MMU_SFSR_ASI_SIZE	(8)
#define	MMU_SFSR_FT_SIZE	(6)
#define	MMU_SFSR_CT_SIZE	(2)

#define	MMU_SFSR_GET_ASI(sfsr) \
	(((sfsr) >> MMU_SFSR_ASI_SHIFT) & ((1UL << MMU_SFSR_ASI_SIZE) - 1))
#define	MMU_SFSR_GET_FT(sfsr) \
	(((sfsr) >> MMU_SFSR_FT_SHIFT) & ((1UL << MMU_SFSR_FT_SIZE) - 1))
#define	MMU_SFSR_GET_CT(sfsr) \
	(((sfsr) >> MMU_SFSR_CT_SHIFT) & ((1UL << MMU_SFSR_CT_SIZE) - 1))
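/*
 * Illustrative example (not in the original): for sfsr == 0x800080UL,
 * MMU_SFSR_GET_ASI(sfsr) == (0x800080 >> 16) & 0xff == 0x80, i.e. the
 * faulting access used ASI 0x80 (ASI_PRIMARY).
 */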

#define	MMU_SFSR_E		(1UL << MMU_SFSR_E_SHIFT)
#define	MMU_SFSR_PR		(1UL << MMU_SFSR_PR_SHIFT)
#define	MMU_SFSR_W		(1UL << MMU_SFSR_W_SHIFT)
#define	MMU_SFSR_OW		(1UL << MMU_SFSR_OW_SHIFT)
#define	MMU_SFSR_FV		(1UL << MMU_SFSR_FV_SHIFT)

typedef void tlb_flush_nonlocked_t(void);
typedef void tlb_flush_user_t(void);

struct pmap;
struct tlb_entry;

extern int dtlb_slots;
extern int itlb_slots;
extern int kernel_tlb_slots;
extern struct tlb_entry *kernel_tlbs;

void	tlb_context_demap(struct pmap *pm);
void	tlb_page_demap(struct pmap *pm, vm_offset_t va);
void	tlb_range_demap(struct pmap *pm, vm_offset_t start, vm_offset_t end);

tlb_flush_nonlocked_t cheetah_tlb_flush_nonlocked;
tlb_flush_user_t cheetah_tlb_flush_user;

tlb_flush_nonlocked_t spitfire_tlb_flush_nonlocked;
tlb_flush_user_t spitfire_tlb_flush_user;

tlb_flush_nonlocked_t zeus_tlb_flush_nonlocked;
tlb_flush_user_t zeus_tlb_flush_user;

extern tlb_flush_nonlocked_t *tlb_flush_nonlocked;
extern tlb_flush_user_t *tlb_flush_user;

#endif /* !_MACHINE_TLB_H_ */