in the radeonkms driver.
Note: In PCI mode, virtual addresses on the graphics card that map to
system RAM are translated to physical addresses by the graphics card
itself. In AGP mode, address translation is done by the AGP chipset, so
fictitious addresses appear on the system bus. For CPU cache management
to work correctly when the CPU accesses this memory, it must use the
same fictitious addresses (and let the chipset translate them) instead
of using the physical addresses directly.
Reviewed by: kib
MFC after: 1 month
vm_phys_fictitious_to_vm_page should not be called directly, even when
operating on a range that has been registered using
vm_phys_fictitious_reg_range. PHYS_TO_VM_PAGE should be used instead
because on arches that use VM_PHYSSEG_DENSE the page might come
directly from vm_page_array.
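A minimal sketch of the intended pattern (the helper is hypothetical;
PHYS_TO_VM_PAGE() and the headers are real):

    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical helper: find the vm_page backing an address inside
     * a range registered with vm_phys_fictitious_reg_range().
     */
    static vm_page_t
    fict_page_lookup(vm_paddr_t pa)
    {
            /*
             * PHYS_TO_VM_PAGE() falls back to vm_page_array on
             * VM_PHYSSEG_DENSE arches, where the page may not live in
             * the fictitious range tree at all, so never call
             * vm_phys_fictitious_to_vm_page() directly.
             */
            return (PHYS_TO_VM_PAGE(pa));
    }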
Reported by: nwhitehorn
Tested by: nwhitehorn, David Mackay <davidm.jx8p@gmail.com>
Sponsored by: Citrix Systems R&D
the queue onto which pages that are going to be unwired should be
enqueued.
- Add stronger checks to the enqueue/dequeue for the pagequeues when
adding pages to and removing pages from them.
Of course, for unmanaged pages the queue parameter of vm_page_unwire() will
be ignored, just as the active parameter is today.
This makes adding new pagequeues quicker.
This change effectively modifies the KPI. __FreeBSD_version will,
however, be bumped only when the full cache of free pages is evicted.
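A hedged sketch of a call under the new KPI (the queue constant is an
assumption based on the description above):

    /*
     * Sketch: drop a wiring and ask for the page to be enqueued on
     * the inactive queue.  For unmanaged pages the queue argument is
     * ignored.
     */
    vm_page_lock(m);
    vm_page_unwire(m, PQ_INACTIVE);
    vm_page_unlock(m);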
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
Tested by: pho
... for msleep/cv_*wait() return values, where wait_event*() is used
on Linux. ERESTARTSYS is the return code expected by callers when the
operation was interrupted.
For instance, this is the case in radeon_cs_ioctl() (radeon_cs.c): if
an error occurs and the code isn't ERESTARTSYS (e.g. EINTR), it logs an
error.
Note that ERESTARTSYS is defined as ERESTART, but this keeps callers'
code close to Linux.
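A minimal sketch of the conversion (the wait channel, mutex and wmesg
are illustrative):

    int error;

    error = msleep(&bo->wait_ident, &bo->mtx, PCATCH, "ttmbow", 0);
    if (error == EINTR)
            error = ERESTARTSYS;    /* == ERESTART; what callers test */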
Submitted by: avg@ (previous version)
Previously the code would just iterate over the whole tree as if it were
just a list.
Without this change, I observed the X server becoming more and more
jerky over time.
MFC after: 5 days
shifts into the sign bit. Instead use (1U << 31) which gets the
expected result.
This fix is not ideal, as it assumes a 32-bit int, but it does fix the
issue for most cases.
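For example (macro names are illustrative):

    #define FLAG_BROKEN (1 << 31)  /* undefined: shifts into the sign
                                      bit of a 32-bit int */
    #define FLAG_FIXED  (1U << 31) /* well-defined unsigned shift */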
A similar change was made in OpenBSD.
Discussed with: -arch, rdivacky
Reviewed by: cperciva
Add a new ttm_bo_release_mmap() function to unmap pages in a
vm_object_t. Pages are freed when the buffer object is later released.
This function is called in ttm_bo_unmap_virtual_locked(), replacing
Linux' unmap_mapping_range(). In particular this is called when a buffer
object is about to be moved, so that its mapping is invalidated.
However, we don't use this function in ttm_bo_vm_dtor(), because the
vm_object_t is already marked as OBJ_DEAD and the pages will be
unmapped.
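A hedged sketch of the shape of this function, assuming the cdev pager
API (cdev_pager_free_page() is real; the loop details are illustrative):

    #include <vm/vm.h>
    #include <vm/vm_page.h>
    #include <vm/vm_pager.h>

    static void
    example_bo_release_mmap(vm_object_t vm_obj, vm_pindex_t num_pages)
    {
            vm_page_t m;
            vm_pindex_t i;

            VM_OBJECT_WLOCK(vm_obj);
            for (i = 0; i < num_pages; i++) {
                    m = vm_page_lookup(vm_obj, i);
                    if (m == NULL)
                            continue;
                    /*
                     * Drop the mapping; the page itself is freed
                     * when the buffer object is released.
                     */
                    cdev_pager_free_page(vm_obj, m);
            }
            VM_OBJECT_WUNLOCK(vm_obj);
    }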
Approved by: kib@
This fixes a crash where a SIGALRM, heavily used by X.Org, would
interrupt the wait, causing the page fault to fail and the "Xorg"
process to receive a SIGSEGV.
Approved by: kib@
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Mon Jan 14 15:08:14 2013 +0100
drm/ttm: fix fence locking in ttm_buffer_object_transfer, 2nd try
This fixes up
commit e8e89622ed361c46bf90ba4828e685a8b603f7e5
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Tue Dec 18 22:25:11 2012 +0100
drm/ttm: fix fence locking in ttm_buffer_object_transfer
which leaves behind a might_sleep in atomic context, since the
fence_lock spinlock is held over a kmalloc(GFP_KERNEL) call. The fix
is to revert the above commit and only take the lock where we need it,
around the call to ->sync_obj_ref.
v2: Fixup things noticed by Maarten Lankhorst:
- Brown paper bag locking bug.
- No need for kzalloc if we clear the entire thing on the next line.
- check for bo->sync_obj (totally unlikely race, but still someone
else could have snuck in) and clear fbo->sync_obj if it's cleared
already.
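In code, the fix amounts to something like this sketch (field and
callback names per TTM of that era; illustrative, not the exact diff):

    struct ttm_buffer_object *fbo;

    fbo = kmalloc(sizeof(*fbo), GFP_KERNEL); /* may sleep: lock not held */
    if (!fbo)
            return -ENOMEM;
    *fbo = *bo;     /* full struct copy, so kzalloc() is unnecessary */

    spin_lock(&bdev->fence_lock);
    /* Re-check under the lock: bo->sync_obj may have been cleared. */
    if (bo->sync_obj)
            fbo->sync_obj = bdev->driver->sync_obj_ref(bo->sync_obj);
    else
            fbo->sync_obj = NULL;
    spin_unlock(&bdev->fence_lock);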
Reported-by: Dave Airlie <airlied@gmail.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Approved by: kib@
Author: Dave Airlie <airlied@gmail.com>
Date: Wed Jan 16 15:58:34 2013 +1000
ttm: on move memory failure don't leave a node dangling
If we have a move notify callback, then when moving fails, we call
move notify the opposite way around. However, this ends up with *mem
containing the mm_node from the bo, which means we double free it.
This is a follow-on to the previous fix.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Approved by: kib@
Author: Dave Airlie <airlied@gmail.com>
Date: Wed Jan 16 14:25:44 2013 +1000
ttm: don't destroy old mm_node on memcpy failure
When we are using memcpy to move objects around, and we fail to memcpy
due to lack of memory to populate or failure to finish the copy, we don't
want to destroy the mm_node that has been copied into old_copy.
While working on a new kms driver that uses memcpy, if I overallocated
bo's up to the memory limits and eviction failed, the machine would
oops soon after, due to having an active bo with an already freed
drm_mm embedded in it; freeing it a second time didn't end well.
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Approved by: kib@
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date: Tue Jan 15 14:57:28 2013 +0100
drm/ttm: unexport ttm_bo_wait_unreserved
All legitimate users of this function outside ttm_bo.c are gone, now
it's only an implementation detail.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Approved by: kib@
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date: Tue Jan 15 14:57:10 2013 +0100
drm/ttm: use ttm_bo_reserve_slowpath_nolru in ttm_eu_reserve_buffers, v2
This requires re-use of the seqno, which increases fairness slightly.
Instead of spinning with a new seqno every time we keep the current one,
but still drop all other reservations we hold. Only when we succeed,
we try to get back our other reservations again.
This should increase fairness slightly as well.
Changes since v1:
- Increase val_seq before calling ttm_bo_reserve_slowpath_nolru and
retrying to take all entries to prevent a race.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Approved by: kib@
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date: Tue Jan 15 14:57:05 2013 +0100
drm/ttm: add ttm_bo_reserve_slowpath
Instead of dropping everything, waiting for the bo to be unreserved
and trying over, a better strategy would be to do a blocking wait.
This can be mapped a lot better to a mutex_lock-like call.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Approved by: kib@
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date: Tue Jan 15 14:56:48 2013 +0100
drm/ttm: cleanup ttm_eu_reserve_buffers handling
With the lru lock no longer required for protecting reservations we
can just do a ttm_bo_reserve_nolru on -EBUSY, and handle all errors
in a single path.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date: Tue Jan 15 14:56:37 2013 +0100
drm/ttm: remove lru_lock around ttm_bo_reserve
There should no longer be assumptions that reserve will always succeed
with the lru lock held, so we can safely break the whole atomic
reserve/lru thing. As a bonus this fixes most lockdep annotations for
reservations.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
The flag was mandatory since r209792, where vm_page_grab(9) was
changed to only support the alloc retry semantic.
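A sketch of a call after this change (object and pindex are
illustrative):

    vm_page_t m;

    /* The retry semantic is implied; VM_ALLOC_RETRY no longer exists. */
    m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_WIRED);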
Suggested and reviewed by: alc
Sponsored by: The FreeBSD Foundation
additional information, when the page is guaranteed to not belong to a
paging queue. Usually, this results in a lot of type casts, which make
reasoning about the code's correctness harder.
Sometimes m->object is used instead of pageq, which could cause real
and confusing bugs if non-NULL m->object is leaked. See r141955 and
r253140 for examples.
Change the pageq member into a union containing explicitly-typed
members. Use them instead of type-punning or abusing m->object in x86
pmaps, uma and vm_page_alloc_contig().
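A hypothetical sketch of such a union (member names are assumptions,
not necessarily the committed ones):

    /* In struct vm_page, replacing the overloaded pageq field: */
    union {
            TAILQ_ENTRY(vm_page) q;   /* page queue or free list */
            SLIST_ENTRY(vm_page) ss;  /* private lists, e.g. in pmaps */
            void *v;                  /* other private, typed-by-owner use */
    } plinks;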
Requested and reviewed by: alc
Sponsored by: The FreeBSD Foundation
for nodes used in vm_radix.
On architectures supporting direct mapping, also avoid pre-allocating
the KVA for such nodes.
In order to do so, allow the operations derived from vm_radix_insert()
to fail, and handle all the resulting failures in their callers.
On the vm_radix side, introduce a new function, vm_radix_replace(),
which can replace an already-present leaf node with a new one, and
account for the possibility that, during allocation within
vm_radix_insert(), the operations on the radix trie recurse. This
means that if operations in vm_radix_insert() recursed,
vm_radix_insert() will start from scratch again.
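A hedged sketch of a caller coping with the now-fallible insert (the
recovery shown is illustrative):

    if (vm_radix_insert(&object->rtree, m) != 0) {
            /* Trie node allocation failed: undo partial work. */
            vm_page_free(m);
            return (ENOMEM);
    }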
Sponsored by: EMC / Isilon storage division
Reviewed by: alc (older version)
Reviewed by: jeff
Tested by: pho, scottl
Unify the two concepts into a real, minimal sxlock where the shared
acquisition represents the soft busy and the exclusive acquisition
represents the hard busy.
The old VPO_WANTED mechanism becomes the hard path for this new lock,
and it becomes per-page rather than per-object.
The vm_object lock becomes an interlock for this functionality:
it can be held in either read or write mode.
However, if the vm_object lock is held in read mode while acquiring
or releasing the busy state, the thread owner cannot make any
assumptions about the busy state unless it is also busying it.
Also:
- Add a new flag to directly share-busy pages while vm_page_alloc
and vm_page_grab are being executed. This will be very helpful
once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag
The KPI is heavily changed; this is why the version is bumped.
It is very likely that some VM consumers in the ports tree will need
to change their own code.
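A hedged sketch of the resulting protocol (the function names are my
reading of the new KPI):

    VM_OBJECT_WLOCK(object);
    m = vm_page_lookup(object, pindex);
    vm_page_sbusy(m);               /* shared acquisition: soft busy */
    VM_OBJECT_WUNLOCK(object);
    /* ... inspect the page without the object lock ... */
    vm_page_sunbusy(m);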
Sponsored by: EMC / Isilon storage division
Discussed with: alc
Reviewed by: jeff, kib
Tested by: gavin, bapt (older version)
Tested by: pho, scottl
transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
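A hedged sketch of the naming scheme above (signatures are
assumptions):

    #include <vm/vm.h>
    #include <vm/vm_kern.h>
    #include <vm/vm_extern.h>

    vm_offset_t va;

    /* kva_*: allocate and free bare kernel virtual address space. */
    va = kva_alloc(PAGE_SIZE);
    if (va != 0)
            kva_free(va, PAGE_SIZE);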
Reviewed by: alc
Tested by: pho
Sponsored by: EMC / Isilon Storage Division
held. The ttm_buffer_object_transfer() does not need the mutex locked
at all, except for the call to the driver sync_obj_ref() method.
Reported and tested by: dumbbell
MFC after: 2 weeks
future further optimizations where the vm_object lock will be held
in read mode most of the time the resident pool of pages in the page
cache is accessed for reading purposes.
The change is mostly mechanical, but a few notes are in order:
* The KPI changes as follow:
- VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
- VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
- VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
- VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED()
(in order to avoid visibility of implementation details)
- The read-mode operations are added:
VM_OBJECT_RLOCK(), VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(),
VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
* The vm/vm_pager.h namespace-pollution avoidance (which forced
consumers to include sys/mutex.h directly to cater to its inline
functions using VM_OBJECT_LOCK()) means that all vm/vm_pager.h
consumers must now also include sys/rwlock.h.
* zfs requires a quite convoluted fix to include FreeBSD rwlocks in
the compat layer, because the name clash between the FreeBSD and
Solaris versions must be avoided.
To this end, zfs redefines the vm_object locking functions
directly, isolating the FreeBSD components in specific compat stubs.
The KPI is heavily broken by this commit. Third-party ports must be
updated accordingly (I can think off-hand of VirtualBox, for example).
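In practice, the mechanical part of the conversion looks like:

    VM_OBJECT_WLOCK(object);        /* was: VM_OBJECT_LOCK(object) */
    /* ... modify the object or its pages ... */
    VM_OBJECT_WUNLOCK(object);      /* was: VM_OBJECT_UNLOCK(object) */

    /* New read-mode path for read-only access: */
    VM_OBJECT_RLOCK(object);
    VM_OBJECT_ASSERT_LOCKED(object);
    VM_OBJECT_RUNLOCK(object);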
Sponsored by: EMC / Isilon storage division
Reviewed by: jeff
Reviewed by: pjd (ZFS specific review)
Discussed with: alc
Tested by: pho
The early commit is done to facilitate the off-tree work on the
porting of the Radeon driver.
Sponsored by: The FreeBSD Foundation
Debugged and tested by: dumbbell
MFC after: 1 month