This patch updates dma_buf_vunmap() and dma-buf's vunmap callback to
use struct dma_buf_map. The interfaces previously received a raw buffer
address; this address is now passed in an instance of the structure.
Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.
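For illustration, the signature change is essentially the following
(sketch, not the full diff):

  /* before: callers passed the raw kernel address */
  void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr);

  /* after: callers pass the mapping descriptor filled by dma_buf_vmap() */
  void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);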
v2:
* include dma-buf-heaps and i915 selftests (kernel test robot)
* initialize cma_obj before using it in drm_gem_cma_free_object()
(kernel test robot)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-4-tzimmermann@suse.de
This patch updates dma_buf_vmap() and dma-buf's vmap callback to use
struct dma_buf_map.
The interfaces previously returned the buffer address. This address is
now stored in an instance of the structure, which is passed as an
additional argument. The functions return an errno code on errors.
Users of the functions are updated accordingly. This is only an interface
change. It is currently expected that dma-buf memory can be accessed with
system memory load/store operations.
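For illustration, a minimal calling sequence under the new interface
looks roughly like this (error handling shortened):

  struct dma_buf_map map;
  int ret;

  ret = dma_buf_vmap(dmabuf, &map);
  if (ret)
          return ret; /* errno code instead of a NULL vaddr */

  /* access the memory via map.vaddr, or via map.vaddr_iomem
   * if map.is_iomem is set */

  dma_buf_vunmap(dmabuf, &map);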
v3:
* update fastrpc driver (kernel test robot)
v2:
* always clear map parameter in dma_buf_vmap() (Daniel)
* include dma-buf-heaps and i915 selftests (kernel test robot)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Tomasz Figa <tfiga@chromium.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-3-tzimmermann@suse.de
The new type struct dma_buf_map represents a mapping of dma-buf memory
into kernel space. It contains a flag, is_iomem, that signals users to
access the mapped memory with I/O operations instead of regular loads
and stores.
Until now, it was assumed that DMA buffer memory can be accessed with
regular load and store operations. Some architectures, such as sparc64,
require the use of I/O operations to access dma-buf buffers that are
located in I/O memory. Providing struct dma_buf_map allows drivers to
handle both cases.
This was specifically a problem when refreshing the graphics framebuffer
on such systems. [1]
As the first step, struct dma_buf stores an instance of struct dma_buf_map
internally. Afterwards, dma-buf's vmap and vunmap interfaces are
converted. Finally, affected drivers can be fixed.
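For reference, the structure is essentially a pointer tagged with its
address-space type (sketch of the new type):

  struct dma_buf_map {
          union {
                  void __iomem *vaddr_iomem; /* if is_iomem is set */
                  void *vaddr;               /* system memory otherwise */
          };
          bool is_iomem;
  };

Users test is_iomem and pick memcpy_fromio()/memcpy_toio() or regular
loads and stores accordingly.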
v3:
* moved documentation into separate patch
* test for NULL pointers with !<ptr>
[1] https://lore.kernel.org/dri-devel/20200725191012.GA434957@ravnborg.org/
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20200925115601.23955-2-tzimmermann@suse.de
This makes blob resources available to guest userspace. They are needed
for GL4.5, Vulkan and zero-copy virtio-gpu.
For Mesa, blob resources have been tested with Piglit's ARB_buffer_storage
tests and apitraces. Apitraces of GL4.5 games show we're between 70%
and 80% of host performance on Iris, based on an apitrace of a 2013 GL4.5
game:
11.204 FPS (guest)
15.947 FPS (host)
This is still better than the status quo, when said game was unplayable
with Virgl due to an inefficient GL4.3 fallback. But there's still room
for improvement if we want to match HW-assisted virtualization.
For Vulkan, blob resources have been tested with dEQP.vk.memory* and
running Vulkan applications in production with the "Cuttlefish" virtual
Android device. This has been done with Lingfeng Yang's "gfxstream"
Vulkan implementation, which virtualizes Vulkan across many Google
products.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Acked-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Chia-I Wu <olvaffe@gmail.com>
Acked-by: Lingfeng Yang <lfy@google.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200924003214.662-5-gurchetansingh@chromium.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
This patch adds a new virtgpu feature that allows directly
mapping host-allocated resources.
This is based on virtio shared memory regions, which allow
querying for memory regions using the PCI transport. Each shared
memory region has an associated "shmid", the meaning of which
is device specific.
For virtio-gpu, we can define the shared memory region with id
VIRTIO_GPU_SHM_ID_HOST_VISIBLE to be the "host visible memory
region".
The presence of the host visible memory region means the following
hypercalls are supported:
1) VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB
This hypercall tells the host to inject the host resource's
mapping at an offset into virtio-gpu's PCI address space.
This is typically done via KVM_SET_USER_MEMORY_REGION on Linux
hosts.
On success, VIRTIO_GPU_RESP_OK_MAP_INFO is returned, which
specifies the host buffer's caching type and, possibly in the
future, performance hints about the buffer.
2) VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB
This hypercall tells the host to remove the host resource's
mapping from the guest VM.
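For illustration, the wire format is roughly the following (sketch of
the uapi additions; see the virtio_gpu uapi header for the
authoritative layout):

  struct virtio_gpu_resource_map_blob {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 resource_id;
          __le32 padding;
          __le64 offset; /* offset into the host visible shm region */
  };

  /* VIRTIO_GPU_RESP_OK_MAP_INFO */
  struct virtio_gpu_resp_map_info {
          struct virtio_gpu_ctrl_hdr hdr;
          __u32 map_info; /* caching type of the host buffer */
          __u32 padding;
  };

  struct virtio_gpu_resource_unmap_blob {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 resource_id;
          __le32 padding;
  };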
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Lingfeng Yang <lfy@google.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200924003214.662-4-gurchetansingh@chromium.org
Co-developed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
A blob resource is a container for:
- VIRTIO_GPU_BLOB_MEM_GUEST: a guest memory allocation
(referred to as a "guest-only blob resource")
- VIRTIO_GPU_BLOB_MEM_HOST3D: a host3d memory allocation
(referred to as a "host-only blob resource")
- VIRTIO_GPU_BLOB_MEM_HOST3D_GUEST: a guest + host3d memory allocation
(referred to as a "default blob resource").
The memory properties of the blob resource must be described by
`blob_mem`.
For default and guest-only blob resources, `nents` guest system
pages are assigned to the resource. For default blob resources,
these guest pages are used for transfer operations. Attach/detach is
also possible to allow swap-in/swap-out, but isn't required since it
may not be applicable to future blob mem types
(shared guest/guest vram).
Host allocations depend on whether 3D is supported. If 3D is not
supported, the only valid field for `blob_mem` is
VIRTIO_GPU_BLOB_MEM_GUEST.
If 3D is supported, the virtio-gpu resource is created from the
context-local object identified by the `blob_id`. The actual host
allocation is done by CMD_SUBMIT_3D.
Userspace must specify if the blob resource is intended to be used
for userspace mapping, sharing between virtio-gpu contexts and/or
sharing between virtio devices. This is done via `blob_flags`.
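For illustration, the create command carrying these fields looks
roughly like this (sketch of the uapi additions; `nents` travels as
nr_entries, with the guest pages appended to the command):

  struct virtio_gpu_resource_create_blob {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 resource_id;
          __le32 blob_mem;   /* VIRTIO_GPU_BLOB_MEM_* */
          __le32 blob_flags; /* VIRTIO_GPU_BLOB_FLAG_USE_* */
          __le32 nr_entries; /* number of guest pages (nents) */
          __le64 blob_id;    /* context-local object, host3d only */
          __le64 size;
  };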
For 3D hosts, both VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D and
VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D may be used to update
the host resource. There is no restriction on the image/buffer
view the guest/host userspace has on the blob resource.
VIRTIO_GPU_CMD_SET_SCANOUT_BLOB / VIRTIO_GPU_CMD_RESOURCE_FLUSH may
be used with blob resources as well. The modifier is intentionally
left out of SCANOUT_BLOB, and auxiliary blobs are also left out
as a simplification.
The use case for blob resources is zero-copy, needed for coherent
memory in virglrenderer. Host-only blob resources are not mappable
without the feature described in the next patch, but are shareable.
Future work:
- Emulated coherent `blob_mem` type for QEMU/vhost-user
- A `blob_mem` type for guest-only resources imported in
cache-coherent FOSS GPU/display drivers.
- Display integration involving the blob model using seamless
Wayland windows.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Chia-I Wu <olvaffe@gmail.com>
Acked-by: Lingfeng Yang <lfy@google.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20200924003214.662-3-gurchetansingh@chromium.org
Co-developed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Several GEM and PRIME callbacks have been deprecated in favor of
per-instance GEM object functions. Remove the callbacks as they are
now unused. The only exception is .gem_prime_mmap, which is still
in use by several drivers.
What is also gone is gem_vm_ops in struct drm_driver. All drivers now
use struct drm_gem_object_funcs.vm_ops instead.
While at it, the patch also improves error handling around calls
to .free and .get_sg_table callbacks.
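For illustration, a hypothetical driver now bundles everything in a
per-object function table (all example_* names are placeholders):

  static const struct drm_gem_object_funcs example_gem_funcs = {
          .free         = example_gem_free_object,
          .get_sg_table = example_gem_get_sg_table,
          .vmap         = example_gem_vmap,
          .vunmap       = example_gem_vunmap,
          .vm_ops       = &example_gem_vm_ops, /* was drm_driver.gem_vm_ops */
  };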
v3:
* restore default call to drm_gem_prime_export() in
drm_gem_prime_handle_to_fd()
* return -ENOSYS if get_sg_table is not set
* drop all checks for obj->funcs
* clean up TODO list and documentation
v2:
* update related TODO item (Sam)
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200923102159.24084-23-tzimmermann@suse.de
When we swap a BO out or in, we try to change the caching
attributes of the pages before/after doing the copy.
On x86 this is done by calling set_pages_uc(),
set_memory_wc() or set_pages_wb() for non-highmem pages
to update the linear mapping of the page.
On all other platforms we do exactly nothing.
Now on x86 this is unnecessary because copy_highpage() will
either create a temporary mapping of the page, which is WB
anyway and destroyed again immediately, or use the linear
mapping with the correct caching attributes.
So stop this nonsense: just keep the caching as it is and
return an error when a driver tries to change the caching of
an already populated TT object.
This is much more defensive, since changing caching
attributes is platform and driver specific and usually
doesn't work after the pages were initially allocated.
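Sketched out, with helper and field names assumed from the TTM code of
this era, the check boils down to:

  /* sketch: refuse caching changes on a populated TT object */
  if (ttm->caching_state == c_state)
          return 0;
  if (ttm->state != tt_unpopulated)
          return -EINVAL; /* pages already allocated */
  ttm->caching_state = c_state;
  return 0;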
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/391293/