Enable intel_color_get_config(). Support for read_luts() will be added
platform by platform, incrementally, in the follow-up patches.
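Concretely, the wrapper with the early user might look roughly like this
sketch; the guard on the vfunc is what lets platform support arrive one
by one:

  static void intel_color_get_config(struct intel_crtc_state *crtc_state)
  {
          struct drm_i915_private *dev_priv =
                  to_i915(crtc_state->base.crtc->dev);

          /* Only platforms that have gained read_luts() do anything. */
          if (dev_priv->display.read_luts)
                  dev_priv->display.read_luts(crtc_state);
  }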
v4: -Renamed intel_get_color_config to intel_color_get_config [Jani]
-Added the user early on such that support for get_color_config()
can be added platform by platform incrementally [Jani]
v5: -Fixed the incorrect placement of the intel_color_get_config() call
in haswell_get_pipe_config() [Ville]
v6: -Renamed intel_color_read_luts() to intel_color_get_config()
[Jani and Ville]
Signed-off-by: Swati Sharma <swati2.sharma@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1559123462-7343-3-git-send-email-swati2.sharma@intel.com
Introduce a read_luts() vfunc to create a hw LUT, i.e. a LUT populated
with values read back from the gamma/degamma registers. This will later
be compared against the sw LUT to validate the gamma/degamma LUT values.
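A minimal sketch of the hook: the vfunc sits alongside the other display
vfuncs, and each platform wires in its own implementation as support
lands (the i9xx_read_luts name below is an illustrative assumption):

  /* In struct drm_i915_display_funcs: */
  /* Read the hw gamma/degamma LUTs back into the crtc state. */
  void (*read_luts)(struct intel_crtc_state *crtc_state);

  /* Hypothetical per-platform wiring, done at init time: */
  dev_priv->display.read_luts = i9xx_read_luts;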
v3: -Rebase
v4: -Renamed intel_get_color_config to intel_color_get_config [Jani]
-Wrapped get_color_config() [Jani]
v5: -Renamed intel_color_get_config() to intel_color_read_luts()
-Renamed get_color_config to read_luts
v6: -Renamed intel_color_read_luts() back to intel_color_get_config()
[Jani and Ville]
Signed-off-by: Swati Sharma <swati2.sharma@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1559123462-7343-2-git-send-email-swati2.sharma@intel.com
Instead of relying on the caller holding struct_mutex across the
allocation, push the allocation under a tree of spinlocks stored inside
the page tables. Not only should this allow us to avoid struct_mutex
here, but it will allow multiple users to lock independent ranges for
concurrent allocations, and operate independently. This is vital for
pushing the GTT manipulation into a background thread where dependency
on struct_mutex is verboten, and for allowing other callers to avoid
struct_mutex altogether.
v2: Restore the lost GEM_BUG_ON for removing too many PTEs from
gen6_ppgtt_clear_range.
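The allocation pattern under the per-pagetable spinlocks is roughly the
following sketch (structure and field names simplified; alloc_pt()/
free_pt() stand in for the driver's helpers): the sleeping allocation
happens outside the lock, the lock only serialises the insertion, and a
racing allocator simply frees its speculative page table.

  static int ppgtt_alloc_pt(struct i915_page_directory *pd, int idx)
  {
          struct i915_page_table *pt;

          pt = alloc_pt();        /* may sleep: done outside the spinlock */
          if (IS_ERR(pt))
                  return PTR_ERR(pt);

          spin_lock(&pd->lock);
          if (!pd->entry[idx]) {
                  pd->entry[idx] = pt;
                  pt = NULL;      /* installed; ownership transferred */
          }
          spin_unlock(&pd->lock);

          if (pt)                 /* lost the race to a concurrent allocator */
                  free_pt(pt);

          return 0;
  }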
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190604153830.19096-1-chris@chris-wilson.co.uk
As the fence registers are not part of the engine powerwells, we do not
need to fiddle with forcewake in order to update a fence. Avoid the
heavyweight debug checking applied to normal mmio writes, as that
checking dominates the selftest runtime and is superfluous here!
In the process, retire the implicit I915_WRITE() macro in favour of the
new intel_uncore_write() interface.
v2: s/unc/uncore/
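For instance, a fence register update changes shape along these lines
(a sketch; fence and val stand in for the surrounding code):

  /* Before: implicit macro, heavyweight mmio debug checks on each write */
  I915_WRITE(FENCE_REG_GEN6_LO(fence->id), val);

  /* After: explicit uncore interface */
  intel_uncore_write(uncore, FENCE_REG_GEN6_LO(fence->id), val);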
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190604120022.20472-2-chris@chris-wilson.co.uk
Stop dumping plane->state for planes. That is the old state most of the
time and dumping stale information only serves to confuse people.
Instead dump the new state just for the planes included in the
operation. For now we'll include only the planes for the modeset/fastset
pipes in the dumps. We probably want to dump them all eventually, but
it's not quite clear yet how to present that information nicely to the
user.
And while at it, let's dump a few more interesting bits from the state.
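The dump loop might look roughly like this sketch (the two helpers named
here are hypothetical; the plane iterator is the usual atomic-state
macro):

  for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
          /* For now, only planes on the modeset/fastset pipes. */
          if (!pipe_in_operation(state, plane->pipe)) /* hypothetical */
                  continue;

          intel_dump_plane_state(plane_state); /* hypothetical helper */
  }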
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190517193132.8140-14-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Currently, we try to report to the shrinker the precise number of
objects (pages) that are available to be reaped at this moment. This
requires searching all objects with allocated pages to see if they
fulfill the search criteria, and this count is performed quite
frequently. (The shrinker tries to free ~128 pages on each invocation,
before which we count all the objects; counting takes longer than
unbinding the objects!) If we take the pragmatic view that with
sufficient desire, all objects are eventually reapable (they become
inactive, or no longer used as framebuffer etc), we can simply return
the count of pinned pages maintained during get_pages/put_pages rather
than walk the lists every time.
The downside is that we may (slightly) over-report the number of
objects/pages we could shrink and so penalize ourselves by shrinking
more than required. This is mitigated by keeping the order in which we
shrink objects such that we avoid penalizing active and frequently used
objects, and if memory is so tight that we need to free them, we would
have to do so anyway.
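In sketch form (the shrink_count field name is an assumption), the count
is maintained at get_pages/put_pages time so the shrinker's count
callback reduces to a simple read:

  /* get_pages: the object's pages become (potentially) reapable... */
  atomic_add(obj->base.size >> PAGE_SHIFT, &i915->mm.shrink_count);
  /* ...put_pages: and cease to be. */
  atomic_sub(obj->base.size >> PAGE_SHIFT, &i915->mm.shrink_count);

  static unsigned long
  i915_gem_shrinker_count(struct shrinker *shrinker,
                          struct shrink_control *sc)
  {
          struct drm_i915_private *i915 =
                  container_of(shrinker, struct drm_i915_private,
                               mm.shrinker);

          return atomic_read(&i915->mm.shrink_count); /* no list walk */
  }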
v2: Only expose shrinkable objects to the shrinker; a small reduction
from no longer considering stolen and foreign objects.
v3: Restore the tracking from a "backup" copy from before the gem/ split
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-2-chris@chris-wilson.co.uk
Currently the purgeable objects, I915_MADV_DONTNEED, are mixed into the
normal bound/unbound lists. Every shrinker pass starts with an attempt
to purge from this set of unneeded objects, which entails walking over
both lists looking for any candidates. If there are none (and since we
are shrinking, we can reasonably assume that the lists are full!), this
becomes a very slow, futile walk.
If we separate out the purgeable objects into their own list, this
search becomes its own phase that is preferentially handled during
shrinking. The cost instead becomes that we need to filter the purgeable
list if we want to distinguish between bound and unbound objects.
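Schematically (list and helper names are assumptions; locking elided):

  /* madvise(I915_MADV_DONTNEED): segregate onto the purge list */
  list_move_tail(&obj->mm.link, &i915->mm.purge_list);

  /* Shrinker: handle the purge list as its own, preferential phase */
  list_for_each_entry_safe(obj, next, &i915->mm.purge_list, mm.link)
          i915_gem_object_truncate(obj); /* discard the backing store */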
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190530203500.26272-1-chris@chris-wilson.co.uk
The i915.alpha_support module parameter has caused some confusion along
the way. Add a new i915.force_probe parameter to specify the PCI IDs of
devices to probe when the devices are recognized but not automatically
probed by the driver. The name is intended to reflect what the parameter
effectively does, avoiding any overloaded semantics of "alpha" and
"support".
The parameter supports "" to disable, "<pci-id>,[<pci-id>,...]" to
enable force probe for one or more devices, and "*" to enable force
probe for all known devices.
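The selection logic that implies might look roughly like this simplified
sketch (a proper implementation would tokenise on ',' rather than
substring-match):

  /* Does i915.force_probe cover this device? (sketch) */
  static bool force_probe(u16 device_id, const char *devices)
  {
          char id[8];

          if (!devices || !*devices)      /* "" disables force probe */
                  return false;

          if (!strcmp(devices, "*"))      /* force probe all known devices */
                  return true;

          /* "<pci-id>,[<pci-id>,...]": match this device's hex id */
          snprintf(id, sizeof(id), "%04x", device_id);
          return strstr(devices, id) != NULL;
  }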
Also add a new CONFIG_DRM_I915_FORCE_PROBE config option to replace the
DRM_I915_ALPHA_SUPPORT option. This defaults to "*" if
DRM_I915_ALPHA_SUPPORT=y.
Instead of replacing i915.alpha_support immediately, let the two
coexist, with a deprecation message, for a transition period.
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190506134801.28751-1-jani.nikula@intel.com
In order to support driver hot unbind, some cleanup operations, now
performed on PCI driver remove, must be called later, after all device
file descriptors are closed.
Split those operations out of the tail of the pci_driver.remove()
callback and put them into drm_driver.release(), which is called as soon
as all references to the driver are put. As a result, those cleanups
will now run on the last drm_dev_put(): either still called from
pci_driver.remove(), if all device file descriptors are already closed,
or on the last drm_release() file operation.
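Schematically, the split looks like this sketch
(i915_driver_cleanup_late() is a hypothetical stand-in for the
operations moved out of remove()):

  /* pci_driver.remove(): unbind and drop our reference... */
  static void i915_pci_remove(struct pci_dev *pdev)
  {
          struct drm_device *dev = pci_get_drvdata(pdev);

          i915_driver_unload(dev);
          drm_dev_put(dev);       /* release() may run now, or later */
  }

  /* ...while drm_driver.release() runs once the last reference is put. */
  static void i915_driver_release(struct drm_device *dev)
  {
          i915_driver_cleanup_late(dev);  /* hypothetical: split-out tail */
  }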
Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190530133105.30467-1-janusz.krzysztofik@linux.intel.com
An interesting issue cropped up when making the pagetables be allocated
and freed concurrently (i.e. removing their grandiose struct_mutex
guard): we would overflow the page stash. This happens when multiple
allocators grab WC pages such that we fill the vm's local page stash,
and then, when we free another page, the stash is already full and
overflows.
The fix is quite simple: check for a full page stash before adding
another page. This results in us keeping a vm-local page stash around
for much longer, which is both a blessing and a curse.
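The check in sketch form (using the vm's pagestash; the field names are
close to, but not guaranteed to be, the driver's):

  static void vm_free_page(struct i915_address_space *vm, struct page *page)
  {
          struct pagestash *stash = &vm->free_pages;

          spin_lock(&stash->lock);
          if (pagevec_space(&stash->pvec)) {
                  pagevec_add(&stash->pvec, page);
                  page = NULL;    /* stashed for reuse */
          }
          spin_unlock(&stash->lock);

          if (page)       /* stash already full: release the page now */
                  __free_page(page);
  }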
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190529093407.31697-1-chris@chris-wilson.co.uk
Currently, the subslice_mask runtime parameter is stored as an
array of subslices per slice. Expand the subslice mask array to
better match what is presented to userspace through the
I915_QUERY_TOPOLOGY_INFO ioctl. The index into this array is
then calculated as:

  slice * subslice stride + subslice index / 8
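As a sketch, a subslice test using that indexing might read as follows
(the ss_stride field name is an assumption, mirroring the eu_stride
mentioned in the changelog below):

  /* Is the given subslice of the given slice enabled? (sketch) */
  static bool sseu_has_subslice(const struct sseu_dev_info *sseu,
                                int slice, int subslice)
  {
          int offset = slice * sseu->ss_stride + subslice / 8;

          return sseu->subslice_mask[offset] & BIT(subslice % 8);
  }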
v2: fix spacing in set_sseu_info args
use set_sseu_info to initialize sseu data when building
device status in debugfs
rename variables in intel_engine_types.h to avoid checkpatch
warnings
v3: update headers in intel_sseu.h
v4: add const to some sseu_dev_info variables
use sseu->eu_stride for EU stride calculations
v5: address review comments from Tvrtko and Daniele
v6: remove extra space in intel_sseu_get_subslices
return the correct subslice enable in for_each_instdone
add GEM_BUG_ON to ensure user doesn't pass invalid ss_mask size
use printk formatted string for subslice mask
v7: remove string.h header and rebase
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190524154022.13575-6-stuart.summers@intel.com