Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3 |

# d7a5ac67 | 10-Dec-2024 | Rob Clark <[email protected]>

drm/msm: Extend gpu devcore dumps with pgtbl info

In the case of iova fault triggered devcore dumps, include additional debug information based on what we think are the current page tables, including the TTBR0 value (which should match what we have in adreno_smmu_fault_info unless things have gone horribly wrong), and the pagetable entries traversed in the process of resolving the faulting iova.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/628117/
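
As a rough illustration of what that walk involves (a sketch under assumed names; read_pte() is hypothetical, not the driver's actual code), resolving a faulting iova against a 4-level, 4K-granule ARM64-style page table records one descriptor per level:

  /*
   * Illustrative sketch only: walk a 4-level page table from TTBR0,
   * recording each descriptor so it can be included in the devcoredump.
   */
  static int walk_pgtable(u64 ttbr0, u64 iova, u64 ptes[4])
  {
          u64 table = ttbr0 & GENMASK_ULL(47, 12);  /* strip ASID/attr bits */
          int level;

          for (level = 0; level < 4; level++) {
                  /* 9 iova bits per level: [47:39], [38:30], [29:21], [20:12] */
                  unsigned int shift = 12 + 9 * (3 - level);
                  unsigned int idx = (iova >> shift) & 0x1ff;
                  u64 pte = read_pte(table, idx);   /* hypothetical: fetch descriptor */

                  ptes[level] = pte;
                  if (!(pte & 1))
                          return level + 1;         /* invalid entry: walk stops here */
                  table = pte & GENMASK_ULL(47, 12);
          }
          return 4;
  }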
Revision tags: v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1 |

# b04e317b | 26-Sep-2024 | Frederic Weisbecker <[email protected]>

treewide: Introduce kthread_run_worker[_on_cpu]()

kthread_create() creates a kthread without running it yet. kthread_run() creates a kthread and runs it.

On the other hand, kthread_create_worker() creates a kthread worker and runs it.

This difference in behaviours is confusing. Also there is no way to create a kthread worker and affine it using kthread_bind_mask() or kthread_affine_preferred() before starting it.

Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]() that behaves just like kthread_run(). kthread_create_worker[_on_cpu]() will now only create a kthread worker without starting it.

Signed-off-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Dan Carpenter <[email protected]>
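
A minimal usage sketch of the consolidated API (how a created-but-not-started worker is then kicked off is an assumption here):

  /* Create without starting, so affinity can be set up first ... */
  struct kthread_worker *w = kthread_create_worker(0, "my-worker");

  if (!IS_ERR(w)) {
          kthread_bind_mask(w->task, cpu_possible_mask); /* affine before start */
          wake_up_process(w->task);                      /* assumed start path */
  }

  /* ... or create and run in one step, mirroring kthread_run(): */
  w = kthread_run_worker(0, "my-worker");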

# 3241504e | 03-Oct-2024 | Antonino Maniscalco <[email protected]>

drm/msm/a6xx: Track current_ctx_seqno per ring

With preemption it is not enough to track the current_ctx_seqno globally, as execution might switch between rings.

This is especially problematic when current_ctx_seqno is used to determine whether a page table switch is necessary, as it might lead to security bugs.

Track the current context per ring.

Tested-by: Rob Clark <[email protected]>
Tested-by: Neil Armstrong <[email protected]> # on SM8650-QRD
Tested-by: Neil Armstrong <[email protected]> # on SM8550-QRD
Tested-by: Neil Armstrong <[email protected]> # on SM8450-HDK
Signed-off-by: Antonino Maniscalco <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/618012/
Signed-off-by: Rob Clark <[email protected]>
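
The shape of the fix, sketched with assumed field and helper names:

  /* Track the last-run context per ring instead of per GPU. */
  struct msm_ringbuffer {
          /* ... */
          uint32_t cur_ctx_seqno;   /* was previously a single GPU-global field */
  };

  static void submit_maybe_switch_pgtable(struct msm_ringbuffer *ring,
                                          struct msm_gem_submit *submit)
  {
          /* Compare against *this ring's* last context; with preemption,
           * another ring may have run a different context in between. */
          if (ring->cur_ctx_seqno == submit->queue->ctx->seqno)
                  return;

          emit_pgtable_switch(ring, submit);        /* hypothetical helper */
          ring->cur_ctx_seqno = submit->queue->ctx->seqno;
  }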
Revision tags: v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10 |

# 16007768 | 09-Jul-2024 | Konrad Dybcio <[email protected]>

drm/msm/adreno: Assign msm_gpu->pdev earlier to avoid nullptrs

There are some cases, such as the one uncovered by commit 46d4efcccc68 ("drm/msm/a6xx: Avoid a nullptr dereference when speedbin setting fails"), where

  msm_gpu_cleanup() : platform_set_drvdata(gpu->pdev, NULL);

is called on gpu->pdev == NULL, as the GPU device has not been fully initialized yet.

Turns out that there's more than just the aforementioned path that causes this to happen (e.g. the case when there's speedbin data in the catalog, but opp-supported-hw is missing in DT).

Assigning msm_gpu->pdev earlier seems like the least painful solution to this, therefore do so.

Signed-off-by: Konrad Dybcio <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/602742/
Signed-off-by: Rob Clark <[email protected]>
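
Sketched (not the literal diff), the fix is an ordering change:

  int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
                   struct msm_gpu *gpu, const char *name)
  {
          int ret;

          /* Assign early, before anything that can fail, so error paths
           * calling platform_set_drvdata(gpu->pdev, NULL) never see a
           * NULL pdev. */
          gpu->pdev = pdev;
          platform_set_drvdata(pdev, gpu);

          ret = fallible_setup(gpu);   /* e.g. speedbin/OPP handling (sketch) */
          if (ret)
                  return ret;
          /* ... */
          return 0;
  }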
Revision tags: v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1 |

# f2608b70 | 13-May-2024 | Rob Clark <[email protected]>

drm/msm: Add obj flags to gpu devcoredump

When debugging faults, it is useful to know how the BO is mapped (cached vs WC, gpu readonly, etc).

Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Akhil P Oommen <[email protected]>
Acked-by: Konrad Dybcio <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/593854/
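
In the dump code this amounts to something like the following sketch (field names assumed):

  /* Capture the BO's flags alongside its iova/size ... */
  state_bo->flags = obj->flags;

  /* ... and print them so cached vs WC, readonly, etc. is visible: */
  drm_printf(p, "    flags: 0x%x\n", bo->flags);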
Revision tags: v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3 |

# 84935a85 | 01-Apr-2024 | Dmitry Baryshkov <[email protected]>

drm/msm: remove dependencies from core onto adreno headers

Two core driver files include headers from the Adreno subdir, which also brings a dependency on the Adreno register headers. Rework those includes to remove the unnecessary dependency.

Signed-off-by: Dmitry Baryshkov <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/585850/
Link: https://lore.kernel.org/r/[email protected]
Revision tags: v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1 |

# 917e9b7c | 09-Jan-2024 | Rob Clark <[email protected]>
Revert "drm/msm/gpu: Push gpu lock down past runpm"
This reverts commit abe2023b4cea192ab266b351fd38dc9dbd846df0.
Changing the locking order means that scheduler/msm_job_run() can race with the rec
Revert "drm/msm/gpu: Push gpu lock down past runpm"
This reverts commit abe2023b4cea192ab266b351fd38dc9dbd846df0.
Changing the locking order means that scheduler/msm_job_run() can race with the recovery kthread worker, with the result that the GPU gets an extra runpm get when we are trying to power it off. Leaving the GPU in an unrecovered state.
I'll need to come up with a different scheme for appeasing lockdep.
Signed-off-by: Rob Clark <[email protected]> Patchwork: https://patchwork.freedesktop.org/patch/573835/
Revision tags: v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2 |

# 12578c07 | 17-Nov-2023 | Rob Clark <[email protected]>

drm/msm/gpu: Skip retired submits in recover worker

If we somehow raced with submit retiring, either while waiting for the worker to have a chance to run or while acquiring the gpu lock, then the recover worker should just bail.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/568034/
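
A sketch of the bail-out (helper and field names assumed; dma_fence_is_signaled() is the real fence API):

  static void recover_worker(struct kthread_work *work)
  {
          /* ... */
          mutex_lock(&gpu->lock);

          submit = find_submit(gpu, gpu->hangcheck_fence);
          if (!submit || dma_fence_is_signaled(submit->hw_fence)) {
                  /* Raced with retire while waiting for the worker or
                   * the gpu lock: nothing left to recover. */
                  mutex_unlock(&gpu->lock);
                  return;
          }
          /* ... actual recovery ... */
  }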

# 548b61a8 | 15-Nov-2023 | Rob Clark <[email protected]>

drm/msm/gpu: Move gpu devcores to gpu device

The dpu devcores are already associated with the dpu device, so we should associate the gpu devcores with the gpu device, for easier classification.

Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Abhinav Kumar <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/567738/
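
The mechanical change is roughly the following (both forms are sketches of dev_coredumpv() usage, not the literal diff):

  /* Before (sketch): dumps were filed under the top-level drm device */
  dev_coredumpv(gpu->dev->dev, state, len, GFP_KERNEL);

  /* After (sketch): file them under the gpu's own device instead */
  dev_coredumpv(&gpu->pdev->dev, state, len, GFP_KERNEL);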
Revision tags: v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6 |

# abe2023b | 10-Aug-2023 | Rob Clark <[email protected]>

drm/msm/gpu: Push gpu lock down past runpm

Avoid holding gpu lock when calling runpm, to avoid this lockdep splat:

   ======================================================
   WARNING: possible circular locking dependency detected
   6.4.3-debug+ #14 Not tainted
   ------------------------------------------------------
   ring0/373 is trying to acquire lock:
   ffffffead86efb98 (prepare_lock){+.+.}-{3:3}, at: clk_prepare_lock+0x70/0x98

   but task is already holding lock:
   ffffff809cd19170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x7c/0x128 [msm]

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -> #4 (&gpu->lock){+.+.}-{3:3}:
          __mutex_lock+0xc8/0x388
          mutex_lock_nested+0x2c/0x38
          msm_job_run+0x7c/0x128 [msm]
          drm_sched_main+0x264/0x354 [gpu_sched]
          kthread+0xf0/0x100
          ret_from_fork+0x10/0x20

   -> #3 (dma_fence_map){++++}-{0:0}:
          __dma_fence_might_wait+0x74/0xc0
          dma_resv_lockdep+0x1f0/0x2e8
          do_one_initcall+0xb4/0x214
          kernel_init_freeable+0x338/0x33c
          kernel_init+0x30/0x134
          ret_from_fork+0x10/0x20

   -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
          fs_reclaim_acquire+0x7c/0x9c
          slab_pre_alloc_hook.constprop.0+0x40/0x250
          __kmem_cache_alloc_node+0x60/0x18c
          kmalloc_node_trace+0x40/0x84
          alloc_worker+0x2c/0x64
          init_rescuer+0x34/0xe0
          workqueue_init+0x168/0x1fc
          kernel_init_freeable+0x15c/0x33c
          kernel_init+0x30/0x134
          ret_from_fork+0x10/0x20

   -> #1 (fs_reclaim){+.+.}-{0:0}:
          __fs_reclaim_acquire+0x3c/0x48
          fs_reclaim_acquire+0x50/0x9c
          slab_pre_alloc_hook.constprop.0+0x40/0x250
          __kmem_cache_alloc_node+0x60/0x18c
          kmalloc_trace+0x44/0x88
          clk_rcg2_dfs_determine_rate+0x60/0x214
          clk_core_determine_round_nolock+0xb8/0xf0
          clk_core_round_rate_nolock+0x84/0x118
          clk_core_round_rate_nolock+0xd8/0x118
          clk_round_rate+0x6c/0xd0
          geni_se_clk_tbl_get+0x78/0xc0
          geni_se_clk_freq_match+0x44/0xe4
          get_spi_clk_cfg+0x50/0xf4
          geni_spi_set_clock_and_bw+0x54/0x104
          spi_geni_prepare_message+0x130/0x174
          __spi_pump_transfer_message+0x200/0x4d8
          __spi_sync+0x13c/0x23c
          spi_sync_locked+0x18/0x24
          do_cros_ec_pkt_xfer_spi+0x124/0x3f0
          cros_ec_xfer_high_pri_work+0x28/0x3c
          kthread_worker_fn+0x14c/0x27c
          kthread+0xf0/0x100
          ret_from_fork+0x10/0x20

   -> #0 (prepare_lock){+.+.}-{3:3}:
          __lock_acquire+0xdf8/0x109c
          lock_acquire+0x234/0x284
          __mutex_lock+0xc8/0x388
          mutex_lock_nested+0x2c/0x38
          clk_prepare_lock+0x70/0x98
          clk_prepare+0x24/0x50
          clk_bulk_prepare+0x50/0x9c
          a6xx_gmu_resume+0x94/0x800 [msm]
          a6xx_gmu_pm_resume+0x38/0x158 [msm]
          adreno_runtime_resume+0x2c/0x38 [msm]
          pm_generic_runtime_resume+0x30/0x44
          __rpm_callback+0x4c/0x134
          rpm_callback+0x78/0x7c
          rpm_resume+0x3a4/0x46c
          __pm_runtime_resume+0x78/0xbc
          pm_runtime_get_sync.isra.0+0x14/0x20 [msm]
          msm_gpu_submit+0x4c/0x12c [msm]
          msm_job_run+0x88/0x128 [msm]
          drm_sched_main+0x264/0x354 [gpu_sched]
          kthread+0xf0/0x100
          ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
     prepare_lock --> dma_fence_map --> &gpu->lock

   Possible unsafe locking scenario:

         CPU0                    CPU1
         ----                    ----
    lock(&gpu->lock);
                                 lock(dma_fence_map);
                                 lock(&gpu->lock);
    lock(prepare_lock);

    *** DEADLOCK ***

   2 locks held by ring0/373:
    #0: ffffffead875ae50 (dma_fence_map){++++}-{0:0}, at: drm_sched_main+0x54/0x354 [gpu_sched]
    #1: ffffff809cd19170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x7c/0x128 [msm]

   stack backtrace:
   CPU: 2 PID: 373 Comm: ring0 Not tainted 6.4.3-debug+ #14
   Hardware name: Google Villager (rev1+) with LTE (DT)
   Call trace:
    dump_backtrace+0xb4/0xf0
    show_stack+0x20/0x30
    dump_stack_lvl+0x60/0x84
    dump_stack+0x18/0x24
    print_circular_bug+0x1cc/0x234
    check_noncircular+0x78/0xac
    __lock_acquire+0xdf8/0x109c
    lock_acquire+0x234/0x284
    __mutex_lock+0xc8/0x388
    mutex_lock_nested+0x2c/0x38
    clk_prepare_lock+0x70/0x98
    clk_prepare+0x24/0x50
    clk_bulk_prepare+0x50/0x9c
    a6xx_gmu_resume+0x94/0x800 [msm]
    a6xx_gmu_pm_resume+0x38/0x158 [msm]
    adreno_runtime_resume+0x2c/0x38 [msm]
    pm_generic_runtime_resume+0x30/0x44
    __rpm_callback+0x4c/0x134
    rpm_callback+0x78/0x7c
    rpm_resume+0x3a4/0x46c
    __pm_runtime_resume+0x78/0xbc
    pm_runtime_get_sync.isra.0+0x14/0x20 [msm]
    msm_gpu_submit+0x4c/0x12c [msm]
    msm_job_run+0x88/0x128 [msm]
    drm_sched_main+0x264/0x354 [gpu_sched]
    kthread+0xf0/0x100
    ret_from_fork+0x10/0x20

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/552298/
Revision tags: v6.5-rc5 |

# 6ba5daa5 | 02-Aug-2023 | Rob Clark <[email protected]>

drm/msm: Use drm_gem_object in submit bos table

Basically everywhere wants the base ptr type. So store that instead of msm_gem_object.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/551021/
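
A sketch of the table's shape after the change (fields simplified):

  struct msm_gem_submit {
          /* ... */
          struct {
                  uint32_t flags;
                  struct drm_gem_object *obj;  /* was: struct msm_gem_object * */
                  uint64_t iova;
          } bos[];
  };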
Revision tags: v6.5-rc4 |

# f09f5459 | 27-Jul-2023 | Ruan Jinjie <[email protected]>

drm/msm: Remove redundant DRM_DEV_ERROR()

There is no need to call the DRM_DEV_ERROR() function directly to print a custom message when handling an error from the platform_get_irq() function, as it is going to display an appropriate error message in case of a failure.

Signed-off-by: Ruan Jinjie <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/549499/
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dmitry Baryshkov <[email protected]>
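
After the cleanup the call site reduces to the usual idiom (sketch):

  /* platform_get_irq() already logs an appropriate message on failure,
   * so no extra DRM_DEV_ERROR() is needed. */
  irq = platform_get_irq(pdev, 0);
  if (irq < 0)
          return irq;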
Revision tags: v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3 |

# 171f580e | 17-Apr-2023 | Rob Clark <[email protected]>

drm/msm: Move cmdstream dumping out of sched kthread

This is something that can block for arbitrary amounts of time as userspace consumes from the FIFO. So we don't really want this to be in the fence signaling path.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/532617/
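
One generic way to keep such blocking work out of the fence signaling path is to hand it off to a workqueue; the following is a hypothetical sketch of that pattern, not the literal patch:

  /* Hypothetical: defer the potentially-blocking dump so msm_job_run()
   * (the scheduler/fence-signaling context) never waits on the FIFO. */
  if (rd_full_dump_enabled(gpu))                    /* assumed predicate */
          queue_work(gpu->wq, &submit->dump_work);  /* dump runs in a worker */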

# 51d86ee5 | 24-May-2023 | Rob Clark <[email protected]>

drm/msm: Switch to fdinfo helper

Now that we have a common helper, use it.

v2: Rebase on drm-misc-next

Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
Acked-by: Dave Airlie <[email protected]>
Signed-off-by: Neil Armstrong <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
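
A sketch of the wiring (drm_show_fdinfo() is the common helper; the driver callback body is illustrative):

  /* Let the core helper drive fdinfo ... */
  static const struct file_operations fops = {
          .owner = THIS_MODULE,
          DRM_GEM_FOPS,
          .show_fdinfo = drm_show_fdinfo,
  };

  /* ... with the driver only supplying its own lines: */
  static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)
  {
          drm_printf(p, "drm-maxfreq-gpu:\t%u Hz\n", max_freq);  /* assumed value */
  }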
Revision tags: v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1 |

# 9f251f93 | 23-Feb-2023 | Konrad Dybcio <[email protected]>

drm/msm/adreno: Use OPP for every GPU generation

Some older GPUs (namely a2xx with no opp tables at all, and a320 with downstream-remnants gpu pwrlevels) used not to have OPP tables. Both, however, had just one frequency defined, making it extremely easy to construct such an OPP table from within the driver if need be.

Do so, and switch all clk_set_rate calls on core_clk to their OPP counterparts.

Reviewed-by: Dmitry Baryshkov <[email protected]>
Signed-off-by: Konrad Dybcio <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/523784/
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Rob Clark <[email protected]>
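
Sketched with error handling omitted, the synthesized one-entry table and the OPP-based rate call look like:

  /* Synthesize a one-entry OPP table when DT provides none ... */
  rate = clk_round_rate(gpu->core_clk, known_rate);  /* single known freq */
  dev_pm_opp_add(&pdev->dev, rate, 0);               /* 0: no voltage scaling */

  /* ... then clk_set_rate(gpu->core_clk, rate) becomes: */
  dev_pm_opp_set_rate(&pdev->dev, rate);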
Revision tags: v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3 |

# d4843012 | 02-Jan-2023 | Akhil P Oommen <[email protected]>

drm/msm/a6xx: Remove cx gdsc polling using 'reset'

Remove the unused 'reset' interface which was supposed to help ensure that the cx gdsc has collapsed during gpu recovery. This was not enabled so far due to missing gpucc driver support. Similar functionality using the genpd framework will be implemented in an upcoming patch.

This effectively reverts commit 1f6cca404918 ("drm/msm/a6xx: Ensure CX collapse during gpu recovery").

Signed-off-by: Akhil P Oommen <[email protected]>
Reviewed-by: Ulf Hansson <[email protected]>
Reviewed-by: Philipp Zabel <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/516470/
Link: https://lore.kernel.org/r/20230102161757.v5.4.I96e0bf9eaf96dd866111c1eec8a4c9b70fd7cbcb@changeid
Signed-off-by: Rob Clark <[email protected]>

# a66f1efc | 10-Jan-2023 | Rob Clark <[email protected]>

drm/msm/gpu: Fix potential double-free

If userspace was calling the MSM_SET_PARAM ioctl on multiple threads to set the COMM or CMDLINE param, it could trigger a race causing the previous value to be kfree'd multiple times. Fix this by serializing on the gpu lock.

Signed-off-by: Rob Clark <[email protected]>
Fixes: d4726d770068 ("drm/msm: Add a way to override processes comm/cmdline")
Patchwork: https://patchwork.freedesktop.org/patch/517778/
Link: https://lore.kernel.org/r/[email protected]
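
The serialized update, sketched (variable names assumed):

  /* Take the gpu lock around the free+swap so two racing MSM_SET_PARAM
   * calls cannot both free the old COMM/CMDLINE string. */
  mutex_lock(&gpu->lock);
  kfree(*paramp);
  *paramp = new_str;
  mutex_unlock(&gpu->lock);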
Revision tags: v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6 |

# d73b1d02 | 14-Nov-2022 | Rob Clark <[email protected]>

drm/msm: Hangcheck progress detection

If the hangcheck timer expires, check if the fw's position in the cmdstream has advanced (changed) since the last timer expiration, and allow it up to three additional "extensions" to its allotted time. The intention is to continue to catch "shader stuck in a loop" type hangs quickly, but allow more time for things that are actually making forward progress.

Because we need to sample the CP state twice to detect if there has not been progress, this also cuts the timer's duration in half.

v2: Fix typo (REG_A6XX_CP_CSQ_IB2_STAT), add comment
v3: Only halve hangcheck timer duration for generations which support progress detection (hdanton); removed unused a5xx progress (without knowing how to adjust for data buffered in ROQ it is too likely to report a false negative)
v4: Comment updates to better describe the total hangcheck duration when progress detection is applied

Reviewed-by: Chia-I Wu <[email protected]>
Tested-by: Chia-I Wu <[email protected]> # dEQP-GLES2.functional.flush_finish.wait
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Akhil P Oommen <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/511584/
Link: https://lore.kernel.org/r/[email protected]
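
A sketch of the expiry path (helper and field names assumed):

  static void hangcheck_handler(struct timer_list *t)
  {
          /* ... */
          if (fw_position_changed(gpu, ring) &&          /* assumed helper */
              ring->hangcheck_progress_retries < 3) {
                  /* Still making forward progress: extend the deadline. */
                  ring->hangcheck_progress_retries++;
                  mod_timer(&gpu->hangcheck_timer, jiffies + period);
                  return;
          }
          /* ... otherwise treat as hung and kick the recover worker ... */
  }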
Revision tags: v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0 |

# 76efc245 | 28-Sep-2022 | Akhil P Oommen <[email protected]>

drm/msm/gpu: Fix crash during system suspend after unbind

In adreno_unbind, we should clean up the gpu device's drvdata to avoid accessing a stale pointer during system suspend. Also, check for a NULL ptr in both the system suspend and resume callbacks.

Signed-off-by: Akhil P Oommen <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/505075/
Link: https://lore.kernel.org/r/20220928124830.2.I5ee0ac073ccdeb81961e5ec0cce5f741a7207a71@changeid
Signed-off-by: Rob Clark <[email protected]>
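
Both halves of the fix, sketched:

  static void adreno_unbind(struct device *dev, struct device *master, void *data)
  {
          /* ... */
          platform_set_drvdata(to_platform_device(dev), NULL);
  }

  static int adreno_system_suspend(struct device *dev)
  {
          struct msm_gpu *gpu = dev_get_drvdata(dev);

          if (!gpu)        /* already unbound: nothing to suspend */
                  return 0;
          /* ... */
          return 0;
  }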
Revision tags: v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2 |

# 1f6cca40 | 18-Aug-2022 | Akhil P Oommen <[email protected]>

drm/msm/a6xx: Ensure CX collapse during gpu recovery

Because there could be transient votes from other drivers/tz/hyp which may keep the cx gdsc enabled, we should poll until the cx gdsc collapses. We can use the reset framework to poll for cx gdsc collapse from the gpucc clk driver.

This feature requires support from the platform's gpucc driver.

Signed-off-by: Akhil P Oommen <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
Reviewed-by: Philipp Zabel <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/498397/
Link: https://lore.kernel.org/r/20220819015030.v5.5.I176567525af2b9439a7e485d0ca130528666a55c@changeid
Signed-off-by: Rob Clark <[email protected]>
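
On the driver side this reduces to roughly one call into the reset framework (a sketch; the reset line name is assumed, and the polling itself lives in the platform's gpucc clk driver):

  /* Blocks until the gpucc driver observes cx gdsc collapse. */
  reset_control_reset(gpu->cx_collapse);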

# f350bfb9 | 18-Aug-2022 | Akhil P Oommen <[email protected]>

drm/msm: Fix cx collapse issue during recovery

There is some hardware logic under the CX domain. For a successful recovery, we should ensure the cx headswitch collapses so that all the stale states are cleared out. This is especially true for the a6xx family, where we have a GMU co-processor.

Currently, cx doesn't collapse due to a devlink between the gpu and its smmu. So the gpu device itself needs to be runtime suspended to ensure that the iommu driver removes its vote on the cx gdsc.

Signed-off-by: Akhil P Oommen <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/498398/
Link: https://lore.kernel.org/r/20220819015030.v5.4.I4ac27a0b34ea796ce0f938bb509e257516bc6f57@changeid
Signed-off-by: Rob Clark <[email protected]>

# 06097e37 | 18-Aug-2022 | Akhil P Oommen <[email protected]>

drm/msm: Correct pm_runtime votes in recover worker

In the scenario where there is a single hung submit, the gpu is power collapsed when it is retired. Because of this, by the time we call recover(), the gpu state would already be cleared. Fix this by correctly managing the pm runtime votes.

Signed-off-by: Akhil P Oommen <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/498391/
Link: https://lore.kernel.org/r/20220819015030.v5.3.Ib07ecec3d5c17cb0e1efa6fcddaaa019ec2fb556@changeid
Signed-off-by: Rob Clark <[email protected]>
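
The idea, sketched: hold an rpm vote across recovery so the GPU cannot power collapse before its state is captured:

  pm_runtime_get_sync(&gpu->pdev->dev);
  /* ... capture crash state, reset, replay remaining submits ... */
  pm_runtime_put_sync(&gpu->pdev->dev);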

# 5b26f37d | 18-Aug-2022 | Akhil P Oommen <[email protected]>

drm/msm: Take single rpm refcount on behalf of all submits

Instead of a separate refcount for each submit, take a single rpm refcount on behalf of all the submits. This makes it easier to drop the rpm refcount during recovery in an upcoming patch.

Signed-off-by: Akhil P Oommen <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/498392/
Link: https://lore.kernel.org/r/20220819015030.v5.2.Ifee853f6d8217a0fdacc459092bbc9e81a8a7ac7@changeid
Signed-off-by: Rob Clark <[email protected]>
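
Sketched (locking elided), the refcount moves to the active-submit window rather than each submit:

  if (gpu->active_submits++ == 0)
          pm_runtime_get_sync(&gpu->pdev->dev);        /* first submit goes active */
  /* ... */
  if (--gpu->active_submits == 0)
          pm_runtime_put_autosuspend(&gpu->pdev->dev); /* last one retired */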
Revision tags: v6.0-rc1 |

# b352ba54 | 02-Aug-2022 | Rob Clark <[email protected]>

drm/msm/gem: Convert to using drm_gem_lru

This converts over to use the shared GEM LRU/shrinker helpers. Note that it means we are no longer separately tracking purgeable or willneed buffers that are active. But the most recently pinned buffers should be at the tail of the various LRUs, and the shrinker is already prepared to encounter objects which are still active.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/496131/
Link: https://lore.kernel.org/r/[email protected]
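
A sketch of the shared-helper usage (structure names assumed):

  /* Initialize the driver's LRUs with a common lock ... */
  drm_gem_lru_init(&priv->lru.pinned, &priv->lru.lock);
  drm_gem_lru_init(&priv->lru.willneed, &priv->lru.lock);

  /* ... and move objects to the tail as they become most-recently-pinned,
   * which keeps the best shrink candidates toward the head: */
  drm_gem_lru_move_tail(&priv->lru.pinned, obj);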
Revision tags: v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3 |

# 8b5de735 | 13-Jun-2022 | Rob Clark <[email protected]>

drm/msm: Deprecate MSM_BO_UNCACHED harder

Handle the demotion to MSM_BO_WC at the userspace ABI level, and fix the remaining internal MSM_BO_UNCACHED user.

Signed-off-by: Rob Clark <[email protected]>
Patchwork: https://patchwork.freedesktop.org/patch/489339/
Link: https://lore.kernel.org/r/[email protected]
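
At the ioctl boundary the demotion is a simple flag rewrite (sketch):

  if (args->flags & MSM_BO_UNCACHED) {
          DRM_DEBUG("MSM_BO_UNCACHED is deprecated, demoting to MSM_BO_WC\n");
          args->flags &= ~MSM_BO_UNCACHED;
          args->flags |= MSM_BO_WC;
  }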