History log of /linux-6.15/kernel/dma/mapping.c (Results 1 – 25 of 89)
Revision    Date    Author    Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4
# cae5572e 22-Apr-2025 Balbir Singh <[email protected]>

dma-mapping: Fix warning reported for missing prototype

lkp reported a warning about a missing prototype for a recent patch.

The kernel-doc style comments are out of sync; move them to the right
function.

Cc: Marek Szyprowski <[email protected]>
Cc: Christoph Hellwig <[email protected]>

Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

Signed-off-by: Balbir Singh <[email protected]>
[mszyprow: reformatted subject]
Signed-off-by: Marek Szyprowski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]



Revision tags: v6.15-rc3
# 2042c352 14-Apr-2025 Balbir Singh <[email protected]>

dma/mapping.c: dev_dbg support for dma_addressing_limited

While debugging and resolving an issue involving forced use of bounce
buffers, commit 7170130e4c72 ("x86/mm/init: Handle the special case of device
private pages in add_pages(), to not increase max_pfn and trigger
dma_addressing_limited() bounce buffers") was developed. It would have been
easier to debug the issue if dma_addressing_limited() had reported that the
device cannot address all of memory and is therefore forcing all accesses
through a bounce buffer. Please see [2].

Add a dev_dbg() to flag the potential use of bounce buffers when that
condition is hit. When swiotlb is used, dma_addressing_limited() also
determines the maximum DMA buffer size in dma_direct_max_mapping_size(),
so the debug print can be triggered from that check as well (when enabled).

Link: https://lore.kernel.org/lkml/[email protected]/ [1]
Link: https://lore.kernel.org/lkml/[email protected]/ [2]

Cc: Marek Szyprowski <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: "Christian König" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Bert Karwatzki <[email protected]>
Cc: Christoph Hellwig <[email protected]>

Signed-off-by: Balbir Singh <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Marek Szyprowski <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
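
For illustration, a minimal sketch of what the resulting helpers in
kernel/dma/mapping.c look like, reconstructed from the commit message rather
than copied from the diff (the __dma_addressing_limited() split and the exact
message text are assumptions):

static bool __dma_addressing_limited(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
			 dma_get_required_mask(dev))
		return true;

	if (unlikely(ops) || use_dma_iommu(dev))
		return false;
	return !dma_direct_all_ram_mapped(dev);
}

bool dma_addressing_limited(struct device *dev)
{
	if (!__dma_addressing_limited(dev))
		return false;

	/* the new debug hint: bounce buffering may be forced for this device */
	dev_dbg(dev, "device is DMA addressing limited\n");
	return true;
}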



Revision tags: v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6
# d5bbfbad 31-Oct-2024 Sean Anderson <[email protected]>

dma-mapping: fix swapped dir/flags arguments to trace_dma_alloc_sgt_err

trace_dma_alloc_sgt_err was called with the dir and flags arguments
swapped. Fix this.

Fixes: 68b6dbf1f441 ("dma-mapping: trace more error paths")
Signed-off-by: Sean Anderson <[email protected]>
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Signed-off-by: Christoph Hellwig <[email protected]>



Revision tags: v6.12-rc5, v6.12-rc4
# 68b6dbf1 18-Oct-2024 Sean Anderson <[email protected]>

dma-mapping: trace more error paths

It can be surprising to the user if DMA functions are only traced on
success. On failure, it can be unclear what the source of the problem
is. Fix this by tracing all functions even when they fail. Cases where
we BUG/WARN are skipped, since those should be sufficiently noisy
already.

Signed-off-by: Sean Anderson <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>



# c4484ab8 18-Oct-2024 Sean Anderson <[email protected]>

dma-mapping: use trace_dma_alloc for dma_alloc* instead of using trace_dma_map

In some cases, we use trace_dma_map to trace dma_alloc* functions. This
generally follows dma_debug. However, this does not record all of the
relevant information for allocations, such as GFP flags. Create new
dma_alloc tracepoints for these functions. Note that while
dma_alloc_noncontiguous may allocate discontiguous pages (from the CPU's
point of view), the device will only see one contiguous mapping.
Therefore, we just need to trace dma_addr and size.

Signed-off-by: Sean Anderson <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>



# 3afff779 18-Oct-2024 Sean Anderson <[email protected]>

dma-mapping: trace dma_alloc/free direction

In preparation for using these tracepoints in a few more places, trace
the DMA direction as well. For coherent allocations this is always
bidirectional.

Signed-off-by: Sean Anderson <[email protected]>
Reviewed-by: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>



Revision tags: v6.12-rc3, v6.12-rc2, v6.12-rc1
# b348b6d1 22-Sep-2024 Leon Romanovsky <[email protected]>

dma-mapping: report unlimited DMA addressing in IOMMU DMA path

While using the IOMMU DMA path, the dma_addressing_limited() function
checks the ops struct, which doesn't exist in the IOMMU case. This causes
a kernel panic while loading the AMDGPU driver.

BUG: kernel NULL pointer dereference, address: 00000000000000a0
PGD 0 P4D 0
Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 10 UID: 0 PID: 611 Comm: (udev-worker) Tainted: G T 6.11.0-clang-07154-g726e2d0cf2bb #257
Tainted: [T]=RANDSTRUCT
Hardware name: ASUS System Product Name/ROG STRIX Z690-G GAMING WIFI, BIOS 3701 07/03/2024
RIP: 0010:dma_addressing_limited+0x53/0xa0
Code: 8b 93 48 02 00 00 48 39 d1 49 89 d6 4c 0f 42 f1 48 85 d2 4c 0f 44 f1 f6 83 fc 02 00 00 40 75 0a 48 89 df e8 1f 09 00 00 eb 24 <4c> 8b 1c 25 a0 00 00 00 4d 85 db 74 17 48 89 df 41 ba 8b 84 2d 55
RSP: 0018:ffffa8d2c12cf740 EFLAGS: 00010202
RAX: 00000000ffffffff RBX: ffff8948820220c8 RCX: 000000ffffffffff
RDX: 0000000000000000 RSI: ffffffffc124dc6d RDI: ffff8948820220c8
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff894883c3f040
R13: ffff89488dac8828 R14: 000000ffffffffff R15: ffff8948820220c8
FS: 00007fe6ba881900(0000) GS:ffff894fdf700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000000a0 CR3: 0000000111984000 CR4: 0000000000f50ef0
PKRU: 55555554
Call Trace:
<TASK>
? __die_body+0x65/0xc0
? page_fault_oops+0x3b9/0x450
? _prb_read_valid+0x212/0x390
? do_user_addr_fault+0x608/0x680
? exc_page_fault+0x4e/0xa0
? asm_exc_page_fault+0x26/0x30
? dma_addressing_limited+0x53/0xa0
amdgpu_ttm_init+0x56/0x4b0 [amdgpu]
gmc_v8_0_sw_init+0x561/0x670 [amdgpu]
amdgpu_device_ip_init+0xf5/0x570 [amdgpu]
amdgpu_device_init+0x1a57/0x1ea0 [amdgpu]
? _raw_spin_unlock_irqrestore+0x1a/0x40
? pci_conf1_read+0xc0/0xe0
? pci_bus_read_config_word+0x52/0xa0
amdgpu_driver_load_kms+0x15/0xa0 [amdgpu]
amdgpu_pci_probe+0x1b7/0x4c0 [amdgpu]
pci_device_probe+0x1c5/0x260
really_probe+0x130/0x470
__driver_probe_device+0x77/0x150
driver_probe_device+0x19/0x120
__driver_attach+0xb1/0x1e0
? __cfi___driver_attach+0x10/0x10
bus_for_each_dev+0x115/0x170
bus_add_driver+0x192/0x2d0
driver_register+0x5c/0xf0
? __cfi_init_module+0x10/0x10 [amdgpu]
do_one_initcall+0x128/0x380
? idr_alloc_cyclic+0x139/0x1d0
? security_kernfs_init_security+0x42/0x140
? __kernfs_new_node+0x1be/0x250
? sysvec_apic_timer_interrupt+0xb6/0xc0
? asm_sysvec_apic_timer_interrupt+0x1a/0x20
? _raw_spin_unlock+0x11/0x30
? free_unref_page+0x283/0x650
? kfree+0x274/0x3a0
? kfree+0x274/0x3a0
? kfree+0x274/0x3a0
? load_module+0xf2e/0x1130
? __kmalloc_cache_noprof+0x12a/0x2e0
do_init_module+0x7d/0x240
__se_sys_init_module+0x19e/0x220
do_syscall_64+0x8a/0x150
? __irq_exit_rcu+0x5e/0x100
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fe6bb5980ee
Code: 48 8b 0d 3d ed 12 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 0a ed 12 00 f7 d8 64 89 01 48
RSP: 002b:00007ffd462219d8 EFLAGS: 00000206 ORIG_RAX: 00000000000000af
RAX: ffffffffffffffda RBX: 0000556caf0d0670 RCX: 00007fe6bb5980ee
RDX: 0000556caf0d3080 RSI: 0000000002893458 RDI: 00007fe6b3400010
RBP: 0000000000020000 R08: 0000000000020010 R09: 0000000000000080
R10: c26073c166186e00 R11: 0000000000000206 R12: 0000556caf0d3430
R13: 0000556caf0d0670 R14: 0000556caf0d3080 R15: 0000556caf0ce700
</TASK>
Modules linked in: amdgpu(+) i915(+) drm_suballoc_helper intel_gtt drm_exec drm_buddy iTCO_wdt i2c_algo_bit intel_pmc_bxt drm_display_helper iTCO_vendor_support gpu_sched drm_ttm_helper cec ttm amdxcp video backlight pinctrl_alderlake nct6775 hwmon_vid nct6775_core coretemp
CR2: 00000000000000a0
---[ end trace 0000000000000000 ]---
RIP: 0010:dma_addressing_limited+0x53/0xa0
Code: 8b 93 48 02 00 00 48 39 d1 49 89 d6 4c 0f 42 f1 48 85 d2 4c 0f 44 f1 f6 83 fc 02 00 00 40 75 0a 48 89 df e8 1f 09 00 00 eb 24 <4c> 8b 1c 25 a0 00 00 00 4d 85 db 74 17 48 89 df 41 ba 8b 84 2d 55
RSP: 0018:ffffa8d2c12cf740 EFLAGS: 00010202
RAX: 00000000ffffffff RBX: ffff8948820220c8 RCX: 000000ffffffffff
RDX: 0000000000000000 RSI: ffffffffc124dc6d RDI: ffff8948820220c8
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff894883c3f040
R13: ffff89488dac8828 R14: 000000ffffffffff R15: ffff8948820220c8
FS: 00007fe6ba881900(0000) GS:ffff894fdf700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000000a0 CR3: 0000000111984000 CR4: 0000000000f50ef0
PKRU: 55555554

Fixes: b5c58b2fdc42 ("dma-mapping: direct calls for dma-iommu")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=219292
Reported-by: Niklāvs Koļesņikovs <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Niklāvs Koļesņikovs <[email protected]>



# bb0e3919 22-Sep-2024 Christoph Hellwig <[email protected]>

dma-mapping: fix vmap and mmap of noncontiguous allocations

Commit b5c58b2fdc42 ("dma-mapping: direct calls for dma-iommu") switched
to direct calls to dma-iommu, but missed the dma_vmap_noncontiguous,
dma_vunmap_noncontiguous and dma_mmap_noncontiguous behavior keyed off the
presence of the alloc_noncontiguous method.

Fix this by removing the now unused alloc_noncontiguous and
free_noncontiguous methods and moving the vmapping and mmaping of the
noncontiguous allocations into the iommu code, as it is the only provider
of actually noncontiguous allocations.

Fixes: b5c58b2fdc42 ("dma-mapping: direct calls for dma-iommu")
Reported-by: Xi Ruoyao <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Tested-by: Xi Ruoyao <[email protected]>
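
A hedged sketch of how the vmap wrapper in kernel/dma/mapping.c ends up keyed
off use_dma_iommu() instead of the removed alloc_noncontiguous method (the
iommu_dma_vmap_noncontiguous() helper name refers to the dma-iommu side of
this change and is an assumption here):

void *dma_vmap_noncontiguous(struct device *dev, size_t size,
		struct sg_table *sgt)
{
	/* only dma-iommu produces actually noncontiguous allocations */
	if (use_dma_iommu(dev))
		return iommu_dma_vmap_noncontiguous(dev, size, sgt);

	/* everyone else handed back physically contiguous memory */
	return page_address(sg_page(sgt->sgl));
}

dma_mmap_noncontiguous() follows the same pattern.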



Revision tags: v6.11
# a5fb217f 12-Sep-2024 Christoph Hellwig <[email protected]>

dma-mapping: reflow dma_supported

dma_supported has become too much spaghetti for my taste. Reflow it to
remove the duplicate use_dma_iommu condition and make the main path more
obvious.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
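
A hedged sketch of the reflowed helper in kernel/dma/mapping.c (a
reconstruction; the exact comments and ordering upstream may differ):

static bool dma_supported(struct device *dev, u64 mask)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (use_dma_iommu(dev)) {
		/* IOMMU DMA and dma_map_ops must not be mixed */
		if (WARN_ON(ops))
			return false;
		return true;
	}

	/* the main path: either a driver-provided method or dma-direct */
	if (ops) {
		if (!ops->dma_supported)
			return true;
		return ops->dma_supported(dev, mask);
	}
	return dma_direct_supported(dev, mask);
}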



# f45cfab2 11-Sep-2024 Leon Romanovsky <[email protected]>

dma-mapping: reliably inform about DMA support for IOMMU

If the DMA IOMMU path is going to be used, the appropriate check should
return that DMA is supported.

Fixes: b5c58b2fdc42 ("dma-mapping: direct calls for dma-iommu")
Closes: https://lore.kernel.org/all/181e06ff-35a3-434f-b505-672f430bd1cb@notapiano
Reported-by: Nícolas F. R. A. Prado <[email protected]> #KernelCI
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Tested-by: Nícolas F. R. A. Prado <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>



Revision tags: v6.11-rc7
# 038eb433 06-Sep-2024 Sean Anderson <[email protected]>

dma-mapping: add tracing for dma-mapping API calls

When debugging drivers, it can often be useful to trace when memory gets
(un)mapped for DMA (and can be accessed by the device). Add some
tracepoints for this purpose.

Use u64 instead of phys_addr_t and dma_addr_t (and similarly %llx instead
of %pa) because libtraceevent can't handle typedefs in all cases.

Signed-off-by: Sean Anderson <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>



# 19156263 05-Sep-2024 Leon Romanovsky <[email protected]>

dma-mapping: use IOMMU DMA calls for common alloc/free page calls

The common alloc and free pages routines are called when IOMMU DMA is used,
and internally they call into the DMA ops structure, which is not available
for the default IOMMU path. This patch adds the necessary checks to call
into IOMMU DMA instead.

It fixes the following crash:

Unable to handle kernel NULL pointer dereference at virtual address 0000000000000040
Mem abort info:
ESR = 0x0000000096000006
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x06: level 2 translation fault
Data abort info:
ISV = 0, ISS = 0x00000006, ISS2 = 0x00000000
CM = 0, WnR = 0, TnD = 0, TagAccess = 0
GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=00000000d20bb000
[0000000000000040] pgd=08000000d20c1003
, p4d=08000000d20c1003
, pud=08000000d20c2003, pmd=0000000000000000
Internal error: Oops: 0000000096000006 [#1] PREEMPT SMP
Modules linked in: ipv6 hci_uart venus_core btqca
v4l2_mem2mem btrtl qcom_spmi_adc5 sbs_battery btbcm qcom_vadc_common
cros_ec_typec videobuf2_v4l2 leds_cros_ec cros_kbd_led_backlight
cros_ec_chardev videodev elan_i2c
videobuf2_common qcom_stats mc bluetooth coresight_stm stm_core
ecdh_generic ecc pwrseq_core panel_edp icc_bwmon ath10k_snoc ath10k_core
ath mac80211 phy_qcom_qmp_combo aux_bridge libarc4 coresight_replicator
coresight_etm4x coresight_tmc
coresight_funnel cfg80211 rfkill coresight qcom_wdt cbmem ramoops
reed_solomon pwm_bl coreboot_table backlight crct10dif_ce
CPU: 7 UID: 0 PID: 70 Comm: kworker/u32:4 Not tainted 6.11.0-rc6-next-20240903-00003-gdfc6015d0711 #660
Hardware name: Google Lazor Limozeen without Touchscreen (rev5 - rev8) (DT)
Workqueue: events_unbound deferred_probe_work_func
hub 2-1:1.0: 4 ports detected

pstate: 80400009 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : dma_common_alloc_pages+0x54/0x1b4
lr : dma_common_alloc_pages+0x4c/0x1b4
sp : ffff8000807d3730
x29: ffff8000807d3730 x28: ffff02a7d312f880 x27: 0000000000000001
x26: 000000000000c000 x25: 0000000000000000 x24: 0000000000000001
x23: ffff02a7d23b6898 x22: 0000000000006cc0 x21: 000000000000c000
x20: ffff02a7858bf410 x19: fffffe0a60006000 x18: 0000000000000001
x17: 00000000000000d5 x16: 1fffe054f0bcc261 x15: 0000000000000001
x14: ffff02a7844dc680 x13: 0000000000100180 x12: dead000000000100
x11: dead000000000122 x10: 00000000001001ff x9 : ffff02a87f7b7b00
x8 : ffff02a87f7b7b00 x7 : ffff405977d6b000 x6 : ffff8000807d3310
x5 : ffff02a87f6b6398 x4 : 0000000000000001 x3 : ffff405977d6b000
x2 : ffff02a7844dc600 x1 : 0000000100000000 x0 : fffffe0a60006000
Call trace:
dma_common_alloc_pages+0x54/0x1b4
__dma_alloc_pages+0x68/0x90
dma_alloc_pages+0x10/0x1c
snd_dma_noncoherent_alloc+0x28/0x8c
__snd_dma_alloc_pages+0x30/0x50
snd_dma_alloc_dir_pages+0x40/0x80
do_alloc_pages+0xb8/0x13c
preallocate_pcm_pages+0x6c/0xf8
preallocate_pages+0x160/0x1a4
snd_pcm_set_managed_buffer_all+0x64/0xb0
lpass_platform_pcm_new+0xc0/0xe8
snd_soc_pcm_component_new+0x3c/0xc8
soc_new_pcm+0x4fc/0x668
snd_soc_bind_card+0xabc/0xbac
snd_soc_register_card+0xf0/0x108
devm_snd_soc_register_card+0x4c/0xa4
sc7180_snd_platform_probe+0x180/0x224
platform_probe+0x68/0xc0
really_probe+0xbc/0x298
__driver_probe_device+0x78/0x12c
driver_probe_device+0x3c/0x15c
__device_attach_driver+0xb8/0x134
bus_for_each_drv+0x84/0xe0
__device_attach+0x9c/0x188
device_initial_probe+0x14/0x20
bus_probe_device+0xac/0xb0
deferred_probe_work_func+0x88/0xc0
process_one_work+0x14c/0x28c
worker_thread+0x2cc/0x3d4
kthread+0x114/0x118
ret_from_fork+0x10/0x20
Code: f9411c19 940000c9 aa0003f3 b4000460 (f9402326)
---[ end trace 0000000000000000 ]---

Fixes: b5c58b2fdc42 ("dma-mapping: direct calls for dma-iommu")
Closes: https://lore.kernel.org/all/10431dfd-ce04-4e0f-973b-c78477303c18@notapiano
Reported-by: Nícolas F. R. A. Prado <[email protected]> #KernelCI
Signed-off-by: Leon Romanovsky <[email protected]>
Tested-by: Nícolas F. R. A. Prado <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
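
A hedged sketch of the fix in kernel/dma/ops_helpers.c, reconstructed from
the commit message and the backtrace above (details may differ from the
upstream diff):

struct page *dma_common_alloc_pages(struct device *dev, size_t size,
		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	struct page *page;

	page = dma_alloc_contiguous(dev, size, gfp);
	if (!page)
		page = alloc_pages_node(dev_to_node(dev), gfp, get_order(size));
	if (!page)
		return NULL;

	/* ops is NULL on the IOMMU DMA path, so call dma-iommu directly */
	if (use_dma_iommu(dev))
		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
						 DMA_ATTR_SKIP_CPU_SYNC);
	else
		*dma_handle = ops->map_page(dev, page, 0, size, dir,
					    DMA_ATTR_SKIP_CPU_SYNC);
	if (*dma_handle == DMA_MAPPING_ERROR) {
		dma_free_contiguous(dev, page, size);
		return NULL;
	}

	memset(page_address(page), 0, size);
	return page;
}

dma_common_free_pages() gains the matching use_dma_iommu() check on the
unmap side.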



Revision tags: v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1
# b5c58b2f 24-Jul-2024 Leon Romanovsky <[email protected]>

dma-mapping: direct calls for dma-iommu

Directly call into dma-iommu just like we have been doing for dma-direct
for a while. This avoids the indirect call overhead for IOMMU ops and
removes the need to have DMA ops entirely for many common configurations.

Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Acked-by: Robin Murphy <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
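
A simplified, hedged sketch of the resulting dispatch in dma_map_page_attrs()
(the arch_dma_map_page_direct() handling and the later tracepoints are
omitted; treat this as a reconstruction, not the literal upstream code):

dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
		size_t offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	dma_addr_t addr;

	BUG_ON(!valid_dma_direction(dir));
	if (WARN_ON_ONCE(!dev->dma_mask))
		return DMA_MAPPING_ERROR;

	if (dma_map_direct(dev, ops))
		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
	else if (use_dma_iommu(dev))
		/* no indirect call and no dma_map_ops needed for dma-iommu */
		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
	else
		addr = ops->map_page(dev, page, offset, size, dir, attrs);
	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);

	return addr;
}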



# f69e342e 24-Jul-2024 Leon Romanovsky <[email protected]>

dma-mapping: call ->unmap_page and ->unmap_sg unconditionally

Almost all instances of the dma_map_ops ->map_page()/map_sg() methods
implement ->unmap_page()/unmap_sg() too. The one instance which doesn't is
dma_dummy_ops, which is used to fail the DMA mapping, so there won't be
any calls to ->unmap_page()/unmap_sg().

Remove the checks for ->unmap_page()/unmap_sg() and call them directly to
create an interface that is symmetrical to ->map_page()/map_sg().

Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
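
A hedged sketch of the unmap path as it looks once this change and the
direct dma-iommu calls from the entry above are both in place (simplified;
the arch_dma_unmap_page_direct() handling is omitted). The point is the last
branch: ops->unmap_page() is now called without a NULL check:

void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	BUG_ON(!valid_dma_direction(dir));
	if (dma_map_direct(dev, ops))
		dma_direct_unmap_page(dev, addr, size, dir, attrs);
	else if (use_dma_iommu(dev))
		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
	else
		/* unconditional: every real dma_map_ops implements this */
		ops->unmap_page(dev, addr, size, dir, attrs);
	debug_dma_unmap_page(dev, addr, size, dir);
}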



# 28e8b740 18-Jul-2024 Lance Richardson <[email protected]>

dma: fix call order in dmam_free_coherent

dmam_free_coherent() frees a DMA allocation, which makes the
freed vaddr available for reuse, then calls devres_destroy()
to remove and free the data structure used to track the DMA
allocation. Between the two calls, it is possible for a
concurrent task to make an allocation with the same vaddr
and add it to the devres list.

If this happens, there will be two entries in the devres list
with the same vaddr and devres_destroy() can free the wrong
entry, triggering the WARN_ON() in dmam_match.

Fix by destroying the devres entry before freeing the DMA
allocation.

Tested:
kokonut //net/encryption
http://sponge2/b9145fe6-0f72-4325-ac2f-a84d81075b03

Fixes: 9ac7849e35f7 ("devres: device resource management")
Signed-off-by: Lance Richardson <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
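
A hedged sketch of the reordered function (reconstructed from the commit
message; struct dma_devres, dmam_match and dmam_release are the existing
devres helpers in kernel/dma/mapping.c):

void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
			dma_addr_t dma_handle)
{
	struct dma_devres match_data = { size, vaddr, dma_handle };

	/*
	 * Drop the devres entry first, so that a concurrent allocation
	 * reusing the same vaddr cannot be matched by mistake...
	 */
	WARN_ON(devres_destroy(dev, dmam_release, dmam_match, &match_data));
	/* ...and only then hand the memory back for reuse. */
	dma_free_coherent(dev, size, vaddr, dma_handle);
}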



Revision tags: v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9
# a6016aac 09-May-2024 Alexander Lobakin <[email protected]>

dma: fix DMA sync for drivers not calling dma_set_mask*()

There are several reports that the DMA sync shortcut broke non-coherent
devices.
dev->dma_need_sync is false after the &device allocation, and if a driver
didn't call dma_set_mask*() it will still be false even if the device is
not DMA-coherent and thus needs synchronizing. For historical reasons,
there are still a lot of drivers not calling it.
Invert the boolean, so that the sync will be performed by default and
the shortcut will be enabled only when calling dma_set_mask*().

Reported-by: Steven Price <[email protected]>
Closes: https://lore.kernel.org/lkml/[email protected]
Reported-by: Marek Szyprowski <[email protected]>
Closes: https://lore.kernel.org/lkml/[email protected]
Fixes: f406c8e4b770 ("dma: avoid redundant calls for sync operations")
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Steven Price <[email protected]>
Tested-by: Marek Szyprowski <[email protected]>



# f406c8e4 07-May-2024 Alexander Lobakin <[email protected]>

dma: avoid redundant calls for sync operations

Quite often, devices do not need dma_sync operations on x86_64 at least.
Indeed, when dev_is_dma_coherent(dev) is true and
dev_use_swiotlb(dev) is false, iommu_dma_sync_single_for_cpu()
and friends do nothing.

However, indirectly calling them when CONFIG_RETPOLINE=y consumes about
10% of cycles on a cpu receiving packets from softirq at ~100Gbit rate.
Even if/when CONFIG_RETPOLINE is not set, there is a cost of about 3%.

Add dev->need_dma_sync boolean and turn it off during the device
initialization (dma_set_mask()) depending on the setup:
dev_is_dma_coherent() for the direct DMA, !(sync_single_for_device ||
sync_single_for_cpu) or the new dma_map_ops flag, %DMA_F_CAN_SKIP_SYNC,
advertised for non-NULL DMA ops.
Then later, if/when swiotlb is used for the first time, the flag
is reset back to on, from swiotlb_tbl_map_single().

On iavf, the UDP trafficgen with XDP_DROP in skb mode test shows
+3-5% increase for direct DMA.

Suggested-by: Christoph Hellwig <[email protected]> # direct DMA shortcut
Co-developed-by: Eric Dumazet <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
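
A hedged sketch of the resulting fast-path wrapper in
include/linux/dma-mapping.h. Note that the per-device flag shown here,
dev->dma_skip_sync, is the inverted form introduced by the follow-up fix
above; the mechanism is the same:

static inline bool dma_dev_need_sync(const struct device *dev)
{
	/* always call the real sync ops when DMA debugging is enabled */
	return !dev->dma_skip_sync || IS_ENABLED(CONFIG_DMA_API_DEBUG);
}

static inline void dma_sync_single_for_cpu(struct device *dev,
		dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
	/* skip the (possibly retpolined) call when no sync is needed */
	if (dma_dev_need_sync(dev))
		__dma_sync_single_for_cpu(dev, addr, size, dir);
}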



# fe7514b1 07-May-2024 Alexander Lobakin <[email protected]>

dma: compile-out DMA sync op calls when not used

Some platforms do have DMA, but DMA there is always direct and coherent.
Currently, even on such platforms DMA sync operations are compiled and
called.
Add a new hidden Kconfig symbol, DMA_NEED_SYNC, and set it only when sync
operations are needed, or when DMA ops, swiotlb, or DMA debugging is
enabled. Compile the global dma_sync_*() and dma_need_sync() helpers only
when it is set; otherwise provide empty inline stubs.
The change allows for future optimizations of DMA sync calls depending
on runtime conditions.

Signed-off-by: Alexander Lobakin <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
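
A hedged sketch of the resulting pattern in include/linux/dma-mapping.h:
real declarations when CONFIG_DMA_NEED_SYNC is set, empty inline stubs
otherwise (simplified to two helpers):

#ifdef CONFIG_DMA_NEED_SYNC
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir);
bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);
#else /* !CONFIG_DMA_NEED_SYNC */
static inline void dma_sync_single_for_cpu(struct device *dev,
		dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
}
static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
{
	return false;
}
#endif /* !CONFIG_DMA_NEED_SYNC */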



Revision tags: v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1
# 8a2f1187 21-Mar-2024 Suren Baghdasaryan <[email protected]>

change alloc_pages name in dma_map_ops to avoid name conflicts

After redefining alloc_pages, all uses of that name are being replaced.
Change the conflicting names to prevent the preprocessor from replacing
them when that is not intended.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Suren Baghdasaryan <[email protected]>
Tested-by: Kees Cook <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Alex Gaynor <[email protected]>
Cc: Alice Ryhl <[email protected]>
Cc: Andreas Hindborg <[email protected]>
Cc: Benno Lossin <[email protected]>
Cc: "Björn Roy Baron" <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Gary Guo <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Miguel Ojeda <[email protected]>
Cc: Pasha Tatashin <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wedson Almeida Filho <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>



Revision tags: v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6
# a409d960 28-Oct-2023 Jia He <[email protected]>

dma-mapping: fix dma_addressing_limited() if dma_range_map can't cover all system RAM

There is an unusual case where the range map covers right up to the top
of system RAM but leaves a hole somewhere lower down. This prevents the
NVMe device's DMA mapping in the phys_to_dma() checking path and causes
hangs at boot.

E.g. on an Armv8 Ampere server, the DSDT ACPI table is:
Method (_DMA, 0, Serialized) // _DMA: Direct Memory Access
{
Name (RBUF, ResourceTemplate ()
{
QWordMemory (ResourceConsumer, PosDecode, MinFixed,
MaxFixed, Cacheable, ReadWrite,
0x0000000000000000, // Granularity
0x0000000000000000, // Range Minimum
0x00000000FFFFFFFF, // Range Maximum
0x0000000000000000, // Translation Offset
0x0000000100000000, // Length
,, , AddressRangeMemory, TypeStatic)
QWordMemory (ResourceConsumer, PosDecode, MinFixed,
MaxFixed, Cacheable, ReadWrite,
0x0000000000000000, // Granularity
0x0000006010200000, // Range Minimum
0x000000602FFFFFFF, // Range Maximum
0x0000000000000000, // Translation Offset
0x000000001FE00000, // Length
,, , AddressRangeMemory, TypeStatic)
QWordMemory (ResourceConsumer, PosDecode, MinFixed,
MaxFixed, Cacheable, ReadWrite,
0x0000000000000000, // Granularity
0x00000060F0000000, // Range Minimum
0x00000060FFFFFFFF, // Range Maximum
0x0000000000000000, // Translation Offset
0x0000000010000000, // Length
,, , AddressRangeMemory, TypeStatic)
QWordMemory (ResourceConsumer, PosDecode, MinFixed,
MaxFixed, Cacheable, ReadWrite,
0x0000000000000000, // Granularity
0x0000007000000000, // Range Minimum
0x000003FFFFFFFFFF, // Range Maximum
0x0000000000000000, // Translation Offset
0x0000039000000000, // Length
,, , AddressRangeMemory, TypeStatic)
})

But the System RAM ranges are:
cat /proc/iomem |grep -i ram
90000000-91ffffff : System RAM
92900000-fffbffff : System RAM
880000000-fffffffff : System RAM
8800000000-bff5990fff : System RAM
bff59d0000-bff5a4ffff : System RAM
bff8000000-bfffffffff : System RAM
So some RAM ranges are out of dma_range_map.

Fix it by checking whether each of the system RAM resources can be
properly encompassed within the dma_range_map.

Signed-off-by: Jia He <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
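
A hedged sketch of the coverage check added on the dma-direct side (a
simplified reconstruction of dma_direct_all_ram_mapped(); the exact walk of
the bus_dma_region entries upstream may differ):

static int check_ram_in_range_map(unsigned long start_pfn,
				  unsigned long nr_pages, void *data)
{
	unsigned long end_pfn = start_pfn + nr_pages;
	struct device *dev = data;
	const struct bus_dma_region *m;

	while (start_pfn < end_pfn) {
		const struct bus_dma_region *bdr = NULL;

		/* find a map entry that contains start_pfn */
		for (m = dev->dma_range_map; PFN_DOWN(m->size); m++) {
			unsigned long cpu_start_pfn = PFN_DOWN(m->cpu_start);

			if (start_pfn >= cpu_start_pfn &&
			    start_pfn - cpu_start_pfn < PFN_DOWN(m->size)) {
				bdr = m;
				break;
			}
		}
		/* a System RAM page is not covered by the range map */
		if (!bdr)
			return 1;

		start_pfn = PFN_DOWN(bdr->cpu_start) + PFN_DOWN(bdr->size);
	}
	return 0;
}

bool dma_direct_all_ram_mapped(struct device *dev)
{
	if (!dev->dma_range_map)
		return true;
	return !walk_system_ram_range(0, PFN_DOWN(ULONG_MAX) + 1, dev,
				      check_ram_in_range_map);
}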



# 8ae0e970 28-Oct-2023 Jia He <[email protected]>

dma-mapping: move dma_addressing_limited() out of line

This patch moves dma_addressing_limited() out of line, serving as a
preliminary step to prevent the introduction of a new publicly accessible
low-level helper when validating whether all system RAM is mapped within
the DMA mapping range.

Suggested-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jia He <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
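
For reference, a hedged sketch of the out-of-line version in
kernel/dma/mapping.c at this point, before the follow-up fix above extends
it with the range-map check:

/* true if the device's DMA mask is too small to address all system memory */
bool dma_addressing_limited(struct device *dev)
{
	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
			    dma_get_required_mask(dev);
}
EXPORT_SYMBOL_GPL(dma_addressing_limited);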



Revision tags: v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4
# 3d6f126b 27-Jul-2023 Arnd Bergmann <[email protected]>

dma-mapping: move arch_dma_set_mask() declaration to header

This function has a __weak definition and an override that is only used on
Freescale powerpc chips. The powerpc definition, however, does not see the
declaration, which lives in a .c file:

arch/powerpc/kernel/dma-mask.c:7:6: error: no previous prototype for 'arch_dma_set_mask' [-Werror=missing-prototypes]

Move it into the linux/dma-map-ops.h header where the other arch_dma_* functions
are declared.

Signed-off-by: Arnd Bergmann <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
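
A hedged sketch of the moved declaration in include/linux/dma-map-ops.h (the
empty-stub form for kernels without CONFIG_ARCH_HAS_DMA_SET_MASK follows the
header's existing conventions and is an assumption):

#ifdef CONFIG_ARCH_HAS_DMA_SET_MASK
void arch_dma_set_mask(struct device *dev, u64 mask);
#else
#define arch_dma_set_mask(dev, mask)	do { } while (0)
#endif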



Revision tags: v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5
# 1d3f56b2 01-Apr-2023 Jiaxun Yang <[email protected]>

dma-mapping: provide CONFIG_ARCH_DMA_DEFAULT_COHERENT

Provide a Kconfig option that allows architectures to set the default
value of dma_default_coherent from Kconfig.

Signed-off-by: Jiaxun Yang <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
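
On the C side this boils down to seeding the default from the new,
architecture-selected symbol in kernel/dma/mapping.c; a hedged one-line
sketch:

/* architectures select ARCH_DMA_DEFAULT_COHERENT to start out coherent */
bool dma_default_coherent = IS_ENABLED(CONFIG_ARCH_DMA_DEFAULT_COHERENT);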



# fe4e5efa 01-Apr-2023 Jiaxun Yang <[email protected]>

dma-mapping: provide a fallback dma_default_coherent

dma_default_coherent was defined unconditionally in kernel/dma/mapping.c,
but is only declared in dma-map-ops.h when one of the non-coherent options
is enabled.

Guard the definition in mapping.c with the non-coherent options and
provide a fallback definition.

Signed-off-by: Jiaxun Yang <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
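
A hedged sketch of the guarded declaration and fallback in
include/linux/dma-map-ops.h (the exact list of guarding config options is an
assumption):

#if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
    defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \
    defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
extern bool dma_default_coherent;
#else
#define dma_default_coherent true
#endif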



Revision tags: v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1
# 3622b86f 20-Dec-2022 Christoph Hellwig <[email protected]>

dma-mapping: reject GFP_COMP for noncoherent allocations

While not quite as bogus as for the dma-coherent allocations that were
fixed earlier, GFP_COMP has no benefit for these allocations in the
dma-direct case and can't be supported at all by the dma-iommu backend,
which splits allocations up into smaller orders. Due to an oversight in
ffcb75458460, that flag stopped being cleared for all DMA allocations but
only got rejected for coherent ones, so fix up these callers to not allow
__GFP_COMP as well, now that the sound code has been fixed to not ask
for it.

Fixes: ffcb75458460 ("dma-mapping: reject __GFP_COMP in dma_alloc_attrs")
Reported-by: Mikhail Gavrilov <[email protected]>
Reported-by: Kai Vehmanen <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Takashi Iwai <[email protected]>
Tested-by: Mikhail Gavrilov <[email protected]>
Tested-by: Kai Vehmanen <[email protected]>
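
The rejection itself is a small guard at the top of the noncoherent
allocators; a hedged sketch (the remainder of each function is unchanged):

	/*
	 * In __dma_alloc_pages() and dma_alloc_noncontiguous(): compound
	 * pages bring no benefit for dma-direct and cannot be split up by
	 * dma-iommu, so refuse them outright.
	 */
	if (WARN_ON_ONCE(gfp & __GFP_COMP))
		return NULL;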


