History log of /linux-6.15/include/linux/dma-direct.h (Results 1 – 25 of 47)
Revision  Date  Author  Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5
# b66e2ee7 27-Feb-2025 Suzuki K Poulose <[email protected]>

dma: Introduce generic dma_addr_*crypted helpers

AMD SME added __sme_set/__sme_clr primitives to modify the DMA address for
encrypted/decrypted traffic. However, this doesn't fit in with other models,
e.g., Arm CCA, where the meanings are the opposite: "decrypted" traffic has
the top bit set and "encrypted" traffic has the top bit cleared.

In preparation for adding the support for Arm CCA DMA conversions, convert the
existing primitives to more generic ones that can be provided by the backends.
i.e., add helpers to
1. dma_addr_encrypted - Convert a DMA address to "encrypted" [ == __sme_set() ]
2. dma_addr_unencrypted - Convert a DMA address to "decrypted" [ None exists today ]
3. dma_addr_canonical - Clear any "encryption"/"decryption" bits from DMA
address [ SME uses __sme_clr() ] and convert to a canonical DMA address.

Since the original __sme_xxx helpers come from linux/mem_encrypt.h, use that
as the home for the new definitions, and provide dummy ones when the
architecture does not provide any.

With the above, phys_to_dma_unencrypted() uses the newly added
dma_addr_unencrypted() helper, and to make it easier to read and avoid a
double conversion, provide __phys_to_dma().

Suggested-by: Robin Murphy <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Jean-Philippe Brucker <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Steven Price <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Fixes: 42be24a4178f ("arm64: Enable memory encrypt for Realms")
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
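
As a rough sketch of the shape this takes (the identity fallbacks and the
SME mapping below are illustrative, not a verbatim copy of the merged code):

    /* Generic fallbacks in <linux/mem_encrypt.h> for architectures that
     * provide no address-encryption conversions of their own. */
    #ifndef dma_addr_encrypted
    #define dma_addr_encrypted(x)    (x)
    #endif
    #ifndef dma_addr_unencrypted
    #define dma_addr_unencrypted(x)  (x)
    #endif
    #ifndef dma_addr_canonical
    #define dma_addr_canonical(x)    (x)
    #endif

    /* An SME-style backend would map them roughly as:
     *   #define dma_addr_encrypted(x)   __sme_set(x)
     *   #define dma_addr_canonical(x)   __sme_clr(x)
     */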


# c3809317 27-Feb-2025 Suzuki K Poulose <[email protected]>

dma: Fix encryption bit clearing for dma_to_phys

phys_to_dma() sets the encryption bit on the translated DMA address. But
dma_to_phys() clears the encryption bit after it has been translated back
to the physical address, which could fail if the device uses DMA ranges.

AMD SME doesn't use the DMA ranges and thus this is harmless. But as we
are about to add support for other architectures, let us fix this.

Reported-by: Aneesh Kumar K.V <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Cc: Will Deacon <[email protected]>
Cc: Jean-Philippe Brucker <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Steven Price <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Tom Lendacky <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Acked-by: Tom Lendacky <[email protected]>
Signed-off-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Fixes: 42be24a4178f ("arm64: Enable memory encrypt for Realms")
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
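
A sketch of the corrected ordering (the exact body of the merged function
may differ; translate_dma_to_phys() is the existing dma-direct.h range
helper):

    static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dma_addr)
    {
            phys_addr_t paddr;

            /* Strip the encryption bit while still in DMA-address space ... */
            dma_addr = __sme_clr(dma_addr);

            /* ... and only then undo any bus_dma_region translation. */
            if (dev->dma_range_map)
                    paddr = translate_dma_to_phys(dev->dma_range_map, dma_addr);
            else
                    paddr = dma_addr;

            return paddr;
    }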


Revision tags: v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3
# ba0fb44a 11-Aug-2024 Catalin Marinas <[email protected]>

dma-mapping: replace zone_dma_bits by zone_dma_limit

The hardware DMA limit might not be a power of 2. When the RAM range starts
above 0, say at 4GB, a DMA limit of 30 bits should end at 5GB. A single high
bit cannot encode this limit.

Use a plain address for the DMA zone limit instead.

Since the DMA zone can now potentially span beyond the 4GB physical limit of
DMA32, make sure to use the DMA zone for GFP_DMA32 allocations in that case.

Signed-off-by: Catalin Marinas <[email protected]>
Co-developed-by: Baruch Siach <[email protected]>
Signed-off-by: Baruch Siach <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Reviewed-by: Petr Tesarik <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
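
A worked example with the commit's numbers (values illustrative):

    /* RAM starts at 4GB and the device has a 30-bit DMA limit, so the DMA
     * zone must end at 4GB + 1GB = 5GB.  zone_dma_bits = 30 could only
     * express 0x3fffffff; a plain limit address expresses the boundary: */
    zone_dma_limit = 0x140000000ULL;    /* 5GB */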


Revision tags: v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5
# fece6530 19-Apr-2024 Robin Murphy <[email protected]>

dma-mapping: Add helpers for dma_range_map bounds

Several places want to compute the lower and/or upper bounds of a
dma_range_map, so let's factor that out into reusable helpers.

Acked-by: Rob Herring <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Hanjun Guo <[email protected]> # For arm64
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/45ec52f033ec4dfb364e23f48abaf787f612fa53.1713523152.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <[email protected]>
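
One of the helpers, roughly (a minimal sketch; a bus_dma_region array is
terminated by an entry with size == 0):

    static inline dma_addr_t dma_range_map_min(const struct bus_dma_region *map)
    {
            dma_addr_t ret = (dma_addr_t)~0ULL;

            for (; map->size; map++)
                    ret = min(ret, map->dma_start);
            return ret;
    }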


Revision tags: v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3
# 4ad4c1f3 24-Nov-2023 Robin Murphy <[email protected]>

dma-mapping: don't store redundant offsets

A bus_dma_region necessarily stores both CPU and DMA base addresses for
a range, so there's no need to also store the difference between them.

Signed-off-by: Robin Murphy <[email protected]>
Acked-by: Rob Herring <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
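
In other words (layout shown for illustration; see include/linux/dma-direct.h
for the authoritative definition):

    struct bus_dma_region {
            phys_addr_t     cpu_start;
            dma_addr_t      dma_start;
            u64             size;
            /* u64  offset; -- dropped: always equal to dma_start - cpu_start */
    };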


Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7
# 9f4df96b 22-Sep-2020 Christoph Hellwig <[email protected]>

dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>

Move more nitty gritty DMA implementation details into the common
internal header.

Signed-off-by: Christoph Hellwig <[email protected]>


# 19c65c3d 22-Sep-2020 Christoph Hellwig <[email protected]>

dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma

Most of the dma_direct symbols should only be used by direct.c and
mapping.c, so move them to kernel/dma. In fact more of dma-direct.h
should eventually move, but that will require more coordination with
other subsystems.

Signed-off-by: Christoph Hellwig <[email protected]>


Revision tags: v5.9-rc6, v5.9-rc5, v5.9-rc4
# efa70f2f 01-Sep-2020 Christoph Hellwig <[email protected]>

dma-mapping: add a new dma_alloc_pages API

This API is the equivalent of alloc_pages, except that the returned memory
is guaranteed to be DMA addressable by the passed in device. The
implementation will also be used to provide a more sensible replacement
for DMA_ATTR_NON_CONSISTENT flag.

Additionally dma_alloc_noncoherent is switched over to use dma_alloc_pages
as its backend.

Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Thomas Bogendoerfer <[email protected]> (MIPS part)
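
A usage sketch (error handling trimmed; size and direction are illustrative):

    struct page *page;
    dma_addr_t dma_handle;

    page = dma_alloc_pages(dev, PAGE_SIZE, &dma_handle,
                           DMA_BIDIRECTIONAL, GFP_KERNEL);
    if (!page)
            return -ENOMEM;

    /* Device DMAs to/from dma_handle; the CPU uses page_address(page). */

    dma_free_pages(dev, PAGE_SIZE, page, dma_handle, DMA_BIDIRECTIONAL);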


# e0d07278 17-Sep-2020 Jim Quinlan <[email protected]>

dma-mapping: introduce DMA range map, supplanting dma_pfn_offset

The new field 'dma_range_map' in struct device is used to facilitate the
use of single or multiple offsets between mapping regions of cpu addrs and
dma addrs. It subsumes the role of "dev->dma_pfn_offset" which was only
capable of holding a single uniform offset and had no region bounds
checking.

The function of_dma_get_range() has been modified so that it takes a single
argument -- the device node -- and returns a map, NULL, or an error code.
The map is an array that holds the information regarding the DMA regions.
Each range entry contains the address offset, the cpu_start address, the
dma_start address, and the size of the region.

of_dma_configure() is the typical way to set range offsets, but there are
a number of ad hoc assignments to "dev->dma_pfn_offset" in kernel driver
code. These cases now invoke the function
dma_direct_set_offset(dev, cpu_addr, dma_addr, size).

Signed-off-by: Jim Quinlan <[email protected]>
[hch: various interface cleanups]
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Mathieu Poirier <[email protected]>
Tested-by: Mathieu Poirier <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
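
A sketch of one such conversion (addresses and size are illustrative): where
a driver used to assign dev->dma_pfn_offset directly, it now describes the
region instead:

    /* CPU physical 0x80000000..0xbfffffff is seen by the device at bus
     * address 0x00000000..0x3fffffff. */
    ret = dma_direct_set_offset(dev, 0x80000000, 0x00000000, SZ_1G);
    if (ret)
            return ret;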


# f959dcd6 17-Sep-2020 Thomas Tai <[email protected]>

dma-direct: Fix potential NULL pointer dereference

When booting the kernel v5.9-rc4 on a VM, the kernel would panic when
printing a warning message in swiotlb_map(). The dev->dma_mask must not
be a NULL pointer when calling the dma mapping layer. A NULL pointer
check can potentially avoid the panic.

Signed-off-by: Thomas Tai <[email protected]>
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
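
The shape of the guard, roughly (its exact placement in the mapping path is
not shown here):

    if (WARN_ON_ONCE(!dev->dma_mask))
            return DMA_MAPPING_ERROR;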


Revision tags: v5.9-rc3, v5.9-rc2
# 5ceda740 17-Aug-2020 Christoph Hellwig <[email protected]>

dma-direct: rename and cleanup __phys_to_dma

The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try
to improve the situation by renaming __phys_to_dma to
phys_to_dma_unencrypted, and not forcing architectures that want to
override phys_to_dma to actually provide __phys_to_dma.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
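
After the rename the relationship is roughly the following (SME-style,
illustrative):

    static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
    {
            return __sme_set(phys_to_dma_unencrypted(dev, paddr));
    }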


# 7bc5c428 08-Sep-2020 Christoph Hellwig <[email protected]>

dma-direct: remove __dma_to_phys

There is no harm in just always clearing the SME encryption bit, while
significantly simplifying the interface.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>


# 2f5388a2 17-Aug-2020 Christoph Hellwig <[email protected]>

dma-direct: remove dma_direct_{alloc,free}_pages

Just merge these helpers into the main dma_direct_{alloc,free} routines,
as the additional checks are always false for the two callers.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>


# abdaf11a 17-Aug-2020 Christoph Hellwig <[email protected]>

dma-mapping: add (back) arch_dma_mark_clean for ia64

Add back a hook to optimize dcache flushing after reading executable
code using DMA. This gets ia64 out of the business of pretending to
be dma incoherent just for this optimization.

Signed-off-by: Christoph Hellwig <[email protected]>


Revision tags: v5.9-rc1
# 9420139f 14-Aug-2020 Christoph Hellwig <[email protected]>

dma-pool: fix coherent pool allocations for IOMMU mappings

When allocating coherent pool memory for an IOMMU mapping we don't care
about the DMA mask. Move the guess for the initial GFP mask into the
dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer
argument so that it doesn't get applied to the IOMMU case.

Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Amit Pundir <[email protected]>


Revision tags: v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5
# b4174173 08-Jul-2020 Christoph Hellwig <[email protected]>

dma-mapping: inline the fast path dma-direct calls

Inline the single page map/unmap/sync dma-direct calls into the now
out of line generic wrappers. This restores the behavior of a single
function call that we had before moving the generic calls out of line.
Besides the dma-mapping callers there are just a few callers in IOMMU
drivers that have a bypass mode, and more of those are going to be
switched to the generic bypass soon.

Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Alexey Kardashevskiy <[email protected]>
Reviewed-by: Alexey Kardashevskiy <[email protected]>


# d3fa60d7 08-Jul-2020 Christoph Hellwig <[email protected]>

dma-mapping: move the remaining DMA API calls out of line

For a long time the DMA API has been implemented inline in dma-mapping.h,
but the function bodies can be quite large. Move them all out of line.

This also removes all the dma_direct_* exports as those are just
implementation details and should never be used by drivers directly.

Signed-off-by: Christoph Hellwig <[email protected]>
Tested-by: Alexey Kardashevskiy <[email protected]>
Reviewed-by: Alexey Kardashevskiy <[email protected]>


# 567f6a6e 14-Jul-2020 Nicolas Saenz Julienne <[email protected]>

dma-direct: provide function to check physical memory area validity

dma_coherent_ok() checks if a physical memory area fits a device's DMA
constraints.

Signed-off-by: Nicolas Saenz Julienne <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
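
The exposed check, as a prototype sketch:

    /* Returns true if [phys, phys + size) satisfies dev's coherent DMA
     * constraints. */
    bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);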


Revision tags: v5.8-rc4
# 3aa91625 29-Jun-2020 Christoph Hellwig <[email protected]>

dma-mapping: Add a new dma_need_sync API

Add a new API to check if calls to dma_sync_single_for_{device,cpu} are
required for a given DMA streaming mapping.

Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
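
A usage sketch (mapping-error handling trimmed):

    dma_addr_t addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    bool need_sync = dma_need_sync(dev, addr);

    /* ... after the device has written into the buffer ... */
    if (need_sync)
            dma_sync_single_for_cpu(dev, addr, len, DMA_FROM_DEVICE);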


Revision tags: v5.8-rc3, v5.8-rc2
# 26749b32 15-Jun-2020 Christoph Hellwig <[email protected]>

dma-direct: mark __dma_direct_alloc_pages static

Signed-off-by: Christoph Hellwig <[email protected]>


Revision tags: v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2
# c84dc6e6 15-Apr-2020 David Rientjes <[email protected]>

dma-pool: add additional coherent pools to map to gfp mask

The single atomic pool is allocated from the lowest zone possible since
it is guaranteed to be applicable for any DMA allocation.

Devices may allocate through the DMA API but not have a strict reliance
on GFP_DMA memory. Since the atomic pool will be used for all
non-blockable allocations, returning all memory from ZONE_DMA may
unnecessarily deplete the zone.

Provision for multiple atomic pools that will map to the optimal gfp
mask of the device.

When allocating non-blockable memory, determine the optimal gfp mask of
the device and use the appropriate atomic pool.

The coherent DMA mask will remain the same between allocation and free
and, thus, memory will be freed to the same atomic pool it was allocated
from.

__dma_atomic_pool_init() will be changed to return struct gen_pool *
later once dynamic expansion is added.

Signed-off-by: David Rientjes <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
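
The selection this implies, roughly (a simplified sketch; the atomic_pool_*
names are the ones used in kernel/dma/pool.c, but the body is not the exact
upstream logic):

    static struct gen_pool *dma_guess_pool(struct device *dev)
    {
            u64 mask = dev->coherent_dma_mask;

            if (mask < DMA_BIT_MASK(32))
                    return atomic_pool_dma;         /* ZONE_DMA-backed pool */
            if (mask < DMA_BIT_MASK(64))
                    return atomic_pool_dma32;       /* ZONE_DMA32-backed pool */
            return atomic_pool_kernel;              /* unrestricted memory */
    }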


Revision tags: v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4
# a7ba70f1 21-Nov-2019 Nicolas Saenz Julienne <[email protected]>

dma-mapping: treat dev->bus_dma_mask as a DMA limit

Using a mask to represent bus DMA constraints has a set of limitations.
The biggest one is that it can only hold a power of two (minus one). The
DMA mapping code is already aware of this and treats dev->bus_dma_mask
as a limit. This quirk is already used by some architectures, although it
is still rare.

With the introduction of the Raspberry Pi 4 we've found a new contender
for the use of bus DMA limits, as its PCIe bus can only address the
lower 3GB of memory (of a total of 4GB). This is impossible to represent
with a mask. To make things worse the device-tree code rounds non power
of two bus DMA limits to the next power of two, which is unacceptable in
this case.

In the light of this, rename dev->bus_dma_mask to dev->bus_dma_limit all
over the tree and treat it as such. Note that dev->bus_dma_limit should
contain the highest accessible DMA address.

Signed-off-by: Nicolas Saenz Julienne <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
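
The Raspberry Pi 4 case then becomes expressible directly (value illustrative
of the 3GB limit):

    /* PCIe can only address the lowest 3GB of RAM: no power-of-two mask
     * minus one encodes this, but a limit address does. */
    dev->bus_dma_limit = 0xbfffffff;    /* 3GB - 1 */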


# 68a33b17 19-Nov-2019 Christoph Hellwig <[email protected]>

dma-direct: exclude dma_direct_map_resource from the min_low_pfn check

The valid memory address check in dma_capable only makes sense when mapping
normal memory, not when using dma_map_resource to map a device resource.
Add a new boolean argument to dma_capable to exclude that check for the
dma_map_resource case.

Fixes: b12d66278dd6 ("dma-direct: check for overflows on 32 bit DMA addresses")
Reported-by: Marek Szyprowski <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Tested-by: Marek Szyprowski <[email protected]>
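
The resulting check, roughly (a simplified sketch using present-day field
names):

    static inline bool dma_capable(struct device *dev, dma_addr_t addr,
                                   size_t size, bool is_ram)
    {
            dma_addr_t end = addr + size - 1;

            /* The min_low_pfn sanity check only applies to real memory, not
             * to MMIO resources mapped via dma_map_resource(). */
            if (is_ram && unlikely(addr < phys_to_dma(dev, PFN_PHYS(min_low_pfn))))
                    return false;

            return end <= min_not_zero(*dev->dma_mask, dev->bus_dma_limit);
    }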


Revision tags: v5.4-rc8
# c7345159 12-Nov-2019 Christoph Hellwig <[email protected]>

dma-direct: avoid a forward declaration for phys_to_dma

Move dma_capable down a bit so that we don't need a forward declaration
for phys_to_dma.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Nicolas Saenz Julienne <[email protected]>


# 130c1ccb 12-Nov-2019 Christoph Hellwig <[email protected]>

dma-direct: unify the dma_capable definitions

Currently each architecture that wants to override dma_to_phys and
phys_to_dma also has to provide dma_capable. But there isn't really
any good reason for that. powerpc and mips just have copies of the
generic one minus the latest fix, and the arm one was the inspiration
for said fix, but misses the bus_dma_mask handling.
Make all architectures use the generic version instead.

Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Michael Ellerman <[email protected]> (powerpc)
Reviewed-by: Nicolas Saenz Julienne <[email protected]>
