Searched refs:alloc_align_mask (Results 1 – 2 of 2) sorted by relevance
1018  unsigned int alloc_align_mask) in swiotlb_search_pool_area() argument
1045      alloc_align_mask = PAGE_SIZE - 1; in swiotlb_search_pool_area()
1052  alloc_align_mask |= (IO_TLB_SIZE - 1); in swiotlb_search_pool_area()
1053  iotlb_align_mask &= ~alloc_align_mask; in swiotlb_search_pool_area()
1074  if ((tlb_addr & alloc_align_mask) || in swiotlb_search_pool_area()
1157  alloc_align_mask); in swiotlb_search_area()
1182  size_t alloc_size, unsigned int alloc_align_mask, in swiotlb_find_slots() argument
1199  alloc_align_mask, &pool); in swiotlb_find_slots()
1217  alloc_size, alloc_align_mask); in swiotlb_find_slots()
1269  alloc_size, alloc_align_mask); in swiotlb_find_slots()
[all …]
127  swiotlb_tbl_map_single() also takes an "alloc_align_mask" parameter. This
129  physical address with the alloc_align_mask bits set to zero. But the actual
133  alloc_align_mask boundary, potentially resulting in post-padding space. Any
135  "alloc_align_mask" parameter is used by IOMMU code when mapping for untrusted
183  The default pool is allocated with PAGE_SIZE alignment. If an alloc_align_mask
185  initial slots in each slot set might not meet the alloc_align_mask criterium.
188  Currently, there's no problem because alloc_align_mask is set based on IOMMU
297  meet alloc_align_mask requirements described above. When
298  swiotlb_tbl_map_single() allocates bounce buffer space to meet alloc_align_mask
301  alloc_align_mask value that governed the allocation, and therefore the