<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/rss.xsl.xml"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
    <title>Changes in Makefile</title>
    <description></description>
    <language>en</language>
    <copyright>Copyright 2015</copyright>
    <generator>Java</generator>
<item>
        <title>e2082e32 - rqspinlock: Add entry to Makefile, MAINTAINERS</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#e2082e32</link>
        <description>rqspinlock: Add entry to Makefile, MAINTAINERS

Ensure that the rqspinlock code is only built when the BPF subsystem is compiled in. Depending on queued spinlock support, we may or may not end up building the queued spinlock slowpath, and instead fall back to the test-and-set implementation. Also add entries to the MAINTAINERS file.

Signed-off-by: Kumar Kartikeya Dwivedi &lt;memxor@gmail.com&gt;
Link: https://lore.kernel.org/r/20250316040541.108729-18-memxor@gmail.com
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

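The commit text above does not quote the hunk itself; as a hedged sketch (the config symbol and object name here are assumptions, not taken from this feed), a conditional Makefile entry of this kind typically looks like:

```make
# Illustrative sketch only: build rqspinlock only when the BPF
# subsystem (CONFIG_BPF_SYSCALL) is compiled in.
obj-$(CONFIG_BPF_SYSCALL) += rqspinlock.o
```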
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Sun, 16 Mar 2025 04:05:33 +0000</pubDate>
        <dc:creator>Kumar Kartikeya Dwivedi &lt;memxor@gmail.com&gt;</dc:creator>
    </item>
<item>
        <title>c83508da - bpf: Avoid deadlock caused by nested kprobe and fentry bpf programs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#c83508da</link>
        <description>bpf: Avoid deadlock caused by nested kprobe and fentry bpf programs

BPF program types like kprobe and fentry can cause deadlocks in certain situations. If a function takes a lock and one of these bpf programs is hooked to some point in the function&apos;s critical section, and if the bpf program tries to call the same function and take the same lock, it will lead to deadlock. These situations have been reported in the following bug reports.

In percpu_freelist -
Link: https://lore.kernel.org/bpf/CAADnVQLAHwsa+2C6j9+UC6ScrDaN9Fjqv1WjB1pP9AzJLhKuLQ@mail.gmail.com/T/
Link: https://lore.kernel.org/bpf/CAPPBnEYm+9zduStsZaDnq93q1jPLqO-PiKX9jy0MuL8LCXmCrQ@mail.gmail.com/T/
In bpf_lru_list -
Link: https://lore.kernel.org/bpf/CAPPBnEajj+DMfiR_WRWU5=6A7KKULdB5Rob_NJopFLWF+i9gCA@mail.gmail.com/T/
Link: https://lore.kernel.org/bpf/CAPPBnEZQDVN6VqnQXvVqGoB+ukOtHGZ9b9U0OLJJYvRoSsMY_g@mail.gmail.com/T/
Link: https://lore.kernel.org/bpf/CAPPBnEaCB1rFAYU7Wf8UxqcqOWKmRPU1Nuzk3_oLk6qXR7LBOA@mail.gmail.com/T/

Similar bugs have been reported by syzbot.

In queue_stack_maps -
Link: https://lore.kernel.org/lkml/0000000000004c3fc90615f37756@google.com/
Link: https://lore.kernel.org/all/20240418230932.2689-1-hdanton@sina.com/T/
In lpm_trie -
Link: https://lore.kernel.org/linux-kernel/00000000000035168a061a47fa38@google.com/T/
In ringbuf -
Link: https://lore.kernel.org/bpf/20240313121345.2292-1-hdanton@sina.com/T/

Prevent kprobe and fentry bpf programs from attaching to these critical sections by removing CC_FLAGS_FTRACE for the percpu_freelist.o, bpf_lru_list.o, queue_stack_maps.o, lpm_trie.o, and ringbuf.o files.

The bugs reported by syzbot are due to tracepoint bpf programs being called in the critical sections. This patch does not aim to fix deadlocks caused by tracepoint programs. However, it does prevent deadlocks from occurring in similar situations due to kprobe and fentry programs.

Signed-off-by: Priya Bala Govindasamy &lt;pgovind2@uci.edu&gt;
Link: https://lore.kernel.org/r/CAPPBnEZpjGnsuA26Mf9kYibSaGLm=oF6=12L21X1GEQdqjLnzQ@mail.gmail.com
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

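The kbuild idiom for stripping ftrace instrumentation from individual objects is CFLAGS_REMOVE_&lt;object&gt;; a hedged sketch of what such entries might look like (the exact lines are not quoted in this feed):

```make
# Illustrative sketch: compile these objects without ftrace hooks so
# kprobe/fentry programs cannot attach inside their critical sections.
CFLAGS_REMOVE_percpu_freelist.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_bpf_lru_list.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_queue_stack_maps.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_lpm_trie.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_ringbuf.o = $(CC_FLAGS_FTRACE)
```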
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Sat, 14 Dec 2024 01:58:58 +0000</pubDate>
        <dc:creator>Priya Bala Govindasamy &lt;pgovind2@uci.edu&gt;</dc:creator>
    </item>
<item>
        <title>b7953797 - bpf: Introduce range_tree data structure and use it in bpf arena</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#b7953797</link>
        <description>bpf: Introduce range_tree data structure and use it in bpf arena

Introduce the range_tree data structure and use it in bpf arena to track ranges of allocated pages. range_tree is a large bitmap that is implemented as an interval tree plus an rbtree. A contiguous sequence of bits represents unallocated pages.

Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Acked-by: Kumar Kartikeya Dwivedi &lt;memxor@gmail.com&gt;
Link: https://lore.kernel.org/bpf/20241108025616.17625-2-alexei.starovoitov@gmail.com

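The range_tree itself is kernel-internal; as a rough user-space illustration of the idea (a set of free intervals from which page ranges are carved out), here is a hedged Python sketch. The names are invented, the allocation policy shown is plain first-fit, and the real structure is an interval tree over an rbtree, not a list:

```python
# Toy model of tracking unallocated page ranges as intervals.
# The kernel's range_tree uses an interval tree plus rbtree; a plain
# Python list stands in for it here purely for illustration.

class FreeRanges:
    def __init__(self, npages):
        # one interval [start, start + len) covering the whole arena
        self.ranges = [(0, npages)]

    def alloc(self, npages):
        """Carve npages out of the first free interval that fits."""
        for i, (start, length) in enumerate(self.ranges):
            if length >= npages:
                self.ranges[i] = (start + npages, length - npages)
                return start
        return None  # no contiguous run of npages left

    def free(self, start, npages):
        """Return a range to the free set (no merging, for brevity)."""
        self.ranges.append((start, npages))

arena = FreeRanges(16)
a = arena.alloc(4)   # pages 0..3
b = arena.alloc(4)   # pages 4..7
arena.free(a, 4)     # pages 0..3 become free again
c = arena.alloc(8)   # first fit skips the 4-page hole, takes 8..15
```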
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Fri, 08 Nov 2024 02:56:15 +0000</pubDate>
        <dc:creator>Alexei Starovoitov &lt;ast@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>4971266e - bpf: Add kmem_cache iterator</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#4971266e</link>
        <description>bpf: Add kmem_cache iterator

The new &quot;kmem_cache&quot; iterator will traverse the list of slab caches and call attached BPF programs for each entry. It should check the argument (ctx.s) to see if it&apos;s NULL before using it.

Now the iteration grabs the slab_mutex only while it traverses the list and releases the mutex when it runs the BPF program. The kmem_cache entry is protected by a refcount during the execution.

Signed-off-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Acked-by: Vlastimil Babka &lt;vbabka@suse.cz&gt; #slab
Link: https://lore.kernel.org/r/20241010232505.1339892-2-namhyung@kernel.org
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

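The calling convention described above (the attached program runs once per entry and must NULL-check ctx.s, since a final invocation signals the end of iteration) can be modeled in user space. This Python sketch is an illustration of that contract only, not kernel code; the cache names and sizes are made up:

```python
# Model of the iterator calling convention: the attached program runs
# once per slab cache and one final time with entry=None, so the
# callback must NULL-check before dereferencing (ctx.s in the real API).

caches = [("kmalloc-64", 64), ("kmalloc-128", 128)]
lines = []

def dump_kmem_cache(entry):
    if entry is None:        # final invocation: nothing to print
        return
    name, size = entry
    lines.append(f"{name}: size={size}")

for c in caches:
    dump_kmem_cache(c)
dump_kmem_cache(None)        # end-of-iteration call with a NULL entry
```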
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Thu, 10 Oct 2024 23:25:03 +0000</pubDate>
        <dc:creator>Namhyung Kim &lt;namhyung@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>1dd7622e - bpf: Remove custom build rule</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#1dd7622e</link>
        <description>bpf: Remove custom build rule

According to the documentation, when building a kernel with the C=2 parameter, all source files should be checked. But this does not happen for the kernel/bpf/ directory.

$ touch kernel/bpf/core.o
$ make C=2 CHECK=true kernel/bpf/core.o

Outputs:

  CHECK   scripts/mod/empty.c
  CALL    scripts/checksyscalls.sh
  DESCEND objtool
  INSTALL libsubcmd_headers
  CC      kernel/bpf/core.o

As can be seen, the compilation is done, but CHECK is not executed. This happens because kernel/bpf/Makefile has defined its own rule for compilation and forgotten the macro that does the check. There is no need to duplicate the build code, and this rule can be removed to use the generic rules.

Acked-by: Masahiro Yamada &lt;masahiroy@kernel.org&gt;
Tested-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Tested-by: Alan Maguire &lt;alan.maguire@oracle.com&gt;
Signed-off-by: Alexey Gladkov &lt;legion@kernel.org&gt;
Link: https://lore.kernel.org/r/20240830074350.211308-1-legion@kernel.org
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

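The shape of the problem: a directory Makefile that spells out its own %.o rule bypasses the generic rule in scripts/Makefile.build that also invokes the $(CHECK) command. A hedged sketch of the before/after (illustrative only, not the literal rule that was removed):

```make
# Before: a local rule of roughly this shape compiles the object
# itself, so the generic rule (and its C=1/C=2 source check) never
# runs for this directory (illustrative sketch):
#
#   $(obj)/%.o: $(src)/%.c FORCE
#           $(call if_changed_rule,cc_o_c)
#
# After: with no local rule, the generic %.o rule from
# scripts/Makefile.build applies and runs $(CHECK) under C=1/C=2.
```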
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Fri, 30 Aug 2024 07:43:50 +0000</pubDate>
        <dc:creator>Alexey Gladkov &lt;legion@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>8646db23 - libbpf,bpf: Share BTF relocate-related code with kernel</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#8646db23</link>
        <description>libbpf,bpf: Share BTF relocate-related code with kernel

Share the relocation implementation with the kernel. As part of this, we also need the type/string iteration functions, so also share the btf_iter.c file. Relocation code in the kernel and userspace is identical save for the implementation of the reparenting of split BTF to the relocated base BTF and retrieval of the BTF header from &quot;struct btf&quot;; these small functions need separate user-space and kernel implementations for the separate &quot;struct btf&quot;s they operate upon.

One other wrinkle on the kernel side is that we have to map .BTF.ids in modules, as they were generated with the type ids used at BTF encoding time. btf_relocate() optionally returns an array mapping from old BTF ids to relocated ids, so we use that to fix up these references where needed for kfuncs.

Signed-off-by: Alan Maguire &lt;alan.maguire@oracle.com&gt;
Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Acked-by: Eduard Zingerman &lt;eddyz87@gmail.com&gt;
Link: https://lore.kernel.org/bpf/20240620091733.1967885-5-alan.maguire@oracle.com

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Thu, 20 Jun 2024 09:17:31 +0000</pubDate>
        <dc:creator>Alan Maguire &lt;alan.maguire@oracle.com&gt;</dc:creator>
    </item>
<item>
        <title>ac2f438c - bpf: crypto: fix build when CONFIG_CRYPTO=m</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#ac2f438c</link>
        <description>bpf: crypto: fix build when CONFIG_CRYPTO=m

The crypto subsystem can be built as a module. In this case we still have to build the BPF crypto framework, otherwise the build will fail.

Fixes: 3e1c6f35409f (&quot;bpf: make common crypto API for TC/XDP programs&quot;)
Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Closes: https://lore.kernel.org/oe-kbuild-all/202405011634.4JK40epY-lkp@intel.com/
Signed-off-by: Vadim Fedorenko &lt;vadfed@meta.com&gt;
Link: https://lore.kernel.org/r/20240501170130.1682309-1-vadfed@meta.com
Signed-off-by: Martin KaFai Lau &lt;martin.lau@kernel.org&gt;

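One hedged way to express the fix in kbuild terms (illustrative; the actual hunk is not quoted in this feed): with obj-$(CONFIG_CRYPTO), a value of m would try to build the BPF glue as a module, which kernel/bpf cannot be, so gating on the BPF side and merely checking that crypto is enabled at all (y or m) keeps it built-in:

```make
# Illustrative sketch: build the BPF crypto framework built-in
# whenever the crypto subsystem is available at all (y or m).
ifneq ($(CONFIG_CRYPTO),)
obj-$(CONFIG_BPF_SYSCALL) += crypto.o
endif
```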
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 01 May 2024 17:01:30 +0000</pubDate>
        <dc:creator>Vadim Fedorenko &lt;vadfed@meta.com&gt;</dc:creator>
    </item>
<item>
        <title>3e1c6f35 - bpf: make common crypto API for TC/XDP programs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#3e1c6f35</link>
        <description>bpf: make common crypto API for TC/XDP programs

Add crypto API support to BPF to be able to decrypt or encrypt packets in TC/XDP BPF programs. Special care should be taken for the initialization part of the crypto algo, because crypto alloc() doesn&apos;t work with preemption disabled; it can be run only in a sleepable BPF program. Also, async crypto is not supported, because of the very same issue - TC/XDP BPF programs are not sleepable.

Signed-off-by: Vadim Fedorenko &lt;vadfed@meta.com&gt;
Link: https://lore.kernel.org/r/20240422225024.2847039-2-vadfed@meta.com
Signed-off-by: Martin KaFai Lau &lt;martin.lau@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Mon, 22 Apr 2024 22:50:21 +0000</pubDate>
        <dc:creator>Vadim Fedorenko &lt;vadfed@meta.com&gt;</dc:creator>
    </item>
<item>
        <title>c40845e3 - kbuild: make -Woverride-init warnings more consistent</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#c40845e3</link>
        <description>kbuild: make -Woverride-init warnings more consistent

The -Woverride-init option warns about code that may be intentional or not, but the unintentional ones tend to be real bugs, so there is a bit of disagreement on whether this warning option should be enabled by default, and we have multiple settings in scripts/Makefile.extrawarn as well as individual subsystems.

Older versions of clang only supported -Wno-initializer-overrides with the same meaning as gcc&apos;s -Woverride-init, though all supported versions now work with both. Because of this difference, an earlier cleanup of mine accidentally turned the clang warning off for W=1 builds and only left it on for W=2, while it&apos;s still enabled for gcc with W=1.

There is also one driver that only turns the warning off for newer versions of gcc but not other compilers, and some but not all of the Makefiles still use a cc-disable-warning conditional that is no longer needed with supported compilers here.

Address all of the above by removing the special cases for clang and always turning the warning off unconditionally where it got in the way, using the syntax that is supported by both compilers.

Fixes: 2cd3271b7a31 (&quot;kbuild: avoid duplicate warning options&quot;)
Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Acked-by: Hamza Mahfooz &lt;hamza.mahfooz@amd.com&gt;
Acked-by: Jani Nikula &lt;jani.nikula@intel.com&gt;
Acked-by: Andrew Jeffery &lt;andrew@codeconstruct.com.au&gt;
Signed-off-by: Jani Nikula &lt;jani.nikula@intel.com&gt;
Reviewed-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Masahiro Yamada &lt;masahiroy@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Tue, 26 Mar 2024 14:47:16 +0000</pubDate>
        <dc:creator>Arnd Bergmann &lt;arnd@arndb.de&gt;</dc:creator>
    </item>
<item>
        <title>31746031 - bpf: Introduce bpf_arena.</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#31746031</link>
        <description>bpf: Introduce bpf_arena.

Introduce bpf_arena, which is a sparse shared memory region between the bpf program and user space.

Use cases:

1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous region, like memcached or any key/value storage. The bpf program implements an in-kernel accelerator. An XDP prog can search for a key in bpf_arena and return a value without going to user space.
2. The bpf program builds arbitrary data structures in bpf_arena (hash tables, rb-trees, sparse arrays), while user space consumes it.
3. bpf_arena is a &quot;heap&quot; of memory from the bpf program&apos;s point of view. The user space may mmap it, but the bpf program will not convert pointers to user base at run-time, to improve bpf program speed.

Initially, the kernel vm_area and user vma are not populated. User space can fault in pages within the range. While servicing a page fault, bpf_arena logic will insert a new page into the kernel and user vmas. The bpf program can allocate pages from that region via bpf_arena_alloc_pages(). This kernel function will insert pages into the kernel vm_area. The subsequent fault-in from user space will populate that page into the user vma. The BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in from user space. In such a case, if a page is not allocated by the bpf program and not present in the kernel vm_area, the user process will segfault. This is useful for use cases 2 and 3 above.

bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages either at a specific address within the arena or allocates a range with the maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages and removes the range from the kernel vm_area and from user process vmas.

bpf_arena can be used as a bpf program &quot;heap&quot; of up to 4GB. The speed of the bpf program is more important than ease of sharing with user space. This is use case 3. In such a case, the BPF_F_NO_USER_CONV flag is recommended. It will tell the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a 32-bit move wX = wY, which will improve bpf prog performance. Otherwise, bpf_arena_cast_user is translated by JIT to conditionally add the upper 32 bits of user vm_start (if the pointer is not NULL) to arena pointers before they are stored into memory. This way, user space sees them as valid 64-bit pointers.

Diff https://github.com/llvm/llvm-project/pull/84410 enables the LLVM BPF backend to generate the bpf_addr_space_cast() instruction to cast pointers between address_space(1), which is reserved for bpf_arena pointers, and the default address space zero. All arena pointers in a bpf program written in the C language are tagged as __attribute__((address_space(1))). Hence, clang provides helpful diagnostics when pointers cross address spaces. Libbpf and the kernel support only address_space == 1. All other address space identifiers are reserved.

rX = bpf_addr_space_cast(rY, /* dst_as */ 1, /* src_as */ 0) tells the verifier that rX-&gt;type = PTR_TO_ARENA. Any further operations on a PTR_TO_ARENA register have to be in the 32-bit domain. The verifier will mark load/store through PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as kern_vm_start + 32bit_addr memory accesses. The behavior is similar to copy_from_kernel_nofault() except that no address checks are necessary. The address is guaranteed to be in the 4GB range. If the page is not present, the destination register is zeroed on read, and the operation is ignored on write.

rX = bpf_addr_space_cast(rY, 0, 1) tells the verifier that rX-&gt;type = unknown scalar. If arena-&gt;map_flags has BPF_F_NO_USER_CONV set, then the verifier converts such cast instructions to mov32. Otherwise, JIT will emit native code equivalent to:

rX = (u32)rY;
if (rY)
  rX |= clear_lo32_bits(arena-&gt;user_vm_start); /* replace hi32 bits in rX */

After such conversion, the pointer becomes a valid user pointer within the bpf_arena range. The user process can access data structures created in bpf_arena without any additional computations. For example, a linked list built by a bpf program can be walked natively by user space.

Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Reviewed-by: Barret Rhoden &lt;brho@google.com&gt;
Link: https://lore.kernel.org/bpf/20240308010812.89848-2-alexei.starovoitov@gmail.com

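The pointer-conversion arithmetic described for bpf_arena_cast_user (keep the low 32 bits of the arena pointer, splice in the upper 32 bits of the arena's user_vm_start, preserve NULL) can be modeled directly. A hedged Python sketch; the user_vm_start value is a made-up example, not from this change:

```python
# Model of the JIT-emitted conversion described above:
#   rX = (u32)rY;
#   if (rY)
#       rX |= clear_lo32_bits(arena->user_vm_start);
# i.e. keep the low 32 bits of the arena pointer and splice in the
# upper 32 bits of the arena's user_vm_start, leaving NULL as NULL.

def cast_user(arena_ptr, user_vm_start):
    rx = arena_ptr & 0xFFFFFFFF           # 32-bit move: wX = wY
    if arena_ptr:                         # NULL stays NULL
        rx |= user_vm_start & 0xFFFFFFFF00000000
    return rx

user_vm_start = 0x00007F2A00000000       # hypothetical mmap base
p = cast_user(0x1234, user_vm_start)     # becomes a 64-bit user pointer
n = cast_user(0, user_vm_start)          # NULL is preserved
```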
            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Fri, 08 Mar 2024 01:07:59 +0000</pubDate>
        <dc:creator>Alexei Starovoitov &lt;ast@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>35f96de0 - bpf: Introduce BPF token object</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#35f96de0</link>
        <description>bpf: Introduce BPF token object

Add a new kind of BPF kernel object, BPF token. BPF token is meant to allow delegating privileged BPF functionality, like loading a BPF program or creating a BPF map, from a privileged process to a *trusted* unprivileged process, all while having a good amount of control over which privileged operations could be performed using the provided BPF token.

This is achieved through mounting a BPF FS instance with extra delegation mount options, which determine what operations are delegatable, and also constraining it to the owning user namespace (as mentioned in the previous patch).

BPF token itself is just a derivative from BPF FS and can be created through a new bpf() syscall command, BPF_TOKEN_CREATE, which accepts a BPF FS FD, which can be attained through the open() API by opening a BPF FS mount point. Currently, BPF token &quot;inherits&quot; delegated command, map types, prog type, and attach type bit sets from BPF FS as is. In the future, having a BPF token as a separate object with its own FD, we can allow to further restrict a BPF token&apos;s allowable set of things either at creation time or after the fact, allowing the process to guard itself further from unintentionally trying to load undesired kinds of BPF programs. But for now we keep things simple and just copy bit sets as is.

When a BPF token is created from a BPF FS mount, we take a reference to the BPF super block&apos;s owning user namespace, and then use that namespace for checking all the {CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN} capabilities that are normally only checked against the init userns (using capable()), but now we check them using ns_capable() instead (if a BPF token is provided). See bpf_token_capable() for details.

Such a setup means that a BPF token in itself is not sufficient to grant BPF functionality. A user namespaced process has to *also* have the necessary combination of capabilities inside that user namespace. So while previously CAP_BPF was useless when granted within a user namespace, now it gains a meaning and allows container managers and sysadmins to have flexible control over which processes can and need to use BPF functionality within the user namespace (i.e., a container in practice). And BPF FS delegation mount options and derived BPF tokens serve as a per-container &quot;flag&quot; to grant the overall ability to use bpf() (plus further restrict which parts of the bpf() syscall are treated as namespaced).

Note also, the BPF_TOKEN_CREATE command itself requires ns_capable(CAP_BPF) within the BPF FS owning user namespace, rounding out the ns_capable() story of BPF token. Also, creating a BPF token in the init user namespace is currently not supported, given a BPF token doesn&apos;t have any effect in the init user namespace anyway.

Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Acked-by: Christian Brauner &lt;brauner@kernel.org&gt;
Link: https://lore.kernel.org/bpf/20240124022127.2379740-4-andrii@kernel.org

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 24 Jan 2024 02:21:00 +0000</pubDate>
        <dc:creator>Andrii Nakryiko &lt;andrii@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>d17aff80 - Revert BPF token-related functionality</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#d17aff80</link>
        <description>Revert BPF token-related functionality

This patch includes the following reverts (one conflicting BPF FS patch and three token patch sets, represented by merge commits):

  - revert 0f5d5454c723 &quot;Merge branch &apos;bpf-fs-mount-options-parsing-follow-ups&apos;&quot;;
  - revert 750e785796bb &quot;bpf: Support uid and gid when mounting bpffs&quot;;
  - revert 733763285acf &quot;Merge branch &apos;bpf-token-support-in-libbpf-s-bpf-object&apos;&quot;;
  - revert c35919dcce28 &quot;Merge branch &apos;bpf-token-and-bpf-fs-based-delegation&apos;&quot;.

Link: https://lore.kernel.org/bpf/CAHk-=wg7JuFYwGy=GOMbRCtOL+jwSQsdUaBsRWkDVYbxipbM5A@mail.gmail.com
Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Tue, 19 Dec 2023 15:37:35 +0000</pubDate>
        <dc:creator>Andrii Nakryiko &lt;andrii@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>4527358b - bpf: introduce BPF token object</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#4527358b</link>
        <description>bpf: introduce BPF token object

Add a new kind of BPF kernel object, BPF token. BPF token is meant to allow delegating privileged BPF functionality, like loading a BPF program or creating a BPF map, from a privileged process to a *trusted* unprivileged process, all while having a good amount of control over which privileged operations could be performed using the provided BPF token.

This is achieved through mounting a BPF FS instance with extra delegation mount options, which determine what operations are delegatable, and also constraining it to the owning user namespace (as mentioned in the previous patch).

BPF token itself is just a derivative from BPF FS and can be created through a new bpf() syscall command, BPF_TOKEN_CREATE, which accepts a BPF FS FD, which can be attained through the open() API by opening a BPF FS mount point. Currently, BPF token &quot;inherits&quot; delegated command, map types, prog type, and attach type bit sets from BPF FS as is. In the future, having a BPF token as a separate object with its own FD, we can allow to further restrict a BPF token&apos;s allowable set of things either at creation time or after the fact, allowing the process to guard itself further from unintentionally trying to load undesired kinds of BPF programs. But for now we keep things simple and just copy bit sets as is.

When a BPF token is created from a BPF FS mount, we take a reference to the BPF super block&apos;s owning user namespace, and then use that namespace for checking all the {CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN} capabilities that are normally only checked against the init userns (using capable()), but now we check them using ns_capable() instead (if a BPF token is provided). See bpf_token_capable() for details.

Such a setup means that a BPF token in itself is not sufficient to grant BPF functionality. A user namespaced process has to *also* have the necessary combination of capabilities inside that user namespace. So while previously CAP_BPF was useless when granted within a user namespace, now it gains a meaning and allows container managers and sysadmins to have flexible control over which processes can and need to use BPF functionality within the user namespace (i.e., a container in practice). And BPF FS delegation mount options and derived BPF tokens serve as a per-container &quot;flag&quot; to grant the overall ability to use bpf() (plus further restrict which parts of the bpf() syscall are treated as namespaced).

Note also, the BPF_TOKEN_CREATE command itself requires ns_capable(CAP_BPF) within the BPF FS owning user namespace, rounding out the ns_capable() story of BPF token.

Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Link: https://lore.kernel.org/r/20231130185229.2688956-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Thu, 30 Nov 2023 18:52:15 +0000</pubDate>
        <dc:creator>Andrii Nakryiko &lt;andrii@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>e420bed0 - bpf: Add fd-based tcx multi-prog infra with link support</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#e420bed0</link>
        <description>bpf: Add fd-based tcx multi-prog infra with link support

This work refactors and adds a lightweight extension (&quot;tcx&quot;) to the tc BPF ingress and egress data path side for allowing BPF program management based on fds via the bpf() syscall through the newly added generic multi-prog API. The main goal behind this work, which we also presented at LPC [0] last year and in a recent update at LSF/MM/BPF this year [3], is to support long-awaited BPF link functionality for tc BPF programs, which allows for a model of safe ownership and program detachment.

Given the rise in tc BPF users in cloud native environments, this becomes necessary to avoid hard-to-debug incidents either through stale leftover programs or 3rd party applications accidentally stepping on each other&apos;s toes. As a recap, a BPF link represents the attachment of a BPF program to a BPF hook point. The BPF link holds a single reference to keep the BPF program alive. Moreover, hook points do not reference a BPF link, only the application&apos;s fd or pinning does. A BPF link holds meta-data specific to the attachment and implements operations for link creation, (atomic) BPF program update, detachment and introspection. The motivation for BPF links for tc BPF programs is multi-fold, for example:

  - From Meta: &quot;It&apos;s especially important for applications that are deployed fleet-wide and that don&apos;t &quot;control&quot; hosts they are deployed to. If such application crashes and no one notices and does anything about that, BPF program will keep running draining resources or even just, say, dropping packets. We at FB had outages due to such permanent BPF attachment semantics. With fd-based BPF link we are getting a framework, which allows safe, auto-detachable behavior by default, unless application explicitly opts in by pinning the BPF link.&quot; [1]

  - From the Cilium side, the tc BPF programs we attach to host-facing veth devices and phys devices build the core datapath for Kubernetes Pods, and they implement forwarding, load-balancing, policy, EDT-management, etc, within BPF. Currently there is no concept of &apos;safe&apos; ownership, e.g. we&apos;ve recently experienced hard-to-debug issues in a user&apos;s staging environment where another Kubernetes application using tc BPF attached to the same prio/handle of cls_bpf, accidentally wiping all Cilium-based BPF programs from underneath it. The goal is to establish a clear/safe ownership model via links which cannot accidentally be overridden. [0,2]

BPF links for tc can co-exist with non-link attachments, and the semantics are in line also with XDP links: BPF links cannot replace other BPF links, BPF links cannot replace non-BPF links, non-BPF links cannot replace BPF links, and lastly only non-BPF links can replace non-BPF links. In the case of Cilium, this would solve the mentioned issue of a safe ownership model, as 3rd party applications would not be able to accidentally wipe Cilium programs, even if they are not BPF link aware.

Earlier attempts [4] have tried to integrate BPF links into core tc machinery to solve cls_bpf, which has been intrusive to the generic tc kernel API with extensions only specific to cls_bpf and suboptimal/complex since cls_bpf could be wiped from the qdisc also. Locking a tc BPF program in place this way is getting into layering hacks given the two object models are vastly different. We instead implemented the tcx (tc &apos;express&apos;) layer, which is an fd-based tc BPF attach API, so that the BPF link implementation blends in naturally similar to other link types which are fd-based and without the need for changing core tc internal APIs. BPF programs for tc can then be successively migrated from classic cls_bpf to the new tc BPF link without needing to change the program&apos;s source code; just the BPF loader mechanics for attaching is sufficient.

For the current tc framework, there is no change in behavior with this change and neither does this change touch on tc core kernel APIs. The gist of this patch is that the ingress and egress hooks have a lightweight, qdisc-less extension for BPF to attach its tc BPF programs, in other words, a minimal entry point for tc BPF. The name tcx has been suggested from discussion of earlier revisions of this work as a good fit, and to more easily differ between the classic cls_bpf attachment and the fd-based one.

For the ingress and egress tcx points, the device holds a cache-friendly array with program pointers which is separated from control plane (slow-path) data. Earlier versions of this work used priority to determine ordering and expression of dependencies similar as with classic tc, but it was challenged that for something more future-proof a better user experience is required. Hence this resulted in the design and development of the generic attach/detach/query API for multi-progs. See the prior patch with its discussion on the API design. tcx is the first user and later we plan to integrate also others, for example, one candidate is multi-prog support for XDP, which would benefit and have the same &apos;look and feel&apos; from an API perspective.

The goal with tcx is to have maximum compatibility to existing tc BPF programs, so they don&apos;t need to be rewritten specifically. Compatibility to call into classic tcf_classify() is also provided in order to allow successive migration, or for both to cleanly co-exist where needed, given it&apos;s all one logical tc layer and the tcx plus classic tc cls/act build one logical overall processing pipeline.

tcx supports the simplified return codes TCX_NEXT, which is non-terminating (go to next program), and terminating ones with TCX_PASS, TCX_DROP, TCX_REDIRECT. The fd-based API is behind a static key, so that when unused the code is also not entered. The struct tcx_entry&apos;s program array is currently static, but could be made dynamic if necessary at a point in the future. The a/b pair swap design has been chosen so that for detachment there are no allocations which otherwise could fail.

The work has been tested with the tc-testing selftest suite, which all passes, as well as the tc BPF tests from the BPF CI, and also with Cilium&apos;s L4LB. Thanks also to Nikolay Aleksandrov and Martin Lau for in-depth early reviews of this work.

  [0] https://lpc.events/event/16/contributions/1353/
  [1] https://lore.kernel.org/bpf/CAEf4BzbokCJN33Nw_kg82sO=xppXnKWEncGTWCTB9vGCmLB6pw@mail.gmail.com
  [2] https://colocatedeventseu2023.sched.com/event/1Jo6O/tales-from-an-ebpf-programs-murder-mystery-hemanth-malla-guillaume-fournier-datadog
  [3] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf
  [4] https://lore.kernel.org/bpf/20210604063116.234316-1-memxor@gmail.com

Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Acked-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
Link: https://lore.kernel.org/r/20230719140858.13224-3-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 19 Jul 2023 14:08:52 +0000</pubDate>
        <dc:creator>Daniel Borkmann &lt;daniel@iogearbox.net&gt;</dc:creator>
    </item>
<item>
        <title>053c8e1f - bpf: Add generic attach/detach/query API for multi-progs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#053c8e1f</link>
        <description>bpf: Add generic attach/detach/query API for multi-progs

This adds a generic layer called bpf_mprog which can be reused by different attachment layers to enable multi-program attachment and dependency resolution. In-kernel users of the bpf_mprog don&apos;t need to care about the dependency resolution internals, they can just consume it with few API calls.

The initial idea of having a generic API sparked out of discussion [0] from an earlier revision of this work where tc&apos;s priority was reused and exposed via BPF uapi as a way to coordinate dependencies among tc BPF programs, similar as-is for classic tc BPF. The feedback was that priority provides a bad user experience and is hard to use [1], e.g.:

  I cannot help but feel that priority logic copy-paste from old tc, netfilter and friends is done because &quot;that&apos;s how things were done in the past&quot;. [...] Priority gets exposed everywhere in uapi all the way to bpftool when it&apos;s right there for users to understand. And that&apos;s the main problem with it. The user don&apos;t want to and don&apos;t need to be aware of it, but uapi forces them to pick the priority. [...] Your cover letter [0] example proves that in real life different service pick the same priority. They simply don&apos;t know any better. Priority is an unnecessary magic that apps _have_ to pick, so they just copy-paste and everyone ends up using the same.

The course of the discussion showed more and more the need for a generic, reusable API where the &quot;same look and feel&quot; can be applied for various other program types beyond just tc BPF, for example XDP today does not have multi-program support in kernel, but also there was interest around this API for improving management of cgroup program types. Such common multi-program management concept is useful for BPF management daemons or user space BPF applications coordinating internally about their attachments.

Both from Cilium and Meta side [2], we&apos;ve collected the following requirements for a generic attach/detach/query API for multi-progs which has been implemented as part of this work:

  - Support prog-based attach/detach and link API
  - Dependency directives (can also be combined):
    - BPF_F_{BEFORE,AFTER} with relative_{fd,id} which can be {prog,link,none}
      - BPF_F_ID flag as {fd,id} toggle; the rationale for id is so that user space application does not need CAP_SYS_ADMIN to retrieve foreign fds via bpf_*_get_fd_by_id()
      - BPF_F_LINK flag as {prog,link} toggle
      - If relative_{fd,id} is none, then BPF_F_BEFORE will just prepend, and BPF_F_AFTER will just append for attaching
      - Enforced only at attach time
    - BPF_F_REPLACE with replace_bpf_fd which can be prog, links have their own infra for replacing their internal prog
    - If no flags are set, then it&apos;s default append behavior for attaching
  - Internal revision counter and optionally being able to pass expected_revision
  - User space application can query current state with revision, and pass it along for attachment to assert current state before doing updates
  - Query also gets extension for link_ids array and link_attach_flags:
    - prog_ids are always filled with program IDs
    - link_ids are filled with link IDs when link was used, otherwise 0
    - {prog,link}_attach_flags for holding {prog,link}-specific flags
  - Must be easy to integrate/reuse for in-kernel users

The uapi-side changes needed for supporting bpf_mprog are rather minimal, consisting of the additions of the attachment flags, revision counter, and expanding existing union with relative_{fd,id} member.

The bpf_mprog framework consists of a bpf_mprog_entry object which holds an array of bpf_mprog_fp (fast-path structure). The bpf_mprog_cp (control-path structure) is part of bpf_mprog_bundle. Both have been separated, so that fast-path gets efficient packing of bpf_prog pointers for maximum cache efficiency. Also, array has been chosen instead of linked list or other structures to remove unnecessary indirections for a fast point-to-entry in tc for BPF.

The bpf_mprog_entry comes as a pair via bpf_mprog_bundle so that in case of updates the peer bpf_mprog_entry is populated and then just swapped which avoids additional allocations that could otherwise fail, for example, in detach case. bpf_mprog_{fp,cp} arrays are currently static, but they could be converted to dynamic allocation if necessary at a point in future. Locking is deferred to the in-kernel user of bpf_mprog, for example, in case of tcx which uses this API in the next patch, it piggybacks on rtnl.

An extensive test suite for checking all aspects of this API for prog-based attach/detach and link API comes as BPF selftests in this series.

Thanks also to Andrii Nakryiko for early API discussions wrt Meta&apos;s BPF prog management.

  [0] https://lore.kernel.org/bpf/20221004231143.19190-1-daniel@iogearbox.net
  [1] https://lore.kernel.org/bpf/CAADnVQ+gEY3FjCR=+DmjDR4gp5bOYZUFJQXj4agKFHT9CQPZBw@mail.gmail.com
  [2] http://vger.kernel.org/bpfconf2023_material/tcx_meta_netdev_borkmann.pdf

Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Link: https://lore.kernel.org/r/20230719140858.13224-2-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 19 Jul 2023 14:08:51 +0000</pubDate>
        <dc:creator>Daniel Borkmann &lt;daniel@iogearbox.net&gt;</dc:creator>
    </item>
<item>
        <title>4294a0a7 - bpf: Split off basic BPF verifier log into separate file</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#4294a0a7</link>
        <description>bpf: Split off basic BPF verifier log into separate file

kernel/bpf/verifier.c file is large and growing larger all the time. So it&apos;s good to start splitting off more or less self-contained parts into separate files to keep source code size (somewhat) under control.

This patch is one step in this direction, moving some of BPF verifier log routines into a separate kernel/bpf/log.c. Right now it&apos;s most low-level and isolated routines to append data to log, reset log to previous position, etc. Eventually we could probably move verifier state printing logic here as well, but this patch doesn&apos;t attempt to do that yet.

Subsequent patches will add more logic to verifier log management, so having basics in a separate file will make sure verifier.c doesn&apos;t grow more with new changes.

Signed-off-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Acked-by: Lorenz Bauer &lt;lmb@isovalent.com&gt;
Link: https://lore.kernel.org/bpf/20230406234205.323208-2-andrii@kernel.org

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Thu, 06 Apr 2023 23:41:47 +0000</pubDate>
        <dc:creator>Andrii Nakryiko &lt;andrii@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>516f4d33 - bpf: Enable cpumasks to be queried and used as kptrs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#516f4d33</link>
        <description>bpf: Enable cpumasks to be queried and used as kptrs

Certain programs may wish to be able to query cpumasks. For example, if a program that is tracing percpu operations wishes to track which tasks end up running on which CPUs, it could be useful to associate that with the tasks&apos; cpumasks. Similarly, programs tracking NUMA allocations, CPU scheduling domains, etc, could potentially benefit from being able to see which CPUs a task could be migrated to.

This patch enables these types of use cases by introducing a series of bpf_cpumask_* kfuncs. Amongst these kfuncs, there are two separate &quot;classes&quot; of operations:

1. kfuncs which allow the caller to allocate and mutate their own cpumask kptrs in the form of a struct bpf_cpumask * object. Such kfuncs include e.g. bpf_cpumask_create() to allocate the cpumask, and bpf_cpumask_or() to mutate it. &quot;Regular&quot; cpumasks such as p-&gt;cpus_ptr may not be passed to these kfuncs, and the verifier will ensure this is the case by comparing BTF IDs.

2. Read-only operations which operate on const struct cpumask * arguments. For example, bpf_cpumask_test_cpu(), which tests whether a CPU is set in the cpumask. Any trusted struct cpumask * or struct bpf_cpumask * may be passed to these kfuncs. The verifier allows struct bpf_cpumask * even though the kfunc is defined with struct cpumask * because the first element of a struct bpf_cpumask is a cpumask_t, so it is safe to cast.

A follow-on patch will add selftests which validate these kfuncs, and another will document them.

Signed-off-by: David Vernet &lt;void@manifault.com&gt;
Link: https://lore.kernel.org/r/20230125143816.721952-3-void@manifault.com
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 25 Jan 2023 14:38:11 +0000</pubDate>
        <dc:creator>David Vernet &lt;void@manifault.com&gt;</dc:creator>
    </item>
<item>
        <title>c4bcfb38 - bpf: Implement cgroup storage available to non-cgroup-attached bpf progs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#c4bcfb38</link>
        <description>bpf: Implement cgroup storage available to non-cgroup-attached bpf progs

Similar to sk/inode/task storage, implement similar cgroup local storage.

There already exists a local storage implementation for cgroup-attached bpf programs. See map type BPF_MAP_TYPE_CGROUP_STORAGE and helper bpf_get_local_storage(). But there are use cases such that non-cgroup attached bpf progs wants to access cgroup local storage data. For example, tc egress prog has access to sk and cgroup. It is possible to use sk local storage to emulate cgroup local storage by storing data in socket. But this is a waste as it could be lots of sockets belonging to a particular cgroup. Alternatively, a separate map can be created with cgroup id as the key. But this will introduce additional overhead to manipulate the new map. A cgroup local storage, similar to existing sk/inode/task storage, should help for this use case.

The life-cycle of storage is managed with the life-cycle of the cgroup struct, i.e. the storage is destroyed along with the owning cgroup with a call to bpf_cgrp_storage_free() when cgroup itself is deleted.

The userspace map operations can be done by using a cgroup fd as a key passed to the lookup, update and delete operations.

Typically, the following code is used to get the current cgroup:

    struct task_struct *task = bpf_get_current_task_btf();
    ... task-&gt;cgroups-&gt;dfl_cgrp ...

and in structure task_struct definition:

    struct task_struct {
        ....
        struct css_set __rcu            *cgroups;
        ....
    }

With sleepable program, accessing task-&gt;cgroups is not protected by rcu_read_lock. So the current implementation only supports non-sleepable program and supporting sleepable program will be the next step together with adding rcu_read_lock protection for rcu tagged structures.

Since map name BPF_MAP_TYPE_CGROUP_STORAGE has been used for old cgroup local storage support, the new map name BPF_MAP_TYPE_CGRP_STORAGE is used for cgroup storage available to non-cgroup-attached bpf programs. The old cgroup storage supports bpf_get_local_storage() helper to get the cgroup data. The new cgroup storage helper bpf_cgrp_storage_get() can provide similar functionality. While old cgroup storage pre-allocates storage memory, the new mechanism can also pre-allocate with a user space bpf_map_update_elem() call to avoid potential run-time memory allocation failure. Therefore, the new cgroup storage can provide all functionality w.r.t. the old one. So in uapi bpf.h, the old BPF_MAP_TYPE_CGROUP_STORAGE is alias to BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED to indicate the old cgroup storage can be deprecated since the new one can provide the same functionality.

Acked-by: David Vernet &lt;void@manifault.com&gt;
Signed-off-by: Yonghong Song &lt;yhs@fb.com&gt;
Link: https://lore.kernel.org/r/20221026042850.673791-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 26 Oct 2022 04:28:50 +0000</pubDate>
        <dc:creator>Yonghong Song &lt;yhs@fb.com&gt;</dc:creator>
    </item>
<item>
        <title>7c8199e2 - bpf: Introduce any context BPF specific memory allocator.</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#7c8199e2</link>
        <description>bpf: Introduce any context BPF specific memory allocator.

Tracing BPF programs can attach to kprobe and fentry. Hence they run in unknown context where calling plain kmalloc() might not be safe. Front-end kmalloc() with minimal per-cpu cache of free elements. Refill this cache asynchronously from irq_work.

BPF programs always run with migration disabled. It&apos;s safe to allocate from cache of the current cpu with irqs disabled. Free-ing is always done into bucket of the current cpu as well. irq_work trims extra free elements from buckets with kfree and refills them with kmalloc, so global kmalloc logic takes care of freeing objects allocated by one cpu and freed on another.

struct bpf_mem_alloc supports two modes:
- When size != 0 create kmem_cache and bpf_mem_cache for each cpu. This is typical bpf hash map use case when all elements have equal size.
- When size == 0 allocate 11 bpf_mem_cache-s for each cpu, then rely on kmalloc/kfree. Max allocation size is 4096 in this case. This is bpf_dynptr and bpf_kptr use case.

bpf_mem_alloc/bpf_mem_free are bpf specific &apos;wrappers&apos; of kmalloc/kfree. bpf_mem_cache_alloc/bpf_mem_cache_free are &apos;wrappers&apos; of kmem_cache_alloc/kmem_cache_free.

The allocators are NMI-safe from bpf programs only. They are not NMI-safe in general.

Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Signed-off-by: Daniel Borkmann &lt;daniel@iogearbox.net&gt;
Acked-by: Kumar Kartikeya Dwivedi &lt;memxor@gmail.com&gt;
Acked-by: Andrii Nakryiko &lt;andrii@kernel.org&gt;
Link: https://lore.kernel.org/bpf/20220902211058.60789-2-alexei.starovoitov@gmail.com

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Fri, 02 Sep 2022 21:10:43 +0000</pubDate>
        <dc:creator>Alexei Starovoitov &lt;ast@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>d4ccaf58 - bpf: Introduce cgroup iter</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/bpf/Makefile#d4ccaf58</link>
        <description>bpf: Introduce cgroup iter

Cgroup_iter is a type of bpf_iter. It walks over cgroups in four modes:
 - walking a cgroup&apos;s descendants in pre-order.
 - walking a cgroup&apos;s descendants in post-order.
 - walking a cgroup&apos;s ancestors.
 - process only the given cgroup.

When attaching cgroup_iter, one can set a cgroup to the iter_link created from attaching. This cgroup is passed as a file descriptor or cgroup id and serves as the starting point of the walk. If no cgroup is specified, the starting point will be the root cgroup v2.

For walking descendants, one can specify the order: either pre-order or post-order. For walking ancestors, the walk starts at the specified cgroup and ends at the root.

One can also terminate the walk early by returning 1 from the iter program.

Note that because walking cgroup hierarchy holds cgroup_mutex, the iter program is called with cgroup_mutex held.

Currently only one session is supported, which means, depending on the volume of data bpf program intends to send to user space, the number of cgroups that can be walked is limited. For example, given the current buffer size is 8 * PAGE_SIZE, if the program sends 64B data for each cgroup, assuming PAGE_SIZE is 4kb, the total number of cgroups that can be walked is 512. This is a limitation of cgroup_iter. If the output data is larger than the kernel buffer size, after all data in the kernel buffer is consumed by user space, the subsequent read() syscall will signal EOPNOTSUPP. In order to work around, the user may have to update their program to reduce the volume of data sent to output. For example, skip some uninteresting cgroups. In future, we may extend bpf_iter flags to allow customizing buffer size.

Acked-by: Yonghong Song &lt;yhs@fb.com&gt;
Acked-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Hao Luo &lt;haoluo@google.com&gt;
Link: https://lore.kernel.org/r/20220824233117.1312810-2-haoluo@google.com
Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/bpf/Makefile</description>
        <pubDate>Wed, 24 Aug 2022 23:31:13 +0000</pubDate>
        <dc:creator>Hao Luo &lt;haoluo@google.com&gt;</dc:creator>
    </item>
</channel>
</rss>
