History log of /llvm-project-15.0.7/llvm/lib/Target/BPF/BPFMISimplifyPatchable.cpp (Results 1 – 16 of 16)
Revision Date Author Comments
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1
# 497a5f04 04-Apr-2022 Yonghong Song <[email protected]>

[BPF] Fix a bug in BPFMISimplifyPatchable pass

The LLVM BPF pass SimplifyPatchable is used to do the necessary
code conversion for CO-RE operations. When studying the bpf
selftest 'exhandler', I found a corner case that was not handled properly.
The following is the C code, modified from the original 'exhandler'
code.
    int g;
    int test(struct t1 *p) {
      struct t2 *q = p->q;
      if (q)
        return 0;
      struct t3 *f = q->f;
      if (!f)
        g = 5;
      return 0;
    }

For the code:
    struct t3 *f = q->f;
    if (!f) ...
the IR before the BPFMISimplifyPatchable pass looks like:
    %5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
    %6:gpr = LDD killed %5:gpr, 0
    %7:gpr = LDD killed %6:gpr, 0
    JNE_ri killed %7:gpr, 0, %bb.3
    JMP %bb.2
Note that the compiler knows q = 0 based on dataflow and value analysis.
The correct generated code after the pass should be:
    %5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
    %7:gpr = LDD killed %5:gpr, 0
    JNE_ri killed %7:gpr, 0, %bb.3
    JMP %bb.2

But the current implementation did a further optimization on the
above code and generated:
    %5:gpr = LD_imm64 @"llvm.t2:0:8$0:1"
    JNE_ri killed %5:gpr, 0, %bb.3
    JMP %bb.2
which is incorrect.

This patch added a cache to remember those load insns that are not
associated with a CO-RE offset value; these load insns are skipped
during the transformation.

Differential Revision: https://reviews.llvm.org/D123883



# 989f1c72 15-Mar-2022 serge-sans-paille <[email protected]>

Cleanup codegen includes

This is a (fixed) recommit of https://reviews.llvm.org/D121169

after: 1061034926
before: 1063332844

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D121681



Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3
# a278250b 10-Mar-2022 Nico Weber <[email protected]>

Revert "Cleanup codegen includes"

This reverts commit 7f230feeeac8a67b335f52bd2e900a05c6098f20.
Breaks CodeGenCUDA/link-device-bitcode.cu in check-clang,
and many LLVM tests, see comments on https://reviews.llvm.org/D121169



# 7f230fee 07-Mar-2022 serge-sans-paille <[email protected]>

Cleanup codegen includes

after: 1061034926
before: 1063332844

Differential Revision: https://reviews.llvm.org/D121169


Revision tags: llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2
# 7e163afd 02-Jan-2022 Kazu Hirata <[email protected]>

Remove redundant void arguments (NFC)

Identified by modernize-redundant-void-arg.


Revision tags: llvmorg-13.0.1-rc1
# 2c4ba3e9 05-Nov-2021 Kazu Hirata <[email protected]>

[Target] Use make_early_inc_range (NFC)


Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1, llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4, llvmorg-10.0.0-rc3, llvmorg-10.0.0-rc2
# 6b01b465 12-Feb-2020 Yonghong Song <[email protected]>

[BPF] preserve debuginfo types for builtin __builtin__btf_type_id()

The builtin function
    u32 btf_type_id = __builtin_btf_type_id(param, 0)
can help preserve type info for the following use case:
    extern void foo(..., void *data, int size);
    int test(...) {
      struct t { int a; int b; int c; } d;
      d.a = ...; d.b = ...; d.c = ...;
      foo(..., &d, sizeof(d));
    }

The function "foo" in the above only see raw data and does not
know what type of the data is. In certain cases, e.g., logging,
the additional type information will help pretty print.

This patch handles the builtin in the BPF backend. It includes
an IR pass to translate the IR intrinsic to a load of
a global variable which carries the metadata, and an MI
pass to remove the intermediate load of the global variable.
Finally, in the AsmPrinter pass, proper instructions are generated.

In the above example, the second argument for __builtin_btf_type_id()
is 0, which means a relocation for local adjustment,
i.e., w.r.t. the bpf program's BTF change, will be generated.
The value 1 for the second argument means a relocation for
remote adjustment, e.g., against vmlinux.
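
As a rough self-contained C sketch of this use case (the logger foo()
and its signature are hypothetical; only __builtin_btf_type_id() and
the flag semantics above come from the commit message):

    extern void foo(void *data, int size, unsigned type_id); /* hypothetical */

    int test(void) {
      struct t { int a; int b; int c; } d;
      d.a = 1; d.b = 2; d.c = 3;
      /* flag 0: type id relocated against the program's own BTF */
      unsigned type_id = __builtin_btf_type_id(d, 0);
      foo(&d, sizeof(d), type_id);
      return 0;
    }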

Differential Revision: https://reviews.llvm.org/D74572



# 3cb7e7bf 19-Apr-2020 Yonghong Song <[email protected]>

BPF: fix a CORE optimization bug

For the test case in this patch, like below:
    struct t { int a; } __attribute__((preserve_access_index));
    int foo(void *);
    int test(struct t *arg) {
      long param[1];
      param[0] = (long)&arg->a;
      return foo(param);
    }

The IR right before the BPF SimplifyPatchable phase:
    %1:gpr = LD_imm64 @"llvm.t:0:0$0:0"
    %2:gpr = LDD killed %1:gpr, 0
    %3:gpr = ADD_rr %0:gpr(tied-def 0), killed %2:gpr
    STD killed %3:gpr, %stack.0.param, 0
After the SimplifyPatchable phase, the incorrect IR is generated:
    %1:gpr = LD_imm64 @"llvm.t:0:0$0:0"
    %3:gpr = ADD_rr %0:gpr(tied-def 0), killed %1:gpr
    CORE_MEM killed %3:gpr, 306, %0:gpr, @"llvm.t:0:0$0:0"

Note that the CORE_MEM pseudo op is introduced to encode
memory operations related to CO-RE. In the above, we intend
to check whether we have a store like
    *(%3:gpr + 0) = ...
and if this is the case, we could replace it with
    *(%0:gpr + @"llvm.t:0:0$0:0") = ...

Unfortunately, in the above, the IR for the store is
    *(%stack.0.param + 0) = %3:gpr
and the transformation should not happen.

Note that we won't have a problem if the actual CO-RE
dereference (arg->a) happens.

This patch fixed the problem by skipping the CO-RE optimization if
the use of the ADD_rr result is not the base address of the store
operation.

Differential Revision: https://reviews.llvm.org/D78466



Revision tags: llvmorg-10.0.0-rc1
# 795bbb36 30-Jan-2020 Yonghong Song <[email protected]>

[BPF] fix a bug in BPFMISimplifyPatchable pass with -O0

The recommended optimization level for BPF programs
is O2 since (1) BPF runs inside the kernel and the
linux kernel won't work at the -O0 level, and (2) the verifier
is not able to handle O0 code properly, e.g., a potentially
large stack size and a lot of spills.

But we should keep -O0 at least compiling.
This patch fixed a bug in the BPFMISimplifyPatchable phase
where, with -O0, a segmentation fault would happen for a
simple program like:
    int test(int a, int b) { return a + b; }

A test case is added to capture such a case.

Differential Revision: https://reviews.llvm.org/D73681



Revision tags: llvmorg-11-init
# ffd57408 19-Dec-2019 Yonghong Song <[email protected]>

[BPF] Enable relocation location for load/store/shifts

Previously, a btf field relocation was always at an assignment like
    r1 = 4
which is converted from an ld_imm64 instruction.

This patch did an optimization such that the relocation
instruction might be a load, store, or shift. Specifically, the
following insns may also have a relocation, in addition to BPF_MOV:
    LDB, LDH, LDW, LDD, STB, STH, STW, STD,
    LDB32, LDH32, LDW32, STB32, STH32, STW32,
    SLL, SRL, SRA

To accomplish this, a few BPF target-specific,
codegen-only instructions are invented. They
are generated at the backend BPF SimplifyPatchable phase,
which is at an early llc phase when the SSA form is available.
The new codegen-only instructions will be converted to
real proper instructions at the codegen and BTF emission stage.

Note that, as revealed by a few tests, this optimization might
actually generate more relocations:
Scenario 1:
    if (...) {
      ... __builtin_preserve_field_info(arg->b2, 0) ...
    } else {
      ... __builtin_preserve_field_info(arg->b2, 0) ...
    }
The compiler could do CSE to only have one relocation. But if both
of the above are translated into codegen internal instructions,
the compiler will not be able to do that.
Scenario 2:
    offset = ... __builtin_preserve_field_info(arg->b2, 0) ...
    ...
    ... offset ...
    ... offset ...
    ... offset ...
For whatever reason, the compiler might temporarily do copy
propagation of the right-hand side of the "offset" assignment like
    ... __builtin_preserve_field_info(arg->b2, 0) ...
    ... __builtin_preserve_field_info(arg->b2, 0) ...
and CSE would be able to deduplicate them later.
But if these intrinsics are converted to BPF pseudo instructions,
they will not be able to get deduplicated.
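
For concreteness, a minimal compilable sketch of the pattern in both
scenarios (the struct layout and the kind value 0, byte offset, are
assumptions for illustration):

    struct s {
      int b1;
      int b2;
    } __attribute__((preserve_access_index));

    int use_offset(struct s *arg) {
      /* kind 0 asks for the field's byte offset; each such call site
         may carry its own CO-RE relocation, as discussed above */
      unsigned offset = __builtin_preserve_field_info(arg->b2, 0);
      return offset + offset; /* repeated uses of the relocated value */
    }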

I do not expect a big instruction count difference.
It may actually reduce the instruction count since now the relocation
is in a deeper insn dependency chain.
For example, for the test offset-reloc-fieldinfo-2.ll, this patch
generates 7 instead of 6 relocations for the non-alu32 mode, but it
actually reduced the instruction count from 29 to 26.

Differential Revision: https://reviews.llvm.org/D71790



Revision tags: llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1
# a27c998c 25-Oct-2019 Yonghong Song <[email protected]>

[BPF] fix a CO-RE issue with -mattr=+alu32

Ilya Leoshkevich (<[email protected]>) reported an issue that,
with -mattr=+alu32, CO-RE has a segfault in the BPF MISimplifyPatchable
pass.

The pattern to be transformed by the MISimplifyPatchable
pass looks like below:
    r5 = ld_imm64 @"b:0:0$0:0"
    r2 = ldw r5, 0
    ... r2 ... // use r2
The pass will remove the intermediate 'ldw' instruction
and replace all uses of r2 with r5, like below:
    r5 = ld_imm64 @"b:0:0$0:0"
    ... r5 ... // use r5
Later, the ld_imm64 insn will be replaced with
    r5 = <patched immediate>
for field relocation purposes.

With -mattr=+alu32, the input code may become
    r5 = ld_imm64 @"b:0:0$0:0"
    w2 = ldw32 r5, 0
    ... w2 ... // use w2
Replacing "w2" with "r5" is incorrect and will
trigger compiler internal errors.

To fix the problem, if the register class of the ldw* dest
register is sub_32, we just replace the original ldw*
instruction with a copy:
    w2 = w5
Directly replacing all uses of w2 with an in-place
constructed w5 for the use operand does not seem to work in all cases.

The latest kernel will have -mattr=+alu32 on by default,
so this flag is added to all CO-RE tests.
Tested with the latest kernel bpf-next branch as well with this patch.

Differential Revision: https://reviews.llvm.org/D69438



# 904cd3e0 19-Oct-2019 Reid Kleckner <[email protected]>

Prune a LegacyDivergenceAnalysis and MachineLoopInfo include each

Now X86ISelLowering doesn't depend on many IR analyses.

llvm-svn: 375320


# dd37a26f 10-Oct-2019 Kadir Cetinkaya <[email protected]>

Fix assertions disabled builds after rL374367

llvm-svn: 374372


# d46a6a9e 10-Oct-2019 Yonghong Song <[email protected]>

[BPF] Remove relocation for patchable externs

Previously, patchable extern relocations were introduced to patch
external variables used for multi-versioning in the
compile once, run everywhere use case. The load instruction
is converted into a move with a patchable immediate
which can be changed by the bpf loader on the host.

The kernel verifier has evolved and is able to load
and propagate constant values, so the compiler relocation
becomes unnecessary. This patch removed the code related to this.

Differential Revision: https://reviews.llvm.org/D68760

llvm-svn: 374367



Revision tags: llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5, llvmorg-9.0.0-rc4, llvmorg-9.0.0-rc3
# 0c476111 15-Aug-2019 Daniel Sanders <[email protected]>

Apply llvm-prefer-register-over-unsigned from clang-tidy to LLVM

Summary:
This clang-tidy check is looking for unsigned integer variables whose initializer
starts with an implicit cast from llvm::Register and changes the type of the
variable to llvm::Register (dropping the llvm:: where possible).

Partial reverts in:
X86FrameLowering.cpp - Some functions return unsigned and arguably should be MCRegister
X86FixupLEAs.cpp - Some functions return unsigned and arguably should be MCRegister
X86FrameLowering.cpp - Some functions return unsigned and arguably should be MCRegister
HexagonBitSimplify.cpp - Function takes BitTracker::RegisterRef which appears to be unsigned&
MachineVerifier.cpp - Ambiguous operator==() given MCRegister and const Register
PPCFastISel.cpp - No Register::operator-=()
PeepholeOptimizer.cpp - TargetInstrInfo::optimizeLoadInstr() takes an unsigned&
MachineTraceMetrics.cpp - MachineTraceMetrics lacks a suitable constructor

Manual fixups in:
ARMFastISel.cpp - ARMEmitLoad() now takes a Register& instead of unsigned&
HexagonSplitDouble.cpp - Ternary operator was ambiguous between unsigned/Register
HexagonConstExtenders.cpp - Has a local class named Register, used llvm::Register instead of Register.
PPCFastISel.cpp - PPCEmitLoad() now takes a Register& instead of unsigned&

Depends on D65919

Reviewers: arsenm, bogner, craig.topper, RKSimon

Reviewed By: arsenm

Subscribers: RKSimon, craig.topper, lenary, aemerson, wuzish, jholewinski, MatzeB, qcolombet, dschuff, jyknight, dylanmckay, sdardis, nemanjai, jvesely, wdng, nhaehnle, sbc100, jgravelle-google, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, javed.absar, asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, atanasyan, rogfer01, MartinMosbeck, brucehoult, the_o, tpr, PkmX, jocewei, jsji, Petar.Avramovic, asbirlea, Jim, s.egerton, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65962

llvm-svn: 369041



Revision tags: llvmorg-9.0.0-rc2, llvmorg-9.0.0-rc1, llvmorg-10-init, llvmorg-8.0.1, llvmorg-8.0.1-rc4
# d3d88d08 09-Jul-2019 Yonghong Song <[email protected]>

[BPF] Support for compile once and run everywhere

Introduction
============

This patch added initial support for bpf program compile once
and run everywhere (CO-RE).

The main motivation is for bpf programs which depend on
kernel headers that may vary between different kernel versions.
The initial discussion can be found at https://lwn.net/Articles/773198/.

Currently, a bpf program accesses kernel internal data structures
through the bpf_probe_read() helper. The idea is to capture the
kernel data structure accesses made through bpf_probe_read()
and relocate them on different kernel versions.

On each host, right before the bpf program load, the bpf loader
will look at the types of the native linux kernel through vmlinux BTF,
calculate the proper access offsets, and patch the instructions.

To accommodate this, three intrinsic functions,
preserve_{array,union,struct}_access_index,
are introduced; in clang they preserve the base pointer,
the struct/union/array access_index, and the struct/union
debuginfo type information. Later, the bpf IR pass can reconstruct
the whole gep access chain without looking at the gep itself.

This patch did the following:
. An IR pass is added to convert preserve_*_access_index to
  a global variable whose name encodes the getelementptr
  access pattern. The global variable has metadata
  attached to describe the corresponding struct/union
  debuginfo type.
. A SimplifyPatchable MachineInstruction pass is added
  to remove unnecessary loads.
. The BTF output pass is enhanced to generate relocation
  records located in the .BTF.ext section.

Typical CO-RE also needs support for global variables which can
be assigned different values on different hosts. For example, the
kernel version can be used to guard different versions of code.
This patch added the support for patchable externs as well.

Example
=======

The following is an example.

    struct pt_regs {
      long arg1;
      long arg2;
    };
    struct sk_buff {
      int i;
      struct net_device *dev;
    };

    #define _(x) (__builtin_preserve_access_index(x))
    static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr) =
        (void *) 4;
    extern __attribute__((section(".BPF.patchable_externs"))) unsigned __kernel_version;
    int bpf_prog(struct pt_regs *ctx) {
      struct net_device *dev = 0;

      // ctx->arg* does not need bpf_probe_read
      if (__kernel_version >= 41608)
        bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg1)->dev));
      else
        bpf_probe_read(&dev, sizeof(dev), _(&((struct sk_buff *)ctx->arg2)->dev));
      return dev != 0;
    }

In the above, we want to translate the third argument of
bpf_probe_read() into relocations.

    -bash-4.4$ clang -target bpf -O2 -g -S trace.c

The compiler will generate two new subsections in .BTF.ext,
OffsetReloc and ExternReloc.
OffsetReloc records the structure member offset operations,
and ExternReloc records the external globals, where
only u8, u16, u32 and u64 are supported.

    BPFOffsetReloc Size
    struct SecOffsetReloc for ELF section #1
    A number of struct BPFOffsetReloc for ELF section #1
    struct SecOffsetReloc for ELF section #2
    A number of struct BPFOffsetReloc for ELF section #2
    ...
    BPFExternReloc Size
    struct SecExternReloc for ELF section #1
    A number of struct BPFExternReloc for ELF section #1
    struct SecExternReloc for ELF section #2
    A number of struct BPFExternReloc for ELF section #2

    struct BPFOffsetReloc {
      uint32_t InsnOffset;    ///< Byte offset in this section
      uint32_t TypeID;        ///< TypeID for the relocation
      uint32_t OffsetNameOff; ///< The string to traverse types
    };

    struct BPFExternReloc {
      uint32_t InsnOffset;    ///< Byte offset in this section
      uint32_t ExternNameOff; ///< The string for external variable
    };
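
As an illustration of how a loader might consume the offset records,
here is a hedged C sketch (the btf_member_offset() helper and the
calling convention are hypothetical; the record layout is the one
quoted above):

    #include <stdint.h>

    /* hypothetical helper: walk the host BTF from type_id along the
       access string (e.g. "0:1") and return the recomputed byte offset */
    extern uint32_t btf_member_offset(uint32_t type_id, const char *access);

    void apply_offset_relocs(uint8_t *insns, const struct BPFOffsetReloc *recs,
                             unsigned n, const char *strtab) {
      for (unsigned i = 0; i < n; ++i) {
        uint32_t off = btf_member_offset(recs[i].TypeID,
                                         strtab + recs[i].OffsetNameOff);
        /* a BPF instruction is 8 bytes; its 32-bit imm field starts at
           byte 4, which is where "r2 = 8" below keeps the constant */
        *(uint32_t *)(insns + recs[i].InsnOffset + 4) = off;
      }
    }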

Note that only externs with attribute section ".BPF.patchable_externs"
are considered for Extern Reloc which will be patched by bpf loader
right before the load.

For the above test case, two offset records and one extern record
will be generated:
OffsetReloc records:
    .long .Ltmp12 # Insn Offset
    .long 7       # TypeId
    .long 242     # Type Decode String
    .long .Ltmp18 # Insn Offset
    .long 7       # TypeId
    .long 242     # Type Decode String

ExternReloc record:
    .long .Ltmp5  # Insn Offset
    .long 165     # External Variable

In the string table:
    .ascii "0:1"              # string offset=242
    .ascii "__kernel_version" # string offset=165

The default member offset can be calculated from the access string
"0:1": it is the offset of the 2nd member (0 representing the 1st
member) of struct "sk_buff".

The asm code:
    .Ltmp5:
    .Ltmp6:
            r2 = 0
            r3 = 41608
    .Ltmp7:
    .Ltmp8:
            .loc 1 18 9 is_stmt 0 # t.c:18:9
    .Ltmp9:
            if r3 > r2 goto LBB0_2
    .Ltmp10:
    .Ltmp11:
            .loc 1 0 9 # t.c:0:9
    .Ltmp12:
            r2 = 8
    .Ltmp13:
            .loc 1 19 66 is_stmt 1 # t.c:19:66
    .Ltmp14:
    .Ltmp15:
            r3 = *(u64 *)(r1 + 0)
            goto LBB0_3
    .Ltmp16:
    .Ltmp17:
    LBB0_2:
            .loc 1 0 66 is_stmt 0 # t.c:0:66
    .Ltmp18:
            r2 = 8
            .loc 1 21 66 is_stmt 1 # t.c:21:66
    .Ltmp19:
            r3 = *(u64 *)(r1 + 8)
    .Ltmp20:
    .Ltmp21:
    LBB0_3:
            .loc 1 0 66 is_stmt 0 # t.c:0:66
            r3 += r2
            r1 = r10
    .Ltmp22:
    .Ltmp23:
    .Ltmp24:
            r1 += -8
            r2 = 8
            call 4

For instruction .Ltmp12 and .Ltmp18, "r2 = 8", the number
8 is the structure offset based on the current BTF.
Loader needs to adjust it if it changes on the host.

For instruction .Ltmp5, "r2 = 0", the external variable
got a default value 0, loader needs to supply an appropriate
value for the particular host.

Compiling to generate object code and disassembling:
    0000000000000000 bpf_prog:
         0: b7 02 00 00 00 00 00 00  r2 = 0
         1: 7b 2a f8 ff 00 00 00 00  *(u64 *)(r10 - 8) = r2
         2: b7 02 00 00 00 00 00 00  r2 = 0
         3: b7 03 00 00 88 a2 00 00  r3 = 41608
         4: 2d 23 03 00 00 00 00 00  if r3 > r2 goto +3 <LBB0_2>
         5: b7 02 00 00 08 00 00 00  r2 = 8
         6: 79 13 00 00 00 00 00 00  r3 = *(u64 *)(r1 + 0)
         7: 05 00 02 00 00 00 00 00  goto +2 <LBB0_3>

    0000000000000040 LBB0_2:
         8: b7 02 00 00 08 00 00 00  r2 = 8
         9: 79 13 08 00 00 00 00 00  r3 = *(u64 *)(r1 + 8)

    0000000000000050 LBB0_3:
        10: 0f 23 00 00 00 00 00 00  r3 += r2
        11: bf a1 00 00 00 00 00 00  r1 = r10
        12: 07 01 00 00 f8 ff ff ff  r1 += -8
        13: b7 02 00 00 08 00 00 00  r2 = 8
        14: 85 00 00 00 04 00 00 00  call 4

Instructions #2, #5 and #8 need relocation resolutions from the loader.

Signed-off-by: Yonghong Song <[email protected]>

Differential Revision: https://reviews.llvm.org/D61524

llvm-svn: 365503
