//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
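///
/// An illustrative sketch of the "last store wins" behavior (the variable
/// names below are hypothetical, not part of the runtime):
///
///   char buf[4];        // one origin slot covers buf[0..3]
///   buf[0] = uninit_a;  // origin slot := origin of 'a'
///   buf[2] = uninit_b;  // origin slot := origin of 'b' (last store wins)
///   use(buf[0]);        // a report here may show the origin of 'b'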
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1 byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
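///
/// Schematically, an instrumented atomic release store looks like this (a
/// hedged sketch; the actual IR is produced by the StoreInst handling below):
///
///   store i32 0, i32* %shadow_ptr                  ; clean shadow goes first
///   store atomic i32 %v, i32* %p release, align 4  ; application store
///
/// so a thread that observes %v through a paired acquire load also observes
/// shadow that is at least as initialized.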
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can be only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//
// FIXME: This sanitizer does not yet handle scalable vectors
//
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/IntrinsicsX86.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const Align kMinOriginAlignment = Align(4);
static const Align kShadowTLSAlignment = Align(8);
// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to the MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack(
    "msan-poison-stack", cl::desc("poison uninitialized stack variables"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool> ClHandleICmp(
    "msan-handle-icmp",
    cl::desc("propagate shadow through ICmpEQ and ICmpNE"), cl::Hidden,
    cl::init(true));

static cl::opt<bool> ClHandleICmpExact(
    "msan-handle-icmp-exact",
    cl::desc("exact handling of relational integer ICmp"), cl::Hidden,
    cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));

// This flag controls whether we check the shadow of the address operand of a
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds a 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClEagerChecks(
    "msan-eager-checks",
    cl::desc("check arguments and return values at function call boundaries"),
    cl::Hidden, cl::init(false));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some inexplicable reason such shadows were
// silently ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow(
    "msan-check-constant-shadow",
    cl::desc("Insert checks for constant shadow values"), cl::Hidden,
    cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat(
    "msan-with-comdat",
    cl::desc("Place MSan constructors in comdat sections"), cl::Hidden,
    cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// s390x Linux
static const MemoryMapParams Linux_S390X_MemoryMapParams = {
    0xC00000000000, // AndMask
    0,              // XorMask (not used)
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_S390_MemoryMapParams = {
    nullptr,
    &Linux_S390X_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};
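
// Worked example of the mapping (using the default x86_64 Linux parameters
// above: AndMask = 0, XorMask = 0x500000000000, ShadowBase = 0,
// OriginBase = 0x100000000000):
//   Addr   = 0x700000001000
//   Offset = (Addr & ~0) ^ 0x500000000000 = 0x200000001000
//   Shadow = 0 + Offset                   = 0x200000001000
//   Origin = 0x100000000000 + Offset      = 0x300000001000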

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options)
      : CompileKernel(Options.Kernel), TrackOrigins(Options.TrackOrigins),
        Recover(Options.Recover) {
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;
  friend struct VarArgSystemZHelper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and per-task state
  // in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// Run-time helper that paints an origin over a region.
  FunctionCallee MsanSetOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;
};

void insertModuleCtor(Module &M) {
  getOrCreateSanitizerCtorAndInitFunctions(
      M, kMsanModuleCtorName, kMsanInitName,
      /*InitArgTypes=*/{},
      /*InitArgs=*/{},
      // This callback is invoked when the functions are created the first
      // time. Hook them into the global ctors list in that case:
      [&](Function *Ctor, FunctionCallee) {
        if (!ClWithComdat) {
          appendToGlobalCtors(M, Ctor, 0);
          return;
        }
        Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
        Ctor->setComdat(MsanCtorComdat);
        appendToGlobalCtors(M, Ctor, 0, Ctor);
      });
}

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {
    initializeMemorySanitizerLegacyPassPass(*PassRegistry::getPassRegistry());
  }
  StringRef getPassName() const override {
    return "MemorySanitizerLegacyPass";
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }
  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

template <class T> T getOptOrDefault(const cl::opt<T> &Opt, T Default) {
  return (Opt.getNumOccurrences() > 0) ? Opt : Default;
}

} // end anonymous namespace

MemorySanitizerOptions::MemorySanitizerOptions(int TO, bool R, bool K)
    : Kernel(getOptOrDefault(ClEnableKmsan, K)),
      TrackOrigins(getOptOrDefault(ClTrackOrigins, Kernel ? 2 : TO)),
      Recover(getOptOrDefault(ClKeepGoing, Kernel || R)) {}

PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

PreservedAnalyses MemorySanitizerPass::run(Module &M,
                                           ModuleAnalysisManager &AM) {
  if (Options.Kernel)
    return PreservedAnalyses::all();
  insertModuleCtor(M);
  return PreservedAnalyses::none();
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;

  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
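  // A hedged C-style sketch of the per-task state returned by
  // __msan_get_context_state(); the field names below are illustrative, only
  // the layout is implied by the MsanContextStateTy IR struct that follows:
  //
  //   struct kmsan_context_state {
  //     uint64_t param_tls[kParamTLSSize / 8];
  //     uint64_t retval_tls[kRetvalTLSSize / 8];
  //     uint64_t va_arg_tls[kParamTLSSize / 8];
  //     uint64_t va_arg_origin_tls[kParamTLSSize / 8];
  //     uint64_t va_arg_overflow_size_tls;
  //     uint32_t param_origin_tls[kParamTLSSize / 4];
  //     uint32_t retval_origin_tls;
  //     uint32_t origin_tls;
  //   };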
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning_with_origin"
                                    : "__msan_warning_with_origin_noreturn";
  WarningFn =
      M.getOrInsertFunction(WarningFnName, IRB.getVoidTy(), IRB.getInt32Ty());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeWarningFnAttrs;
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 1, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeWarningFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeStoreOriginFnAttrs;
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 2, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeStoreOriginFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt8PtrTy(),
        IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn = M.getOrInsertFunction(
      "__msan_poison_stack", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
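
// In C terms, the loop above declares runtime helpers shaped like the
// following (a hedged sketch implied by the getOrInsertFunction calls, not a
// literal runtime header):
//
//   void __msan_maybe_warning_4(uint32_t shadow, uint32_t origin);
//   void __msan_maybe_store_origin_8(uint64_t shadow, void *addr,
//                                    uint32_t origin);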

/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MsanSetOriginFn =
      M.getOrInsertFunction("__msan_set_origin", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::systemz:
        MapParams = Linux_S390_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  if (!Options.Kernel)
    insertModuleCtor(M);
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallBase.
  virtual void visitCallBase(CallBase &CB, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
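
// For example, TypeSizeToSizeIndex(8) == 0 (a 1-byte access),
// TypeSizeToSizeIndex(32) == 2 (4 bytes) and TypeSizeToSizeIndex(64) == 3
// (8 bytes). Larger results fall outside kNumberOfAccessSizes, which makes
// the callers below fall back to inline checks instead of callbacks.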

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // exclusion list contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
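
  // For example, on a 64-bit target originToIntptr widens the origin id
  // 0x0000ABCD to 0x0000ABCD0000ABCD, so a single intptr-sized store below
  // paints two adjacent 4-byte origin slots at once.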

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, Align Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    Align CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, Align Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
        paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                    OriginAlignment);
      return;
    }

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn,
                     {ConvertedShadow2,
                      IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()), Origin});
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
      IRBuilder<> IRBNew(CheckTerm);
      paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                  OriginAlignment);
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      const Align Alignment = assumeAligned(SI->getAlignment());
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    assert(Origin->getType()->isIntegerTy());
    IRB.CreateCall(MS.WarningFn, Origin)->setCannotMerge();
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }
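
  // Schematically, materializeOneCheck below turns a shadow check into IR of
  // the following shape (a hedged sketch of the non-callback, non-recovering
  // userspace path):
  //
  //   %_mscmp = icmp ne i32 %shadow, 0
  //   br i1 %_mscmp, label %warn, label %cont, !prof !cold_weights
  // warn:
  //   call void @__msan_warning_with_origin_noreturn(i32 %origin)
  //   unreachable
  // cont:
  //   ...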

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2,
                          MS.TrackOrigins && Origin
                              ? Origin
                              : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return FixedVectorType::get(IntegerType::get(*MS.C, EltSize),
                                  cast<FixedVectorType>(VT)->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
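
  // For example, getShadowTy maps (illustrative cases):
  //   i32             -> i32
  //   float           -> i32             (via the trailing TypeSizeInBits case)
  //   <4 x float>     -> <4 x i32>
  //   {i8, [2 x i16]} -> {i8, [2 x i16]} (recursively, same-width integers)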

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C,
                              vt->getPrimitiveSizeInBits().getFixedSize());
    return ty;
  }

  /// Extract combined shadow of struct elements as a bool.
  Value *collapseStructShadow(StructType *Struct, Value *Shadow,
                              IRBuilder<> &IRB) {
    Value *FalseVal = IRB.getIntN(/* width */ 1, /* value */ 0);
    Value *Aggregator = FalseVal;

    for (unsigned Idx = 0; Idx < Struct->getNumElements(); Idx++) {
      // Combine by ORing together each element's bool shadow.
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Value *ShadowBool = convertToBool(ShadowInner, IRB);

      if (Aggregator != FalseVal)
        Aggregator = IRB.CreateOr(Aggregator, ShadowBool);
      else
        Aggregator = ShadowBool;
    }

    return Aggregator;
  }

  // Extract combined shadow of array elements.
  Value *collapseArrayShadow(ArrayType *Array, Value *Shadow,
                             IRBuilder<> &IRB) {
    if (!Array->getNumElements())
      return IRB.getIntN(/* width */ 1, /* value */ 0);

    Value *FirstItem = IRB.CreateExtractValue(Shadow, 0);
    Value *Aggregator = convertShadowToScalar(FirstItem, IRB);

    for (unsigned Idx = 1; Idx < Array->getNumElements(); Idx++) {
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Aggregator = IRB.CreateOr(Aggregator, ShadowInner);
    }
    return Aggregator;
  }

  /// Convert a shadow value to its flattened variant. The resulting
  /// shadow may not necessarily have the same bit width as the input
  /// value, but it will always be comparable to zero.
  Value *convertShadowToScalar(Value *V, IRBuilder<> &IRB) {
    if (StructType *Struct = dyn_cast<StructType>(V->getType()))
      return collapseStructShadow(Struct, V, IRB);
    if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
      return collapseArrayShadow(Array, V, IRB);
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  // Convert a scalar value to an i1 by comparing with 0.
  Value *convertToBool(Value *V, IRBuilder<> &IRB, const Twine &name = "") {
    Type *VTy = V->getType();
    assert(VTy->isIntegerTy());
    if (VTy->getIntegerBitWidth() == 1)
      // Just converting a bool to a bool, so do nothing.
      return V;
    return IRB.CreateICmpNE(V, ConstantInt::get(VTy, 0), name);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }
1470 /// 1471 /// Shadow = ShadowBase + Offset 1472 /// Origin = (OriginBase + Offset) & ~3ULL 1473 std::pair<Value *, Value *> 1474 getShadowOriginPtrUserspace(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy, 1475 MaybeAlign Alignment) { 1476 Value *ShadowOffset = getShadowPtrOffset(Addr, IRB); 1477 Value *ShadowLong = ShadowOffset; 1478 uint64_t ShadowBase = MS.MapParams->ShadowBase; 1479 if (ShadowBase != 0) { 1480 ShadowLong = 1481 IRB.CreateAdd(ShadowLong, 1482 ConstantInt::get(MS.IntptrTy, ShadowBase)); 1483 } 1484 Value *ShadowPtr = 1485 IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0)); 1486 Value *OriginPtr = nullptr; 1487 if (MS.TrackOrigins) { 1488 Value *OriginLong = ShadowOffset; 1489 uint64_t OriginBase = MS.MapParams->OriginBase; 1490 if (OriginBase != 0) 1491 OriginLong = IRB.CreateAdd(OriginLong, 1492 ConstantInt::get(MS.IntptrTy, OriginBase)); 1493 if (!Alignment || *Alignment < kMinOriginAlignment) { 1494 uint64_t Mask = kMinOriginAlignment.value() - 1; 1495 OriginLong = 1496 IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask)); 1497 } 1498 OriginPtr = 1499 IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0)); 1500 } 1501 return std::make_pair(ShadowPtr, OriginPtr); 1502 } 1503 1504 std::pair<Value *, Value *> getShadowOriginPtrKernel(Value *Addr, 1505 IRBuilder<> &IRB, 1506 Type *ShadowTy, 1507 bool isStore) { 1508 Value *ShadowOriginPtrs; 1509 const DataLayout &DL = F.getParent()->getDataLayout(); 1510 int Size = DL.getTypeStoreSize(ShadowTy); 1511 1512 FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size); 1513 Value *AddrCast = 1514 IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0)); 1515 if (Getter) { 1516 ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast); 1517 } else { 1518 Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size); 1519 ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN 1520 : MS.MsanMetadataPtrForLoadN, 1521 {AddrCast, SizeVal}); 1522 } 1523 Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0); 1524 ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0)); 1525 Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1); 1526 1527 return std::make_pair(ShadowPtr, OriginPtr); 1528 } 1529 1530 std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB, 1531 Type *ShadowTy, 1532 MaybeAlign Alignment, 1533 bool isStore) { 1534 if (MS.CompileKernel) 1535 return getShadowOriginPtrKernel(Addr, IRB, ShadowTy, isStore); 1536 return getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment); 1537 } 1538 1539 /// Compute the shadow address for a given function argument. 1540 /// 1541 /// Shadow = ParamTLS+ArgOffset. 1542 Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB, 1543 int ArgOffset) { 1544 Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy); 1545 if (ArgOffset) 1546 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 1547 return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0), 1548 "_msarg"); 1549 } 1550 1551 /// Compute the origin address for a given function argument. 
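///
/// Origin = ParamOriginTLS+ArgOffset.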
Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                               int ArgOffset) {
  if (!MS.TrackOrigins)
    return nullptr;
  Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
  if (ArgOffset)
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                            "_msarg_o");
}

/// Compute the shadow address for a retval.
Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
  return IRB.CreatePointerCast(MS.RetvalTLS,
                               PointerType::get(getShadowTy(A), 0),
                               "_msret");
}

/// Compute the origin address for a retval.
Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
  // We keep a single origin for the entire retval. Might be too optimistic.
  return MS.RetvalOriginTLS;
}

/// Set SV to be the shadow value for V.
void setShadow(Value *V, Value *SV) {
  assert(!ShadowMap.count(V) && "Values may only have one shadow");
  ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
}

/// Set Origin to be the origin value for V.
void setOrigin(Value *V, Value *Origin) {
  if (!MS.TrackOrigins) return;
  assert(!OriginMap.count(V) && "Values may only have one origin");
  LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
  OriginMap[V] = Origin;
}

Constant *getCleanShadow(Type *OrigTy) {
  Type *ShadowTy = getShadowTy(OrigTy);
  if (!ShadowTy)
    return nullptr;
  return Constant::getNullValue(ShadowTy);
}

/// Create a clean shadow value for a given value.
///
/// Clean shadow (all zeroes) means all bits of the value are defined
/// (initialized).
Constant *getCleanShadow(Value *V) {
  return getCleanShadow(V->getType());
}

/// Create a dirty shadow of a given shadow type.
Constant *getPoisonedShadow(Type *ShadowTy) {
  assert(ShadowTy);
  if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
    return Constant::getAllOnesValue(ShadowTy);
  if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
    SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                    getPoisonedShadow(AT->getElementType()));
    return ConstantArray::get(AT, Vals);
  }
  if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
    SmallVector<Constant *, 4> Vals;
    for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
      Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
    return ConstantStruct::get(ST, Vals);
  }
  llvm_unreachable("Unexpected shadow type");
}

/// Create a dirty shadow for a given value.
Constant *getPoisonedShadow(Value *V) {
  Type *ShadowTy = getShadowTy(V);
  if (!ShadowTy)
    return nullptr;
  return getPoisonedShadow(ShadowTy);
}

/// Create a clean (zero) origin.
Value *getCleanOrigin() {
  return Constant::getNullValue(MS.OriginTy);
}

/// Get the shadow value for a given Value.
///
/// This function either returns the value set earlier with setShadow,
/// or extracts it from ParamTLS (for function arguments).
Value *getShadow(Value *V) {
  if (!PropagateShadow) return getCleanShadow(V);
  if (Instruction *I = dyn_cast<Instruction>(V)) {
    if (I->getMetadata("nosanitize"))
      return getCleanShadow(V);
    // For instructions the shadow is already stored in the map.
1647 Value *Shadow = ShadowMap[V]; 1648 if (!Shadow) { 1649 LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent())); 1650 (void)I; 1651 assert(Shadow && "No shadow for a value"); 1652 } 1653 return Shadow; 1654 } 1655 if (UndefValue *U = dyn_cast<UndefValue>(V)) { 1656 Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V); 1657 LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n"); 1658 (void)U; 1659 return AllOnes; 1660 } 1661 if (Argument *A = dyn_cast<Argument>(V)) { 1662 // For arguments we compute the shadow on demand and store it in the map. 1663 Value **ShadowPtr = &ShadowMap[V]; 1664 if (*ShadowPtr) 1665 return *ShadowPtr; 1666 Function *F = A->getParent(); 1667 IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI()); 1668 unsigned ArgOffset = 0; 1669 const DataLayout &DL = F->getParent()->getDataLayout(); 1670 for (auto &FArg : F->args()) { 1671 if (!FArg.getType()->isSized()) { 1672 LLVM_DEBUG(dbgs() << "Arg is not sized\n"); 1673 continue; 1674 } 1675 1676 bool FArgByVal = FArg.hasByValAttr(); 1677 bool FArgNoUndef = FArg.hasAttribute(Attribute::NoUndef); 1678 bool FArgEagerCheck = ClEagerChecks && !FArgByVal && FArgNoUndef; 1679 unsigned Size = 1680 FArg.hasByValAttr() 1681 ? DL.getTypeAllocSize(FArg.getParamByValType()) 1682 : DL.getTypeAllocSize(FArg.getType()); 1683 1684 if (A == &FArg) { 1685 bool Overflow = ArgOffset + Size > kParamTLSSize; 1686 if (FArgEagerCheck) { 1687 *ShadowPtr = getCleanShadow(V); 1688 setOrigin(A, getCleanOrigin()); 1689 continue; 1690 } else if (FArgByVal) { 1691 Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset); 1692 // ByVal pointer itself has clean shadow. We copy the actual 1693 // argument shadow to the underlying memory. 1694 // Figure out maximal valid memcpy alignment. 1695 const Align ArgAlign = DL.getValueOrABITypeAlignment( 1696 MaybeAlign(FArg.getParamAlignment()), FArg.getParamByValType()); 1697 Value *CpShadowPtr = 1698 getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign, 1699 /*isStore*/ true) 1700 .first; 1701 // TODO(glider): need to copy origins. 1702 if (Overflow) { 1703 // ParamTLS overflow. 1704 EntryIRB.CreateMemSet( 1705 CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()), 1706 Size, ArgAlign); 1707 } else { 1708 const Align CopyAlign = std::min(ArgAlign, kShadowTLSAlignment); 1709 Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base, 1710 CopyAlign, Size); 1711 LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n"); 1712 (void)Cpy; 1713 } 1714 *ShadowPtr = getCleanShadow(V); 1715 } else { 1716 // Shadow over TLS 1717 Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset); 1718 if (Overflow) { 1719 // ParamTLS overflow. 1720 *ShadowPtr = getCleanShadow(V); 1721 } else { 1722 *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base, 1723 kShadowTLSAlignment); 1724 } 1725 } 1726 LLVM_DEBUG(dbgs() 1727 << " ARG: " << FArg << " ==> " << **ShadowPtr << "\n"); 1728 if (MS.TrackOrigins && !Overflow) { 1729 Value *OriginPtr = 1730 getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset); 1731 setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr)); 1732 } else { 1733 setOrigin(A, getCleanOrigin()); 1734 } 1735 } 1736 1737 if (!FArgEagerCheck) 1738 ArgOffset += alignTo(Size, kShadowTLSAlignment); 1739 } 1740 assert(*ShadowPtr && "Could not find shadow for an argument"); 1741 return *ShadowPtr; 1742 } 1743 // For everything else the shadow is zero. 
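    // (This covers constants, including globals and function pointers;
    // they are treated as fully initialized.)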
    return getCleanShadow(V);
  }

/// Get the shadow for i-th argument of the instruction I.
Value *getShadow(Instruction *I, int i) {
  return getShadow(I->getOperand(i));
}

/// Get the origin for a value.
Value *getOrigin(Value *V) {
  if (!MS.TrackOrigins) return nullptr;
  if (!PropagateShadow) return getCleanOrigin();
  if (isa<Constant>(V)) return getCleanOrigin();
  assert((isa<Instruction>(V) || isa<Argument>(V)) &&
         "Unexpected value type in getOrigin()");
  if (Instruction *I = dyn_cast<Instruction>(V)) {
    if (I->getMetadata("nosanitize"))
      return getCleanOrigin();
  }
  Value *Origin = OriginMap[V];
  assert(Origin && "Missing origin");
  return Origin;
}

/// Get the origin for i-th argument of the instruction I.
Value *getOrigin(Instruction *I, int i) {
  return getOrigin(I->getOperand(i));
}

/// Remember the place where a shadow check should be inserted.
///
/// This location will be later instrumented with a check that will print a
/// UMR warning at runtime if the shadow value is not 0.
void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
  assert(Shadow);
  if (!InsertChecks) return;
#ifndef NDEBUG
  Type *ShadowTy = Shadow->getType();
  assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy) ||
          isa<StructType>(ShadowTy) || isa<ArrayType>(ShadowTy)) &&
         "Can only insert checks for integer, vector, and aggregate shadow "
         "types");
#endif
  InstrumentationList.push_back(
      ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
}

/// Remember the place where a shadow check should be inserted.
///
/// This location will be later instrumented with a check that will print a
/// UMR warning at runtime if the value is not fully defined.
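///
/// The check materialized later is roughly:
///   %ne = icmp ne <shadow type> %s, 0
///   br i1 %ne, label %warn, label %cont
/// where the "warn" block calls __msan_warning[_noreturn]() in the runtime
/// (passing the origin as well when origin tracking is enabled).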
1795 void insertShadowCheck(Value *Val, Instruction *OrigIns) { 1796 assert(Val); 1797 Value *Shadow, *Origin; 1798 if (ClCheckConstantShadow) { 1799 Shadow = getShadow(Val); 1800 if (!Shadow) return; 1801 Origin = getOrigin(Val); 1802 } else { 1803 Shadow = dyn_cast_or_null<Instruction>(getShadow(Val)); 1804 if (!Shadow) return; 1805 Origin = dyn_cast_or_null<Instruction>(getOrigin(Val)); 1806 } 1807 insertShadowCheck(Shadow, Origin, OrigIns); 1808 } 1809 1810 AtomicOrdering addReleaseOrdering(AtomicOrdering a) { 1811 switch (a) { 1812 case AtomicOrdering::NotAtomic: 1813 return AtomicOrdering::NotAtomic; 1814 case AtomicOrdering::Unordered: 1815 case AtomicOrdering::Monotonic: 1816 case AtomicOrdering::Release: 1817 return AtomicOrdering::Release; 1818 case AtomicOrdering::Acquire: 1819 case AtomicOrdering::AcquireRelease: 1820 return AtomicOrdering::AcquireRelease; 1821 case AtomicOrdering::SequentiallyConsistent: 1822 return AtomicOrdering::SequentiallyConsistent; 1823 } 1824 llvm_unreachable("Unknown ordering"); 1825 } 1826 1827 Value *makeAddReleaseOrderingTable(IRBuilder<> &IRB) { 1828 constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1; 1829 uint32_t OrderingTable[NumOrderings] = {}; 1830 1831 OrderingTable[(int)AtomicOrderingCABI::relaxed] = 1832 OrderingTable[(int)AtomicOrderingCABI::release] = 1833 (int)AtomicOrderingCABI::release; 1834 OrderingTable[(int)AtomicOrderingCABI::consume] = 1835 OrderingTable[(int)AtomicOrderingCABI::acquire] = 1836 OrderingTable[(int)AtomicOrderingCABI::acq_rel] = 1837 (int)AtomicOrderingCABI::acq_rel; 1838 OrderingTable[(int)AtomicOrderingCABI::seq_cst] = 1839 (int)AtomicOrderingCABI::seq_cst; 1840 1841 return ConstantDataVector::get(IRB.getContext(), 1842 makeArrayRef(OrderingTable, NumOrderings)); 1843 } 1844 1845 AtomicOrdering addAcquireOrdering(AtomicOrdering a) { 1846 switch (a) { 1847 case AtomicOrdering::NotAtomic: 1848 return AtomicOrdering::NotAtomic; 1849 case AtomicOrdering::Unordered: 1850 case AtomicOrdering::Monotonic: 1851 case AtomicOrdering::Acquire: 1852 return AtomicOrdering::Acquire; 1853 case AtomicOrdering::Release: 1854 case AtomicOrdering::AcquireRelease: 1855 return AtomicOrdering::AcquireRelease; 1856 case AtomicOrdering::SequentiallyConsistent: 1857 return AtomicOrdering::SequentiallyConsistent; 1858 } 1859 llvm_unreachable("Unknown ordering"); 1860 } 1861 1862 Value *makeAddAcquireOrderingTable(IRBuilder<> &IRB) { 1863 constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1; 1864 uint32_t OrderingTable[NumOrderings] = {}; 1865 1866 OrderingTable[(int)AtomicOrderingCABI::relaxed] = 1867 OrderingTable[(int)AtomicOrderingCABI::acquire] = 1868 OrderingTable[(int)AtomicOrderingCABI::consume] = 1869 (int)AtomicOrderingCABI::acquire; 1870 OrderingTable[(int)AtomicOrderingCABI::release] = 1871 OrderingTable[(int)AtomicOrderingCABI::acq_rel] = 1872 (int)AtomicOrderingCABI::acq_rel; 1873 OrderingTable[(int)AtomicOrderingCABI::seq_cst] = 1874 (int)AtomicOrderingCABI::seq_cst; 1875 1876 return ConstantDataVector::get(IRB.getContext(), 1877 makeArrayRef(OrderingTable, NumOrderings)); 1878 } 1879 1880 // ------------------- Visitors. 1881 using InstVisitor<MemorySanitizerVisitor>::visit; 1882 void visit(Instruction &I) { 1883 if (!I.getMetadata("nosanitize")) 1884 InstVisitor<MemorySanitizerVisitor>::visit(I); 1885 } 1886 1887 /// Instrument LoadInst 1888 /// 1889 /// Loads the corresponding shadow and (optionally) origin. 1890 /// Optionally, checks that the load address is fully defined. 
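///
/// For example, on x86_64 Linux (AndMask = 0, XorMask = 0x500000000000) the
/// shadow of "%x = load i32, i32* %p" is loaded, roughly, as:
///   %a = ptrtoint i32* %p to i64
///   %sa = xor i64 %a, 87960930222080   ; 0x500000000000
///   %sp = inttoptr i64 %sa to i32*
///   %_msld = load i32, i32* %sp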
void visitLoadInst(LoadInst &I) {
  assert(I.getType()->isSized() && "Load type must have size");
  assert(!I.getMetadata("nosanitize"));
  IRBuilder<> IRB(I.getNextNode());
  Type *ShadowTy = getShadowTy(&I);
  Value *Addr = I.getPointerOperand();
  Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
  const Align Alignment = assumeAligned(I.getAlignment());
  if (PropagateShadow) {
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
    setShadow(&I,
              IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
  } else {
    setShadow(&I, getCleanShadow(&I));
  }

  if (ClCheckAccessAddress)
    insertShadowCheck(I.getPointerOperand(), &I);

  if (I.isAtomic())
    I.setOrdering(addAcquireOrdering(I.getOrdering()));

  if (MS.TrackOrigins) {
    if (PropagateShadow) {
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      setOrigin(
          &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
    } else {
      setOrigin(&I, getCleanOrigin());
    }
  }
}

/// Instrument StoreInst
///
/// Stores the corresponding shadow and (optionally) origin.
/// Optionally, checks that the store address is fully defined.
void visitStoreInst(StoreInst &I) {
  StoreList.push_back(&I);
  if (ClCheckAccessAddress)
    insertShadowCheck(I.getPointerOperand(), &I);
}

void handleCASOrRMW(Instruction &I) {
  assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

  IRBuilder<> IRB(&I);
  Value *Addr = I.getOperand(0);
  Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(), Align(1),
                                        /*isStore*/ true)
                         .first;

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  // Only test the conditional argument of the cmpxchg instruction.
  // The other argument can potentially be uninitialized, but we cannot
  // detect this situation reliably without possible false positives.
  if (isa<AtomicCmpXchgInst>(I))
    insertShadowCheck(I.getOperand(1), &I);

  IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

  setShadow(&I, getCleanShadow(&I));
  setOrigin(&I, getCleanOrigin());
}

void visitAtomicRMWInst(AtomicRMWInst &I) {
  handleCASOrRMW(I);
  I.setOrdering(addReleaseOrdering(I.getOrdering()));
}

void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
  handleCASOrRMW(I);
  I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
}

// Vector manipulation.
void visitExtractElementInst(ExtractElementInst &I) {
  insertShadowCheck(I.getOperand(1), &I);
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                         "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitInsertElementInst(InsertElementInst &I) {
  insertShadowCheck(I.getOperand(2), &I);
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                        I.getOperand(2), "_msprop"));
  setOriginForNaryOp(I);
}

void visitShuffleVectorInst(ShuffleVectorInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                        I.getShuffleMask(), "_msprop"));
  setOriginForNaryOp(I);
}

// Casts.
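// The shadow of a cast follows the cast itself, e.g. for
//   %w = sext i8 %b to i32
// the instrumentation is "_msprop = sext i8 <shadow of %b> to i32", so a
// poisoned sign bit poisons the whole widened value.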
void visitSExtInst(SExtInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitZExtInst(ZExtInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitTruncInst(TruncInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitBitCastInst(BitCastInst &I) {
  // Special case: if this is the bitcast (there is exactly 1 allowed) between
  // a musttail call and a ret, don't instrument. New instructions are not
  // allowed after a musttail call.
  if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
    if (CI->isMustTailCall())
      return;
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitPtrToIntInst(PtrToIntInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                  "_msprop_ptrtoint"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitIntToPtrInst(IntToPtrInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                  "_msprop_inttoptr"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }

/// Propagate shadow for bitwise AND.
///
/// This code is exact, i.e. if, for example, a bit in the left argument
/// is defined and 0, then neither the value nor the definedness of the
/// corresponding bit in the right argument affects the resulting shadow.
void visitAnd(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // "And" of 0 and a poisoned value results in an unpoisoned value.
  // 1&1 => 1; 0&1 => 0; p&1 => p;
  // 1&0 => 0; 0&0 => 0; p&0 => 0;
  // 1&p => p; 0&p => 0; p&p => p;
  // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *V1 = I.getOperand(0);
  Value *V2 = I.getOperand(1);
  if (V1->getType() != S1->getType()) {
    V1 = IRB.CreateIntCast(V1, S1->getType(), false);
    V2 = IRB.CreateIntCast(V2, S2->getType(), false);
  }
  Value *S1S2 = IRB.CreateAnd(S1, S2);
  Value *V1S2 = IRB.CreateAnd(V1, S2);
  Value *S1V2 = IRB.CreateAnd(S1, V2);
  setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
  setOriginForNaryOp(I);
}

void visitOr(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // "Or" of 1 and a poisoned value results in an unpoisoned value.
  // 1|1 => 1; 0|1 => 1; p|1 => 1;
  // 1|0 => 1; 0|0 => 0; p|0 => p;
  // 1|p => 1; 0|p => p; p|p => p;
  // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *V1 = IRB.CreateNot(I.getOperand(0));
  Value *V2 = IRB.CreateNot(I.getOperand(1));
  if (V1->getType() != S1->getType()) {
    V1 = IRB.CreateIntCast(V1, S1->getType(), false);
    V2 = IRB.CreateIntCast(V2, S2->getType(), false);
  }
  Value *S1S2 = IRB.CreateAnd(S1, S2);
  Value *V1S2 = IRB.CreateAnd(V1, S2);
  Value *S1V2 = IRB.CreateAnd(S1, V2);
  setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
  setOriginForNaryOp(I);
}

/// Default propagation of shadow and/or origin.
///
/// This class implements the general case of shadow propagation, used in all
/// cases where we don't know and/or don't care about what the operation
/// actually does. It converts all input shadow values to a common type
/// (extending or truncating as necessary), and bitwise OR's them.
///
/// This is much cheaper than inserting checks (i.e. requiring inputs to be
/// fully initialized), and less prone to false positives.
///
/// This class also implements the general case of origin propagation. For a
/// Nary operation, result origin is set to the origin of an argument that is
/// not entirely initialized. If there is more than one such argument, the
/// rightmost of them is picked. It does not matter which one is picked if all
/// arguments are initialized.
template <bool CombineShadow>
class Combiner {
  Value *Shadow = nullptr;
  Value *Origin = nullptr;
  IRBuilder<> &IRB;
  MemorySanitizerVisitor *MSV;

public:
  Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
      : IRB(IRB), MSV(MSV) {}

  /// Add a pair of shadow and origin values to the mix.
  Combiner &Add(Value *OpShadow, Value *OpOrigin) {
    if (CombineShadow) {
      assert(OpShadow);
      if (!Shadow)
        Shadow = OpShadow;
      else {
        OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
        Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
      }
    }

    if (MSV->MS.TrackOrigins) {
      assert(OpOrigin);
      if (!Origin) {
        Origin = OpOrigin;
      } else {
        Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
        // No point in adding something that might result in 0 origin value.
        if (!ConstOrigin || !ConstOrigin->isNullValue()) {
          Value *FlatShadow = MSV->convertShadowToScalar(OpShadow, IRB);
          Value *Cond =
              IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
          Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
        }
      }
    }
    return *this;
  }

  /// Add an application value to the mix.
  Combiner &Add(Value *V) {
    Value *OpShadow = MSV->getShadow(V);
    Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
    return Add(OpShadow, OpOrigin);
  }

  /// Set the current combined values as the given instruction's shadow
  /// and origin.
  void Done(Instruction *I) {
    if (CombineShadow) {
      assert(Shadow);
      Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
      MSV->setShadow(I, Shadow);
    }
    if (MSV->MS.TrackOrigins) {
      assert(Origin);
      MSV->setOrigin(I, Origin);
    }
  }
};

using ShadowAndOriginCombiner = Combiner<true>;
using OriginCombiner = Combiner<false>;

/// Propagate origin for arbitrary operation.
void setOriginForNaryOp(Instruction &I) {
  if (!MS.TrackOrigins) return;
  IRBuilder<> IRB(&I);
  OriginCombiner OC(this, IRB);
  for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
    OC.Add(OI->get());
  OC.Done(&I);
}

size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
  assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
         "Vector of pointers is not a valid shadow type");
  return Ty->isVectorTy() ? cast<FixedVectorType>(Ty)->getNumElements() *
                                Ty->getScalarSizeInBits()
                          : Ty->getPrimitiveSizeInBits();
}

/// Cast between two shadow types, extending or truncating as
/// necessary.
Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                        bool Signed = false) {
  Type *srcTy = V->getType();
  size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
  size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
  if (srcSizeInBits > 1 && dstSizeInBits == 1)
    return IRB.CreateICmpNE(V, getCleanShadow(V));

  if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
    return IRB.CreateIntCast(V, dstTy, Signed);
  if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
      cast<FixedVectorType>(dstTy)->getNumElements() ==
          cast<FixedVectorType>(srcTy)->getNumElements())
    return IRB.CreateIntCast(V, dstTy, Signed);
  Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
  Value *V2 =
      IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
  return IRB.CreateBitCast(V2, dstTy);
  // TODO: handle struct types.
}

/// Cast an application value to the type of its own shadow.
Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
  Type *ShadowTy = getShadowTy(V);
  if (V->getType() == ShadowTy)
    return V;
  if (V->getType()->isPtrOrPtrVectorTy())
    return IRB.CreatePtrToInt(V, ShadowTy);
  else
    return IRB.CreateBitCast(V, ShadowTy);
}

/// Propagate shadow for arbitrary operation.
void handleShadowOr(Instruction &I) {
  IRBuilder<> IRB(&I);
  ShadowAndOriginCombiner SC(this, IRB);
  for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
    SC.Add(OI->get());
  SC.Done(&I);
}

void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }

// Handle multiplication by constant.
//
// Handle a special case of multiplication by a constant that may have one or
// more zeros in the lower bits. This makes the corresponding number of lower
// bits of the result zero as well. We model it by shifting the other operand
// shadow left by the required number of bits. Effectively, we transform
// (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
// We use multiplication by 2**N instead of shift to cover the case of
// multiplication by 0, which may occur in some elements of a vector operand.
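// For example, for X * 24 (24 = 3 * 2**3) the three lowest bits of the result
// are always zero, so the result shadow is Sx multiplied by 2**3 = 8.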
void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                         Value *OtherArg) {
  Constant *ShadowMul;
  Type *Ty = ConstArg->getType();
  if (auto *VTy = dyn_cast<VectorType>(Ty)) {
    unsigned NumElements = cast<FixedVectorType>(VTy)->getNumElements();
    Type *EltTy = VTy->getElementType();
    SmallVector<Constant *, 16> Elements;
    for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
      if (ConstantInt *Elt =
              dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        Elements.push_back(ConstantInt::get(EltTy, V2));
      } else {
        Elements.push_back(ConstantInt::get(EltTy, 1));
      }
    }
    ShadowMul = ConstantVector::get(Elements);
  } else {
    if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
      const APInt &V = Elt->getValue();
      APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
      ShadowMul = ConstantInt::get(Ty, V2);
    } else {
      ShadowMul = ConstantInt::get(Ty, 1);
    }
  }

  IRBuilder<> IRB(&I);
  setShadow(&I,
            IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
  setOrigin(&I, getOrigin(OtherArg));
}

void visitMul(BinaryOperator &I) {
  Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
  Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
  if (constOp0 && !constOp1)
    handleMulByConstant(I, constOp0, I.getOperand(1));
  else if (constOp1 && !constOp0)
    handleMulByConstant(I, constOp1, I.getOperand(0));
  else
    handleShadowOr(I);
}

void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
void visitSub(BinaryOperator &I) { handleShadowOr(I); }
void visitXor(BinaryOperator &I) { handleShadowOr(I); }

void handleIntegerDiv(Instruction &I) {
  IRBuilder<> IRB(&I);
  // Strict on the second argument (the divisor).
  insertShadowCheck(I.getOperand(1), &I);
  setShadow(&I, getShadow(&I, 0));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

// Floating point division is side-effect free, so we cannot require the
// divisor to be fully initialized; we must propagate shadow instead. See
// PR37523.
void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
void visitFRem(BinaryOperator &I) { handleShadowOr(I); }

/// Instrument == and != comparisons.
///
/// Sometimes the comparison result is known even if some of the bits of the
/// arguments are not.
void handleEqualityComparison(ICmpInst &I) {
  IRBuilder<> IRB(&I);
  Value *A = I.getOperand(0);
  Value *B = I.getOperand(1);
  Value *Sa = getShadow(A);
  Value *Sb = getShadow(B);

  // Get rid of pointers and vectors of pointers.
  // For ints (and vectors of ints), types of A and Sa match,
  // and this is a no-op.
  A = IRB.CreatePointerCast(A, Sa->getType());
  B = IRB.CreatePointerCast(B, Sb->getType());

  // A == B <==> (C = A^B) == 0
  // A != B <==> (C = A^B) != 0
  // Sc = Sa | Sb
  Value *C = IRB.CreateXor(A, B);
  Value *Sc = IRB.CreateOr(Sa, Sb);
  // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
  // Result is defined if one of the following is true
  // * there is a defined 1 bit in C
  // * C is fully defined
  // Si = !(C & ~Sc) && Sc
  Value *Zero = Constant::getNullValue(Sc->getType());
  Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
  Value *Si =
      IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                    IRB.CreateICmpEQ(
                        IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
  Si->setName("_msprop_icmp");
  setShadow(&I, Si);
  setOriginForNaryOp(I);
}

/// Build the lowest possible value of A, taking into account A's
/// uninitialized bits.
Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                              bool isSigned) {
  if (isSigned) {
    // Split shadow into sign bit and other bits.
    Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
    Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
    // Maximize the undefined shadow bit, minimize other undefined bits.
    return
        IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
  } else {
    // Minimize undefined bits.
    return IRB.CreateAnd(A, IRB.CreateNot(Sa));
  }
}

/// Build the highest possible value of A, taking into account A's
/// uninitialized bits.
Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                               bool isSigned) {
  if (isSigned) {
    // Split shadow into sign bit and other bits.
    Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
    Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
    // Minimize the undefined shadow bit, maximize other undefined bits.
    return
        IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
  } else {
    // Maximize undefined bits.
    return IRB.CreateOr(A, Sa);
  }
}

/// Instrument relational comparisons.
///
/// This function does exact shadow propagation for all relational
/// comparisons of integers, pointers and vectors of those.
/// FIXME: output seems suboptimal when one of the operands is a constant
void handleRelationalComparisonExact(ICmpInst &I) {
  IRBuilder<> IRB(&I);
  Value *A = I.getOperand(0);
  Value *B = I.getOperand(1);
  Value *Sa = getShadow(A);
  Value *Sb = getShadow(B);

  // Get rid of pointers and vectors of pointers.
  // For ints (and vectors of ints), types of A and Sa match,
  // and this is a no-op.
  A = IRB.CreatePointerCast(A, Sa->getType());
  B = IRB.CreatePointerCast(B, Sb->getType());

  // Let [a0, a1] be the interval of possible values of A, taking into account
  // its undefined bits. Let [b0, b1] be the interval of possible values of B.
  // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
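  // For example (unsigned 4-bit values): if A = 0b1?10 (? = uninitialized),
  // then [a0, a1] = [0b1010, 0b1110], and the comparison is defined iff every
  // value in [a0, a1] compares against [b0, b1] the same way.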
  bool IsSigned = I.isSigned();
  Value *S1 = IRB.CreateICmp(I.getPredicate(),
                             getLowestPossibleValue(IRB, A, Sa, IsSigned),
                             getHighestPossibleValue(IRB, B, Sb, IsSigned));
  Value *S2 = IRB.CreateICmp(I.getPredicate(),
                             getHighestPossibleValue(IRB, A, Sa, IsSigned),
                             getLowestPossibleValue(IRB, B, Sb, IsSigned));
  Value *Si = IRB.CreateXor(S1, S2);
  setShadow(&I, Si);
  setOriginForNaryOp(I);
}

/// Instrument signed relational comparisons.
///
/// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
/// bit of the shadow. Everything else is delegated to handleShadowOr().
void handleSignedRelationalComparison(ICmpInst &I) {
  Constant *constOp;
  Value *op = nullptr;
  CmpInst::Predicate pre;
  if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
    op = I.getOperand(0);
    pre = I.getPredicate();
  } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
    op = I.getOperand(1);
    pre = I.getSwappedPredicate();
  } else {
    handleShadowOr(I);
    return;
  }

  if ((constOp->isNullValue() &&
       (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
      (constOp->isAllOnesValue() &&
       (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
    IRBuilder<> IRB(&I);
    Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                      "_msprop_icmp_s");
    setShadow(&I, Shadow);
    setOrigin(&I, getOrigin(op));
  } else {
    handleShadowOr(I);
  }
}

void visitICmpInst(ICmpInst &I) {
  if (!ClHandleICmp) {
    handleShadowOr(I);
    return;
  }
  if (I.isEquality()) {
    handleEqualityComparison(I);
    return;
  }

  assert(I.isRelational());
  if (ClHandleICmpExact) {
    handleRelationalComparisonExact(I);
    return;
  }
  if (I.isSigned()) {
    handleSignedRelationalComparison(I);
    return;
  }

  assert(I.isUnsigned());
  if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
    handleRelationalComparisonExact(I);
    return;
  }

  handleShadowOr(I);
}

void visitFCmpInst(FCmpInst &I) {
  handleShadowOr(I);
}

void handleShift(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // If any of the S2 bits are poisoned, the whole thing is poisoned.
  // Otherwise perform the same shift on S1.
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                 S2->getType());
  Value *V2 = I.getOperand(1);
  Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
  setOriginForNaryOp(I);
}

void visitShl(BinaryOperator &I) { handleShift(I); }
void visitAShr(BinaryOperator &I) { handleShift(I); }
void visitLShr(BinaryOperator &I) { handleShift(I); }

/// Instrument llvm.memmove
///
/// At this point we don't know if llvm.memmove will be inlined or not.
/// If we don't instrument it and it gets inlined,
/// our interceptor will not kick in and we will lose the memmove.
/// If we instrument the call here, but it does not get inlined,
/// we will memmove the shadow twice, which is bad in case
/// of overlapping regions. So, we simply lower the intrinsic to a call.
///
/// A similar situation exists for memcpy and memset.
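///
/// For example, a sketch of the lowering:
///   call void @llvm.memmove.p0i8.p0i8.i64(i8* %d, i8* %s, i64 %n, i1 false)
/// becomes a plain call to the runtime:
///   call i8* @__msan_memmove(i8* %d, i8* %s, i64 %n)
/// and the runtime moves the shadow (and origin) along with the data.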
void visitMemMoveInst(MemMoveInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemmoveFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

// Similar to memmove: avoid copying shadow twice.
// This is somewhat unfortunate as it may slow down small constant memcpys.
// FIXME: consider doing manual inline for small constant sizes and proper
// alignment.
void visitMemCpyInst(MemCpyInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemcpyFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

// Same as memcpy.
void visitMemSetInst(MemSetInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemsetFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

void visitVAStartInst(VAStartInst &I) {
  VAHelper->visitVAStartInst(I);
}

void visitVACopyInst(VACopyInst &I) {
  VAHelper->visitVACopyInst(I);
}

/// Handle vector store-like intrinsics.
///
/// Instrument intrinsics that look like a simple SIMD store: writes memory,
/// has 1 pointer argument and 1 vector argument, returns void.
bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value* Addr = I.getArgOperand(0);
  Value *Shadow = getShadow(&I, 1);
  Value *ShadowPtr, *OriginPtr;

  // We don't know the pointer alignment (could be unaligned SSE store!).
  // Have to assume the worst case.
  std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
      Addr, IRB, Shadow->getType(), Align(1), /*isStore*/ true);
  IRB.CreateAlignedStore(Shadow, ShadowPtr, Align(1));

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  // FIXME: factor out common code from materializeStores
  if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
  return true;
}

/// Handle vector load-like intrinsics.
///
/// Instrument intrinsics that look like a simple SIMD load: reads memory,
/// has 1 pointer argument, returns a vector.
bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *Addr = I.getArgOperand(0);

  Type *ShadowTy = getShadowTy(&I);
  Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
  if (PropagateShadow) {
    // We don't know the pointer alignment (could be unaligned SSE load!).
    // Have to assume the worst case.
2597 const Align Alignment = Align(1); 2598 std::tie(ShadowPtr, OriginPtr) = 2599 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 2600 setShadow(&I, 2601 IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld")); 2602 } else { 2603 setShadow(&I, getCleanShadow(&I)); 2604 } 2605 2606 if (ClCheckAccessAddress) 2607 insertShadowCheck(Addr, &I); 2608 2609 if (MS.TrackOrigins) { 2610 if (PropagateShadow) 2611 setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr)); 2612 else 2613 setOrigin(&I, getCleanOrigin()); 2614 } 2615 return true; 2616 } 2617 2618 /// Handle (SIMD arithmetic)-like intrinsics. 2619 /// 2620 /// Instrument intrinsics with any number of arguments of the same type, 2621 /// equal to the return type. The type should be simple (no aggregates or 2622 /// pointers; vectors are fine). 2623 /// Caller guarantees that this intrinsic does not access memory. 2624 bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) { 2625 Type *RetTy = I.getType(); 2626 if (!(RetTy->isIntOrIntVectorTy() || 2627 RetTy->isFPOrFPVectorTy() || 2628 RetTy->isX86_MMXTy())) 2629 return false; 2630 2631 unsigned NumArgOperands = I.getNumArgOperands(); 2632 2633 for (unsigned i = 0; i < NumArgOperands; ++i) { 2634 Type *Ty = I.getArgOperand(i)->getType(); 2635 if (Ty != RetTy) 2636 return false; 2637 } 2638 2639 IRBuilder<> IRB(&I); 2640 ShadowAndOriginCombiner SC(this, IRB); 2641 for (unsigned i = 0; i < NumArgOperands; ++i) 2642 SC.Add(I.getArgOperand(i)); 2643 SC.Done(&I); 2644 2645 return true; 2646 } 2647 2648 /// Heuristically instrument unknown intrinsics. 2649 /// 2650 /// The main purpose of this code is to do something reasonable with all 2651 /// random intrinsics we might encounter, most importantly - SIMD intrinsics. 2652 /// We recognize several classes of intrinsics by their argument types and 2653 /// ModRefBehaviour and apply special instrumentation when we are reasonably 2654 /// sure that we know what the intrinsic does. 2655 /// 2656 /// We special-case intrinsics where this approach fails. See llvm.bswap 2657 /// handling as an example of that. 2658 bool handleUnknownIntrinsic(IntrinsicInst &I) { 2659 unsigned NumArgOperands = I.getNumArgOperands(); 2660 if (NumArgOperands == 0) 2661 return false; 2662 2663 if (NumArgOperands == 2 && 2664 I.getArgOperand(0)->getType()->isPointerTy() && 2665 I.getArgOperand(1)->getType()->isVectorTy() && 2666 I.getType()->isVoidTy() && 2667 !I.onlyReadsMemory()) { 2668 // This looks like a vector store. 2669 return handleVectorStoreIntrinsic(I); 2670 } 2671 2672 if (NumArgOperands == 1 && 2673 I.getArgOperand(0)->getType()->isPointerTy() && 2674 I.getType()->isVectorTy() && 2675 I.onlyReadsMemory()) { 2676 // This looks like a vector load. 
    return handleVectorLoadIntrinsic(I);
  }

  if (I.doesNotAccessMemory())
    if (maybeHandleSimpleNomemIntrinsic(I))
      return true;

  // FIXME: detect and handle SSE maskstore/maskload
  return false;
}

void handleInvariantGroup(IntrinsicInst &I) {
  setShadow(&I, getShadow(&I, 0));
  setOrigin(&I, getOrigin(&I, 0));
}

void handleLifetimeStart(IntrinsicInst &I) {
  if (!PoisonStack)
    return;
  AllocaInst *AI = llvm::findAllocaForValue(I.getArgOperand(1));
  if (!AI)
    InstrumentLifetimeStart = false;
  LifetimeStartList.push_back(std::make_pair(&I, AI));
}

void handleBswap(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *Op = I.getArgOperand(0);
  Type *OpType = Op->getType();
  Function *BswapFunc = Intrinsic::getDeclaration(
      F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
  setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
  setOrigin(&I, getOrigin(Op));
}

// Instrument vector convert intrinsic.
//
// This function instruments intrinsics like cvtsi2ss:
// %Out = int_xxx_cvtyyy(%ConvertOp)
// or
// %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
// Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
// number of \p Out elements, and (if it has 2 arguments) copies the rest of
// the elements from \p CopyOp.
// In most cases the conversion involves a floating-point value, which may
// trigger a hardware exception when not fully initialized. For this reason
// we require \p ConvertOp[0:NumUsedElements] to be fully initialized and
// trap otherwise.
// We copy the shadow of \p CopyOp[NumUsedElements:] to \p
// Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
// return a fully initialized value.
void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
  IRBuilder<> IRB(&I);
  Value *CopyOp, *ConvertOp;

  switch (I.getNumArgOperands()) {
  case 3:
    assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
    LLVM_FALLTHROUGH;
  case 2:
    CopyOp = I.getArgOperand(0);
    ConvertOp = I.getArgOperand(1);
    break;
  case 1:
    ConvertOp = I.getArgOperand(0);
    CopyOp = nullptr;
    break;
  default:
    llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
  }

  // The first *NumUsedElements* elements of ConvertOp are converted to the
  // same number of output elements. The rest of the output is copied from
  // CopyOp, or (if not available) filled with zeroes.
  // Combine shadow for elements of ConvertOp that are used in this operation,
  // and insert a check.
  // FIXME: consider propagating shadow of ConvertOp, at least in the case of
  // int->any conversion.
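  // For example, for llvm.x86.sse2.cvtsd2si (a single operand, one used
  // element) this checks the shadow of element 0 of ConvertOp; since there
  // is no CopyOp, the resulting scalar shadow is fully clean.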
2754 Value *ConvertShadow = getShadow(ConvertOp); 2755 Value *AggShadow = nullptr; 2756 if (ConvertOp->getType()->isVectorTy()) { 2757 AggShadow = IRB.CreateExtractElement( 2758 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2759 for (int i = 1; i < NumUsedElements; ++i) { 2760 Value *MoreShadow = IRB.CreateExtractElement( 2761 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2762 AggShadow = IRB.CreateOr(AggShadow, MoreShadow); 2763 } 2764 } else { 2765 AggShadow = ConvertShadow; 2766 } 2767 assert(AggShadow->getType()->isIntegerTy()); 2768 insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I); 2769 2770 // Build result shadow by zero-filling parts of CopyOp shadow that come from 2771 // ConvertOp. 2772 if (CopyOp) { 2773 assert(CopyOp->getType() == I.getType()); 2774 assert(CopyOp->getType()->isVectorTy()); 2775 Value *ResultShadow = getShadow(CopyOp); 2776 Type *EltTy = cast<VectorType>(ResultShadow->getType())->getElementType(); 2777 for (int i = 0; i < NumUsedElements; ++i) { 2778 ResultShadow = IRB.CreateInsertElement( 2779 ResultShadow, ConstantInt::getNullValue(EltTy), 2780 ConstantInt::get(IRB.getInt32Ty(), i)); 2781 } 2782 setShadow(&I, ResultShadow); 2783 setOrigin(&I, getOrigin(CopyOp)); 2784 } else { 2785 setShadow(&I, getCleanShadow(&I)); 2786 setOrigin(&I, getCleanOrigin()); 2787 } 2788 } 2789 2790 // Given a scalar or vector, extract lower 64 bits (or less), and return all 2791 // zeroes if it is zero, and all ones otherwise. 2792 Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2793 if (S->getType()->isVectorTy()) 2794 S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true); 2795 assert(S->getType()->getPrimitiveSizeInBits() <= 64); 2796 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2797 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2798 } 2799 2800 // Given a vector, extract its first element, and return all 2801 // zeroes if it is zero, and all ones otherwise. 2802 Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2803 Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0); 2804 Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1)); 2805 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2806 } 2807 2808 Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) { 2809 Type *T = S->getType(); 2810 assert(T->isVectorTy()); 2811 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2812 return IRB.CreateSExt(S2, T); 2813 } 2814 2815 // Instrument vector shift intrinsic. 2816 // 2817 // This function instruments intrinsics like int_x86_avx2_psll_w. 2818 // Intrinsic shifts %In by %ShiftSize bits. 2819 // %ShiftSize may be a vector. In that case the lower 64 bits determine shift 2820 // size, and the rest is ignored. Behavior is defined even if shift size is 2821 // greater than register (or field) width. 2822 void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) { 2823 assert(I.getNumArgOperands() == 2); 2824 IRBuilder<> IRB(&I); 2825 // If any of the S2 bits are poisoned, the whole thing is poisoned. 2826 // Otherwise perform the same shift on S1. 2827 Value *S1 = getShadow(&I, 0); 2828 Value *S2 = getShadow(&I, 1); 2829 Value *S2Conv = Variable ? 
VariableShadowExtend(IRB, S2) 2830 : Lower64ShadowExtend(IRB, S2, getShadowTy(&I)); 2831 Value *V1 = I.getOperand(0); 2832 Value *V2 = I.getOperand(1); 2833 Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledOperand(), 2834 {IRB.CreateBitCast(S1, V1->getType()), V2}); 2835 Shift = IRB.CreateBitCast(Shift, getShadowTy(&I)); 2836 setShadow(&I, IRB.CreateOr(Shift, S2Conv)); 2837 setOriginForNaryOp(I); 2838 } 2839 2840 // Get an X86_MMX-sized vector type. 2841 Type *getMMXVectorTy(unsigned EltSizeInBits) { 2842 const unsigned X86_MMXSizeInBits = 64; 2843 assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 && 2844 "Illegal MMX vector element size"); 2845 return FixedVectorType::get(IntegerType::get(*MS.C, EltSizeInBits), 2846 X86_MMXSizeInBits / EltSizeInBits); 2847 } 2848 2849 // Returns a signed counterpart for an (un)signed-saturate-and-pack 2850 // intrinsic. 2851 Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) { 2852 switch (id) { 2853 case Intrinsic::x86_sse2_packsswb_128: 2854 case Intrinsic::x86_sse2_packuswb_128: 2855 return Intrinsic::x86_sse2_packsswb_128; 2856 2857 case Intrinsic::x86_sse2_packssdw_128: 2858 case Intrinsic::x86_sse41_packusdw: 2859 return Intrinsic::x86_sse2_packssdw_128; 2860 2861 case Intrinsic::x86_avx2_packsswb: 2862 case Intrinsic::x86_avx2_packuswb: 2863 return Intrinsic::x86_avx2_packsswb; 2864 2865 case Intrinsic::x86_avx2_packssdw: 2866 case Intrinsic::x86_avx2_packusdw: 2867 return Intrinsic::x86_avx2_packssdw; 2868 2869 case Intrinsic::x86_mmx_packsswb: 2870 case Intrinsic::x86_mmx_packuswb: 2871 return Intrinsic::x86_mmx_packsswb; 2872 2873 case Intrinsic::x86_mmx_packssdw: 2874 return Intrinsic::x86_mmx_packssdw; 2875 default: 2876 llvm_unreachable("unexpected intrinsic id"); 2877 } 2878 } 2879 2880 // Instrument vector pack intrinsic. 2881 // 2882 // This function instruments intrinsics like x86_mmx_packsswb, that 2883 // packs elements of 2 input vectors into half as many bits with saturation. 2884 // Shadow is propagated with the signed variant of the same intrinsic applied 2885 // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer). 2886 // EltSizeInBits is used only for x86mmx arguments. 2887 void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) { 2888 assert(I.getNumArgOperands() == 2); 2889 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2890 IRBuilder<> IRB(&I); 2891 Value *S1 = getShadow(&I, 0); 2892 Value *S2 = getShadow(&I, 1); 2893 assert(isX86_MMX || S1->getType()->isVectorTy()); 2894 2895 // SExt and ICmpNE below must apply to individual elements of input vectors. 2896 // In case of x86mmx arguments, cast them to appropriate vector types and 2897 // back. 2898 Type *T = isX86_MMX ? 
getMMXVectorTy(EltSizeInBits) : S1->getType(); 2899 if (isX86_MMX) { 2900 S1 = IRB.CreateBitCast(S1, T); 2901 S2 = IRB.CreateBitCast(S2, T); 2902 } 2903 Value *S1_ext = IRB.CreateSExt( 2904 IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T); 2905 Value *S2_ext = IRB.CreateSExt( 2906 IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T); 2907 if (isX86_MMX) { 2908 Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C); 2909 S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy); 2910 S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy); 2911 } 2912 2913 Function *ShadowFn = Intrinsic::getDeclaration( 2914 F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID())); 2915 2916 Value *S = 2917 IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack"); 2918 if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I)); 2919 setShadow(&I, S); 2920 setOriginForNaryOp(I); 2921 } 2922 2923 // Instrument sum-of-absolute-differences intrinsic. 2924 void handleVectorSadIntrinsic(IntrinsicInst &I) { 2925 const unsigned SignificantBitsPerResultElement = 16; 2926 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2927 Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType(); 2928 unsigned ZeroBitsPerResultElement = 2929 ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement; 2930 2931 IRBuilder<> IRB(&I); 2932 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2933 S = IRB.CreateBitCast(S, ResTy); 2934 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2935 ResTy); 2936 S = IRB.CreateLShr(S, ZeroBitsPerResultElement); 2937 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2938 setShadow(&I, S); 2939 setOriginForNaryOp(I); 2940 } 2941 2942 // Instrument multiply-add intrinsic. 2943 void handleVectorPmaddIntrinsic(IntrinsicInst &I, 2944 unsigned EltSizeInBits = 0) { 2945 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2946 Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType(); 2947 IRBuilder<> IRB(&I); 2948 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2949 S = IRB.CreateBitCast(S, ResTy); 2950 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2951 ResTy); 2952 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2953 setShadow(&I, S); 2954 setOriginForNaryOp(I); 2955 } 2956 2957 // Instrument compare-packed intrinsic. 2958 // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or 2959 // all-ones shadow. 2960 void handleVectorComparePackedIntrinsic(IntrinsicInst &I) { 2961 IRBuilder<> IRB(&I); 2962 Type *ResTy = getShadowTy(&I); 2963 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2964 Value *S = IRB.CreateSExt( 2965 IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy); 2966 setShadow(&I, S); 2967 setOriginForNaryOp(I); 2968 } 2969 2970 // Instrument compare-scalar intrinsic. 2971 // This handles both cmp* intrinsics which return the result in the first 2972 // element of a vector, and comi* which return the result as i32. 2973 void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) { 2974 IRBuilder<> IRB(&I); 2975 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2976 Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I)); 2977 setShadow(&I, S); 2978 setOriginForNaryOp(I); 2979 } 2980 2981 // Instrument generic vector reduction intrinsics 2982 // by ORing together all their fields. 
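// E.g. the shadow of llvm.experimental.vector.reduce.add(<4 x i32> %v) is
// the bitwise OR of the shadows of all four elements of %v.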
void handleVectorReduceIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *S = IRB.CreateOrReduce(getShadow(&I, 0));
  setShadow(&I, S);
  setOrigin(&I, getOrigin(&I, 0));
}

// Instrument experimental.vector.reduce.or intrinsic.
// Valid (non-poisoned) set bits in the operand pull low the
// corresponding shadow bits.
void handleVectorReduceOrIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *OperandShadow = getShadow(&I, 0);
  Value *OperandUnsetBits = IRB.CreateNot(I.getOperand(0));
  Value *OperandUnsetOrPoison = IRB.CreateOr(OperandUnsetBits, OperandShadow);
  // Bit N is clean if any field's bit N is 1 and unpoisoned.
  Value *OutShadowMask = IRB.CreateAndReduce(OperandUnsetOrPoison);
  // Otherwise, it is clean if every field's bit N is unpoisoned.
  Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
  Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

  setShadow(&I, S);
  setOrigin(&I, getOrigin(&I, 0));
}

// Instrument experimental.vector.reduce.and intrinsic.
// Valid (non-poisoned) unset bits in the operand pull down the
// corresponding shadow bits.
void handleVectorReduceAndIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *OperandShadow = getShadow(&I, 0);
  Value *OperandSetOrPoison = IRB.CreateOr(I.getOperand(0), OperandShadow);
  // Bit N is clean if any field's bit N is 0 and unpoisoned.
  Value *OutShadowMask = IRB.CreateAndReduce(OperandSetOrPoison);
  // Otherwise, it is clean if every field's bit N is unpoisoned.
  Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
  Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

  setShadow(&I, S);
  setOrigin(&I, getOrigin(&I, 0));
}

void handleStmxcsr(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value* Addr = I.getArgOperand(0);
  Type *Ty = IRB.getInt32Ty();
  Value *ShadowPtr =
      getShadowOriginPtr(Addr, IRB, Ty, Align(1), /*isStore*/ true).first;

  IRB.CreateStore(getCleanShadow(Ty),
                  IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);
}

void handleLdmxcsr(IntrinsicInst &I) {
  if (!InsertChecks) return;

  IRBuilder<> IRB(&I);
  Value *Addr = I.getArgOperand(0);
  Type *Ty = IRB.getInt32Ty();
  const Align Alignment = Align(1);
  Value *ShadowPtr, *OriginPtr;
  std::tie(ShadowPtr, OriginPtr) =
      getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
  Value *Origin = MS.TrackOrigins ?
  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    const Align Alignment(
        cast<ConstantInt>(I.getArgOperand(2))->getZExtValue());
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Alignment, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // Uninitialized mask is kind of like uninitialized address, but not as
      // scary.
      insertShadowCheck(Mask, &I);
    }

    IRB.CreateMaskedStore(Shadow, ShadowPtr, Alignment, Mask);

    if (MS.TrackOrigins) {
      auto &DL = F.getParent()->getDataLayout();
      paintOrigin(IRB, getOrigin(V), OriginPtr,
                  DL.getTypeStoreSize(Shadow->getType()),
                  std::max(Alignment, kMinOriginAlignment));
    }
  }

  bool handleMaskedLoad(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    const Align Alignment(
        cast<ConstantInt>(I.getArgOperand(1))->getZExtValue());
    Value *Mask = I.getArgOperand(2);
    Value *PassThru = I.getArgOperand(3);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Alignment, Mask,
                                         getShadow(PassThru), "_msmaskedld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      insertShadowCheck(Mask, &I);
    }

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        // Choose between PassThru's and the loaded value's origins.
        Value *MaskedPassThruShadow = IRB.CreateAnd(
            getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));

        Value *Acc = IRB.CreateExtractElement(
            MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
        for (int i = 1, N = cast<FixedVectorType>(PassThru->getType())
                                ->getNumElements();
             i < N; ++i) {
          Value *More = IRB.CreateExtractElement(
              MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
          Acc = IRB.CreateOr(Acc, More);
        }

        Value *Origin = IRB.CreateSelect(
            IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
            getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr));

        setOrigin(&I, Origin);
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
    return true;
  }
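  // Editorial example (a sketch, not from the original source): for pext,
  // every result bit is an operand bit selected by the mask, so applying pext
  // itself to the first operand's shadow picks out exactly the shadow bits of
  // the selected operand bits, while the sext(icmp) term below conservatively
  // poisons the whole result if any bit of the mask operand is poisoned.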
  // Instrument BMI / BMI2 intrinsics.
  // All of these intrinsics are Z = I(X, Y)
  // where the types of all operands and the result match, and are either i32
  // or i64.
  // The following instrumentation happens to work for all of them:
  //   Sz = I(Sx, Y) | (sext (Sy != 0))
  void handleBmiIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ShadowTy = getShadowTy(&I);

    // If any bit of the mask operand is poisoned, then the whole thing is.
    Value *SMask = getShadow(&I, 1);
    SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)),
                           ShadowTy);
    // Apply the same intrinsic to the shadow of the first operand.
    Value *S = IRB.CreateCall(I.getCalledFunction(),
                              {getShadow(&I, 0), I.getOperand(1)});
    S = IRB.CreateOr(SMask, S);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  SmallVector<int, 8> getPclmulMask(unsigned Width, bool OddElements) {
    SmallVector<int, 8> Mask;
    for (unsigned X = OddElements ? 1 : 0; X < Width; X += 2) {
      Mask.append(2, X);
    }
    return Mask;
  }

  // Instrument pclmul intrinsics.
  // These intrinsics operate either on odd or on even elements of the input
  // vectors, depending on the constant in the 3rd argument, ignoring the rest.
  // Replace the unused elements with copies of the used ones, ex:
  //   (0, 1, 2, 3) -> (0, 0, 2, 2) (even case)
  // or
  //   (0, 1, 2, 3) -> (1, 1, 3, 3) (odd case)
  // and then apply the usual shadow combining logic.
  void handlePclmulIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ShadowTy = getShadowTy(&I);
    unsigned Width =
        cast<FixedVectorType>(I.getArgOperand(0)->getType())->getNumElements();
    assert(isa<ConstantInt>(I.getArgOperand(2)) &&
           "pclmul 3rd operand must be a constant");
    unsigned Imm = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Shuf0 =
        IRB.CreateShuffleVector(getShadow(&I, 0), UndefValue::get(ShadowTy),
                                getPclmulMask(Width, Imm & 0x01));
    Value *Shuf1 =
        IRB.CreateShuffleVector(getShadow(&I, 1), UndefValue::get(ShadowTy),
                                getPclmulMask(Width, Imm & 0x10));
    ShadowAndOriginCombiner SOC(this, IRB);
    SOC.Add(Shuf0, getOrigin(&I, 0));
    SOC.Add(Shuf1, getOrigin(&I, 1));
    SOC.Done(&I);
  }

  // Instrument _mm_*_sd intrinsics
  void handleUnarySdIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *First = getShadow(&I, 0);
    Value *Second = getShadow(&I, 1);
    // High word of first operand, low word of second
    Value *Shadow =
        IRB.CreateShuffleVector(First, Second, llvm::makeArrayRef<int>({2, 1}));

    setShadow(&I, Shadow);
    setOriginForNaryOp(I);
  }

  void handleBinarySdIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *First = getShadow(&I, 0);
    Value *Second = getShadow(&I, 1);
    Value *OrShadow = IRB.CreateOr(First, Second);
    // High word of first operand, low word of both OR'd together
    Value *Shadow = IRB.CreateShuffleVector(First, OrShadow,
                                            llvm::makeArrayRef<int>({2, 1}));

    setShadow(&I, Shadow);
    setOriginForNaryOp(I);
  }
  void visitIntrinsicInst(IntrinsicInst &I) {
    switch (I.getIntrinsicID()) {
    case Intrinsic::lifetime_start:
      handleLifetimeStart(I);
      break;
    case Intrinsic::launder_invariant_group:
    case Intrinsic::strip_invariant_group:
      handleInvariantGroup(I);
      break;
    case Intrinsic::bswap:
      handleBswap(I);
      break;
    case Intrinsic::masked_store:
      handleMaskedStore(I);
      break;
    case Intrinsic::masked_load:
      handleMaskedLoad(I);
      break;
    case Intrinsic::experimental_vector_reduce_and:
      handleVectorReduceAndIntrinsic(I);
      break;
    case Intrinsic::experimental_vector_reduce_or:
      handleVectorReduceOrIntrinsic(I);
      break;
    case Intrinsic::experimental_vector_reduce_add:
    case Intrinsic::experimental_vector_reduce_xor:
    case Intrinsic::experimental_vector_reduce_mul:
      handleVectorReduceIntrinsic(I);
      break;
    case Intrinsic::x86_sse_stmxcsr:
      handleStmxcsr(I);
      break;
    case Intrinsic::x86_sse_ldmxcsr:
      handleLdmxcsr(I);
      break;
    case Intrinsic::x86_avx512_vcvtsd2usi64:
    case Intrinsic::x86_avx512_vcvtsd2usi32:
    case Intrinsic::x86_avx512_vcvtss2usi64:
    case Intrinsic::x86_avx512_vcvtss2usi32:
    case Intrinsic::x86_avx512_cvttss2usi64:
    case Intrinsic::x86_avx512_cvttss2usi:
    case Intrinsic::x86_avx512_cvttsd2usi64:
    case Intrinsic::x86_avx512_cvttsd2usi:
    case Intrinsic::x86_avx512_cvtusi2ss:
    case Intrinsic::x86_avx512_cvtusi642sd:
    case Intrinsic::x86_avx512_cvtusi642ss:
    case Intrinsic::x86_sse2_cvtsd2si64:
    case Intrinsic::x86_sse2_cvtsd2si:
    case Intrinsic::x86_sse2_cvtsd2ss:
    case Intrinsic::x86_sse2_cvttsd2si64:
    case Intrinsic::x86_sse2_cvttsd2si:
    case Intrinsic::x86_sse_cvtss2si64:
    case Intrinsic::x86_sse_cvtss2si:
    case Intrinsic::x86_sse_cvttss2si64:
    case Intrinsic::x86_sse_cvttss2si:
      handleVectorConvertIntrinsic(I, 1);
      break;
    case Intrinsic::x86_sse_cvtps2pi:
    case Intrinsic::x86_sse_cvttps2pi:
      handleVectorConvertIntrinsic(I, 2);
      break;

    case Intrinsic::x86_avx512_psll_w_512:
    case Intrinsic::x86_avx512_psll_d_512:
    case Intrinsic::x86_avx512_psll_q_512:
    case Intrinsic::x86_avx512_pslli_w_512:
    case Intrinsic::x86_avx512_pslli_d_512:
    case Intrinsic::x86_avx512_pslli_q_512:
    case Intrinsic::x86_avx512_psrl_w_512:
    case Intrinsic::x86_avx512_psrl_d_512:
    case Intrinsic::x86_avx512_psrl_q_512:
    case Intrinsic::x86_avx512_psra_w_512:
    case Intrinsic::x86_avx512_psra_d_512:
    case Intrinsic::x86_avx512_psra_q_512:
    case Intrinsic::x86_avx512_psrli_w_512:
    case Intrinsic::x86_avx512_psrli_d_512:
    case Intrinsic::x86_avx512_psrli_q_512:
    case Intrinsic::x86_avx512_psrai_w_512:
    case Intrinsic::x86_avx512_psrai_d_512:
    case Intrinsic::x86_avx512_psrai_q_512:
    case Intrinsic::x86_avx512_psra_q_256:
    case Intrinsic::x86_avx512_psra_q_128:
    case Intrinsic::x86_avx512_psrai_q_256:
    case Intrinsic::x86_avx512_psrai_q_128:
    case Intrinsic::x86_avx2_psll_w:
    case Intrinsic::x86_avx2_psll_d:
    case Intrinsic::x86_avx2_psll_q:
    case Intrinsic::x86_avx2_pslli_w:
    case Intrinsic::x86_avx2_pslli_d:
    case Intrinsic::x86_avx2_pslli_q:
    case Intrinsic::x86_avx2_psrl_w:
    case Intrinsic::x86_avx2_psrl_d:
    case Intrinsic::x86_avx2_psrl_q:
    case Intrinsic::x86_avx2_psra_w:
    case Intrinsic::x86_avx2_psra_d:
    case Intrinsic::x86_avx2_psrli_w:
    case Intrinsic::x86_avx2_psrli_d:
    case Intrinsic::x86_avx2_psrli_q:
    case Intrinsic::x86_avx2_psrai_w:
    case Intrinsic::x86_avx2_psrai_d:
    case Intrinsic::x86_sse2_psll_w:
    case Intrinsic::x86_sse2_psll_d:
    case Intrinsic::x86_sse2_psll_q:
    case Intrinsic::x86_sse2_pslli_w:
    case Intrinsic::x86_sse2_pslli_d:
    case Intrinsic::x86_sse2_pslli_q:
    case Intrinsic::x86_sse2_psrl_w:
    case Intrinsic::x86_sse2_psrl_d:
    case Intrinsic::x86_sse2_psrl_q:
    case Intrinsic::x86_sse2_psra_w:
    case Intrinsic::x86_sse2_psra_d:
    case Intrinsic::x86_sse2_psrli_w:
    case Intrinsic::x86_sse2_psrli_d:
    case Intrinsic::x86_sse2_psrli_q:
    case Intrinsic::x86_sse2_psrai_w:
    case Intrinsic::x86_sse2_psrai_d:
    case Intrinsic::x86_mmx_psll_w:
    case Intrinsic::x86_mmx_psll_d:
    case Intrinsic::x86_mmx_psll_q:
    case Intrinsic::x86_mmx_pslli_w:
    case Intrinsic::x86_mmx_pslli_d:
    case Intrinsic::x86_mmx_pslli_q:
    case Intrinsic::x86_mmx_psrl_w:
    case Intrinsic::x86_mmx_psrl_d:
    case Intrinsic::x86_mmx_psrl_q:
    case Intrinsic::x86_mmx_psra_w:
    case Intrinsic::x86_mmx_psra_d:
    case Intrinsic::x86_mmx_psrli_w:
    case Intrinsic::x86_mmx_psrli_d:
    case Intrinsic::x86_mmx_psrli_q:
    case Intrinsic::x86_mmx_psrai_w:
    case Intrinsic::x86_mmx_psrai_d:
      handleVectorShiftIntrinsic(I, /* Variable */ false);
      break;
    case Intrinsic::x86_avx2_psllv_d:
    case Intrinsic::x86_avx2_psllv_d_256:
    case Intrinsic::x86_avx512_psllv_d_512:
    case Intrinsic::x86_avx2_psllv_q:
    case Intrinsic::x86_avx2_psllv_q_256:
    case Intrinsic::x86_avx512_psllv_q_512:
    case Intrinsic::x86_avx2_psrlv_d:
    case Intrinsic::x86_avx2_psrlv_d_256:
    case Intrinsic::x86_avx512_psrlv_d_512:
    case Intrinsic::x86_avx2_psrlv_q:
    case Intrinsic::x86_avx2_psrlv_q_256:
    case Intrinsic::x86_avx512_psrlv_q_512:
    case Intrinsic::x86_avx2_psrav_d:
    case Intrinsic::x86_avx2_psrav_d_256:
    case Intrinsic::x86_avx512_psrav_d_512:
    case Intrinsic::x86_avx512_psrav_q_128:
    case Intrinsic::x86_avx512_psrav_q_256:
    case Intrinsic::x86_avx512_psrav_q_512:
      handleVectorShiftIntrinsic(I, /* Variable */ true);
      break;

    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse2_packuswb_128:
    case Intrinsic::x86_sse41_packusdw:
    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packuswb:
    case Intrinsic::x86_avx2_packusdw:
      handleVectorPackIntrinsic(I);
      break;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      handleVectorPackIntrinsic(I, 16);
      break;

    case Intrinsic::x86_mmx_packssdw:
      handleVectorPackIntrinsic(I, 32);
      break;

    case Intrinsic::x86_mmx_psad_bw:
    case Intrinsic::x86_sse2_psad_bw:
    case Intrinsic::x86_avx2_psad_bw:
      handleVectorSadIntrinsic(I);
      break;

    case Intrinsic::x86_sse2_pmadd_wd:
    case Intrinsic::x86_avx2_pmadd_wd:
    case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
    case Intrinsic::x86_avx2_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I);
      break;

    case Intrinsic::x86_ssse3_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I, 8);
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonably looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    case Intrinsic::x86_bmi_bextr_32:
    case Intrinsic::x86_bmi_bextr_64:
    case Intrinsic::x86_bmi_bzhi_32:
    case Intrinsic::x86_bmi_bzhi_64:
    case Intrinsic::x86_bmi_pdep_32:
    case Intrinsic::x86_bmi_pdep_64:
    case Intrinsic::x86_bmi_pext_32:
    case Intrinsic::x86_bmi_pext_64:
      handleBmiIntrinsic(I);
      break;

    case Intrinsic::x86_pclmulqdq:
    case Intrinsic::x86_pclmulqdq_256:
    case Intrinsic::x86_pclmulqdq_512:
      handlePclmulIntrinsic(I);
      break;

    case Intrinsic::x86_sse41_round_sd:
      handleUnarySdIntrinsic(I);
      break;
    case Intrinsic::x86_sse2_max_sd:
    case Intrinsic::x86_sse2_min_sd:
      handleBinarySdIntrinsic(I);
      break;

    case Intrinsic::is_constant:
      // The result of llvm.is.constant() is always defined.
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }

  void visitLibAtomicLoad(CallBase &CB) {
    IRBuilder<> IRB(&CB);
    Value *Size = CB.getArgOperand(0);
    Value *SrcPtr = CB.getArgOperand(1);
    Value *DstPtr = CB.getArgOperand(2);
    Value *Ordering = CB.getArgOperand(3);
    // Convert the call to have at least Acquire ordering to make sure
    // the shadow operations aren't reordered before it.
    Value *NewOrdering =
        IRB.CreateExtractElement(makeAddAcquireOrderingTable(IRB), Ordering);
    CB.setArgOperand(3, NewOrdering);

    IRBuilder<> NextIRB(CB.getNextNode());
    NextIRB.SetCurrentDebugLocation(CB.getDebugLoc());

    Value *SrcShadowPtr, *SrcOriginPtr;
    std::tie(SrcShadowPtr, SrcOriginPtr) =
        getShadowOriginPtr(SrcPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
                           /*isStore*/ false);
    Value *DstShadowPtr =
        getShadowOriginPtr(DstPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
                           /*isStore*/ true)
            .first;

    NextIRB.CreateMemCpy(DstShadowPtr, Align(1), SrcShadowPtr, Align(1), Size);
    if (MS.TrackOrigins) {
      Value *SrcOrigin = NextIRB.CreateAlignedLoad(MS.OriginTy, SrcOriginPtr,
                                                   kMinOriginAlignment);
      Value *NewOrigin = updateOrigin(SrcOrigin, NextIRB);
      NextIRB.CreateCall(MS.MsanSetOriginFn, {DstPtr, Size, NewOrigin});
    }
  }
  void visitLibAtomicStore(CallBase &CB) {
    IRBuilder<> IRB(&CB);
    Value *Size = CB.getArgOperand(0);
    Value *DstPtr = CB.getArgOperand(2);
    Value *Ordering = CB.getArgOperand(3);
    // Convert the call to have at least Release ordering to make sure
    // the shadow operations aren't reordered after it.
    Value *NewOrdering =
        IRB.CreateExtractElement(makeAddReleaseOrderingTable(IRB), Ordering);
    CB.setArgOperand(3, NewOrdering);

    Value *DstShadowPtr =
        getShadowOriginPtr(DstPtr, IRB, IRB.getInt8Ty(), Align(1),
                           /*isStore*/ true)
            .first;

    // Atomic store always paints clean shadow/origin. See file header.
    IRB.CreateMemSet(DstShadowPtr, getCleanShadow(IRB.getInt8Ty()), Size,
                     Align(1));
  }

  void visitCallBase(CallBase &CB) {
    assert(!CB.getMetadata("nosanitize"));
    if (CB.isInlineAsm()) {
      // For inline asm (either a call to asm function, or callbr instruction),
      // do the usual thing: check argument shadow and mark all outputs as
      // clean. Note that any side effects of the inline asm that are not
      // immediately visible in its constraints are not handled.
      if (ClHandleAsmConservative && MS.CompileKernel)
        visitAsmInstruction(CB);
      else
        visitInstruction(CB);
      return;
    }
    LibFunc LF;
    if (TLI->getLibFunc(CB, LF)) {
      // libatomic.a functions need to have special handling because there
      // isn't a good way to intercept them or compile the library with
      // instrumentation.
      switch (LF) {
      case LibFunc_atomic_load:
        visitLibAtomicLoad(CB);
        return;
      case LibFunc_atomic_store:
        visitLibAtomicStore(CB);
        return;
      default:
        break;
      }
    }

    if (auto *Call = dyn_cast<CallInst>(&CB)) {
      assert(!isa<IntrinsicInst>(Call) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us.
      // To prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
            .addAttribute(Attribute::ReadNone)
            .addAttribute(Attribute::WriteOnly)
            .addAttribute(Attribute::ArgMemOnly)
            .addAttribute(Attribute::Speculatable);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&CB);
    bool MayCheckCall = ClEagerChecks;
    if (Function *Func = CB.getCalledFunction()) {
      // __sanitizer_unaligned_{load,store} functions may be called by users
      // and always expect shadows in the TLS. So don't check them.
      MayCheckCall &= !Func->getName().startswith("__sanitizer_unaligned_");
    }

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << CB << "\n");
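    // Editorial note (a sketch): the loop below lays out argument shadows in
    // __msan_param_tls as consecutive slots aligned to 8 bytes; once the
    // running ArgOffset would overflow kParamTLSSize, the remaining arguments
    // simply get no shadow slots.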
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CB.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << CB << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the shadow for the arg even if it is ByVal, because in that
      // case getShadow() will copy the actual arg shadow to __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();

      bool ByVal = CB.paramHasAttr(i, Attribute::ByVal);
      bool NoUndef = CB.paramHasAttr(i, Attribute::NoUndef);
      bool EagerCheck = MayCheckCall && !ByVal && NoUndef;

      if (EagerCheck) {
        insertShadowCheck(A, &CB);
        continue;
      }
      if (ByVal) {
        // ByVal requires some special handling as it's too big for a single
        // load.
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(CB.getParamByValType(i));
        if (ArgOffset + Size > kParamTLSSize)
          break;
        const MaybeAlign ParamAlignment(CB.getParamAlign(i));
        MaybeAlign Alignment = llvm::None;
        if (ParamAlignment)
          Alignment = std::min(*ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        // Any other parameters mean we need bit-grained tracking of uninit
        // data.
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize)
          break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue())
          ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      (void)Store;
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << "  Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << "  done with call args\n");

    FunctionType *FT = CB.getFunctionType();
    if (FT->isVarArg()) {
      VAHelper->visitCallBase(CB, IRB);
    }
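    // Editorial note (a sketch): what follows implements the return-value
    // half of the TLS protocol: a clean shadow is stored to __msan_retval_tls
    // before the call, and the callee's (possibly updated) retval shadow is
    // loaded back from it right after the call returns.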
    // Now, get the shadow for the RetVal.
    if (!CB.getType()->isSized())
      return;
    // Don't emit the epilogue for musttail call returns.
    if (isa<CallInst>(CB) && cast<CallInst>(CB).isMustTailCall())
      return;

    if (MayCheckCall && CB.hasRetAttr(Attribute::NoUndef)) {
      setShadow(&CB, getCleanShadow(&CB));
      setOrigin(&CB, getCleanOrigin());
      return;
    }

    IRBuilder<> IRBBefore(&CB);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&CB, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&CB), Base,
                                 kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (isa<CallInst>(CB)) {
      NextInsn = ++CB.getIterator();
      assert(NextInsn != CB.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(CB).getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&CB, getCleanShadow(&CB));
        setOrigin(&CB, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow = IRBAfter.CreateAlignedLoad(
        getShadowTy(&CB), getShadowPtrForRetval(&CB, IRBAfter),
        kShadowTLSAlignment, "_msret");
    setShadow(&CB, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&CB, IRBAfter.CreateLoad(MS.OriginTy,
                                         getOriginPtrForRetval(IRBAfter)));
  }

  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }

  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    bool HasNoUndef =
        F.hasAttribute(AttributeList::ReturnIndex, Attribute::NoUndef);
    bool StoreShadow = !(ClEagerChecks && HasNoUndef);
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    bool EagerCheck = (ClEagerChecks && HasNoUndef) || (F.getName() == "main");

    Value *Shadow = getShadow(RetVal);
    bool StoreOrigin = true;
    if (EagerCheck) {
      insertShadowCheck(RetVal, &I);
      Shadow = getCleanShadow(RetVal);
      StoreOrigin = false;
    }

    // The caller may still expect information passed over TLS if we pass our
    // check.
    if (StoreShadow) {
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins && StoreOrigin)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }
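  // Editorial note (a sketch): the shadow and origin PHIs created above are
  // placeholders recorded in ShadowPHINodes; their incoming values are filled
  // in later, once the shadows of all predecessor blocks are known.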
  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by __msan_va_arg_overflow_size_tls at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) = getShadowOriginPtr(
          &I, IRB, IRB.getInt8Ty(), Align(1), /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len,
                       MaybeAlign(I.getAlignment()));
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }

  void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }

  void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) {
    if (!InsPoint)
      InsPoint = &I;
    IRBuilder<> IRB(InsPoint->getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      poisonAllocaKmsan(I, IRB, Len);
    else
      poisonAllocaUserspace(I, IRB, Len);
  }

  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    // We'll get to this alloca later unless it's poisoned at the
    // corresponding llvm.lifetime.start.
    AllocaSet.insert(&I);
  }
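  // Editorial example (a sketch, not from the original source) of the select
  // shadow rule used below: with a poisoned condition, c = 0b1010 and
  // d = 0b1110 both fully initialized, (c^d)|Sc|Sd = 0b0100, i.e. only the
  // single bit on which the two arms disagree is reported as poisoned.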
  void visitSelectInst(SelectInst &I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just
      // do an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are
      // equal and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd});
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }

  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }

  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << "   AggShadow: " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   ResShadow: " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }

  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << "   AggShadow: " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << "   InsShadow: " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   Res: " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }

  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }
  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check its value for being initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is
    // unsized). Each such pointer is instrumented with a call to the runtime
    // library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      auto *ST = dyn_cast<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of
    // the CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
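    // Illustrative mapping (editorial sketch, not from the original source):
    // for
    //   asm("..." : "=r"(a), "=m"(b) : "r"(c));
    // `a` (nR = 1) comes back as the CallInst's SSA result, while the operand
    // list is {&b, c, <inline asm callee>}, i.e. nO = 1 and nI = 1.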
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledOperand());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. Doing so before unpoisoning output arguments, so
    // that we won't overwrite uninit values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFreezeInst(FreezeInst &I) {
    // Freeze always returns a fully defined value.
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};

/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallBase for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value *arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
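  // Editorial sketch of the __msan_va_arg_tls layout implied by the constants
  // above: bytes [0, 48) shadow up to six GP registers (8 bytes each), bytes
  // [48, 176) shadow up to eight SSE registers (16 bytes each), and
  // everything past AMD64FpEndOffset mirrors the overflow (memory) area.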
  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = CB.getParamByValType(ArgNo);
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                   kShadowTLSAlignment, /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
        case AK_GeneralPurpose:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
          if (MS.TrackOrigins)
            OriginBase = getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
          GpOffset += 8;
          break;
        case AK_FloatingPoint:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
          if (MS.TrackOrigins)
            OriginBase = getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
          FpOffset += 16;
          break;
        case AK_Memory:
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
          OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }

  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }

  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy =
            IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(16);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(OverflowArgAreaPtrTy, 0));
      Value *OverflowArgAreaPtr =
          IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr,
                         Alignment, VAArgOverflowSize);
      }
    }
  }
};
/// MIPS64-specific implementation of VarArgHelper.
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin() + CB.getFunctionType()->getNumParams(),
              End = CB.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments with size < 8 to match the
        // placement of bits in a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base =
          getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here we use VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating a
    // new class member; it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
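  // Editorial note (a sketch): on big-endian mips64, a 4-byte argument
  // occupies the higher-addressed half of its 8-byte slot, which is why
  // visitCallBase above bumps the shadow offset by 8 - ArgSize bytes before
  // storing the shadow.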
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize =
        IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset =
      AArch64VrBegOffset + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;
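  // Editorial sketch of the va_arg TLS layout implied by these constants:
  // bytes [0, 64) shadow the eight GR argument registers, bytes [64, 192)
  // shadow the eight 16-byte VR registers, and the overflow (stack) area
  // starts at AArch64VAEndOffset.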
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value *arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64) ||
        (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non ABI-specific
  // format because it does not know which argument is named (since Clang,
  // as in the x86_64 case, lowers va_arg in the frontend and this pass only
  // sees the low level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers, and
  // then the remaining arguments.
  // Using a constant offset within the va_arg TLS array allows fast copy
  // in the finalize instrumentation.
  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value *getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                      ConstantInt::get(MS.IntptrTy, offset)),
        Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value *getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                      ConstantInt::get(MS.IntptrTy, offset)),
        Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);
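
    // For reference, the AAPCS64 va_list that getVAField64()/getVAField32()
    // read is laid out roughly as follows (a sketch of the ABI structure;
    // the byte offsets are the ones used in the loop below):
    //   struct va_list {
    //     void *__stack;   // offset  0: next stacked argument
    //     void *__gr_top;  // offset  8: end of the GP register save area
    //     void *__vr_top;  // offset 16: end of the FP/SIMD register save area
    //     int   __gr_offs; // offset 24: negative offset from __gr_top
    //     int   __vr_offs; // offset 28: negative offset from __vr_top
    //   };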

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general registers x0-x7 and
      // another for the 128-bit FP/SIMD registers v0-v7). We then need to
      // propagate the shadow arguments to both regions,
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments have their shadow propagated via 'va::stack'.
      // One caveat: only the non-named arguments need to be propagated, but
      // the call site instrumentation saved *all* of them. So when copying
      // the shadow values from the va_arg TLS array we adjust the offsets
      // for both the GR and VR fields by the __{gr,vr}_offs values, which
      // account for the incoming named arguments.

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_off and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_off and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // The instrumentation does not know how many named arguments are in
      // use, and at the call site all the arguments were saved. Since
      // __gr_offs is defined as '0 - ((8 - named_gr) * 8)', the idea is to
      // propagate only the variadic arguments by skipping the bytes of
      // shadow that belong to the named ones.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, Align(8), GrSrcPtr, Align(8),
                       GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, Align(8), VrSrcPtr, Align(8),
                       VrCopySize);
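
      // To make the offset arithmetic above concrete, consider a
      // hypothetical callee with two named GP arguments: at va_start,
      // __gr_offs == 0 - ((8 - 2) * 8) == -48, so GrRegSaveAreaShadowPtrOff
      // == 64 + (-48) == 16, i.e. the copy skips the 16 bytes of shadow
      // belonging to the named arguments, and GrCopySize == 64 - 16 == 48
      // covers the six remaining GP register slots. The VR case is
      // analogous, with 16-byte slots.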

      // And finally for the remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(16), /*isStore*/ true)
              .first;

      Value *StackSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(), VAArgTLSCopy, IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, Align(16), StackSrcPtr,
                       Align(16), VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays are
    // aligned to 16 bytes, and byvals can be aligned to 8 or 16 bytes.
    // For that reason, we compute the current offset from the stack pointer
    // (which is always properly aligned) and the offset of the first vararg,
    // then subtract them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // The parameter save area starts at 48 bytes from the frame pointer for
    // ABIv1, and at 32 bytes for ABIv2. This is usually determined by the
    // target endianness, but in theory it could be overridden by a function
    // attribute.
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = CB.getParamByValType(ArgNo);
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        MaybeAlign ArgAlign = CB.getParamAlign(ArgNo);
        if (!ArgAlign || *ArgAlign < Align(8))
          ArgAlign = Align(8);
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to the element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments smaller than 8 bytes to match
          // the placement of the bits in a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base,
                                   kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize =
        ConstantInt::get(IRB.getInt64Ty(), VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating a new
    // class member; here it holds the total size of all varargs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize =
        IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }
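
    // A note on the loop below (an assumption about the ABI, not stated in
    // this file): on PowerPC64 the va_list is a simple pointer into the
    // parameter save area, which is why the register save area pointer is
    // read directly from offset 0 of the va_list tag.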
    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// SystemZ-specific implementation of VarArgHelper.
struct VarArgSystemZHelper : public VarArgHelper {
  static const unsigned SystemZGpOffset = 16;
  static const unsigned SystemZGpEndOffset = 56;
  static const unsigned SystemZFpOffset = 128;
  static const unsigned SystemZFpEndOffset = 160;
  static const unsigned SystemZMaxVrArgs = 8;
  static const unsigned SystemZRegSaveAreaSize = 160;
  static const unsigned SystemZOverflowOffset = 160;
  static const unsigned SystemZVAListTagSize = 32;
  static const unsigned SystemZOverflowArgAreaPtrOffset = 16;
  static const unsigned SystemZRegSaveAreaPtrOffset = 24;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum class ArgKind {
    GeneralPurpose,
    FloatingPoint,
    Vector,
    Memory,
    Indirect,
  };

  enum class ShadowExtension { None, Zero, Sign };

  VarArgSystemZHelper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Type *T, bool IsSoftFloatABI) {
    // T is a SystemZABIInfo::classifyArgumentType() output, and there are
    // only a few possibilities of what it can be. In particular, enums,
    // single element structs and large types have already been taken care of.

    // Some i128 and fp128 arguments are converted to pointers only in the
    // back end.
    if (T->isIntegerTy(128) || T->isFP128Ty())
      return ArgKind::Indirect;
    if (T->isFloatingPointTy())
      return IsSoftFloatABI ? ArgKind::GeneralPurpose : ArgKind::FloatingPoint;
    if (T->isIntegerTy() || T->isPointerTy())
      return ArgKind::GeneralPurpose;
    if (T->isVectorTy())
      return ArgKind::Vector;
    return ArgKind::Memory;
  }
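
  // For reference, the s390x ELF ABI va_list described by the constants
  // above looks roughly like this (a sketch, not a normative definition):
  //   struct __va_list_tag {        // SystemZVAListTagSize == 32
  //     long __gpr;                 // offset  0: GP register args consumed
  //     long __fpr;                 // offset  8: FP register args consumed
  //     void *__overflow_arg_area;  // offset 16
  //     void *__reg_save_area;      // offset 24
  //   };
  // The 160-byte register save area holds the parameter GPRs r2-r6 at
  // offsets [16, 56) and the parameter FPRs f0/f2/f4/f6 at [128, 160),
  // which is where SystemZGpOffset and SystemZFpOffset come from.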

  ShadowExtension getShadowExtension(const CallBase &CB, unsigned ArgNo) {
    // ABI says: "One of the simple integer types no more than 64 bits wide.
    // ... If such an argument is shorter than 64 bits, replace it by a full
    // 64-bit integer representing the same number, using sign or zero
    // extension". Shadow for an integer argument has the same type as the
    // argument itself, so it can be sign or zero extended as well.
    bool ZExt = CB.paramHasAttr(ArgNo, Attribute::ZExt);
    bool SExt = CB.paramHasAttr(ArgNo, Attribute::SExt);
    if (ZExt) {
      assert(!SExt);
      return ShadowExtension::Zero;
    }
    if (SExt) {
      assert(!ZExt);
      return ShadowExtension::Sign;
    }
    return ShadowExtension::None;
  }

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    bool IsSoftFloatABI = CB.getCalledFunction()
                              ->getFnAttribute("use-soft-float")
                              .getValueAsString() == "true";
    unsigned GpOffset = SystemZGpOffset;
    unsigned FpOffset = SystemZFpOffset;
    unsigned VrIndex = 0;
    unsigned OverflowOffset = SystemZOverflowOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      // SystemZABIInfo does not produce ByVal parameters.
      assert(!CB.paramHasAttr(ArgNo, Attribute::ByVal));
      Type *T = A->getType();
      ArgKind AK = classifyArgument(T, IsSoftFloatABI);
      if (AK == ArgKind::Indirect) {
        T = PointerType::get(T, 0);
        AK = ArgKind::GeneralPurpose;
      }
      if (AK == ArgKind::GeneralPurpose && GpOffset >= SystemZGpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::FloatingPoint && FpOffset >= SystemZFpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::Vector && (VrIndex >= SystemZMaxVrArgs || !IsFixed))
        AK = ArgKind::Memory;
      Value *ShadowBase = nullptr;
      Value *OriginBase = nullptr;
      ShadowExtension SE = ShadowExtension::None;
      switch (AK) {
      case ArgKind::GeneralPurpose: {
        // Always keep track of GpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (GpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize = 0;
            if (SE == ShadowExtension::None) {
              uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
              assert(ArgAllocSize <= ArgSize);
              GapSize = ArgSize - ArgAllocSize;
            }
            ShadowBase = getShadowAddrForVAArgument(IRB, GpOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, GpOffset + GapSize);
          }
          GpOffset += ArgSize;
        } else {
          GpOffset = kParamTLSSize;
        }
        break;
      }
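      // A hypothetical example of the GapSize logic above: an i32 vararg
      // with neither zeroext nor signext has ArgAllocSize == 4, so
      // GapSize == 4 and its shadow is stored at GpOffset + 4. This mirrors
      // the right-aligned placement of sub-64-bit integers within an 8-byte
      // GPR slot on big-endian SystemZ.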
      case ArgKind::FloatingPoint: {
        // Always keep track of FpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (FpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            // PoP says: "A short floating-point datum requires only the
            // left-most 32 bit positions of a floating-point register".
            // Therefore, in contrast to AK_GeneralPurpose and AK_Memory,
            // don't extend shadow and don't mind the gap.
            ShadowBase = getShadowAddrForVAArgument(IRB, FpOffset);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, FpOffset);
          }
          FpOffset += ArgSize;
        } else {
          FpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::Vector: {
        // Keep track of VrIndex. No need to store shadow, since vector
        // varargs go through AK_Memory.
        assert(IsFixed);
        VrIndex++;
        break;
      }
      case ArgKind::Memory: {
        // Keep track of OverflowOffset and store shadow only for varargs.
        // Ignore fixed args, since we need to copy only the vararg portion
        // of the overflow area shadow.
        if (!IsFixed) {
          uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
          uint64_t ArgSize = alignTo(ArgAllocSize, 8);
          if (OverflowOffset + ArgSize <= kParamTLSSize) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize =
                SE == ShadowExtension::None ? ArgSize - ArgAllocSize : 0;
            ShadowBase =
                getShadowAddrForVAArgument(IRB, OverflowOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(IRB, OverflowOffset + GapSize);
            OverflowOffset += ArgSize;
          } else {
            OverflowOffset = kParamTLSSize;
          }
        }
        break;
      }
      case ArgKind::Indirect:
        llvm_unreachable("Indirect must be converted to GeneralPurpose");
      }
      if (ShadowBase == nullptr)
        continue;
      Value *Shadow = MSV.getShadow(A);
      if (SE != ShadowExtension::None)
        Shadow = MSV.CreateShadowCast(IRB, Shadow, IRB.getInt64Ty(),
                                      /*Signed*/ SE == ShadowExtension::Sign);
      ShadowBase = IRB.CreateIntToPtr(
          ShadowBase, PointerType::get(Shadow->getType(), 0), "_msarg_va_s");
      IRB.CreateStore(Shadow, ShadowBase);
      if (MS.TrackOrigins) {
        Value *Origin = MSV.getOrigin(A);
        unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
        MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                        kMinOriginAlignment);
      }
    }
    Constant *OverflowSize = ConstantInt::get(
        IRB.getInt64Ty(), OverflowOffset - SystemZOverflowOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  Value *getShadowAddrForVAArgument(IRBuilder<> &IRB, unsigned ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    return IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  }

  Value *getOriginPtrForVAArgument(IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     SystemZVAListTagSize, Alignment, false);
  }

  void visitVAStartInst(VAStartInst &I) override {
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override { unpoisonVAListTagForInst(I); }

  void copyRegSaveArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZRegSaveAreaPtrOffset)),
        PointerType::get(RegSaveAreaPtrTy, 0));
    Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
    Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
        MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    // TODO(iii): copy only fragments filled by visitCallBase()
    IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                     SystemZRegSaveAreaSize);
    if (MS.TrackOrigins)
      IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                       Alignment, SystemZRegSaveAreaSize);
  }

  void copyOverflowArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZOverflowArgAreaPtrOffset)),
        PointerType::get(OverflowArgAreaPtrTy, 0));
    Value *OverflowArgAreaPtr =
        IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
    Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
        MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                               Alignment, /*isStore*/ true);
    Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                           SystemZOverflowOffset);
    IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                     VAArgOverflowSize);
    if (MS.TrackOrigins) {
      SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                      SystemZOverflowOffset);
      IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
    }
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, SystemZOverflowOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t VaStartNo = 0, VaStartNum = VAStartInstrumentationList.size();
         VaStartNo < VaStartNum; VaStartNo++) {
      CallInst *OrigInst = VAStartInstrumentationList[VaStartNo];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      copyRegSaveArea(IRB, VAListTag);
      copyOverflowArea(IRB, VAListTag);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is only implemented on the platforms listed below; on
  // the remaining platforms the no-op helper is used, and false positives
  // are possible.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::systemz)
    return new VarArgSystemZHelper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone)
      .addAttribute(Attribute::WriteOnly)
      .addAttribute(Attribute::ArgMemOnly)
      .addAttribute(Attribute::Speculatable);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}