//===- MemorySanitizer.cpp - detector of uninitialized reads -------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for every
/// 4 bytes of application memory. Propagation of origins is basically a bunch
/// of "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely
/// in practice.
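///
/// For example, if bytes 0-1 of an aligned 4-byte region hold uninitialized
/// data from malloc A and bytes 2-3 from malloc B, the region's single
/// origin slot keeps whichever of the two origins was stored last, and a
/// report on any of the four bytes shows that origin.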
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1 byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store to two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, the shadow store and load are
/// correctly ordered such that the load will get either the value that was
/// stored, or some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store of a clean
/// shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics may only be visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_load(ptr, size) and
///   __msan_instrument_asm_store(ptr, size),
/// which defer the memory checking/unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with the main memory initialization.
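///
/// For example (illustrative only): for an asm() statement with an output
/// operand of type T*, the conservative mode emits
///   __msan_instrument_asm_store(ptr, sizeof(T))
/// before the statement; the true store size is unknown at compile time, so
/// only the first sizeof(T) bytes can be assumed to be written.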
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;
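
// 800 bytes give 100 8-byte shadow slots for arguments and return values.
// Argument shadow that does not fit is treated as fully initialized (see the
// "ParamTLS overflow" handling in MemorySanitizerVisitor::getShadow()).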

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins("msan-track-origins",
       cl::desc("Track origins (allocation sites) of poisoned memory"),
       cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
       cl::desc("keep going after reporting a UMR"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack("msan-poison-stack",
       cl::desc("poison uninitialized stack variables"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall("msan-poison-stack-with-call",
       cl::desc("poison uninitialized stack variables with a call"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClPoisonStackPattern("msan-poison-stack-pattern",
       cl::desc("poison uninitialized stack variables with the given pattern"),
       cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
       cl::desc("poison undef temps"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmp("msan-handle-icmp",
       cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmpExact("msan-handle-icmp-exact",
       cl::desc("exact handling of relational integer ICmp"),
       cl::Hidden, cl::init(false));

// When compiling the Linux kernel, we sometimes see false positives related
// to MSan being unable to understand that inline assembly calls may
// initialize local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false negatives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(false));

// This flag controls whether we check the shadow of the address operand of a
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress("msan-check-access-address",
       cl::desc("report accesses through a pointer which has poisoned shadow"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions("msan-dump-strict-instructions",
       cl::desc("print out instructions with default strict semantics"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a
// non-zero compile-time constant. For some unexplainable reason they were
// silently ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow("msan-check-constant-shadow",
       cl::desc("Insert checks for constant shadow values"),
       cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat("msan-with-comdat",
       cl::desc("Place MSan constructors in comdat sections"),
       cl::Hidden, cl::init(false));

// These options allow the user to specify custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<unsigned long long> ClAndMask("msan-and-mask",
       cl::desc("Define custom MSan AndMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClXorMask("msan-xor-mask",
       cl::desc("Define custom MSan XorMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClShadowBase("msan-shadow-base",
       cl::desc("Define custom MSan ShadowBase"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClOriginBase("msan-origin-base",
       cl::desc("Define custom MSan OriginBase"),
       cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
//   Offset = (Addr & ~AndMask) ^ XorMask
//   Shadow = ShadowBase + Offset
//   Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace
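
// For example, with the default (non-legacy) x86_64 Linux parameters below,
// only XorMask is used, so for an application address A:
//   Shadow(A) = A ^ 0x500000000000
//   Origin(A) = (A ^ 0x500000000000) + 0x100000000000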

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
  0x000080000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x000040000000,  // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
  0x400000000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x200000000000,  // OriginBase
#else
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x008000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x002000000000,  // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
  0xE00000000000,  // AndMask
  0x100000000000,  // XorMask
  0x080000000000,  // ShadowBase
  0x1C0000000000,  // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x06000000000,   // XorMask
  0,               // ShadowBase (not used)
  0x01000000000,   // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
  0x000180000000,  // AndMask
  0x000040000000,  // XorMask
  0x000020000000,  // ShadowBase
  0x000700000000,  // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
  0xc00000000000,  // AndMask
  0x200000000000,  // XorMask
  0x100000000000,  // ShadowBase
  0x380000000000,  // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
  0,               // AndMask
  0x500000000000,  // XorMask
  0,               // ShadowBase
  0x100000000000,  // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
  &Linux_I386_MemoryMapParams,
  &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
  nullptr,
  &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
  nullptr,
  &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
  nullptr,
  &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
  &FreeBSD_I386_MemoryMapParams,
  &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
  nullptr,
  &NetBSD_X86_64_MemoryMapParams,
};

namespace {

/// An instrumentation pass implementing detection of uninitialized reads.
///
/// MemorySanitizer: instrument the code in the module to find uninitialized
/// reads.
class MemorySanitizer : public FunctionPass {
public:
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizer(int TrackOrigins = 0, bool Recover = false,
                  bool EnableKmsan = false)
      : FunctionPass(ID) {
    this->CompileKernel =
        ClEnableKmsan.getNumOccurrences() > 0 ? ClEnableKmsan : EnableKmsan;
    if (ClTrackOrigins.getNumOccurrences() > 0)
      this->TrackOrigins = ClTrackOrigins;
    else
      this->TrackOrigins = this->CompileKernel ? 2 : TrackOrigins;
    this->Recover = ClKeepGoing.getNumOccurrences() > 0
                        ? ClKeepGoing
                        : (this->CompileKernel | Recover);
  }
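
  // Flag precedence, illustrated: an explicit -msan-track-origins=N on the
  // command line overrides the TrackOrigins constructor argument, and kernel
  // mode (-msan-kernel / EnableKmsan) defaults to TrackOrigins = 2 and
  // Recover = true unless the corresponding flags are given explicitly.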

  StringRef getPassName() const override { return "MemorySanitizer"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override;
  bool doInitialization(Module &M) override;

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;

  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and the per-task
  // state in KMSAN.
  // For userspace these point to thread-local globals. In kernel land they
  // point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  Value *WarningFn;

  // These arrays are indexed by log2(AccessSize).
  Value *MaybeWarningFn[kNumberOfAccessSizes];
  Value *MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  Value *MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  Value *MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  Value *MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  Value *MemmoveFn, *MemcpyFn, *MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  Value *MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  Value *MsanPoisonAllocaFn, *MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  Value *MsanMetadataPtrForLoadN, *MsanMetadataPtrForStoreN;
  Value *MsanMetadataPtrForLoad_1_8[4];
  Value *MsanMetadataPtrForStore_1_8[4];
  Value *MsanInstrumentAsmStoreFn, *MsanInstrumentAsmLoadFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  Value *getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};

} // end anonymous namespace

char MemorySanitizer::ID = 0;

INITIALIZE_PASS_BEGIN(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)

FunctionPass *llvm::createMemorySanitizerPass(int TrackOrigins, bool Recover,
                                              bool CompileKernel) {
  return new MemorySanitizer(TrackOrigins, Recover, CompileKernel);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
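  // The struct type below corresponds (roughly; the field names are
  // illustrative) to:
  //   struct kmsan_context_state {
  //     u64 param_tls[100];          // kParamTLSSize / 8 entries
  //     u64 retval_tls[100];         // kRetvalTLSSize / 8 entries
  //     u64 va_arg_tls[100];
  //     u64 va_arg_origin_tls[100];
  //     u64 va_arg_overflow_size_tls;
  //     u32 param_origin_tls[200];   // kParamTLSSize / 4 entries
  //     u32 retval_origin_tls;
  //     u32 origin_tls;
  //   };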
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state",
      PointerType::get(
          StructType::get(ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(),
                                         kParamTLSSize / 8), /* va_arg_origin */
                          IRB.getInt64Ty(),
                          ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
                          OriginTy),
          0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);
  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning"
                                    : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  // Create the global TLS variables.
  RetvalTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_retval_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  RetvalOriginTLS = new GlobalVariable(
      M, OriginTy, false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_retval_origin_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  ParamTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  ParamOriginTLS = new GlobalVariable(
      M, ArrayType::get(OriginTy, kParamTLSSize / 4), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_origin_tls",
      nullptr, GlobalVariable::InitialExecTLSModel);

  VAArgTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_va_arg_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  VAArgOriginTLS = new GlobalVariable(
      M, ArrayType::get(OriginTy, kParamTLSSize / 4), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_va_arg_origin_tls",
      nullptr, GlobalVariable::InitialExecTLSModel);

  VAArgOverflowSizeTLS = new GlobalVariable(
      M, IRB.getInt64Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_va_arg_overflow_size_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  OriginTLS = new GlobalVariable(
      M, IRB.getInt32Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_origin_tls", nullptr, GlobalVariable::InitialExecTLSModel);

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt8PtrTy(), IRB.getInt32Ty());
  }
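
  // The loop above declares, for each access size, helpers shaped like:
  //   void __msan_maybe_warning_4(u32 shadow, u32 origin);
  //   void __msan_maybe_store_origin_8(u64 shadow, void *addr, u32 origin);
  // (AccessSize = 1 << AccessSizeIndex is baked into the symbol name.)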

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn =
      M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert extern declarations of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn = M.getOrInsertFunction(
      "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn = M.getOrInsertFunction(
      "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn = M.getOrInsertFunction(
      "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt32Ty(), IntptrTy);
  // We insert an empty inline asm after __msan_report* to avoid callback
  // merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmLoadFn =
      M.getOrInsertFunction("__msan_instrument_asm_load", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);
  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

Value *MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore, int size) {
  Value **Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}
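
// For example, getKmsanShadowOriginAccessFn(/*isStore=*/true, 8) yields
// __msan_metadata_ptr_for_store_8. For unsupported sizes it returns null and
// the caller falls back to the __msan_metadata_ptr_for_{load,store}_n
// variants (see getShadowOriginPtrKernel()).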

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
bool MemorySanitizer::doInitialization(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    std::tie(MsanCtorFunction, std::ignore) =
        createSanitizerCtorAndInitFunctions(M, kMsanModuleCtorName,
                                            kMsanInitName,
                                            /*InitArgTypes=*/{},
                                            /*InitArgs=*/{});
    if (ClWithComdat) {
      Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
      MsanCtorFunction->setComdat(MsanCtorComdat);
      appendToGlobalCtors(M, MsanCtorFunction, 0, MsanCtorFunction);
    } else {
      appendToGlobalCtors(M, MsanCtorFunction, 0);
    }

    if (TrackOrigins)
      new GlobalVariable(M, IRB.getInt32Ty(), true,
                         GlobalValue::WeakODRLinkage,
                         IRB.getInt32(TrackOrigins), "__msan_track_origins");

    if (Recover)
      new GlobalVariable(M, IRB.getInt32Ty(), true,
                         GlobalValue::WeakODRLinkage, IRB.getInt32(Recover),
                         "__msan_keep_going");
  }
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8) return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
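
// e.g. 8 bits -> 0, 16 -> 1, 32 -> 2, 64 -> 3; odd widths round up to the
// next bucket (17 bits -> (17 + 7) / 8 = 3 bytes -> index 2, the 4-byte
// bucket).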

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if
/// it's non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value*, Value*> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");
    TLI = &MS.getAnalysis<TargetLibraryInfoWrapperPass>().getTLI();

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1) return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize) return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
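
  // On 64-bit targets originToIntptr() replicates the 32-bit origin into
  // both halves of an intptr-sized word (e.g. 0x12345678 becomes
  // 0x1234567812345678), letting paintOrigin() fill two origin slots per
  // store.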

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, unsigned Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrAlignment = DL.getABITypeAlignment(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    unsigned CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(nullptr, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, unsigned Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
      if (ConstantShadow) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
        Value *Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      unsigned Alignment = SI->getAlignment();
      unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }
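
  // For a plain `store i32 %v, i32* %p`, materializeStores() emits, in
  // essence:
  //   %sp = <shadow address of %p>
  //   store i32 <shadow of %v>, i32* %sp
  // followed by an origin update via storeOrigin() when origin tracking is
  // enabled.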

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.CompileKernel) {
      IRB.CreateCall(MS.WarningFn, Origin);
    } else {
      if (MS.TrackOrigins) {
        IRB.CreateStore(Origin, MS.OriginTLS);
      }
      IRB.CreateCall(MS.WarningFn, {});
    }
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
    if (ConstantShadow) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      Value *Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS = IRB.CreateGEP(
        ContextState, {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }
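
  // Note: the GEP field indices 0..6 above must stay in sync with the order
  // of the kmsan_context_state members declared in createKernelApi().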

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO) PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    bool InstrumentWithCalls =
        ClInstrumentationWithCallThreshold >= 0 &&
        InstrumentationList.size() + StoreList.size() >
            (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) {
    return getShadowTy(V->getType());
  }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type*, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
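
  // Shadow type examples: i32 -> i32, <4 x float> -> <4 x i32>,
  // [2 x double] -> [2 x i64], {i8, i64} -> {i8, i64}; a plain double falls
  // through to the last case and becomes i64.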

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy) return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *> getShadowOriginPtrUserspace(Value *Addr,
                                                          IRBuilder<> &IRB,
                                                          Type *ShadowTy,
                                                          unsigned Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong, ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr = IRB.CreateIntToPtr(OriginLong,
                                     PointerType::get(IRB.getInt32Ty(), 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *>
  getShadowOriginPtrKernel(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                           unsigned Alignment, bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    Value *Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr =
        IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 unsigned Alignment,
                                                 bool isStore) {
    std::pair<Value *, Value *> ret;
    if (MS.CompileKernel)
      ret = getShadowOriginPtrKernel(Addr, IRB, ShadowTy, Alignment, isStore);
    else
      ret = getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
    return ret;
  }

  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS + ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    if (!MS.TrackOrigins)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0),
                                 "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }

  /// Set SV to be the shadow value for V.
  void setShadow(Value *V, Value *SV) {
    assert(!ShadowMap.count(V) && "Values may only have one shadow");
    ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
  }

  /// Set Origin to be the origin value for V.
  void setOrigin(Value *V, Value *Origin) {
    if (!MS.TrackOrigins) return;
    assert(!OriginMap.count(V) && "Values may only have one origin");
    LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
    OriginMap[V] = Origin;
  }

  Constant *getCleanShadow(Type *OrigTy) {
    Type *ShadowTy = getShadowTy(OrigTy);
    if (!ShadowTy)
      return nullptr;
    return Constant::getNullValue(ShadowTy);
  }

  /// Create a clean shadow value for a given value.
  ///
  /// Clean shadow (all zeroes) means all bits of the value are defined
  /// (initialized).
  Constant *getCleanShadow(Value *V) {
    return getCleanShadow(V->getType());
  }

  /// Create a dirty shadow of a given shadow type.
  Constant *getPoisonedShadow(Type *ShadowTy) {
    assert(ShadowTy);
    if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
      return Constant::getAllOnesValue(ShadowTy);
    if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                      getPoisonedShadow(AT->getElementType()));
      return ConstantArray::get(AT, Vals);
    }
    if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
      return ConstantStruct::get(ST, Vals);
    }
    llvm_unreachable("Unexpected shadow type");
  }

  /// Create a dirty shadow for a given value.
  Constant *getPoisonedShadow(Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (!ShadowTy)
      return nullptr;
    return getPoisonedShadow(ShadowTy);
  }

  /// Create a clean (zero) origin.
  Value *getCleanOrigin() {
    return Constant::getNullValue(MS.OriginTy);
  }
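
  // e.g. for an i32 value, getCleanShadow() is `i32 0` (fully initialized)
  // and getPoisonedShadow() is `i32 -1` (every bit uninitialized).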
1498   ///
1499   /// This function either returns the value set earlier with setShadow,
1500   /// or extracts it from ParamTLS (for function arguments).
1501   Value *getShadow(Value *V) {
1502     if (!PropagateShadow) return getCleanShadow(V);
1503     if (Instruction *I = dyn_cast<Instruction>(V)) {
1504       if (I->getMetadata("nosanitize"))
1505         return getCleanShadow(V);
1506       // For instructions the shadow is already stored in the map.
1507       Value *Shadow = ShadowMap[V];
1508       if (!Shadow) {
1509         LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
1510         (void)I;
1511         assert(Shadow && "No shadow for a value");
1512       }
1513       return Shadow;
1514     }
1515     if (UndefValue *U = dyn_cast<UndefValue>(V)) {
1516       Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
1517       LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
1518       (void)U;
1519       return AllOnes;
1520     }
1521     if (Argument *A = dyn_cast<Argument>(V)) {
1522       // For arguments we compute the shadow on demand and store it in the map.
1523       Value **ShadowPtr = &ShadowMap[V];
1524       if (*ShadowPtr)
1525         return *ShadowPtr;
1526       Function *F = A->getParent();
1527       IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
1528       unsigned ArgOffset = 0;
1529       const DataLayout &DL = F->getParent()->getDataLayout();
1530       for (auto &FArg : F->args()) {
1531         if (!FArg.getType()->isSized()) {
1532           LLVM_DEBUG(dbgs() << "Arg is not sized\n");
1533           continue;
1534         }
1535         unsigned Size =
1536             FArg.hasByValAttr()
1537                 ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
1538                 : DL.getTypeAllocSize(FArg.getType());
1539         if (A == &FArg) {
1540           bool Overflow = ArgOffset + Size > kParamTLSSize;
1541           Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
1542           if (FArg.hasByValAttr()) {
1543             // ByVal pointer itself has clean shadow. We copy the actual
1544             // argument shadow to the underlying memory.
1545             // Figure out maximal valid memcpy alignment.
1546             unsigned ArgAlign = FArg.getParamAlignment();
1547             if (ArgAlign == 0) {
1548               Type *EltType = A->getType()->getPointerElementType();
1549               ArgAlign = DL.getABITypeAlignment(EltType);
1550             }
1551             Value *CpShadowPtr =
1552                 getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
1553                                    /*isStore*/ true)
1554                     .first;
1555             // TODO(glider): need to copy origins.
1556             if (Overflow) {
1557               // ParamTLS overflow.
1558               EntryIRB.CreateMemSet(
1559                   CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
1560                   Size, ArgAlign);
1561             } else {
1562               unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
1563               Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
1564                                                  CopyAlign, Size);
1565               LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n");
1566               (void)Cpy;
1567             }
1568             *ShadowPtr = getCleanShadow(V);
1569           } else {
1570             if (Overflow) {
1571               // ParamTLS overflow.
1572               *ShadowPtr = getCleanShadow(V);
1573             } else {
1574               *ShadowPtr =
1575                   EntryIRB.CreateAlignedLoad(Base, kShadowTLSAlignment);
1576             }
1577           }
1578           LLVM_DEBUG(dbgs()
1579                      << " ARG: " << FArg << " ==> " << **ShadowPtr << "\n");
1580           if (MS.TrackOrigins && !Overflow) {
1581             Value *OriginPtr =
1582                 getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
1583             setOrigin(A, EntryIRB.CreateLoad(OriginPtr));
1584           } else {
1585             setOrigin(A, getCleanOrigin());
1586           }
1587         }
1588         ArgOffset += alignTo(Size, kShadowTLSAlignment);
1589       }
1590       assert(*ShadowPtr && "Could not find shadow for an argument");
1591       return *ShadowPtr;
1592     }
1593     // For everything else the shadow is zero.
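    // (I.e. constants, globals and other non-instruction, non-argument values
    // are treated as fully initialized.)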
1594 return getCleanShadow(V); 1595 } 1596 1597 /// Get the shadow for i-th argument of the instruction I. 1598 Value *getShadow(Instruction *I, int i) { 1599 return getShadow(I->getOperand(i)); 1600 } 1601 1602 /// Get the origin for a value. 1603 Value *getOrigin(Value *V) { 1604 if (!MS.TrackOrigins) return nullptr; 1605 if (!PropagateShadow) return getCleanOrigin(); 1606 if (isa<Constant>(V)) return getCleanOrigin(); 1607 assert((isa<Instruction>(V) || isa<Argument>(V)) && 1608 "Unexpected value type in getOrigin()"); 1609 if (Instruction *I = dyn_cast<Instruction>(V)) { 1610 if (I->getMetadata("nosanitize")) 1611 return getCleanOrigin(); 1612 } 1613 Value *Origin = OriginMap[V]; 1614 assert(Origin && "Missing origin"); 1615 return Origin; 1616 } 1617 1618 /// Get the origin for i-th argument of the instruction I. 1619 Value *getOrigin(Instruction *I, int i) { 1620 return getOrigin(I->getOperand(i)); 1621 } 1622 1623 /// Remember the place where a shadow check should be inserted. 1624 /// 1625 /// This location will be later instrumented with a check that will print a 1626 /// UMR warning in runtime if the shadow value is not 0. 1627 void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) { 1628 assert(Shadow); 1629 if (!InsertChecks) return; 1630 #ifndef NDEBUG 1631 Type *ShadowTy = Shadow->getType(); 1632 assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) && 1633 "Can only insert checks for integer and vector shadow types"); 1634 #endif 1635 InstrumentationList.push_back( 1636 ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns)); 1637 } 1638 1639 /// Remember the place where a shadow check should be inserted. 1640 /// 1641 /// This location will be later instrumented with a check that will print a 1642 /// UMR warning in runtime if the value is not fully defined. 1643 void insertShadowCheck(Value *Val, Instruction *OrigIns) { 1644 assert(Val); 1645 Value *Shadow, *Origin; 1646 if (ClCheckConstantShadow) { 1647 Shadow = getShadow(Val); 1648 if (!Shadow) return; 1649 Origin = getOrigin(Val); 1650 } else { 1651 Shadow = dyn_cast_or_null<Instruction>(getShadow(Val)); 1652 if (!Shadow) return; 1653 Origin = dyn_cast_or_null<Instruction>(getOrigin(Val)); 1654 } 1655 insertShadowCheck(Shadow, Origin, OrigIns); 1656 } 1657 1658 AtomicOrdering addReleaseOrdering(AtomicOrdering a) { 1659 switch (a) { 1660 case AtomicOrdering::NotAtomic: 1661 return AtomicOrdering::NotAtomic; 1662 case AtomicOrdering::Unordered: 1663 case AtomicOrdering::Monotonic: 1664 case AtomicOrdering::Release: 1665 return AtomicOrdering::Release; 1666 case AtomicOrdering::Acquire: 1667 case AtomicOrdering::AcquireRelease: 1668 return AtomicOrdering::AcquireRelease; 1669 case AtomicOrdering::SequentiallyConsistent: 1670 return AtomicOrdering::SequentiallyConsistent; 1671 } 1672 llvm_unreachable("Unknown ordering"); 1673 } 1674 1675 AtomicOrdering addAcquireOrdering(AtomicOrdering a) { 1676 switch (a) { 1677 case AtomicOrdering::NotAtomic: 1678 return AtomicOrdering::NotAtomic; 1679 case AtomicOrdering::Unordered: 1680 case AtomicOrdering::Monotonic: 1681 case AtomicOrdering::Acquire: 1682 return AtomicOrdering::Acquire; 1683 case AtomicOrdering::Release: 1684 case AtomicOrdering::AcquireRelease: 1685 return AtomicOrdering::AcquireRelease; 1686 case AtomicOrdering::SequentiallyConsistent: 1687 return AtomicOrdering::SequentiallyConsistent; 1688 } 1689 llvm_unreachable("Unknown ordering"); 1690 } 1691 1692 // ------------------- Visitors. 
1693 using InstVisitor<MemorySanitizerVisitor>::visit; 1694 void visit(Instruction &I) { 1695 if (!I.getMetadata("nosanitize")) 1696 InstVisitor<MemorySanitizerVisitor>::visit(I); 1697 } 1698 1699 /// Instrument LoadInst 1700 /// 1701 /// Loads the corresponding shadow and (optionally) origin. 1702 /// Optionally, checks that the load address is fully defined. 1703 void visitLoadInst(LoadInst &I) { 1704 assert(I.getType()->isSized() && "Load type must have size"); 1705 assert(!I.getMetadata("nosanitize")); 1706 IRBuilder<> IRB(I.getNextNode()); 1707 Type *ShadowTy = getShadowTy(&I); 1708 Value *Addr = I.getPointerOperand(); 1709 Value *ShadowPtr, *OriginPtr; 1710 unsigned Alignment = I.getAlignment(); 1711 if (PropagateShadow) { 1712 std::tie(ShadowPtr, OriginPtr) = 1713 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 1714 setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld")); 1715 } else { 1716 setShadow(&I, getCleanShadow(&I)); 1717 } 1718 1719 if (ClCheckAccessAddress) 1720 insertShadowCheck(I.getPointerOperand(), &I); 1721 1722 if (I.isAtomic()) 1723 I.setOrdering(addAcquireOrdering(I.getOrdering())); 1724 1725 if (MS.TrackOrigins) { 1726 if (PropagateShadow) { 1727 unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment); 1728 setOrigin(&I, IRB.CreateAlignedLoad(OriginPtr, OriginAlignment)); 1729 } else { 1730 setOrigin(&I, getCleanOrigin()); 1731 } 1732 } 1733 } 1734 1735 /// Instrument StoreInst 1736 /// 1737 /// Stores the corresponding shadow and (optionally) origin. 1738 /// Optionally, checks that the store address is fully defined. 1739 void visitStoreInst(StoreInst &I) { 1740 StoreList.push_back(&I); 1741 if (ClCheckAccessAddress) 1742 insertShadowCheck(I.getPointerOperand(), &I); 1743 } 1744 1745 void handleCASOrRMW(Instruction &I) { 1746 assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I)); 1747 1748 IRBuilder<> IRB(&I); 1749 Value *Addr = I.getOperand(0); 1750 Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(), 1751 /*Alignment*/ 1, /*isStore*/ true) 1752 .first; 1753 1754 if (ClCheckAccessAddress) 1755 insertShadowCheck(Addr, &I); 1756 1757 // Only test the conditional argument of cmpxchg instruction. 1758 // The other argument can potentially be uninitialized, but we can not 1759 // detect this situation reliably without possible false positives. 1760 if (isa<AtomicCmpXchgInst>(I)) 1761 insertShadowCheck(I.getOperand(1), &I); 1762 1763 IRB.CreateStore(getCleanShadow(&I), ShadowPtr); 1764 1765 setShadow(&I, getCleanShadow(&I)); 1766 setOrigin(&I, getCleanOrigin()); 1767 } 1768 1769 void visitAtomicRMWInst(AtomicRMWInst &I) { 1770 handleCASOrRMW(I); 1771 I.setOrdering(addReleaseOrdering(I.getOrdering())); 1772 } 1773 1774 void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) { 1775 handleCASOrRMW(I); 1776 I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering())); 1777 } 1778 1779 // Vector manipulation. 
1780   void visitExtractElementInst(ExtractElementInst &I) {
1781     insertShadowCheck(I.getOperand(1), &I);
1782     IRBuilder<> IRB(&I);
1783     setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
1784                                            "_msprop"));
1785     setOrigin(&I, getOrigin(&I, 0));
1786   }
1787
1788   void visitInsertElementInst(InsertElementInst &I) {
1789     insertShadowCheck(I.getOperand(2), &I);
1790     IRBuilder<> IRB(&I);
1791     setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
1792                                           I.getOperand(2), "_msprop"));
1793     setOriginForNaryOp(I);
1794   }
1795
1796   void visitShuffleVectorInst(ShuffleVectorInst &I) {
1797     insertShadowCheck(I.getOperand(2), &I);
1798     IRBuilder<> IRB(&I);
1799     setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
1800                                           I.getOperand(2), "_msprop"));
1801     setOriginForNaryOp(I);
1802   }
1803
1804   // Casts.
1805   void visitSExtInst(SExtInst &I) {
1806     IRBuilder<> IRB(&I);
1807     setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
1808     setOrigin(&I, getOrigin(&I, 0));
1809   }
1810
1811   void visitZExtInst(ZExtInst &I) {
1812     IRBuilder<> IRB(&I);
1813     setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
1814     setOrigin(&I, getOrigin(&I, 0));
1815   }
1816
1817   void visitTruncInst(TruncInst &I) {
1818     IRBuilder<> IRB(&I);
1819     setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
1820     setOrigin(&I, getOrigin(&I, 0));
1821   }
1822
1823   void visitBitCastInst(BitCastInst &I) {
1824     // Special case: if this is the bitcast (there is exactly 1 allowed) between
1825     // a musttail call and a ret, don't instrument. New instructions are not
1826     // allowed after a musttail call.
1827     if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
1828       if (CI->isMustTailCall())
1829         return;
1830     IRBuilder<> IRB(&I);
1831     setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
1832     setOrigin(&I, getOrigin(&I, 0));
1833   }
1834
1835   void visitPtrToIntInst(PtrToIntInst &I) {
1836     IRBuilder<> IRB(&I);
1837     setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
1838                                     "_msprop_ptrtoint"));
1839     setOrigin(&I, getOrigin(&I, 0));
1840   }
1841
1842   void visitIntToPtrInst(IntToPtrInst &I) {
1843     IRBuilder<> IRB(&I);
1844     setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
1845                                     "_msprop_inttoptr"));
1846     setOrigin(&I, getOrigin(&I, 0));
1847   }
1848
1849   void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
1850   void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
1851   void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
1852   void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
1853   void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
1854   void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }
1855
1856   /// Propagate shadow for bitwise AND.
1857   ///
1858   /// This code is exact, i.e. if, for example, a bit in the left argument
1859   /// is defined and 0, then neither the value nor the definedness of the
1860   /// corresponding bit in B affects the resulting shadow.
1861   void visitAnd(BinaryOperator &I) {
1862     IRBuilder<> IRB(&I);
1863     // "And" of 0 and a poisoned value results in an unpoisoned value.
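    // In the truth tables below "p" denotes a poisoned (uninitialized) bit: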
1864     // 1&1 => 1; 0&1 => 0; p&1 => p;
1865     // 1&0 => 0; 0&0 => 0; p&0 => 0;
1866     // 1&p => p; 0&p => 0; p&p => p;
1867     // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
1868     Value *S1 = getShadow(&I, 0);
1869     Value *S2 = getShadow(&I, 1);
1870     Value *V1 = I.getOperand(0);
1871     Value *V2 = I.getOperand(1);
1872     if (V1->getType() != S1->getType()) {
1873       V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1874       V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1875     }
1876     Value *S1S2 = IRB.CreateAnd(S1, S2);
1877     Value *V1S2 = IRB.CreateAnd(V1, S2);
1878     Value *S1V2 = IRB.CreateAnd(S1, V2);
1879     setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
1880     setOriginForNaryOp(I);
1881   }
1882
1883   void visitOr(BinaryOperator &I) {
1884     IRBuilder<> IRB(&I);
1885     // "Or" of 1 and a poisoned value results in an unpoisoned value.
1886     // 1|1 => 1; 0|1 => 1; p|1 => 1;
1887     // 1|0 => 1; 0|0 => 0; p|0 => p;
1888     // 1|p => 1; 0|p => p; p|p => p;
1889     // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
1890     Value *S1 = getShadow(&I, 0);
1891     Value *S2 = getShadow(&I, 1);
1892     Value *V1 = IRB.CreateNot(I.getOperand(0));
1893     Value *V2 = IRB.CreateNot(I.getOperand(1));
1894     if (V1->getType() != S1->getType()) {
1895       V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1896       V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1897     }
1898     Value *S1S2 = IRB.CreateAnd(S1, S2);
1899     Value *V1S2 = IRB.CreateAnd(V1, S2);
1900     Value *S1V2 = IRB.CreateAnd(S1, V2);
1901     setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
1902     setOriginForNaryOp(I);
1903   }
1904
1905   /// Default propagation of shadow and/or origin.
1906   ///
1907   /// This class implements the general case of shadow propagation, used in all
1908   /// cases where we don't know and/or don't care about what the operation
1909   /// actually does. It converts all input shadow values to a common type
1910   /// (extending or truncating as necessary), and bitwise OR's them.
1911   ///
1912   /// This is much cheaper than inserting checks (i.e. requiring inputs to be
1913   /// fully initialized), and less prone to false positives.
1914   ///
1915   /// This class also implements the general case of origin propagation. For a
1916   /// Nary operation, result origin is set to the origin of an argument that is
1917   /// not entirely initialized. If there is more than one such argument, the
1918   /// rightmost of them is picked. It does not matter which one is picked if all
1919   /// arguments are initialized.
1920   template <bool CombineShadow>
1921   class Combiner {
1922     Value *Shadow = nullptr;
1923     Value *Origin = nullptr;
1924     IRBuilder<> &IRB;
1925     MemorySanitizerVisitor *MSV;
1926
1927   public:
1928     Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
1929         : IRB(IRB), MSV(MSV) {}
1930
1931     /// Add a pair of shadow and origin values to the mix.
1932     Combiner &Add(Value *OpShadow, Value *OpOrigin) {
1933       if (CombineShadow) {
1934         assert(OpShadow);
1935         if (!Shadow)
1936           Shadow = OpShadow;
1937         else {
1938           OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
1939           Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
1940         }
1941       }
1942
1943       if (MSV->MS.TrackOrigins) {
1944         assert(OpOrigin);
1945         if (!Origin) {
1946           Origin = OpOrigin;
1947         } else {
1948           Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
1949           // No point in adding something that might result in 0 origin value.
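          // (A zero origin carries no allocation information, so there would
          // be nothing useful to report even if that operand were dirty.)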
1950 if (!ConstOrigin || !ConstOrigin->isNullValue()) { 1951 Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB); 1952 Value *Cond = 1953 IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow)); 1954 Origin = IRB.CreateSelect(Cond, OpOrigin, Origin); 1955 } 1956 } 1957 } 1958 return *this; 1959 } 1960 1961 /// Add an application value to the mix. 1962 Combiner &Add(Value *V) { 1963 Value *OpShadow = MSV->getShadow(V); 1964 Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr; 1965 return Add(OpShadow, OpOrigin); 1966 } 1967 1968 /// Set the current combined values as the given instruction's shadow 1969 /// and origin. 1970 void Done(Instruction *I) { 1971 if (CombineShadow) { 1972 assert(Shadow); 1973 Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I)); 1974 MSV->setShadow(I, Shadow); 1975 } 1976 if (MSV->MS.TrackOrigins) { 1977 assert(Origin); 1978 MSV->setOrigin(I, Origin); 1979 } 1980 } 1981 }; 1982 1983 using ShadowAndOriginCombiner = Combiner<true>; 1984 using OriginCombiner = Combiner<false>; 1985 1986 /// Propagate origin for arbitrary operation. 1987 void setOriginForNaryOp(Instruction &I) { 1988 if (!MS.TrackOrigins) return; 1989 IRBuilder<> IRB(&I); 1990 OriginCombiner OC(this, IRB); 1991 for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI) 1992 OC.Add(OI->get()); 1993 OC.Done(&I); 1994 } 1995 1996 size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) { 1997 assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) && 1998 "Vector of pointers is not a valid shadow type"); 1999 return Ty->isVectorTy() ? 2000 Ty->getVectorNumElements() * Ty->getScalarSizeInBits() : 2001 Ty->getPrimitiveSizeInBits(); 2002 } 2003 2004 /// Cast between two shadow types, extending or truncating as 2005 /// necessary. 2006 Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy, 2007 bool Signed = false) { 2008 Type *srcTy = V->getType(); 2009 size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy); 2010 size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy); 2011 if (srcSizeInBits > 1 && dstSizeInBits == 1) 2012 return IRB.CreateICmpNE(V, getCleanShadow(V)); 2013 2014 if (dstTy->isIntegerTy() && srcTy->isIntegerTy()) 2015 return IRB.CreateIntCast(V, dstTy, Signed); 2016 if (dstTy->isVectorTy() && srcTy->isVectorTy() && 2017 dstTy->getVectorNumElements() == srcTy->getVectorNumElements()) 2018 return IRB.CreateIntCast(V, dstTy, Signed); 2019 Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits)); 2020 Value *V2 = 2021 IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed); 2022 return IRB.CreateBitCast(V2, dstTy); 2023 // TODO: handle struct types. 2024 } 2025 2026 /// Cast an application value to the type of its own shadow. 2027 Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) { 2028 Type *ShadowTy = getShadowTy(V); 2029 if (V->getType() == ShadowTy) 2030 return V; 2031 if (V->getType()->isPtrOrPtrVectorTy()) 2032 return IRB.CreatePtrToInt(V, ShadowTy); 2033 else 2034 return IRB.CreateBitCast(V, ShadowTy); 2035 } 2036 2037 /// Propagate shadow for arbitrary operation. 2038 void handleShadowOr(Instruction &I) { 2039 IRBuilder<> IRB(&I); 2040 ShadowAndOriginCombiner SC(this, IRB); 2041 for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI) 2042 SC.Add(OI->get()); 2043 SC.Done(&I); 2044 } 2045 2046 // Handle multiplication by constant. 2047 // 2048 // Handle a special case of multiplication by constant that may have one or 2049 // more zeros in the lower bits. 
This makes the corresponding number of lower bits
2050   // of the result zero as well. We model it by shifting the other operand
2051   // shadow left by the required number of bits. Effectively, we transform
2052   // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
2053   // We use multiplication by 2**N instead of shift to cover the case of
2054   // multiplication by 0, which may occur in some elements of a vector operand.
2055   void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
2056                            Value *OtherArg) {
2057     Constant *ShadowMul;
2058     Type *Ty = ConstArg->getType();
2059     if (Ty->isVectorTy()) {
2060       unsigned NumElements = Ty->getVectorNumElements();
2061       Type *EltTy = Ty->getSequentialElementType();
2062       SmallVector<Constant *, 16> Elements;
2063       for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
2064         if (ConstantInt *Elt =
2065                 dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
2066           const APInt &V = Elt->getValue();
2067           APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
2068           Elements.push_back(ConstantInt::get(EltTy, V2));
2069         } else {
2070           Elements.push_back(ConstantInt::get(EltTy, 1));
2071         }
2072       }
2073       ShadowMul = ConstantVector::get(Elements);
2074     } else {
2075       if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
2076         const APInt &V = Elt->getValue();
2077         APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
2078         ShadowMul = ConstantInt::get(Ty, V2);
2079       } else {
2080         ShadowMul = ConstantInt::get(Ty, 1);
2081       }
2082     }
2083
2084     IRBuilder<> IRB(&I);
2085     setShadow(&I,
2086               IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
2087     setOrigin(&I, getOrigin(OtherArg));
2088   }
2089
2090   void visitMul(BinaryOperator &I) {
2091     Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
2092     Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
2093     if (constOp0 && !constOp1)
2094       handleMulByConstant(I, constOp0, I.getOperand(1));
2095     else if (constOp1 && !constOp0)
2096       handleMulByConstant(I, constOp1, I.getOperand(0));
2097     else
2098       handleShadowOr(I);
2099   }
2100
2101   void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
2102   void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
2103   void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
2104   void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
2105   void visitSub(BinaryOperator &I) { handleShadowOr(I); }
2106   void visitXor(BinaryOperator &I) { handleShadowOr(I); }
2107
2108   void handleIntegerDiv(Instruction &I) {
2109     IRBuilder<> IRB(&I);
2110     // Strict on the second argument.
2111     insertShadowCheck(I.getOperand(1), &I);
2112     setShadow(&I, getShadow(&I, 0));
2113     setOrigin(&I, getOrigin(&I, 0));
2114   }
2115
2116   void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
2117   void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
2118   void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
2119   void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }
2120
2121   // Floating point division is side-effect free. We cannot require that the
2122   // divisor is fully initialized and must propagate shadow. See PR37523.
2123   void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
2124   void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
2125
2126   /// Instrument == and != comparisons.
2127   ///
2128   /// Sometimes the comparison result is known even if some of the bits of the
2129   /// arguments are not.
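  /// For example (illustrative), 0b1?0? == 0b0000 is known to be false: the
  /// defined 1 bit of the left operand decides the comparison no matter what
  /// the two uninitialized bits turn out to be.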
2130 void handleEqualityComparison(ICmpInst &I) { 2131 IRBuilder<> IRB(&I); 2132 Value *A = I.getOperand(0); 2133 Value *B = I.getOperand(1); 2134 Value *Sa = getShadow(A); 2135 Value *Sb = getShadow(B); 2136 2137 // Get rid of pointers and vectors of pointers. 2138 // For ints (and vectors of ints), types of A and Sa match, 2139 // and this is a no-op. 2140 A = IRB.CreatePointerCast(A, Sa->getType()); 2141 B = IRB.CreatePointerCast(B, Sb->getType()); 2142 2143 // A == B <==> (C = A^B) == 0 2144 // A != B <==> (C = A^B) != 0 2145 // Sc = Sa | Sb 2146 Value *C = IRB.CreateXor(A, B); 2147 Value *Sc = IRB.CreateOr(Sa, Sb); 2148 // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now) 2149 // Result is defined if one of the following is true 2150 // * there is a defined 1 bit in C 2151 // * C is fully defined 2152 // Si = !(C & ~Sc) && Sc 2153 Value *Zero = Constant::getNullValue(Sc->getType()); 2154 Value *MinusOne = Constant::getAllOnesValue(Sc->getType()); 2155 Value *Si = 2156 IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero), 2157 IRB.CreateICmpEQ( 2158 IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero)); 2159 Si->setName("_msprop_icmp"); 2160 setShadow(&I, Si); 2161 setOriginForNaryOp(I); 2162 } 2163 2164 /// Build the lowest possible value of V, taking into account V's 2165 /// uninitialized bits. 2166 Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2167 bool isSigned) { 2168 if (isSigned) { 2169 // Split shadow into sign bit and other bits. 2170 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2171 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2172 // Maximise the undefined shadow bit, minimize other undefined bits. 2173 return 2174 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit); 2175 } else { 2176 // Minimize undefined bits. 2177 return IRB.CreateAnd(A, IRB.CreateNot(Sa)); 2178 } 2179 } 2180 2181 /// Build the highest possible value of V, taking into account V's 2182 /// uninitialized bits. 2183 Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2184 bool isSigned) { 2185 if (isSigned) { 2186 // Split shadow into sign bit and other bits. 2187 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2188 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2189 // Minimise the undefined shadow bit, maximise other undefined bits. 2190 return 2191 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits); 2192 } else { 2193 // Maximize undefined bits. 2194 return IRB.CreateOr(A, Sa); 2195 } 2196 } 2197 2198 /// Instrument relational comparisons. 2199 /// 2200 /// This function does exact shadow propagation for all relational 2201 /// comparisons of integers, pointers and vectors of those. 2202 /// FIXME: output seems suboptimal when one of the operands is a constant 2203 void handleRelationalComparisonExact(ICmpInst &I) { 2204 IRBuilder<> IRB(&I); 2205 Value *A = I.getOperand(0); 2206 Value *B = I.getOperand(1); 2207 Value *Sa = getShadow(A); 2208 Value *Sb = getShadow(B); 2209 2210 // Get rid of pointers and vectors of pointers. 2211 // For ints (and vectors of ints), types of A and Sa match, 2212 // and this is a no-op. 2213 A = IRB.CreatePointerCast(A, Sa->getType()); 2214 B = IRB.CreatePointerCast(B, Sb->getType()); 2215 2216 // Let [a0, a1] be the interval of possible values of A, taking into account 2217 // its undefined bits. Let [b0, b1] be the interval of possible values of B. 2218 // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0). 
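    // Illustrative scalar model for the unsigned case (not part of the pass):
    //   A0 = A & ~Sa;  A1 = A | Sa;     // lowest/highest possible A
    //   B0 = B & ~Sb;  B1 = B | Sb;     // lowest/highest possible B
    //   Si = (A0 cmp B1) ^ (A1 cmp B0); // poisoned iff the bounds disagree
    // E.g. with 4-bit values, A = 0b10?? (Sa = 0b0011) and B = 0b0100:
    // A0 = 8, A1 = 11, B0 = B1 = 4, so A > B is fully defined (true) despite
    // the two uninitialized low bits of A.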
2219     bool IsSigned = I.isSigned();
2220     Value *S1 = IRB.CreateICmp(I.getPredicate(),
2221                                getLowestPossibleValue(IRB, A, Sa, IsSigned),
2222                                getHighestPossibleValue(IRB, B, Sb, IsSigned));
2223     Value *S2 = IRB.CreateICmp(I.getPredicate(),
2224                                getHighestPossibleValue(IRB, A, Sa, IsSigned),
2225                                getLowestPossibleValue(IRB, B, Sb, IsSigned));
2226     Value *Si = IRB.CreateXor(S1, S2);
2227     setShadow(&I, Si);
2228     setOriginForNaryOp(I);
2229   }
2230
2231   /// Instrument signed relational comparisons.
2232   ///
2233   /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
2234   /// bit of the shadow. Everything else is delegated to handleShadowOr().
2235   void handleSignedRelationalComparison(ICmpInst &I) {
2236     Constant *constOp;
2237     Value *op = nullptr;
2238     CmpInst::Predicate pre;
2239     if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
2240       op = I.getOperand(0);
2241       pre = I.getPredicate();
2242     } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
2243       op = I.getOperand(1);
2244       pre = I.getSwappedPredicate();
2245     } else {
2246       handleShadowOr(I);
2247       return;
2248     }
2249
2250     if ((constOp->isNullValue() &&
2251          (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
2252         (constOp->isAllOnesValue() &&
2253          (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
2254       IRBuilder<> IRB(&I);
2255       Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
2256                                         "_msprop_icmp_s");
2257       setShadow(&I, Shadow);
2258       setOrigin(&I, getOrigin(op));
2259     } else {
2260       handleShadowOr(I);
2261     }
2262   }
2263
2264   void visitICmpInst(ICmpInst &I) {
2265     if (!ClHandleICmp) {
2266       handleShadowOr(I);
2267       return;
2268     }
2269     if (I.isEquality()) {
2270       handleEqualityComparison(I);
2271       return;
2272     }
2273
2274     assert(I.isRelational());
2275     if (ClHandleICmpExact) {
2276       handleRelationalComparisonExact(I);
2277       return;
2278     }
2279     if (I.isSigned()) {
2280       handleSignedRelationalComparison(I);
2281       return;
2282     }
2283
2284     assert(I.isUnsigned());
2285     if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
2286       handleRelationalComparisonExact(I);
2287       return;
2288     }
2289
2290     handleShadowOr(I);
2291   }
2292
2293   void visitFCmpInst(FCmpInst &I) {
2294     handleShadowOr(I);
2295   }
2296
2297   void handleShift(BinaryOperator &I) {
2298     IRBuilder<> IRB(&I);
2299     // If any of the S2 bits are poisoned, the whole thing is poisoned.
2300     // Otherwise perform the same shift on S1.
2301     Value *S1 = getShadow(&I, 0);
2302     Value *S2 = getShadow(&I, 1);
2303     Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
2304                                    S2->getType());
2305     Value *V2 = I.getOperand(1);
2306     Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
2307     setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2308     setOriginForNaryOp(I);
2309   }
2310
2311   void visitShl(BinaryOperator &I) { handleShift(I); }
2312   void visitAShr(BinaryOperator &I) { handleShift(I); }
2313   void visitLShr(BinaryOperator &I) { handleShift(I); }
2314
2315   /// Instrument llvm.memmove
2316   ///
2317   /// At this point we don't know if llvm.memmove will be inlined or not.
2318   /// If we don't instrument it and it gets inlined,
2319   /// our interceptor will not kick in and we will lose the memmove.
2320   /// If we instrument the call here, but it does not get inlined,
2321   /// we will memmove the shadow twice, which is bad in the case
2322   /// of overlapping regions. So, we simply lower the intrinsic to a call.
2323   ///
2324   /// A similar situation exists for memcpy and memset.
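  /// (Assumed from the runtime interface rather than shown here:
  /// __msan_memmove and its memcpy/memset counterparts update the shadow
  /// together with the data, so the lowered call keeps the two in sync
  /// whether or not the original intrinsic would have been inlined.)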
2325   void visitMemMoveInst(MemMoveInst &I) {
2326     IRBuilder<> IRB(&I);
2327     IRB.CreateCall(
2328         MS.MemmoveFn,
2329         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2330          IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2331          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2332     I.eraseFromParent();
2333   }
2334
2335   // Similar to memmove: avoid copying shadow twice.
2336   // This is somewhat unfortunate as it may slow down small constant memcpys.
2337   // FIXME: consider doing manual inline for small constant sizes and proper
2338   // alignment.
2339   void visitMemCpyInst(MemCpyInst &I) {
2340     IRBuilder<> IRB(&I);
2341     IRB.CreateCall(
2342         MS.MemcpyFn,
2343         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2344          IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2345          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2346     I.eraseFromParent();
2347   }
2348
2349   // Same as memcpy.
2350   void visitMemSetInst(MemSetInst &I) {
2351     IRBuilder<> IRB(&I);
2352     IRB.CreateCall(
2353         MS.MemsetFn,
2354         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2355          IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
2356          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2357     I.eraseFromParent();
2358   }
2359
2360   void visitVAStartInst(VAStartInst &I) {
2361     VAHelper->visitVAStartInst(I);
2362   }
2363
2364   void visitVACopyInst(VACopyInst &I) {
2365     VAHelper->visitVACopyInst(I);
2366   }
2367
2368   /// Handle vector store-like intrinsics.
2369   ///
2370   /// Instrument intrinsics that look like a simple SIMD store: writes memory,
2371   /// has 1 pointer argument and 1 vector argument, returns void.
2372   bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
2373     IRBuilder<> IRB(&I);
2374     Value* Addr = I.getArgOperand(0);
2375     Value *Shadow = getShadow(&I, 1);
2376     Value *ShadowPtr, *OriginPtr;
2377
2378     // We don't know the pointer alignment (could be unaligned SSE store!).
2379     // Have to assume the worst case.
2380     std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
2381         Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
2382     IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);
2383
2384     if (ClCheckAccessAddress)
2385       insertShadowCheck(Addr, &I);
2386
2387     // FIXME: factor out common code from materializeStores
2388     if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
2389     return true;
2390   }
2391
2392   /// Handle vector load-like intrinsics.
2393   ///
2394   /// Instrument intrinsics that look like a simple SIMD load: reads memory,
2395   /// has 1 pointer argument, returns a vector.
2396   bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
2397     IRBuilder<> IRB(&I);
2398     Value *Addr = I.getArgOperand(0);
2399
2400     Type *ShadowTy = getShadowTy(&I);
2401     Value *ShadowPtr, *OriginPtr;
2402     if (PropagateShadow) {
2403       // We don't know the pointer alignment (could be unaligned SSE load!).
2404       // Have to assume the worst case.
2405       unsigned Alignment = 1;
2406       std::tie(ShadowPtr, OriginPtr) =
2407           getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
2408       setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld"));
2409     } else {
2410       setShadow(&I, getCleanShadow(&I));
2411     }
2412
2413     if (ClCheckAccessAddress)
2414       insertShadowCheck(Addr, &I);
2415
2416     if (MS.TrackOrigins) {
2417       if (PropagateShadow)
2418         setOrigin(&I, IRB.CreateLoad(OriginPtr));
2419       else
2420         setOrigin(&I, getCleanOrigin());
2421     }
2422     return true;
2423   }
2424
2425   /// Handle (SIMD arithmetic)-like intrinsics.
2426   ///
2427   /// Instrument intrinsics with any number of arguments of the same type,
2428   /// equal to the return type. The type should be simple (no aggregates or
2429   /// pointers; vectors are fine).
2430   /// Caller guarantees that this intrinsic does not access memory.
2431   bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
2432     Type *RetTy = I.getType();
2433     if (!(RetTy->isIntOrIntVectorTy() ||
2434           RetTy->isFPOrFPVectorTy() ||
2435           RetTy->isX86_MMXTy()))
2436       return false;
2437
2438     unsigned NumArgOperands = I.getNumArgOperands();
2439
2440     for (unsigned i = 0; i < NumArgOperands; ++i) {
2441       Type *Ty = I.getArgOperand(i)->getType();
2442       if (Ty != RetTy)
2443         return false;
2444     }
2445
2446     IRBuilder<> IRB(&I);
2447     ShadowAndOriginCombiner SC(this, IRB);
2448     for (unsigned i = 0; i < NumArgOperands; ++i)
2449       SC.Add(I.getArgOperand(i));
2450     SC.Done(&I);
2451
2452     return true;
2453   }
2454
2455   /// Heuristically instrument unknown intrinsics.
2456   ///
2457   /// The main purpose of this code is to do something reasonable with all
2458   /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
2459   /// We recognize several classes of intrinsics by their argument types and
2460   /// ModRefBehavior and apply special instrumentation when we are reasonably
2461   /// sure that we know what the intrinsic does.
2462   ///
2463   /// We special-case intrinsics where this approach fails. See llvm.bswap
2464   /// handling as an example of that.
2465   bool handleUnknownIntrinsic(IntrinsicInst &I) {
2466     unsigned NumArgOperands = I.getNumArgOperands();
2467     if (NumArgOperands == 0)
2468       return false;
2469
2470     if (NumArgOperands == 2 &&
2471         I.getArgOperand(0)->getType()->isPointerTy() &&
2472         I.getArgOperand(1)->getType()->isVectorTy() &&
2473         I.getType()->isVoidTy() &&
2474         !I.onlyReadsMemory()) {
2475       // This looks like a vector store.
2476       return handleVectorStoreIntrinsic(I);
2477     }
2478
2479     if (NumArgOperands == 1 &&
2480         I.getArgOperand(0)->getType()->isPointerTy() &&
2481         I.getType()->isVectorTy() &&
2482         I.onlyReadsMemory()) {
2483       // This looks like a vector load.
2484       return handleVectorLoadIntrinsic(I);
2485     }
2486
2487     if (I.doesNotAccessMemory())
2488       if (maybeHandleSimpleNomemIntrinsic(I))
2489         return true;
2490
2491     // FIXME: detect and handle SSE maskstore/maskload
2492     return false;
2493   }
2494
2495   void handleBswap(IntrinsicInst &I) {
2496     IRBuilder<> IRB(&I);
2497     Value *Op = I.getArgOperand(0);
2498     Type *OpType = Op->getType();
2499     Function *BswapFunc = Intrinsic::getDeclaration(
2500         F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
2501     setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
2502     setOrigin(&I, getOrigin(Op));
2503   }
2504
2505   // Instrument vector convert intrinsic.
2506   //
2507   // This function instruments intrinsics like cvtsi2ss:
2508   // %Out = int_xxx_cvtyyy(%ConvertOp)
2509   // or
2510   // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
2511   // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
2512   // number of \p Out elements, and (if it has 2 arguments) copies the rest of
2513   // the elements from \p CopyOp.
2514   // In most cases conversion involves a floating-point value which may trigger
2515   // a hardware exception when not fully initialized. For this reason we require
2516   // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
2517   // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
2518   // Out[NumUsedElements:].
This means that intrinsics without \p CopyOp always 2519 // return a fully initialized value. 2520 void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) { 2521 IRBuilder<> IRB(&I); 2522 Value *CopyOp, *ConvertOp; 2523 2524 switch (I.getNumArgOperands()) { 2525 case 3: 2526 assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode"); 2527 LLVM_FALLTHROUGH; 2528 case 2: 2529 CopyOp = I.getArgOperand(0); 2530 ConvertOp = I.getArgOperand(1); 2531 break; 2532 case 1: 2533 ConvertOp = I.getArgOperand(0); 2534 CopyOp = nullptr; 2535 break; 2536 default: 2537 llvm_unreachable("Cvt intrinsic with unsupported number of arguments."); 2538 } 2539 2540 // The first *NumUsedElements* elements of ConvertOp are converted to the 2541 // same number of output elements. The rest of the output is copied from 2542 // CopyOp, or (if not available) filled with zeroes. 2543 // Combine shadow for elements of ConvertOp that are used in this operation, 2544 // and insert a check. 2545 // FIXME: consider propagating shadow of ConvertOp, at least in the case of 2546 // int->any conversion. 2547 Value *ConvertShadow = getShadow(ConvertOp); 2548 Value *AggShadow = nullptr; 2549 if (ConvertOp->getType()->isVectorTy()) { 2550 AggShadow = IRB.CreateExtractElement( 2551 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2552 for (int i = 1; i < NumUsedElements; ++i) { 2553 Value *MoreShadow = IRB.CreateExtractElement( 2554 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2555 AggShadow = IRB.CreateOr(AggShadow, MoreShadow); 2556 } 2557 } else { 2558 AggShadow = ConvertShadow; 2559 } 2560 assert(AggShadow->getType()->isIntegerTy()); 2561 insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I); 2562 2563 // Build result shadow by zero-filling parts of CopyOp shadow that come from 2564 // ConvertOp. 2565 if (CopyOp) { 2566 assert(CopyOp->getType() == I.getType()); 2567 assert(CopyOp->getType()->isVectorTy()); 2568 Value *ResultShadow = getShadow(CopyOp); 2569 Type *EltTy = ResultShadow->getType()->getVectorElementType(); 2570 for (int i = 0; i < NumUsedElements; ++i) { 2571 ResultShadow = IRB.CreateInsertElement( 2572 ResultShadow, ConstantInt::getNullValue(EltTy), 2573 ConstantInt::get(IRB.getInt32Ty(), i)); 2574 } 2575 setShadow(&I, ResultShadow); 2576 setOrigin(&I, getOrigin(CopyOp)); 2577 } else { 2578 setShadow(&I, getCleanShadow(&I)); 2579 setOrigin(&I, getCleanOrigin()); 2580 } 2581 } 2582 2583 // Given a scalar or vector, extract lower 64 bits (or less), and return all 2584 // zeroes if it is zero, and all ones otherwise. 2585 Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2586 if (S->getType()->isVectorTy()) 2587 S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true); 2588 assert(S->getType()->getPrimitiveSizeInBits() <= 64); 2589 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2590 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2591 } 2592 2593 // Given a vector, extract its first element, and return all 2594 // zeroes if it is zero, and all ones otherwise. 
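  // (Used for the cmp*/comi* style intrinsics below, whose scalar result
  // depends only on element 0 of the vector operands.)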
2595   Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
2596     Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
2597     Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
2598     return CreateShadowCast(IRB, S2, T, /* Signed */ true);
2599   }
2600
2601   Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
2602     Type *T = S->getType();
2603     assert(T->isVectorTy());
2604     Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
2605     return IRB.CreateSExt(S2, T);
2606   }
2607
2608   // Instrument vector shift intrinsic.
2609   //
2610   // This function instruments intrinsics like int_x86_avx2_psll_w.
2611   // Intrinsic shifts %In by %ShiftSize bits.
2612   // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
2613   // size, and the rest is ignored. Behavior is defined even if shift size is
2614   // greater than register (or field) width.
2615   void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
2616     assert(I.getNumArgOperands() == 2);
2617     IRBuilder<> IRB(&I);
2618     // If any of the S2 bits are poisoned, the whole thing is poisoned.
2619     // Otherwise perform the same shift on S1.
2620     Value *S1 = getShadow(&I, 0);
2621     Value *S2 = getShadow(&I, 1);
2622     Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
2623                              : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
2624     Value *V1 = I.getOperand(0);
2625     Value *V2 = I.getOperand(1);
2626     Value *Shift = IRB.CreateCall(I.getCalledValue(),
2627                                   {IRB.CreateBitCast(S1, V1->getType()), V2});
2628     Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
2629     setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2630     setOriginForNaryOp(I);
2631   }
2632
2633   // Get an X86_MMX-sized vector type.
2634   Type *getMMXVectorTy(unsigned EltSizeInBits) {
2635     const unsigned X86_MMXSizeInBits = 64;
2636     return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
2637                            X86_MMXSizeInBits / EltSizeInBits);
2638   }
2639
2640   // Returns a signed counterpart for an (un)signed-saturate-and-pack
2641   // intrinsic.
2642   Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
2643     switch (id) {
2644     case Intrinsic::x86_sse2_packsswb_128:
2645     case Intrinsic::x86_sse2_packuswb_128:
2646       return Intrinsic::x86_sse2_packsswb_128;
2647
2648     case Intrinsic::x86_sse2_packssdw_128:
2649     case Intrinsic::x86_sse41_packusdw:
2650       return Intrinsic::x86_sse2_packssdw_128;
2651
2652     case Intrinsic::x86_avx2_packsswb:
2653     case Intrinsic::x86_avx2_packuswb:
2654       return Intrinsic::x86_avx2_packsswb;
2655
2656     case Intrinsic::x86_avx2_packssdw:
2657     case Intrinsic::x86_avx2_packusdw:
2658       return Intrinsic::x86_avx2_packssdw;
2659
2660     case Intrinsic::x86_mmx_packsswb:
2661     case Intrinsic::x86_mmx_packuswb:
2662       return Intrinsic::x86_mmx_packsswb;
2663
2664     case Intrinsic::x86_mmx_packssdw:
2665       return Intrinsic::x86_mmx_packssdw;
2666     default:
2667       llvm_unreachable("unexpected intrinsic id");
2668     }
2669   }
2670
2671   // Instrument vector pack intrinsic.
2672   //
2673   // This function instruments intrinsics like x86_mmx_packsswb, that
2674   // pack elements of 2 input vectors into half as many bits with saturation.
2675   // Shadow is propagated with the signed variant of the same intrinsic applied
2676   // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
2677   // EltSizeInBits is used only for x86mmx arguments.
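  // E.g. for x86_sse2_packsswb_128 (illustrative): two <8 x i16> inputs pack
  // into one <16 x i8>. An input element with any shadow bit set first becomes
  // 0xFFFF (-1 as a signed value), which packs to 0xFF, poisoning the whole
  // resulting byte; a clean element packs to 0x00.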
2678   void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
2679     assert(I.getNumArgOperands() == 2);
2680     bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2681     IRBuilder<> IRB(&I);
2682     Value *S1 = getShadow(&I, 0);
2683     Value *S2 = getShadow(&I, 1);
2684     assert(isX86_MMX || S1->getType()->isVectorTy());
2685
2686     // SExt and ICmpNE below must apply to individual elements of input vectors.
2687     // In case of x86mmx arguments, cast them to appropriate vector types and
2688     // back.
2689     Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
2690     if (isX86_MMX) {
2691       S1 = IRB.CreateBitCast(S1, T);
2692       S2 = IRB.CreateBitCast(S2, T);
2693     }
2694     Value *S1_ext = IRB.CreateSExt(
2695         IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
2696     Value *S2_ext = IRB.CreateSExt(
2697         IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
2698     if (isX86_MMX) {
2699       Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
2700       S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
2701       S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
2702     }
2703
2704     Function *ShadowFn = Intrinsic::getDeclaration(
2705         F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));
2706
2707     Value *S =
2708         IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
2709     if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
2710     setShadow(&I, S);
2711     setOriginForNaryOp(I);
2712   }
2713
2714   // Instrument sum-of-absolute-differences intrinsic.
2715   void handleVectorSadIntrinsic(IntrinsicInst &I) {
2716     const unsigned SignificantBitsPerResultElement = 16;
2717     bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2718     Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
2719     unsigned ZeroBitsPerResultElement =
2720         ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;
2721
2722     IRBuilder<> IRB(&I);
2723     Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2724     S = IRB.CreateBitCast(S, ResTy);
2725     S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
2726                        ResTy);
2727     S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
2728     S = IRB.CreateBitCast(S, getShadowTy(&I));
2729     setShadow(&I, S);
2730     setOriginForNaryOp(I);
2731   }
2732
2733   // Instrument multiply-add intrinsic.
2734   void handleVectorPmaddIntrinsic(IntrinsicInst &I,
2735                                   unsigned EltSizeInBits = 0) {
2736     bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
2737     Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
2738     IRBuilder<> IRB(&I);
2739     Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2740     S = IRB.CreateBitCast(S, ResTy);
2741     S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
2742                        ResTy);
2743     S = IRB.CreateBitCast(S, getShadowTy(&I));
2744     setShadow(&I, S);
2745     setOriginForNaryOp(I);
2746   }
2747
2748   // Instrument compare-packed intrinsic.
2749   // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
2750   // all-ones shadow.
2751   void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
2752     IRBuilder<> IRB(&I);
2753     Type *ResTy = getShadowTy(&I);
2754     Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
2755     Value *S = IRB.CreateSExt(
2756         IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
2757     setShadow(&I, S);
2758     setOriginForNaryOp(I);
2759   }
2760
2761   // Instrument compare-scalar intrinsic.
2762   // This handles both cmp* intrinsics which return the result in the first
2763   // element of a vector, and comi* which return the result as i32.
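  // E.g. (illustrative) for ucomisd: the i32 result is fully poisoned whenever
  // element 0 of either operand's shadow is nonzero, since a single
  // uninitialized bit there can flip the comparison.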
2764 void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) { 2765 IRBuilder<> IRB(&I); 2766 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2767 Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I)); 2768 setShadow(&I, S); 2769 setOriginForNaryOp(I); 2770 } 2771 2772 void handleStmxcsr(IntrinsicInst &I) { 2773 IRBuilder<> IRB(&I); 2774 Value* Addr = I.getArgOperand(0); 2775 Type *Ty = IRB.getInt32Ty(); 2776 Value *ShadowPtr = 2777 getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true) 2778 .first; 2779 2780 IRB.CreateStore(getCleanShadow(Ty), 2781 IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo())); 2782 2783 if (ClCheckAccessAddress) 2784 insertShadowCheck(Addr, &I); 2785 } 2786 2787 void handleLdmxcsr(IntrinsicInst &I) { 2788 if (!InsertChecks) return; 2789 2790 IRBuilder<> IRB(&I); 2791 Value *Addr = I.getArgOperand(0); 2792 Type *Ty = IRB.getInt32Ty(); 2793 unsigned Alignment = 1; 2794 Value *ShadowPtr, *OriginPtr; 2795 std::tie(ShadowPtr, OriginPtr) = 2796 getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false); 2797 2798 if (ClCheckAccessAddress) 2799 insertShadowCheck(Addr, &I); 2800 2801 Value *Shadow = IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_ldmxcsr"); 2802 Value *Origin = 2803 MS.TrackOrigins ? IRB.CreateLoad(OriginPtr) : getCleanOrigin(); 2804 insertShadowCheck(Shadow, Origin, &I); 2805 } 2806 2807 void handleMaskedStore(IntrinsicInst &I) { 2808 IRBuilder<> IRB(&I); 2809 Value *V = I.getArgOperand(0); 2810 Value *Addr = I.getArgOperand(1); 2811 unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue(); 2812 Value *Mask = I.getArgOperand(3); 2813 Value *Shadow = getShadow(V); 2814 2815 Value *ShadowPtr; 2816 Value *OriginPtr; 2817 std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr( 2818 Addr, IRB, Shadow->getType(), Align, /*isStore*/ true); 2819 2820 if (ClCheckAccessAddress) { 2821 insertShadowCheck(Addr, &I); 2822 // Uninitialized mask is kind of like uninitialized address, but not as 2823 // scary. 2824 insertShadowCheck(Mask, &I); 2825 } 2826 2827 IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask); 2828 2829 if (MS.TrackOrigins) { 2830 auto &DL = F.getParent()->getDataLayout(); 2831 paintOrigin(IRB, getOrigin(V), OriginPtr, 2832 DL.getTypeStoreSize(Shadow->getType()), 2833 std::max(Align, kMinOriginAlignment)); 2834 } 2835 } 2836 2837 bool handleMaskedLoad(IntrinsicInst &I) { 2838 IRBuilder<> IRB(&I); 2839 Value *Addr = I.getArgOperand(0); 2840 unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue(); 2841 Value *Mask = I.getArgOperand(2); 2842 Value *PassThru = I.getArgOperand(3); 2843 2844 Type *ShadowTy = getShadowTy(&I); 2845 Value *ShadowPtr, *OriginPtr; 2846 if (PropagateShadow) { 2847 std::tie(ShadowPtr, OriginPtr) = 2848 getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false); 2849 setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask, 2850 getShadow(PassThru), "_msmaskedld")); 2851 } else { 2852 setShadow(&I, getCleanShadow(&I)); 2853 } 2854 2855 if (ClCheckAccessAddress) { 2856 insertShadowCheck(Addr, &I); 2857 insertShadowCheck(Mask, &I); 2858 } 2859 2860 if (MS.TrackOrigins) { 2861 if (PropagateShadow) { 2862 // Choose between PassThru's and the loaded value's origins. 
2863 Value *MaskedPassThruShadow = IRB.CreateAnd( 2864 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 2865 2866 Value *Acc = IRB.CreateExtractElement( 2867 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2868 for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N; 2869 ++i) { 2870 Value *More = IRB.CreateExtractElement( 2871 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2872 Acc = IRB.CreateOr(Acc, More); 2873 } 2874 2875 Value *Origin = IRB.CreateSelect( 2876 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 2877 getOrigin(PassThru), IRB.CreateLoad(OriginPtr)); 2878 2879 setOrigin(&I, Origin); 2880 } else { 2881 setOrigin(&I, getCleanOrigin()); 2882 } 2883 } 2884 return true; 2885 } 2886 2887 2888 void visitIntrinsicInst(IntrinsicInst &I) { 2889 switch (I.getIntrinsicID()) { 2890 case Intrinsic::bswap: 2891 handleBswap(I); 2892 break; 2893 case Intrinsic::masked_store: 2894 handleMaskedStore(I); 2895 break; 2896 case Intrinsic::masked_load: 2897 handleMaskedLoad(I); 2898 break; 2899 case Intrinsic::x86_sse_stmxcsr: 2900 handleStmxcsr(I); 2901 break; 2902 case Intrinsic::x86_sse_ldmxcsr: 2903 handleLdmxcsr(I); 2904 break; 2905 case Intrinsic::x86_avx512_vcvtsd2usi64: 2906 case Intrinsic::x86_avx512_vcvtsd2usi32: 2907 case Intrinsic::x86_avx512_vcvtss2usi64: 2908 case Intrinsic::x86_avx512_vcvtss2usi32: 2909 case Intrinsic::x86_avx512_cvttss2usi64: 2910 case Intrinsic::x86_avx512_cvttss2usi: 2911 case Intrinsic::x86_avx512_cvttsd2usi64: 2912 case Intrinsic::x86_avx512_cvttsd2usi: 2913 case Intrinsic::x86_avx512_cvtusi2ss: 2914 case Intrinsic::x86_avx512_cvtusi642sd: 2915 case Intrinsic::x86_avx512_cvtusi642ss: 2916 case Intrinsic::x86_sse2_cvtsd2si64: 2917 case Intrinsic::x86_sse2_cvtsd2si: 2918 case Intrinsic::x86_sse2_cvtsd2ss: 2919 case Intrinsic::x86_sse2_cvttsd2si64: 2920 case Intrinsic::x86_sse2_cvttsd2si: 2921 case Intrinsic::x86_sse_cvtss2si64: 2922 case Intrinsic::x86_sse_cvtss2si: 2923 case Intrinsic::x86_sse_cvttss2si64: 2924 case Intrinsic::x86_sse_cvttss2si: 2925 handleVectorConvertIntrinsic(I, 1); 2926 break; 2927 case Intrinsic::x86_sse_cvtps2pi: 2928 case Intrinsic::x86_sse_cvttps2pi: 2929 handleVectorConvertIntrinsic(I, 2); 2930 break; 2931 2932 case Intrinsic::x86_avx512_psll_w_512: 2933 case Intrinsic::x86_avx512_psll_d_512: 2934 case Intrinsic::x86_avx512_psll_q_512: 2935 case Intrinsic::x86_avx512_pslli_w_512: 2936 case Intrinsic::x86_avx512_pslli_d_512: 2937 case Intrinsic::x86_avx512_pslli_q_512: 2938 case Intrinsic::x86_avx512_psrl_w_512: 2939 case Intrinsic::x86_avx512_psrl_d_512: 2940 case Intrinsic::x86_avx512_psrl_q_512: 2941 case Intrinsic::x86_avx512_psra_w_512: 2942 case Intrinsic::x86_avx512_psra_d_512: 2943 case Intrinsic::x86_avx512_psra_q_512: 2944 case Intrinsic::x86_avx512_psrli_w_512: 2945 case Intrinsic::x86_avx512_psrli_d_512: 2946 case Intrinsic::x86_avx512_psrli_q_512: 2947 case Intrinsic::x86_avx512_psrai_w_512: 2948 case Intrinsic::x86_avx512_psrai_d_512: 2949 case Intrinsic::x86_avx512_psrai_q_512: 2950 case Intrinsic::x86_avx512_psra_q_256: 2951 case Intrinsic::x86_avx512_psra_q_128: 2952 case Intrinsic::x86_avx512_psrai_q_256: 2953 case Intrinsic::x86_avx512_psrai_q_128: 2954 case Intrinsic::x86_avx2_psll_w: 2955 case Intrinsic::x86_avx2_psll_d: 2956 case Intrinsic::x86_avx2_psll_q: 2957 case Intrinsic::x86_avx2_pslli_w: 2958 case Intrinsic::x86_avx2_pslli_d: 2959 case Intrinsic::x86_avx2_pslli_q: 2960 case Intrinsic::x86_avx2_psrl_w: 2961 case 
Intrinsic::x86_avx2_psrl_d: 2962 case Intrinsic::x86_avx2_psrl_q: 2963 case Intrinsic::x86_avx2_psra_w: 2964 case Intrinsic::x86_avx2_psra_d: 2965 case Intrinsic::x86_avx2_psrli_w: 2966 case Intrinsic::x86_avx2_psrli_d: 2967 case Intrinsic::x86_avx2_psrli_q: 2968 case Intrinsic::x86_avx2_psrai_w: 2969 case Intrinsic::x86_avx2_psrai_d: 2970 case Intrinsic::x86_sse2_psll_w: 2971 case Intrinsic::x86_sse2_psll_d: 2972 case Intrinsic::x86_sse2_psll_q: 2973 case Intrinsic::x86_sse2_pslli_w: 2974 case Intrinsic::x86_sse2_pslli_d: 2975 case Intrinsic::x86_sse2_pslli_q: 2976 case Intrinsic::x86_sse2_psrl_w: 2977 case Intrinsic::x86_sse2_psrl_d: 2978 case Intrinsic::x86_sse2_psrl_q: 2979 case Intrinsic::x86_sse2_psra_w: 2980 case Intrinsic::x86_sse2_psra_d: 2981 case Intrinsic::x86_sse2_psrli_w: 2982 case Intrinsic::x86_sse2_psrli_d: 2983 case Intrinsic::x86_sse2_psrli_q: 2984 case Intrinsic::x86_sse2_psrai_w: 2985 case Intrinsic::x86_sse2_psrai_d: 2986 case Intrinsic::x86_mmx_psll_w: 2987 case Intrinsic::x86_mmx_psll_d: 2988 case Intrinsic::x86_mmx_psll_q: 2989 case Intrinsic::x86_mmx_pslli_w: 2990 case Intrinsic::x86_mmx_pslli_d: 2991 case Intrinsic::x86_mmx_pslli_q: 2992 case Intrinsic::x86_mmx_psrl_w: 2993 case Intrinsic::x86_mmx_psrl_d: 2994 case Intrinsic::x86_mmx_psrl_q: 2995 case Intrinsic::x86_mmx_psra_w: 2996 case Intrinsic::x86_mmx_psra_d: 2997 case Intrinsic::x86_mmx_psrli_w: 2998 case Intrinsic::x86_mmx_psrli_d: 2999 case Intrinsic::x86_mmx_psrli_q: 3000 case Intrinsic::x86_mmx_psrai_w: 3001 case Intrinsic::x86_mmx_psrai_d: 3002 handleVectorShiftIntrinsic(I, /* Variable */ false); 3003 break; 3004 case Intrinsic::x86_avx2_psllv_d: 3005 case Intrinsic::x86_avx2_psllv_d_256: 3006 case Intrinsic::x86_avx512_psllv_d_512: 3007 case Intrinsic::x86_avx2_psllv_q: 3008 case Intrinsic::x86_avx2_psllv_q_256: 3009 case Intrinsic::x86_avx512_psllv_q_512: 3010 case Intrinsic::x86_avx2_psrlv_d: 3011 case Intrinsic::x86_avx2_psrlv_d_256: 3012 case Intrinsic::x86_avx512_psrlv_d_512: 3013 case Intrinsic::x86_avx2_psrlv_q: 3014 case Intrinsic::x86_avx2_psrlv_q_256: 3015 case Intrinsic::x86_avx512_psrlv_q_512: 3016 case Intrinsic::x86_avx2_psrav_d: 3017 case Intrinsic::x86_avx2_psrav_d_256: 3018 case Intrinsic::x86_avx512_psrav_d_512: 3019 case Intrinsic::x86_avx512_psrav_q_128: 3020 case Intrinsic::x86_avx512_psrav_q_256: 3021 case Intrinsic::x86_avx512_psrav_q_512: 3022 handleVectorShiftIntrinsic(I, /* Variable */ true); 3023 break; 3024 3025 case Intrinsic::x86_sse2_packsswb_128: 3026 case Intrinsic::x86_sse2_packssdw_128: 3027 case Intrinsic::x86_sse2_packuswb_128: 3028 case Intrinsic::x86_sse41_packusdw: 3029 case Intrinsic::x86_avx2_packsswb: 3030 case Intrinsic::x86_avx2_packssdw: 3031 case Intrinsic::x86_avx2_packuswb: 3032 case Intrinsic::x86_avx2_packusdw: 3033 handleVectorPackIntrinsic(I); 3034 break; 3035 3036 case Intrinsic::x86_mmx_packsswb: 3037 case Intrinsic::x86_mmx_packuswb: 3038 handleVectorPackIntrinsic(I, 16); 3039 break; 3040 3041 case Intrinsic::x86_mmx_packssdw: 3042 handleVectorPackIntrinsic(I, 32); 3043 break; 3044 3045 case Intrinsic::x86_mmx_psad_bw: 3046 case Intrinsic::x86_sse2_psad_bw: 3047 case Intrinsic::x86_avx2_psad_bw: 3048 handleVectorSadIntrinsic(I); 3049 break; 3050 3051 case Intrinsic::x86_sse2_pmadd_wd: 3052 case Intrinsic::x86_avx2_pmadd_wd: 3053 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3054 case Intrinsic::x86_avx2_pmadd_ub_sw: 3055 handleVectorPmaddIntrinsic(I); 3056 break; 3057 3058 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3059 handleVectorPmaddIntrinsic(I, 8); 
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonable-looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }

  void visitCallSite(CallSite CS) {
    Instruction &I = *CS.getInstruction();
    assert(!I.getMetadata("nosanitize"));
    assert((CS.isCall() || CS.isInvoke()) && "Unknown type of CallSite");
    if (CS.isCall()) {
      CallInst *Call = cast<CallInst>(&I);

      // For inline asm, do the usual thing: check argument shadow and mark all
      // outputs as clean. Note that any side effects of the inline asm that
      // are not immediately visible in its constraints are not handled.
      if (Call->isInlineAsm()) {
        if (ClHandleAsmConservative)
          visitAsmInstruction(I);
        else
          visitInstruction(I);
        return;
      }

      assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us. To
      // prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
            .addAttribute(Attribute::ReadNone);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&I);

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << I << "\n");
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CS.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the Shadow for arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();
      if (CS.paramHasAttr(i, Attribute::ByVal)) {
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
        if (ArgOffset + Size > kParamTLSSize) break;
        unsigned ParamAlignment = CS.getParamAlignment(i);
        unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize) break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      (void)Store;
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << "  Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << "  done with call args\n");

    FunctionType *FT =
        cast<FunctionType>(CS.getCalledValue()->getType()->getContainedType(0));
    if (FT->isVarArg()) {
      VAHelper->visitCallSite(CS, IRB);
    }

    // Now, get the shadow for the RetVal.
    if (!I.getType()->isSized()) return;
    // Don't emit the epilogue for musttail call returns.
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow =
        IRBAfter.CreateAlignedLoad(getShadowPtrForRetval(&I, IRBAfter),
                                   kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(getOriginPtrForRetval(IRBAfter)));
  }

  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }

  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }

  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by __msan_va_arg_overflow_size_tls at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void instrumentAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) =
          getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ?
                                       ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }

  void instrumentAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }

  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    IRBuilder<> IRB(I.getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      instrumentAllocaKmsan(I, IRB, Len);
    else
      instrumentAllocaUserspace(I, IRB, Len);
  }

  void visitSelectInst(SelectInst& I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just
      // do an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are equal
      // and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr(IRB.CreateXor(C, D), IRB.CreateOr(Sc, Sd));
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }

  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }

  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   ResShadow:  " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }

  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << "   InsShadow:  " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   Res:        " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }

  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check whether its value is initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is
    // unsized). Each such pointer is instrumented with a call to the runtime
    // library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy()) {
      assert(!isOutput);
      return;
    }
    Value *Hook =
        isOutput ? MS.MsanInstrumentAsmStoreFn : MS.MsanInstrumentAsmLoadFn;
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(Hook, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
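  ///
  /// For example, a hypothetical statement (not taken from the source)
  ///   asm("..." : "=r"(a), "=m"(*p) : "r"(b));
  /// has two outputs: "=r"(a) is returned via the CallInst return value,
  /// while "=m"(*p) is passed to the CallInst as a pointer operand, so this
  /// function would return 1.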
  int getNumOutputArgs(InlineAsm *IA, CallInst *CI) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = dyn_cast<Value>(CI)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      StructType *ST = dyn_cast_or_null<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallInst *CI = dyn_cast<CallInst>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CI->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CI);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CI->getNumOperands() - 1;

    // Check the input arguments. We do this before unpoisoning the output
    // arguments, so that uninitialized values are not overwritten before
    // they are checked.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};

/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value* arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
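        // For example, in a hypothetical call "f(fixed_struct, vararg_struct)"
        // where both structs are passed byval, the fixed one is skipped below
        // and only the variadic one has its shadow copied into
        // __msan_va_arg_tls at the overflow offset.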
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                   kShadowTLSAlignment, /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
        case AK_GeneralPurpose:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
          GpOffset += 8;
          break;
        case AK_FloatingPoint:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
          FpOffset += 16;
          break;
        case AK_Memory:
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
          OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }

  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }

  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8, CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
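    // For reference, the System V AMD64 va_list layout that the magic
    // constants below (offsets 8 and 16, and the 24-byte unpoisoning above)
    // refer to:
    //   typedef struct {
    //     unsigned int gp_offset;  /* offset 0  */
    //     unsigned int fp_offset;  /* offset 4  */
    //     void *overflow_arg_area; /* offset 8  */
    //     void *reg_save_area;     /* offset 16 */
    //   } __va_list_tag[1];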
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 16;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *OverflowArgAreaPtr = IRB.CreateLoad(OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr,
                         Alignment, VAArgOverflowSize);
      }
    }
  }
};

/// MIPS64-specific implementation of VarArgHelper.
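/// Unlike on x86_64, the MIPS64 va_list is a single pointer into one save
/// area, so all variadic arguments are stored at consecutive 8-byte-aligned
/// offsets of __msan_va_arg_tls, with no separate GP/FP regions.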
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments smaller than 8 bytes to match the
        // placement of bits on a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset,
                                       ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here we reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating
    // a new class member: it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
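    // (On overflow we return nullptr and the caller skips the store, so
    // arguments past the end of the TLS buffer simply get no shadow.)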
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format, because it does not know which arguments are named (as in the
  // x86_64 case, Clang lowers va_arg in the frontend and this pass only sees
  // the low-level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers, and then
  // the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows fast copying
  // in finalizeInstrumentation().
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
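    // For reference, the AAPCS64 va_list layout assumed by the getVAField64()
    // and getVAField32() calls below (offsets 0, 8, 16, 24 and 28; 32 bytes
    // total, matching the memset size in visitVAStartInst()):
    //   struct va_list {
    //     void *__stack;   /* offset 0  */
    //     void *__gr_top;  /* offset 8  */
    //     void *__vr_top;  /* offset 16 */
    //     int   __gr_offs; /* offset 24 */
    //     int   __vr_offs; /* offset 28 */
    //   };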
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general registers xn-x7 and
      // another for the 128-bit FP/SIMD registers vn-v7, where n is the
      // number of named arguments of each kind).
      // We then need to propagate the shadow arguments to both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The shadow for the remaining arguments is saved at 'va::stack'.
      // One caveat is that only the non-named (variadic) arguments need to be
      // propagated, but at the call site instrumentation *all* the arguments
      // are saved. So, to copy the shadow values from the va_arg TLS array,
      // we need to adjust the offset for both the GR and VR fields based on
      // the __{gr,vr}_offs values (since these are stored based on the
      // incoming named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // We don't know how many named arguments are being used; at the call
      // site all the arguments were saved. Since __gr_offs is defined as
      // '0 - ((8 - named_gr) * 8)', the idea is to propagate only the
      // variadic arguments by skipping the bytes of shadow that come from
      // the named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute the current offset from the stack pointer (which is always
    // properly aligned) and the offset for the first vararg, then subtract
    // them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // Parameter save area starts at 48 bytes from frame pointer for ABIv1,
    // and 32 bytes for ABIv2. This is usually determined by target
    // endianness, but in theory could be overridden by a function attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment,
                                       /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments smaller than 8 bytes to match the
          // placement of bits on a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base,
                                   kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // Here we reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating
    // a new class member: it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented on AMD64, MIPS64, AArch64 and PowerPC64;
  // on other platforms the no-op helper is used, so false positives are
  // possible there.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::runOnFunction(Function &F) {
  if (!CompileKernel && (&F == MsanCtorFunction))
    return false;
  MemorySanitizerVisitor Visitor(F, *this);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}