//===- MemorySanitizer.cpp - detector of uninitialized reads -------------===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
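///
/// For illustration only: if bytes 0-1 of an aligned 4-byte word hold
/// uninitialized data from allocation A and bytes 2-3 from allocation B,
/// the word's single origin slot keeps whichever allocation's origin was
/// stored last, so a report about bytes 0-1 may name B as the origin.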
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needless overwriting of the origin of the 4-byte
/// region on a short (i.e. 1 byte) clean store, and it is also good for
/// performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store of a clean
/// shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics may only be visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_load(ptr, size) and
///   __msan_instrument_asm_store(ptr, size),
/// which defer the memory checking/unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions (see the IR sketch after this list). These functions check
///    that the X-byte access is possible and return the pointers to shadow
///    and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
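///
/// For instance, to access the metadata for a 4-byte load from %p, the
/// instrumentation emits (simplified IR sketch):
///   %pair       = call { i8*, i32* } @__msan_metadata_ptr_for_load_4(i8* %p)
///   %shadow_ptr = extractvalue { i8*, i32* } %pair, 0
///   %origin_ptr = extractvalue { i8*, i32* } %pair, 1
/// and then reads the shadow and origin through the returned pointers.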
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;
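
// Note: these are sizes in bytes, i.e. 100 8-byte shadow slots per thread;
// an argument whose shadow would not fit into __msan_param_tls is given
// clean shadow instead (see the ParamTLS overflow handling in getShadow()).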

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins("msan-track-origins",
       cl::desc("Track origins (allocation sites) of poisoned memory"),
       cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
       cl::desc("keep going after reporting a UMR"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack("msan-poison-stack",
       cl::desc("poison uninitialized stack variables"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall("msan-poison-stack-with-call",
       cl::desc("poison uninitialized stack variables with a call"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClPoisonStackPattern("msan-poison-stack-pattern",
       cl::desc("poison uninitialized stack variables with the given pattern"),
       cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
       cl::desc("poison undef temps"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmp("msan-handle-icmp",
       cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmpExact("msan-handle-icmp-exact",
       cl::desc("exact handling of relational integer ICmp"),
       cl::Hidden, cl::init(false));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));

// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but still happen
// (e.g. only lower bits of address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress("msan-check-access-address",
       cl::desc("report accesses through a pointer which has poisoned shadow"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions("msan-dump-strict-instructions",
       cl::desc("print out instructions with default strict semantics"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow("msan-check-constant-shadow",
       cl::desc("Insert checks for constant shadow values"),
       cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat("msan-with-comdat",
       cl::desc("Place MSan constructors in comdat sections"),
       cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<unsigned long long> ClAndMask("msan-and-mask",
       cl::desc("Define custom MSan AndMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClXorMask("msan-xor-mask",
       cl::desc("Define custom MSan XorMask"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClShadowBase("msan-shadow-base",
       cl::desc("Define custom MSan ShadowBase"),
       cl::Hidden, cl::init(0));

static cl::opt<unsigned long long> ClOriginBase("msan-origin-base",
       cl::desc("Define custom MSan OriginBase"),
       cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
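//
// For example, with the default x86_64 Linux parameters below (AndMask and
// ShadowBase unused, XorMask = 0x500000000000, OriginBase = 0x100000000000),
// an application address maps as follows (illustrative arithmetic only):
//   Addr   = 0x700000001000
//   Offset = 0x700000001000 ^ 0x500000000000 = 0x200000001000
//   Shadow = 0x200000001000
//   Origin = 0x100000000000 + 0x200000001000 = 0x300000001000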
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
  0x000080000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x000040000000,  // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
  0x400000000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x200000000000,  // OriginBase
#else
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x008000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x002000000000,  // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
  0xE00000000000,  // AndMask
  0x100000000000,  // XorMask
  0x080000000000,  // ShadowBase
  0x1C0000000000,  // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x06000000000,   // XorMask
  0,               // ShadowBase (not used)
  0x01000000000,   // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
  0x000180000000,  // AndMask
  0x000040000000,  // XorMask
  0x000020000000,  // ShadowBase
  0x000700000000,  // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
  0xc00000000000,  // AndMask
  0x200000000000,  // XorMask
  0x100000000000,  // ShadowBase
  0x380000000000,  // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
  0,               // AndMask
  0x500000000000,  // XorMask
  0,               // ShadowBase
  0x100000000000,  // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
  &Linux_I386_MemoryMapParams,
  &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
  nullptr,
  &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
  nullptr,
  &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
  nullptr,
  &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
  &FreeBSD_I386_MemoryMapParams,
  &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
  nullptr,
  &NetBSD_X86_64_MemoryMapParams,
};

namespace {

/// An instrumentation pass implementing detection of uninitialized
/// reads.
///
/// MemorySanitizer: instrument the code in module to find
/// uninitialized reads.
class MemorySanitizer : public FunctionPass {
public:
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizer(int TrackOrigins = 0, bool Recover = false,
                  bool EnableKmsan = false)
      : FunctionPass(ID) {
    this->CompileKernel =
        ClEnableKmsan.getNumOccurrences() > 0 ? ClEnableKmsan : EnableKmsan;
    if (ClTrackOrigins.getNumOccurrences() > 0)
      this->TrackOrigins = ClTrackOrigins;
    else
      this->TrackOrigins = this->CompileKernel ? 2 : TrackOrigins;
    this->Recover = ClKeepGoing.getNumOccurrences() > 0
                        ? ClKeepGoing
                        : (this->CompileKernel | Recover);
  }

  StringRef getPassName() const override { return "MemorySanitizer"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override;
  bool doInitialization(Module &M) override;

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;

  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and per-task state
  // in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  Value *WarningFn;

  // These arrays are indexed by log2(AccessSize).
  Value *MaybeWarningFn[kNumberOfAccessSizes];
  Value *MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  Value *MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  Value *MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  Value *MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  Value *MemmoveFn, *MemcpyFn, *MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  Value *MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  Value *MsanPoisonAllocaFn, *MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  Value *MsanMetadataPtrForLoadN, *MsanMetadataPtrForStoreN;
  Value *MsanMetadataPtrForLoad_1_8[4];
  Value *MsanMetadataPtrForStore_1_8[4];
  Value *MsanInstrumentAsmStoreFn, *MsanInstrumentAsmLoadFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  Value *getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};

} // end anonymous namespace

char MemorySanitizer::ID = 0;

INITIALIZE_PASS_BEGIN(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(
    MemorySanitizer, "msan",
    "MemorySanitizer: detects uninitialized reads.", false, false)

FunctionPass *llvm::createMemorySanitizerPass(int TrackOrigins, bool Recover,
                                              bool CompileKernel) {
  return new MemorySanitizer(TrackOrigins, Recover, CompileKernel);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state",
      PointerType::get(
          StructType::get(ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
                          ArrayType::get(IRB.getInt64Ty(),
                                         kParamTLSSize / 8), /* va_arg_origin */
                          IRB.getInt64Ty(),
                          ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
                          OriginTy),
          0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning"
                                    : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  // Create the global TLS variables.
  RetvalTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_retval_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  RetvalOriginTLS = new GlobalVariable(
      M, OriginTy, false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_retval_origin_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  ParamTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  ParamOriginTLS = new GlobalVariable(
      M, ArrayType::get(OriginTy, kParamTLSSize / 4), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_param_origin_tls",
      nullptr, GlobalVariable::InitialExecTLSModel);

  VAArgTLS = new GlobalVariable(
      M, ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_va_arg_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  VAArgOriginTLS = new GlobalVariable(
      M, ArrayType::get(OriginTy, kParamTLSSize / 4), false,
      GlobalVariable::ExternalLinkage, nullptr, "__msan_va_arg_origin_tls",
      nullptr, GlobalVariable::InitialExecTLSModel);

  VAArgOverflowSizeTLS = new GlobalVariable(
      M, IRB.getInt64Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_va_arg_overflow_size_tls", nullptr,
      GlobalVariable::InitialExecTLSModel);

  OriginTLS = new GlobalVariable(
      M, IRB.getInt32Ty(), false, GlobalVariable::ExternalLinkage, nullptr,
      "__msan_origin_tls", nullptr, GlobalVariable::InitialExecTLSModel);

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt8PtrTy(), IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn =
      M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy);
}
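
// For reference: __msan_maybe_warning_N takes the shadow value zero-extended
// to an N-byte integer plus a 32-bit origin, so a 4-byte check lowers to
//   call void @__msan_maybe_warning_4(i32 %shadow, i32 %origin)
// (sketch; the call is emitted in materializeOneCheck() below).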

/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn = M.getOrInsertFunction(
      "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn = M.getOrInsertFunction(
      "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn = M.getOrInsertFunction(
      "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt32Ty(), IntptrTy);

  // We insert an empty inline asm after __msan_report* to avoid callback
  // merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmLoadFn =
      M.getOrInsertFunction("__msan_instrument_asm_load", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);
  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

Value *MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore, int size) {
  Value **Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init to the module's constructor list.
bool MemorySanitizer::doInitialization(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    std::tie(MsanCtorFunction, std::ignore) =
        createSanitizerCtorAndInitFunctions(M, kMsanModuleCtorName,
                                            kMsanInitName,
                                            /*InitArgTypes=*/{},
                                            /*InitArgs=*/{});
    if (ClWithComdat) {
      Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
      MsanCtorFunction->setComdat(MsanCtorComdat);
      appendToGlobalCtors(M, MsanCtorFunction, 0, MsanCtorFunction);
    } else {
      appendToGlobalCtors(M, MsanCtorFunction, 0);
    }

    if (TrackOrigins)
      new GlobalVariable(M, IRB.getInt32Ty(), true,
                         GlobalValue::WeakODRLinkage,
                         IRB.getInt32(TrackOrigins), "__msan_track_origins");

    if (Recover)
      new GlobalVariable(M, IRB.getInt32Ty(), true,
                         GlobalValue::WeakODRLinkage,
                         IRB.getInt32(Recover), "__msan_keep_going");
  }
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8) return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
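
// For example, TypeSizeToSizeIndex maps a type size in bits to the index
// used for MaybeWarningFn/MaybeStoreOriginFn: 8 bits -> 0, 16 -> 1,
// 32 -> 2, 64 -> 3 (i.e. log2 of the access size in bytes).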

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value*, Value*> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };

  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");
    TLI = &MS.getAnalysis<TargetLibraryInfoWrapperPass>().getTLI();

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1) return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize) return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
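
  // E.g. with a 64-bit intptr the 4-byte origin 0x11223344 becomes
  // 0x1122334411223344, so a single 8-byte store in paintOrigin() below
  // fills two consecutive 4-byte origin slots at once.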

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, unsigned Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrAlignment = DL.getABITypeAlignment(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    unsigned CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(nullptr, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, unsigned Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
      if (ConstantShadow) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
        Value *Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      unsigned Alignment = SI->getAlignment();
      unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.CompileKernel) {
      IRB.CreateCall(MS.WarningFn, Origin);
    } else {
      if (MS.TrackOrigins) {
        IRB.CreateStore(Origin, MS.OriginTLS);
      }
      IRB.CreateCall(MS.WarningFn, {});
    }
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }
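
  // A materialized inline check (the non-callback path) is conceptually:
  //   %_mscmp = icmp ne <shadow type> %shadow, 0
  //   br i1 %_mscmp, label %warn, label %cont   ; cold branch weights
  // warn:
  //   call void @__msan_warning_noreturn()      ; __msan_warning(o) for KMSAN
  // (simplified sketch; the real emission is in materializeOneCheck() below.)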

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
    if (ConstantShadow) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      Value *Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS = IRB.CreateGEP(
        ContextState, {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(ContextState, {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }
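
  // The GEP indices above bind fields 0-6 of the kmsan_context_state struct
  // declared in createKernelApi(), in declaration order: param shadow,
  // retval shadow, va_arg shadow, va_arg origins, va_arg overflow size,
  // param origins and retval origin.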

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO) PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    bool InstrumentWithCalls =
        ClInstrumentationWithCallThreshold >= 0 &&
        InstrumentationList.size() + StoreList.size() >
            (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) {
    return getShadowTy(V->getType());
  }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type*, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
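
  // Examples (on a target with 64-bit pointers): i32 -> i32, float -> i32,
  // i8* -> i64, <4 x float> -> <4 x i32>, [2 x double] -> [2 x i64],
  // {i8*, i32} -> {i64, i32}.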

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy) return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *> getShadowOriginPtrUserspace(Value *Addr,
                                                          IRBuilder<> &IRB,
                                                          Type *ShadowTy,
                                                          unsigned Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong,
                        ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr = IRB.CreateIntToPtr(OriginLong,
                                     PointerType::get(IRB.getInt32Ty(), 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *>
  getShadowOriginPtrKernel(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                           unsigned Alignment, bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    Value *Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr =
        IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 unsigned Alignment,
                                                 bool isStore) {
    std::pair<Value *, Value *> ret;
    if (MS.CompileKernel)
      ret = getShadowOriginPtrKernel(Addr, IRB, ShadowTy, Alignment, isStore);
    else
      ret = getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
    return ret;
  }

  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS+ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    if (!MS.TrackOrigins)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0),
                                 "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }

  /// Set SV to be the shadow value for V.
  void setShadow(Value *V, Value *SV) {
    assert(!ShadowMap.count(V) && "Values may only have one shadow");
    ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
  }

  /// Set Origin to be the origin value for V.
  void setOrigin(Value *V, Value *Origin) {
    if (!MS.TrackOrigins) return;
    assert(!OriginMap.count(V) && "Values may only have one origin");
    LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << "  ==> " << *Origin << "\n");
    OriginMap[V] = Origin;
  }

  Constant *getCleanShadow(Type *OrigTy) {
    Type *ShadowTy = getShadowTy(OrigTy);
    if (!ShadowTy)
      return nullptr;
    return Constant::getNullValue(ShadowTy);
  }

  /// Create a clean shadow value for a given value.
  ///
  /// Clean shadow (all zeroes) means all bits of the value are defined
  /// (initialized).
  Constant *getCleanShadow(Value *V) {
    return getCleanShadow(V->getType());
  }

  /// Create a dirty shadow of a given shadow type.
  Constant *getPoisonedShadow(Type *ShadowTy) {
    assert(ShadowTy);
    if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
      return Constant::getAllOnesValue(ShadowTy);
    if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                      getPoisonedShadow(AT->getElementType()));
      return ConstantArray::get(AT, Vals);
    }
    if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
      return ConstantStruct::get(ST, Vals);
    }
    llvm_unreachable("Unexpected shadow type");
  }

  /// Create a dirty shadow for a given value.
  Constant *getPoisonedShadow(Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (!ShadowTy)
      return nullptr;
    return getPoisonedShadow(ShadowTy);
  }

  /// Create a clean (zero) origin.
  Value *getCleanOrigin() {
    return Constant::getNullValue(MS.OriginTy);
  }
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
                : DL.getTypeAllocSize(FArg.getType());
        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          if (FArg.hasByValAttr()) {
            // The ByVal pointer itself has a clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out the maximal valid memcpy alignment.
            unsigned ArgAlign = FArg.getParamAlignment();
            if (ArgAlign == 0) {
              Type *EltType = A->getType()->getPointerElementType();
              ArgAlign = DL.getABITypeAlignment(EltType);
            }
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            // TODO(glider): need to copy origins.
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << "  ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr =
                  EntryIRB.CreateAlignedLoad(Base, kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << "  ARG: " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }
        }
        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
    return getCleanShadow(V);
  }

  /// Get the shadow for the i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }

  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for the i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }

  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
    case AtomicOrdering::NotAtomic:
      return AtomicOrdering::NotAtomic;
    case AtomicOrdering::Unordered:
    case AtomicOrdering::Monotonic:
    case AtomicOrdering::Release:
      return AtomicOrdering::Release;
    case AtomicOrdering::Acquire:
    case AtomicOrdering::AcquireRelease:
      return AtomicOrdering::AcquireRelease;
    case AtomicOrdering::SequentiallyConsistent:
      return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
    case AtomicOrdering::NotAtomic:
      return AtomicOrdering::NotAtomic;
    case AtomicOrdering::Unordered:
    case AtomicOrdering::Monotonic:
    case AtomicOrdering::Acquire:
      return AtomicOrdering::Acquire;
    case AtomicOrdering::Release:
    case AtomicOrdering::AcquireRelease:
      return AtomicOrdering::AcquireRelease;
    case AtomicOrdering::SequentiallyConsistent:
      return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  // ------------------- Visitors.
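  // Note (an illustrative sketch, not normative): the visitors below use
  // addAcquireOrdering / addReleaseOrdering to strengthen atomic accesses,
  // e.g. a `monotonic` atomic load is strengthened to `acquire` and a
  // `monotonic` atomic RMW gains `release` semantics, so that the shadow
  // accesses inserted around them are correctly ordered with respect to the
  // application accesses; `seq_cst` is left unchanged.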
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }

  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(&I, IRB.CreateAlignedLoad(OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }

  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }

  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of the cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we cannot
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }

  // Vector manipulation.
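  // A rough sketch of the propagation implemented by the visitors below,
  // using extractelement as the example (names illustrative, not the exact
  // IR the pass emits):
  //   %e  = extractelement <4 x i32> %v, i32 %idx
  //   ; shadow:
  //   %se = extractelement <4 x i32> %sv, i32 %idx   ; %sv = shadow of %v
  // plus a check that %idx itself is fully initialized, since an
  // uninitialized index makes the result unpredictable.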
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  // Casts.
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed)
    // between a musttail call and a ret, don't instrument. New instructions
    // are not allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst &I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst &I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst &I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst &I) { handleShadowOr(I); }

  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in the other argument affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "And" of 0 and a poisoned value results in an unpoisoned value.
    // 1&1 => 1;  0&1 => 0;  p&1 => p;
    // 1&0 => 0;  0&0 => 0;  p&0 => 0;
    // 1&p => p;  0&p => 0;  p&p => p;
    // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
    setOriginForNaryOp(I);
  }

  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "Or" of 1 and a poisoned value results in an unpoisoned value.
    // 1|1 => 1;  0|1 => 1;  p|1 => 1;
    // 1|0 => 1;  0|0 => 0;  p|0 => p;
    // 1|p => 1;  0|p => p;  p|p => p;
    // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
    setOriginForNaryOp(I);
  }

  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For an
  /// N-ary operation, the result origin is set to the origin of an argument
  /// that is not entirely initialized. If there is more than one such
  /// argument, the rightmost of them is picked. It does not matter which one
  /// is picked if all arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in a 0 origin
          // value.
1953 if (!ConstOrigin || !ConstOrigin->isNullValue()) { 1954 Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB); 1955 Value *Cond = 1956 IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow)); 1957 Origin = IRB.CreateSelect(Cond, OpOrigin, Origin); 1958 } 1959 } 1960 } 1961 return *this; 1962 } 1963 1964 /// Add an application value to the mix. 1965 Combiner &Add(Value *V) { 1966 Value *OpShadow = MSV->getShadow(V); 1967 Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr; 1968 return Add(OpShadow, OpOrigin); 1969 } 1970 1971 /// Set the current combined values as the given instruction's shadow 1972 /// and origin. 1973 void Done(Instruction *I) { 1974 if (CombineShadow) { 1975 assert(Shadow); 1976 Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I)); 1977 MSV->setShadow(I, Shadow); 1978 } 1979 if (MSV->MS.TrackOrigins) { 1980 assert(Origin); 1981 MSV->setOrigin(I, Origin); 1982 } 1983 } 1984 }; 1985 1986 using ShadowAndOriginCombiner = Combiner<true>; 1987 using OriginCombiner = Combiner<false>; 1988 1989 /// Propagate origin for arbitrary operation. 1990 void setOriginForNaryOp(Instruction &I) { 1991 if (!MS.TrackOrigins) return; 1992 IRBuilder<> IRB(&I); 1993 OriginCombiner OC(this, IRB); 1994 for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI) 1995 OC.Add(OI->get()); 1996 OC.Done(&I); 1997 } 1998 1999 size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) { 2000 assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) && 2001 "Vector of pointers is not a valid shadow type"); 2002 return Ty->isVectorTy() ? 2003 Ty->getVectorNumElements() * Ty->getScalarSizeInBits() : 2004 Ty->getPrimitiveSizeInBits(); 2005 } 2006 2007 /// Cast between two shadow types, extending or truncating as 2008 /// necessary. 2009 Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy, 2010 bool Signed = false) { 2011 Type *srcTy = V->getType(); 2012 size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy); 2013 size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy); 2014 if (srcSizeInBits > 1 && dstSizeInBits == 1) 2015 return IRB.CreateICmpNE(V, getCleanShadow(V)); 2016 2017 if (dstTy->isIntegerTy() && srcTy->isIntegerTy()) 2018 return IRB.CreateIntCast(V, dstTy, Signed); 2019 if (dstTy->isVectorTy() && srcTy->isVectorTy() && 2020 dstTy->getVectorNumElements() == srcTy->getVectorNumElements()) 2021 return IRB.CreateIntCast(V, dstTy, Signed); 2022 Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits)); 2023 Value *V2 = 2024 IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed); 2025 return IRB.CreateBitCast(V2, dstTy); 2026 // TODO: handle struct types. 2027 } 2028 2029 /// Cast an application value to the type of its own shadow. 2030 Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) { 2031 Type *ShadowTy = getShadowTy(V); 2032 if (V->getType() == ShadowTy) 2033 return V; 2034 if (V->getType()->isPtrOrPtrVectorTy()) 2035 return IRB.CreatePtrToInt(V, ShadowTy); 2036 else 2037 return IRB.CreateBitCast(V, ShadowTy); 2038 } 2039 2040 /// Propagate shadow for arbitrary operation. 2041 void handleShadowOr(Instruction &I) { 2042 IRBuilder<> IRB(&I); 2043 ShadowAndOriginCombiner SC(this, IRB); 2044 for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI) 2045 SC.Add(OI->get()); 2046 SC.Done(&I); 2047 } 2048 2049 // Handle multiplication by constant. 2050 // 2051 // Handle a special case of multiplication by constant that may have one or 2052 // more zeros in the lower bits. 
  // This makes the corresponding number of lower bits of the result zero as
  // well. We model it by shifting the other operand's shadow left by the
  // required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (Ty->isVectorTy()) {
      unsigned NumElements = Ty->getVectorNumElements();
      Type *EltTy = Ty->getSequentialElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }

  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }

  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }

  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We cannot require that the
  // divisor is fully initialized and must propagate shadow. See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }

  /// Instrument == and != comparisons.
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
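  ///
  /// For example (illustrative values): if A == 0x00AA is fully defined and
  /// only the low byte of B is defined, B == 0x??BB, then A^B has a defined
  /// 1 bit in its low byte (0xAA ^ 0xBB != 0), so A != B is known to be true
  /// and the result shadow is clean regardless of B's undefined bits.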
2133 void handleEqualityComparison(ICmpInst &I) { 2134 IRBuilder<> IRB(&I); 2135 Value *A = I.getOperand(0); 2136 Value *B = I.getOperand(1); 2137 Value *Sa = getShadow(A); 2138 Value *Sb = getShadow(B); 2139 2140 // Get rid of pointers and vectors of pointers. 2141 // For ints (and vectors of ints), types of A and Sa match, 2142 // and this is a no-op. 2143 A = IRB.CreatePointerCast(A, Sa->getType()); 2144 B = IRB.CreatePointerCast(B, Sb->getType()); 2145 2146 // A == B <==> (C = A^B) == 0 2147 // A != B <==> (C = A^B) != 0 2148 // Sc = Sa | Sb 2149 Value *C = IRB.CreateXor(A, B); 2150 Value *Sc = IRB.CreateOr(Sa, Sb); 2151 // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now) 2152 // Result is defined if one of the following is true 2153 // * there is a defined 1 bit in C 2154 // * C is fully defined 2155 // Si = !(C & ~Sc) && Sc 2156 Value *Zero = Constant::getNullValue(Sc->getType()); 2157 Value *MinusOne = Constant::getAllOnesValue(Sc->getType()); 2158 Value *Si = 2159 IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero), 2160 IRB.CreateICmpEQ( 2161 IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero)); 2162 Si->setName("_msprop_icmp"); 2163 setShadow(&I, Si); 2164 setOriginForNaryOp(I); 2165 } 2166 2167 /// Build the lowest possible value of V, taking into account V's 2168 /// uninitialized bits. 2169 Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2170 bool isSigned) { 2171 if (isSigned) { 2172 // Split shadow into sign bit and other bits. 2173 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2174 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2175 // Maximise the undefined shadow bit, minimize other undefined bits. 2176 return 2177 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit); 2178 } else { 2179 // Minimize undefined bits. 2180 return IRB.CreateAnd(A, IRB.CreateNot(Sa)); 2181 } 2182 } 2183 2184 /// Build the highest possible value of V, taking into account V's 2185 /// uninitialized bits. 2186 Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2187 bool isSigned) { 2188 if (isSigned) { 2189 // Split shadow into sign bit and other bits. 2190 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2191 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2192 // Minimise the undefined shadow bit, maximise other undefined bits. 2193 return 2194 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits); 2195 } else { 2196 // Maximize undefined bits. 2197 return IRB.CreateOr(A, Sa); 2198 } 2199 } 2200 2201 /// Instrument relational comparisons. 2202 /// 2203 /// This function does exact shadow propagation for all relational 2204 /// comparisons of integers, pointers and vectors of those. 2205 /// FIXME: output seems suboptimal when one of the operands is a constant 2206 void handleRelationalComparisonExact(ICmpInst &I) { 2207 IRBuilder<> IRB(&I); 2208 Value *A = I.getOperand(0); 2209 Value *B = I.getOperand(1); 2210 Value *Sa = getShadow(A); 2211 Value *Sb = getShadow(B); 2212 2213 // Get rid of pointers and vectors of pointers. 2214 // For ints (and vectors of ints), types of A and Sa match, 2215 // and this is a no-op. 2216 A = IRB.CreatePointerCast(A, Sa->getType()); 2217 B = IRB.CreatePointerCast(B, Sb->getType()); 2218 2219 // Let [a0, a1] be the interval of possible values of A, taking into account 2220 // its undefined bits. Let [b0, b1] be the interval of possible values of B. 2221 // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0). 
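    // For instance (an illustrative unsigned i4 example): if A = 0b10??
    // (low two bits undefined) then [a0, a1] = [0b1000, 0b1011]; if
    // B = 0b0111 is fully defined then [b0, b1] = [0b0111, 0b0111]. Both
    // (a0 > b1) and (a1 > b0) hold, so A u> B is fully defined even though
    // A has undefined bits.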
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }

  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }

  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }

  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }

  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice, which is bad in the case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// A similar situation exists for memcpy and memset.
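  ///
  /// As a sketch (argument casts elided; the exact runtime signatures live
  /// in the MSan runtime library), the lowering below turns
  ///   call void @llvm.memmove.p0i8.p0i8.i64(i8* %d, i8* %s, i64 %n, i1 0)
  /// into a plain call such as
  ///   call @__msan_memmove(i8* %d, i8* %s, i64 %n)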
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slow down small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Same as memcpy.
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }

  /// Handle vector store-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
  /// has 1 pointer argument and 1 vector argument, returns void.
  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Value *Shadow = getShadow(&I, 1);
    Value *ShadowPtr, *OriginPtr;

    // We don't know the pointer alignment (could be unaligned SSE store!).
    // We have to assume the worst case.
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
    IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // FIXME: factor out common code from materializeStores
    if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
    return true;
  }

  /// Handle vector load-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // We have to assume the worst case.
      unsigned Alignment = 1;
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I, IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }

  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.getNumArgOperands();

    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }

  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehavior and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.getNumArgOperands();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }

  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
        F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }

  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  //   %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  //   %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number of \p Out elements, and (if it has 2 arguments) copies the rest of
  // the elements from \p CopyOp.
  // In most cases conversion involves a floating-point value which may trigger
  // a hardware exception when not fully initialized. For this reason we
  // require \p ConvertOp[0:NumUsedElements] to be fully initialized and trap
  // otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:].
  // This means that intrinsics without \p CopyOp always return a fully
  // initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    switch (I.getNumArgOperands()) {
    case 3:
      assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
      LLVM_FALLTHROUGH;
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this
    // operation, and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come
    // from ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = ResultShadow->getType()->getVectorElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }

  // Given a scalar or vector, extract the lower 64 bits (or less), and return
  // all zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
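  // E.g. (an illustrative sketch): for S = <4 x i16>, only element 0 of the
  // shadow decides the result; a nonzero element 0 produces an all-ones
  // value of type T, a zero element 0 produces all zeroes.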
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }

  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.getNumArgOperands() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                             : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getCalledValue(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                           X86_MMXSizeInBits / EltSizeInBits);
  }

  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packuswb_128:
      return Intrinsic::x86_sse2_packsswb_128;

    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse41_packusdw:
      return Intrinsic::x86_sse2_packssdw_128;

    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packuswb:
      return Intrinsic::x86_avx2_packsswb;

    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packusdw:
      return Intrinsic::x86_avx2_packssdw;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      return Intrinsic::x86_mmx_packsswb;

    case Intrinsic::x86_mmx_packssdw:
      return Intrinsic::x86_mmx_packssdw;
    default:
      llvm_unreachable("unexpected intrinsic id");
    }
  }

  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, which
  // packs elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
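  //
  // Sketch for the packuswb case (illustrative): each i16 shadow element is
  // first collapsed to all-ones/all-zeroes via sext(icmp ne 0); packing that
  // with the *signed* variant keeps -1 (poisoned) as 0xFF and 0 (clean) as
  // 0x00, whereas the unsigned variant would saturate -1 to 0 and lose the
  // poison.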
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.getNumArgOperands() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input
    // vectors. In case of x86mmx arguments, cast them to appropriate vector
    // types and back.
    Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with an all-zeros
  // or all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
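  //
  // Sketch: for %r = call i32 @llvm.x86.sse.comieq.ss(<4 x float> %a, %b),
  // the operand shadows are OR'ed, element 0 of the combined shadow is
  // extracted, and the i32 result shadow is all-ones iff that element is
  // nonzero (illustrative; see LowerElementShadowExtend above).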
2767 void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) { 2768 IRBuilder<> IRB(&I); 2769 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2770 Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I)); 2771 setShadow(&I, S); 2772 setOriginForNaryOp(I); 2773 } 2774 2775 void handleStmxcsr(IntrinsicInst &I) { 2776 IRBuilder<> IRB(&I); 2777 Value* Addr = I.getArgOperand(0); 2778 Type *Ty = IRB.getInt32Ty(); 2779 Value *ShadowPtr = 2780 getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true) 2781 .first; 2782 2783 IRB.CreateStore(getCleanShadow(Ty), 2784 IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo())); 2785 2786 if (ClCheckAccessAddress) 2787 insertShadowCheck(Addr, &I); 2788 } 2789 2790 void handleLdmxcsr(IntrinsicInst &I) { 2791 if (!InsertChecks) return; 2792 2793 IRBuilder<> IRB(&I); 2794 Value *Addr = I.getArgOperand(0); 2795 Type *Ty = IRB.getInt32Ty(); 2796 unsigned Alignment = 1; 2797 Value *ShadowPtr, *OriginPtr; 2798 std::tie(ShadowPtr, OriginPtr) = 2799 getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false); 2800 2801 if (ClCheckAccessAddress) 2802 insertShadowCheck(Addr, &I); 2803 2804 Value *Shadow = IRB.CreateAlignedLoad(ShadowPtr, Alignment, "_ldmxcsr"); 2805 Value *Origin = 2806 MS.TrackOrigins ? IRB.CreateLoad(OriginPtr) : getCleanOrigin(); 2807 insertShadowCheck(Shadow, Origin, &I); 2808 } 2809 2810 void handleMaskedStore(IntrinsicInst &I) { 2811 IRBuilder<> IRB(&I); 2812 Value *V = I.getArgOperand(0); 2813 Value *Addr = I.getArgOperand(1); 2814 unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue(); 2815 Value *Mask = I.getArgOperand(3); 2816 Value *Shadow = getShadow(V); 2817 2818 Value *ShadowPtr; 2819 Value *OriginPtr; 2820 std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr( 2821 Addr, IRB, Shadow->getType(), Align, /*isStore*/ true); 2822 2823 if (ClCheckAccessAddress) { 2824 insertShadowCheck(Addr, &I); 2825 // Uninitialized mask is kind of like uninitialized address, but not as 2826 // scary. 2827 insertShadowCheck(Mask, &I); 2828 } 2829 2830 IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask); 2831 2832 if (MS.TrackOrigins) { 2833 auto &DL = F.getParent()->getDataLayout(); 2834 paintOrigin(IRB, getOrigin(V), OriginPtr, 2835 DL.getTypeStoreSize(Shadow->getType()), 2836 std::max(Align, kMinOriginAlignment)); 2837 } 2838 } 2839 2840 bool handleMaskedLoad(IntrinsicInst &I) { 2841 IRBuilder<> IRB(&I); 2842 Value *Addr = I.getArgOperand(0); 2843 unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue(); 2844 Value *Mask = I.getArgOperand(2); 2845 Value *PassThru = I.getArgOperand(3); 2846 2847 Type *ShadowTy = getShadowTy(&I); 2848 Value *ShadowPtr, *OriginPtr; 2849 if (PropagateShadow) { 2850 std::tie(ShadowPtr, OriginPtr) = 2851 getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false); 2852 setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask, 2853 getShadow(PassThru), "_msmaskedld")); 2854 } else { 2855 setShadow(&I, getCleanShadow(&I)); 2856 } 2857 2858 if (ClCheckAccessAddress) { 2859 insertShadowCheck(Addr, &I); 2860 insertShadowCheck(Mask, &I); 2861 } 2862 2863 if (MS.TrackOrigins) { 2864 if (PropagateShadow) { 2865 // Choose between PassThru's and the loaded value's origins. 
2866 Value *MaskedPassThruShadow = IRB.CreateAnd( 2867 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 2868 2869 Value *Acc = IRB.CreateExtractElement( 2870 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2871 for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N; 2872 ++i) { 2873 Value *More = IRB.CreateExtractElement( 2874 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2875 Acc = IRB.CreateOr(Acc, More); 2876 } 2877 2878 Value *Origin = IRB.CreateSelect( 2879 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 2880 getOrigin(PassThru), IRB.CreateLoad(OriginPtr)); 2881 2882 setOrigin(&I, Origin); 2883 } else { 2884 setOrigin(&I, getCleanOrigin()); 2885 } 2886 } 2887 return true; 2888 } 2889 2890 2891 void visitIntrinsicInst(IntrinsicInst &I) { 2892 switch (I.getIntrinsicID()) { 2893 case Intrinsic::bswap: 2894 handleBswap(I); 2895 break; 2896 case Intrinsic::masked_store: 2897 handleMaskedStore(I); 2898 break; 2899 case Intrinsic::masked_load: 2900 handleMaskedLoad(I); 2901 break; 2902 case Intrinsic::x86_sse_stmxcsr: 2903 handleStmxcsr(I); 2904 break; 2905 case Intrinsic::x86_sse_ldmxcsr: 2906 handleLdmxcsr(I); 2907 break; 2908 case Intrinsic::x86_avx512_vcvtsd2usi64: 2909 case Intrinsic::x86_avx512_vcvtsd2usi32: 2910 case Intrinsic::x86_avx512_vcvtss2usi64: 2911 case Intrinsic::x86_avx512_vcvtss2usi32: 2912 case Intrinsic::x86_avx512_cvttss2usi64: 2913 case Intrinsic::x86_avx512_cvttss2usi: 2914 case Intrinsic::x86_avx512_cvttsd2usi64: 2915 case Intrinsic::x86_avx512_cvttsd2usi: 2916 case Intrinsic::x86_avx512_cvtusi2ss: 2917 case Intrinsic::x86_avx512_cvtusi642sd: 2918 case Intrinsic::x86_avx512_cvtusi642ss: 2919 case Intrinsic::x86_sse2_cvtsd2si64: 2920 case Intrinsic::x86_sse2_cvtsd2si: 2921 case Intrinsic::x86_sse2_cvtsd2ss: 2922 case Intrinsic::x86_sse2_cvttsd2si64: 2923 case Intrinsic::x86_sse2_cvttsd2si: 2924 case Intrinsic::x86_sse_cvtss2si64: 2925 case Intrinsic::x86_sse_cvtss2si: 2926 case Intrinsic::x86_sse_cvttss2si64: 2927 case Intrinsic::x86_sse_cvttss2si: 2928 handleVectorConvertIntrinsic(I, 1); 2929 break; 2930 case Intrinsic::x86_sse_cvtps2pi: 2931 case Intrinsic::x86_sse_cvttps2pi: 2932 handleVectorConvertIntrinsic(I, 2); 2933 break; 2934 2935 case Intrinsic::x86_avx512_psll_w_512: 2936 case Intrinsic::x86_avx512_psll_d_512: 2937 case Intrinsic::x86_avx512_psll_q_512: 2938 case Intrinsic::x86_avx512_pslli_w_512: 2939 case Intrinsic::x86_avx512_pslli_d_512: 2940 case Intrinsic::x86_avx512_pslli_q_512: 2941 case Intrinsic::x86_avx512_psrl_w_512: 2942 case Intrinsic::x86_avx512_psrl_d_512: 2943 case Intrinsic::x86_avx512_psrl_q_512: 2944 case Intrinsic::x86_avx512_psra_w_512: 2945 case Intrinsic::x86_avx512_psra_d_512: 2946 case Intrinsic::x86_avx512_psra_q_512: 2947 case Intrinsic::x86_avx512_psrli_w_512: 2948 case Intrinsic::x86_avx512_psrli_d_512: 2949 case Intrinsic::x86_avx512_psrli_q_512: 2950 case Intrinsic::x86_avx512_psrai_w_512: 2951 case Intrinsic::x86_avx512_psrai_d_512: 2952 case Intrinsic::x86_avx512_psrai_q_512: 2953 case Intrinsic::x86_avx512_psra_q_256: 2954 case Intrinsic::x86_avx512_psra_q_128: 2955 case Intrinsic::x86_avx512_psrai_q_256: 2956 case Intrinsic::x86_avx512_psrai_q_128: 2957 case Intrinsic::x86_avx2_psll_w: 2958 case Intrinsic::x86_avx2_psll_d: 2959 case Intrinsic::x86_avx2_psll_q: 2960 case Intrinsic::x86_avx2_pslli_w: 2961 case Intrinsic::x86_avx2_pslli_d: 2962 case Intrinsic::x86_avx2_pslli_q: 2963 case Intrinsic::x86_avx2_psrl_w: 2964 case 
Intrinsic::x86_avx2_psrl_d: 2965 case Intrinsic::x86_avx2_psrl_q: 2966 case Intrinsic::x86_avx2_psra_w: 2967 case Intrinsic::x86_avx2_psra_d: 2968 case Intrinsic::x86_avx2_psrli_w: 2969 case Intrinsic::x86_avx2_psrli_d: 2970 case Intrinsic::x86_avx2_psrli_q: 2971 case Intrinsic::x86_avx2_psrai_w: 2972 case Intrinsic::x86_avx2_psrai_d: 2973 case Intrinsic::x86_sse2_psll_w: 2974 case Intrinsic::x86_sse2_psll_d: 2975 case Intrinsic::x86_sse2_psll_q: 2976 case Intrinsic::x86_sse2_pslli_w: 2977 case Intrinsic::x86_sse2_pslli_d: 2978 case Intrinsic::x86_sse2_pslli_q: 2979 case Intrinsic::x86_sse2_psrl_w: 2980 case Intrinsic::x86_sse2_psrl_d: 2981 case Intrinsic::x86_sse2_psrl_q: 2982 case Intrinsic::x86_sse2_psra_w: 2983 case Intrinsic::x86_sse2_psra_d: 2984 case Intrinsic::x86_sse2_psrli_w: 2985 case Intrinsic::x86_sse2_psrli_d: 2986 case Intrinsic::x86_sse2_psrli_q: 2987 case Intrinsic::x86_sse2_psrai_w: 2988 case Intrinsic::x86_sse2_psrai_d: 2989 case Intrinsic::x86_mmx_psll_w: 2990 case Intrinsic::x86_mmx_psll_d: 2991 case Intrinsic::x86_mmx_psll_q: 2992 case Intrinsic::x86_mmx_pslli_w: 2993 case Intrinsic::x86_mmx_pslli_d: 2994 case Intrinsic::x86_mmx_pslli_q: 2995 case Intrinsic::x86_mmx_psrl_w: 2996 case Intrinsic::x86_mmx_psrl_d: 2997 case Intrinsic::x86_mmx_psrl_q: 2998 case Intrinsic::x86_mmx_psra_w: 2999 case Intrinsic::x86_mmx_psra_d: 3000 case Intrinsic::x86_mmx_psrli_w: 3001 case Intrinsic::x86_mmx_psrli_d: 3002 case Intrinsic::x86_mmx_psrli_q: 3003 case Intrinsic::x86_mmx_psrai_w: 3004 case Intrinsic::x86_mmx_psrai_d: 3005 handleVectorShiftIntrinsic(I, /* Variable */ false); 3006 break; 3007 case Intrinsic::x86_avx2_psllv_d: 3008 case Intrinsic::x86_avx2_psllv_d_256: 3009 case Intrinsic::x86_avx512_psllv_d_512: 3010 case Intrinsic::x86_avx2_psllv_q: 3011 case Intrinsic::x86_avx2_psllv_q_256: 3012 case Intrinsic::x86_avx512_psllv_q_512: 3013 case Intrinsic::x86_avx2_psrlv_d: 3014 case Intrinsic::x86_avx2_psrlv_d_256: 3015 case Intrinsic::x86_avx512_psrlv_d_512: 3016 case Intrinsic::x86_avx2_psrlv_q: 3017 case Intrinsic::x86_avx2_psrlv_q_256: 3018 case Intrinsic::x86_avx512_psrlv_q_512: 3019 case Intrinsic::x86_avx2_psrav_d: 3020 case Intrinsic::x86_avx2_psrav_d_256: 3021 case Intrinsic::x86_avx512_psrav_d_512: 3022 case Intrinsic::x86_avx512_psrav_q_128: 3023 case Intrinsic::x86_avx512_psrav_q_256: 3024 case Intrinsic::x86_avx512_psrav_q_512: 3025 handleVectorShiftIntrinsic(I, /* Variable */ true); 3026 break; 3027 3028 case Intrinsic::x86_sse2_packsswb_128: 3029 case Intrinsic::x86_sse2_packssdw_128: 3030 case Intrinsic::x86_sse2_packuswb_128: 3031 case Intrinsic::x86_sse41_packusdw: 3032 case Intrinsic::x86_avx2_packsswb: 3033 case Intrinsic::x86_avx2_packssdw: 3034 case Intrinsic::x86_avx2_packuswb: 3035 case Intrinsic::x86_avx2_packusdw: 3036 handleVectorPackIntrinsic(I); 3037 break; 3038 3039 case Intrinsic::x86_mmx_packsswb: 3040 case Intrinsic::x86_mmx_packuswb: 3041 handleVectorPackIntrinsic(I, 16); 3042 break; 3043 3044 case Intrinsic::x86_mmx_packssdw: 3045 handleVectorPackIntrinsic(I, 32); 3046 break; 3047 3048 case Intrinsic::x86_mmx_psad_bw: 3049 case Intrinsic::x86_sse2_psad_bw: 3050 case Intrinsic::x86_avx2_psad_bw: 3051 handleVectorSadIntrinsic(I); 3052 break; 3053 3054 case Intrinsic::x86_sse2_pmadd_wd: 3055 case Intrinsic::x86_avx2_pmadd_wd: 3056 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3057 case Intrinsic::x86_avx2_pmadd_ub_sw: 3058 handleVectorPmaddIntrinsic(I); 3059 break; 3060 3061 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3062 handleVectorPmaddIntrinsic(I, 8); 
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonable-looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }

  void visitCallSite(CallSite CS) {
    Instruction &I = *CS.getInstruction();
    assert(!I.getMetadata("nosanitize"));
    assert((CS.isCall() || CS.isInvoke()) && "Unknown type of CallSite");
    if (CS.isCall()) {
      CallInst *Call = cast<CallInst>(&I);

      // For inline asm, do the usual thing: check argument shadow and mark
      // all outputs as clean. Note that any side effects of the inline asm
      // that are not immediately visible in its constraints are not handled.
      if (Call->isInlineAsm()) {
        if (ClHandleAsmConservative && MS.CompileKernel)
          visitAsmInstruction(I);
        else
          visitInstruction(I);
        return;
      }

      assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us.
      // To prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
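        // (E.g. if the callee were still known 'readnone', the stores to
        // __msan_param_tls emitted for its arguments could be deleted as
        // dead by optimizations that trust that attribute; this is an
        // illustrative scenario, not an exhaustive list.)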
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
            .addAttribute(Attribute::ReadNone);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&I);

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << " CallSite: " << I << "\n");
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CS.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the shadow for the arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();
      if (CS.paramHasAttr(i, Attribute::ByVal)) {
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
        if (ArgOffset + Size > kParamTLSSize) break;
        unsigned ParamAlignment = CS.getParamAlignment(i);
        unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize) break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      (void)Store;
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << " done with call args\n");

    FunctionType *FT =
        cast<FunctionType>(CS.getCalledValue()->getType()->getContainedType(0));
    if (FT->isVarArg()) {
      VAHelper->visitCallSite(CS, IRB);
    }

    // Now, get the shadow for the RetVal.
    if (!I.getType()->isSized()) return;
    // Don't emit the epilogue for musttail call returns.
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base,
                                 kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow =
        IRBAfter.CreateAlignedLoad(getShadowPtrForRetval(&I, IRBAfter),
                                   kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(getOriginPtrForRetval(IRBAfter)));
  }

  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }

  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }

  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // with the stack origin id on the first call to __msan_set_alloca_origin.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void instrumentAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) =
          getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true);

      Value *PoisonValue =
          IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }

  void instrumentAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }

  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    IRBuilder<> IRB(I.getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      instrumentAllocaKmsan(I, IRB, Len);
    else
      instrumentAllocaUserspace(I, IRB, Len);
  }

  void visitSelectInst(SelectInst& I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just
      // do an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are
      // equal and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr(IRB.CreateXor(C, D), IRB.CreateOr(Sc, Sd));
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }

  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }

  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }

  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }

  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check that its value is initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type; pointers to unsized types are left
    // unchecked. Each such pointer is instrumented with a call to the
    // runtime library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy()) {
      assert(!isOutput);
      return;
    }
    Value *Hook =
        isOutput ? MS.MsanInstrumentAsmStoreFn : MS.MsanInstrumentAsmLoadFn;
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(Hook, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallInst *CI) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = CI->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      StructType *ST = dyn_cast_or_null<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of
    // the CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as the
    //    first nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallInst *CI = cast<CallInst>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CI->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CI);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CI->getNumOperands() - 1;

    // Check input arguments. We do this before unpoisoning the output
    // arguments, so that we don't overwrite uninitialized values before
    // checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};

/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value* arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
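        // For illustration (hypothetical call, not from this file): given
        //   void f(int n, ...);  f(1, 2, 3.0);
        // the fixed 'n' only advances GpOffset to 8, the variadic '2' gets
        // its shadow stored at __msan_va_arg_tls offset 8, and the variadic
        // '3.0' at offset 48 (AMD64GpEndOffset), the start of the FP save
        // area. A fixed ByVal argument is skipped entirely, while a variadic
        // ByVal one lands in the overflow area handled here.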
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                   kShadowTLSAlignment, /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
        case AK_GeneralPurpose:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
          GpOffset += 8;
          break;
        case AK_FloatingPoint:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
          FpOffset += 16;
          break;
        case AK_Memory:
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
          OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }

  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }

  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C),
                                              CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8,
                         CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
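    // For reference, a sketch of the SysV AMD64 va_list tag that the magic
    // constants below index into (not code from this file):
    //   struct __va_list_tag {
    //     unsigned gp_offset;       // byte offset 0
    //     unsigned fp_offset;       // byte offset 4
    //     void *overflow_arg_area;  // byte offset 8
    //     void *reg_save_area;      // byte offset 16
    //   };
    // Hence the loads at VAListTag+16 (reg_save_area) and VAListTag+8
    // (overflow_arg_area) in the loop below.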
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 16;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *OverflowArgAreaPtr = IRB.CreateLoad(OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr,
                         Alignment, VAArgOverflowSize);
      }
    }
  }
};

/// MIPS64-specific implementation of VarArgHelper.
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    Triple TargetTriple(F.getParent()->getTargetTriple());
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow offset for arguments smaller than 8 bytes to
        // match the placement of the argument bits on a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset,
                                       ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS (i.e. it holds the total
    // size of all varargs) to avoid creating a new class member.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
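    // E.g. if kParamTLSSize were 800, a vararg whose shadow would start at
    // offset 796 with size 8 is simply skipped (the caller stores no shadow
    // for it) instead of writing past the end of the array.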
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
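  // Under these constants the va_arg TLS copy is laid out as follows (a
  // sketch derived from the values above and below):
  //   [0, 64)    shadow for GR (x0-x7) arguments
  //   [64, 192)  shadow for VR (v0-v7) arguments
  //   [192, ...) shadow for arguments passed on the stack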
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which arguments are named (Clang, as in
  // the x86_64 case, lowers va_arg in the frontend, and this pass only sees
  // the low-level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers, and
  // then the remaining arguments.
  // Using a constant offset within the va_arg TLS array allows fast copying
  // in finalizeInstrumentation().
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
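    // For reference, a sketch of the AAPCS64 va_list that getVAField64 and
    // getVAField32 read from below (ABI layout, not code from this file):
    //   struct va_list {
    //     void *__stack;    // byte offset 0
    //     void *__gr_top;   // byte offset 8
    //     void *__vr_top;   // byte offset 16
    //     int   __gr_offs;  // byte offset 24
    //     int   __vr_offs;  // byte offset 28
    //   };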
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general registers x0-x7 and
      // another for the 128-bit FP/SIMD registers v0-v7).
      // We then need to propagate the shadow arguments to both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved in the shadow for 'va::stack'.
      // One caveat is that only the non-named arguments need to be
      // propagated, but the call site instrumentation saved 'all' the
      // arguments. So to copy the shadow values from the va_arg TLS array
      // we need to adjust the offset for both GR and VR fields based on
      // the __{gr,vr}_offs value (since they are stored based on the
      // incoming named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // The instrumentation does not know how many named arguments are being
      // used, and at the call site all the arguments were saved. Since
      // __gr_offs is defined as '0 - ((8 - named_gr) * 8)', the idea is to
      // propagate only the variadic arguments by skipping the bytes of
      // shadow that belong to named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for the remaining arguments.
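      // A worked example (hypothetical): for 'void f(int n, ...)' named_gr
      // is 1, so __gr_offs == -56 and GrRegSaveAreaShadowPtrOff above is
      // 64 + (-56) == 8, i.e. the 8 bytes of shadow belonging to the named
      // 'n' are skipped and the remaining 56 bytes are copied. The
      // stack-passed arguments handled below get their shadow from TLS
      // offset AArch64VAEndOffset (192) onwards, VAArgOverflowSize bytes in
      // total.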
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute the current offset from the stack pointer (which is always
    // properly aligned), and the offset of the first vararg, then subtract
    // them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // The parameter save area starts at 48 bytes from the frame pointer for
    // ABIv1, and 32 bytes for ABIv2. This is usually determined by the
    // target endianness, but in theory could be overridden by a function
    // attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment,
                                       /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
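          // E.g. a vararg of type '[2 x i128]' is 16-byte aligned here,
          // while '[2 x ppc_fp128]' (long double) keeps the 8-byte minimum
          // enforced below.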
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow offset for arguments smaller than 8 bytes to
          // match the placement of the argument bits on a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base,
                                   kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS (i.e. it holds the total
    // size of all varargs) to avoid creating a new class member.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(Type::getInt64PtrTy(*MS.C), 0));
      Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented for AMD64, MIPS64, AArch64 and PowerPC64;
  // on any other platform the no-op helper is used and false positives are
  // possible for variadic calls.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::runOnFunction(Function &F) {
  if (!CompileKernel && (&F == MsanCtorFunction))
    return false;
  MemorySanitizerVisitor Visitor(F, *this);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}