//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely
/// in practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1-byte) clean store, and it is also good for performance.
///
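/// For example (illustrative): if bytes 0-1 of an aligned 4-byte group hold
/// uninitialized data from heap allocation A, and bytes 2-3 were later filled
/// with uninitialized data from stack variable B, the group's single origin
/// slot names B, so a report about byte 0 will show B's allocation stack.
///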
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics may only be visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//
// FIXME: This sanitizer does not yet handle scalable vectors
//
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/IntrinsicsX86.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const Align kMinOriginAlignment = Align(4);
static const Align kShadowTLSAlignment = Align(8);

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));

// This flag controls whether we check the shadow of the address operand of a
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClEagerChecks(
    "msan-eager-checks",
    cl::desc("check arguments and return values at function call boundaries"),
    cl::Hidden, cl::init(false));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClDisableChecks("msan-disable-checks",
                    cl::desc("Apply no_sanitize to the whole file"), cl::Hidden,
                    cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplained reason such cases were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow(
    "msan-check-constant-shadow",
    cl::desc("Insert checks for constant shadow values"), cl::Hidden,
    cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));

const char kMsanModuleCtorName[] = "msan.module_ctor";
const char kMsanInitName[] = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// s390x Linux
static const MemoryMapParams Linux_S390X_MemoryMapParams = {
    0xC00000000000, // AndMask
    0,              // XorMask (not used)
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_S390_MemoryMapParams = {
    nullptr,
    &Linux_S390X_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};
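
// Worked example (illustrative, not used by the code): with the default
// x86_64 Linux mapping above (AndMask = 0, XorMask = 0x500000000000,
// ShadowBase = 0, OriginBase = 0x100000000000):
//   Offset = Addr ^ 0x500000000000
//   Shadow = Offset
//   Origin = 0x100000000000 + Offset
// so the application address 0x700000001000 maps to shadow address
// 0x200000001000 and origin address 0x300000001000.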

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options)
      : CompileKernel(Options.Kernel), TrackOrigins(Options.TrackOrigins),
        Recover(Options.Recover), EagerChecks(Options.EagerChecks) {
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;
  friend struct VarArgSystemZHelper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;
  bool EagerChecks;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and the per-task
  // state in KMSAN.
  // For userspace these point to thread-local globals; in the kernel they
  // point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local storage for the size of the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// Run-time helper that paints an origin over a region.
  FunctionCallee MsanSetOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;
};

void insertModuleCtor(Module &M) {
  getOrCreateSanitizerCtorAndInitFunctions(
      M, kMsanModuleCtorName, kMsanInitName,
      /*InitArgTypes=*/{},
      /*InitArgs=*/{},
      // This callback is invoked when the functions are created the first
      // time. Hook them into the global ctors list in that case:
      [&](Function *Ctor, FunctionCallee) {
        if (!ClWithComdat) {
          appendToGlobalCtors(M, Ctor, 0);
          return;
        }
        Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
        Ctor->setComdat(MsanCtorComdat);
        appendToGlobalCtors(M, Ctor, 0, Ctor);
      });
}

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {
    initializeMemorySanitizerLegacyPassPass(*PassRegistry::getPassRegistry());
  }
  StringRef getPassName() const override {
    return "MemorySanitizerLegacyPass";
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }
  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

template <class T> T getOptOrDefault(const cl::opt<T> &Opt, T Default) {
  return (Opt.getNumOccurrences() > 0) ? Opt : Default;
}

} // end anonymous namespace
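
// For example (illustrative): getOptOrDefault(ClTrackOrigins, 2) yields 2
// unless -msan-track-origins=N was given on the command line, in which case
// the explicit N wins. This is how the constructor below lets command-line
// flags override the options passed in by the frontend.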

MemorySanitizerOptions::MemorySanitizerOptions(int TO, bool R, bool K,
                                               bool EagerChecks)
    : Kernel(getOptOrDefault(ClEnableKmsan, K)),
      TrackOrigins(getOptOrDefault(ClTrackOrigins, Kernel ? 2 : TO)),
      Recover(getOptOrDefault(ClKeepGoing, Kernel || R)),
      EagerChecks(getOptOrDefault(ClEagerChecks, EagerChecks)) {}

PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

PreservedAnalyses
ModuleMemorySanitizerPass::run(Module &M, ModuleAnalysisManager &AM) {
  if (Options.Kernel)
    return PreservedAnalyses::all();
  insertModuleCtor(M);
  return PreservedAnalyses::none();
}

void MemorySanitizerPass::printPipeline(
    raw_ostream &OS, function_ref<StringRef(StringRef)> MapClassName2PassName) {
  static_cast<PassInfoMixin<MemorySanitizerPass> *>(this)->printPipeline(
      OS, MapClassName2PassName);
  OS << "<";
  if (Options.Recover)
    OS << "recover;";
  if (Options.Kernel)
    OS << "kernel;";
  if (Options.EagerChecks)
    OS << "eager-checks;";
  OS << "track-origins=" << Options.TrackOrigins;
  OS << ">";
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;

  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning_with_origin"
                                    : "__msan_warning_with_origin_noreturn";
  WarningFn =
      M.getOrInsertFunction(WarningFnName, IRB.getVoidTy(), IRB.getInt32Ty());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeWarningFnAttrs;
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 1, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeWarningFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeStoreOriginFnAttrs;
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 2, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeStoreOriginFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt8PtrTy(),
        IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn = M.getOrInsertFunction(
      "__msan_poison_stack", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert extern declarations of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MsanSetOriginFn =
      M.getOrInsertFunction("__msan_set_origin", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init to the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::systemz:
        MapParams = Linux_S390_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  if (!Options.Kernel)
    insertModuleCtor(M);
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallBase.
  virtual void visitCallBase(CallBase &CB, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
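
// Examples (illustrative): a 1..8-bit value maps to index 0 (1-byte access),
// a 32-bit value to index 2, and a 64-bit value to index 3, matching the
// log2(AccessSize) indexing of MaybeWarningFn/MaybeStoreOriginFn above.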

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  Instruction *FnPrologueEnd;

  // The following flags disable parts of MSan instrumentation based on
  // exclusion list contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction =
        F.hasFnAttribute(Attribute::SanitizeMemory) && !ClDisableChecks;
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;

    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    MS.initializeCallbacks(*F.getParent());
    FnPrologueEnd = IRBuilder<>(F.getEntryBlock().getFirstNonPHI())
                        .CreateIntrinsic(Intrinsic::donothing, {}, {});

    if (MS.CompileKernel) {
      IRBuilder<> IRB(FnPrologueEnd);
      insertKmsanPrologue(IRB);
    }

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  bool isInPrologue(Instruction &I) {
    return I.getParent() == FnPrologueEnd->getParent() &&
           (&I == FnPrologueEnd || I.comesBefore(FnPrologueEnd));
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
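
  // For example (illustrative): on a 64-bit target, an origin id 0xABCD1234
  // becomes 0xABCD1234ABCD1234, so paintOrigin() below can fill two 4-byte
  // origin slots with a single intptr-sized store.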

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, Align Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    Align CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, Align Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
        paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                    OriginAlignment);
      return;
    }

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()), Origin});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(2, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
      IRBuilder<> IRBNew(CheckTerm);
      paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                  OriginAlignment);
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      const Align Alignment = SI->getAlign();
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    assert(Origin->getType()->isIntegerTy());
    IRB.CreateCall(MS.WarningFn, Origin)->setCannotMerge();
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               MS.TrackOrigins && Origin ? Origin : (Value *)IRB.getInt32(0)});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(1, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  // Set up KMSAN's per-task context state pointers by calling
  // __msan_get_context_state() in the function prologue.
  void insertKmsanPrologue(IRBuilder<> &IRB) {
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(FnPrologueEnd->getParent()))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return FixedVectorType::get(IntegerType::get(*MS.C, EltSize),
                                  cast<FixedVectorType>(VT)->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
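
  // Examples (illustrative): getShadowTy(i32) == i32,
  // getShadowTy(<4 x float>) == <4 x i32>,
  // getShadowTy({i64, i16}) == {i64, i16}, and
  // getShadowTy(double) falls through to the last case and yields i64.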

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C,
                              vt->getPrimitiveSizeInBits().getFixedSize());
    return ty;
  }

  /// Extract combined shadow of struct elements as a bool.
  Value *collapseStructShadow(StructType *Struct, Value *Shadow,
                              IRBuilder<> &IRB) {
    Value *FalseVal = IRB.getIntN(/* width */ 1, /* value */ 0);
    Value *Aggregator = FalseVal;

    for (unsigned Idx = 0; Idx < Struct->getNumElements(); Idx++) {
      // Combine by ORing together each element's bool shadow.
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Value *ShadowBool = convertToBool(ShadowInner, IRB);

      if (Aggregator != FalseVal)
        Aggregator = IRB.CreateOr(Aggregator, ShadowBool);
      else
        Aggregator = ShadowBool;
    }

    return Aggregator;
  }

  // Extract combined shadow of array elements.
  Value *collapseArrayShadow(ArrayType *Array, Value *Shadow,
                             IRBuilder<> &IRB) {
    if (!Array->getNumElements())
      return IRB.getIntN(/* width */ 1, /* value */ 0);

    Value *FirstItem = IRB.CreateExtractValue(Shadow, 0);
    Value *Aggregator = convertShadowToScalar(FirstItem, IRB);

    for (unsigned Idx = 1; Idx < Array->getNumElements(); Idx++) {
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Aggregator = IRB.CreateOr(Aggregator, ShadowInner);
    }
    return Aggregator;
  }

  /// Convert a shadow value to its flattened variant. The resulting
  /// shadow may not necessarily have the same bit width as the input
  /// value, but it will always be comparable to zero.
  Value *convertShadowToScalar(Value *V, IRBuilder<> &IRB) {
    if (StructType *Struct = dyn_cast<StructType>(V->getType()))
      return collapseStructShadow(Struct, V, IRB);
    if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
      return collapseArrayShadow(Array, V, IRB);
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  // Convert a scalar value to an i1 by comparing with 0.
  Value *convertToBool(Value *V, IRBuilder<> &IRB, const Twine &name = "") {
    Type *VTy = V->getType();
    assert(VTy->isIntegerTy());
    if (VTy->getIntegerBitWidth() == 1)
      // Just converting a bool to a bool, so do nothing.
      return V;
    return IRB.CreateICmpNE(V, ConstantInt::get(VTy, 0), name);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
/// Compute the shadow and origin addresses corresponding to a given
/// application address.
///
/// Shadow = ShadowBase + Offset
/// Origin = (OriginBase + Offset) & ~3ULL
std::pair<Value *, Value *>
getShadowOriginPtrUserspace(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                            MaybeAlign Alignment) {
  Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
  Value *ShadowLong = ShadowOffset;
  uint64_t ShadowBase = MS.MapParams->ShadowBase;
  if (ShadowBase != 0) {
    ShadowLong =
        IRB.CreateAdd(ShadowLong, ConstantInt::get(MS.IntptrTy, ShadowBase));
  }
  Value *ShadowPtr =
      IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
  Value *OriginPtr = nullptr;
  if (MS.TrackOrigins) {
    Value *OriginLong = ShadowOffset;
    uint64_t OriginBase = MS.MapParams->OriginBase;
    if (OriginBase != 0)
      OriginLong =
          IRB.CreateAdd(OriginLong, ConstantInt::get(MS.IntptrTy, OriginBase));
    if (!Alignment || *Alignment < kMinOriginAlignment) {
      uint64_t Mask = kMinOriginAlignment.value() - 1;
      OriginLong =
          IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
    }
    OriginPtr =
        IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0));
  }
  return std::make_pair(ShadowPtr, OriginPtr);
}

std::pair<Value *, Value *> getShadowOriginPtrKernel(Value *Addr,
                                                     IRBuilder<> &IRB,
                                                     Type *ShadowTy,
                                                     bool isStore) {
  Value *ShadowOriginPtrs;
  const DataLayout &DL = F.getParent()->getDataLayout();
  int Size = DL.getTypeStoreSize(ShadowTy);

  FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
  Value *AddrCast =
      IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
  if (Getter) {
    ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
  } else {
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                              : MS.MsanMetadataPtrForLoadN,
                                      {AddrCast, SizeVal});
  }
  Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
  ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
  Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

  return std::make_pair(ShadowPtr, OriginPtr);
}

std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                               Type *ShadowTy,
                                               MaybeAlign Alignment,
                                               bool isStore) {
  if (MS.CompileKernel)
    return getShadowOriginPtrKernel(Addr, IRB, ShadowTy, isStore);
  return getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
}

/// Compute the shadow address for a given function argument.
///
/// Shadow = ParamTLS+ArgOffset.
Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB, int ArgOffset) {
  Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
  if (ArgOffset)
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                            "_msarg");
}
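// Layout sketch (hypothetical signature, for illustration only): for
// void @f(i32 %a, i64 %b), %a's shadow lives at ParamTLS+0 and %b's at
// ParamTLS+8, since each slot is padded to kShadowTLSAlignment. Arguments
// whose slots would run past kParamTLSSize get clean shadow instead (see
// the Overflow handling in getShadow() below).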
/// Compute the origin address for a given function argument.
Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB, int ArgOffset) {
  if (!MS.TrackOrigins)
    return nullptr;
  Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
  if (ArgOffset)
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                            "_msarg_o");
}

/// Compute the shadow address for a retval.
Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
  return IRB.CreatePointerCast(MS.RetvalTLS,
                               PointerType::get(getShadowTy(A), 0), "_msret");
}

/// Compute the origin address for a retval.
Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
  // We keep a single origin for the entire retval. Might be too optimistic.
  return MS.RetvalOriginTLS;
}

/// Set SV to be the shadow value for V.
void setShadow(Value *V, Value *SV) {
  assert(!ShadowMap.count(V) && "Values may only have one shadow");
  ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
}

/// Set Origin to be the origin value for V.
void setOrigin(Value *V, Value *Origin) {
  if (!MS.TrackOrigins)
    return;
  assert(!OriginMap.count(V) && "Values may only have one origin");
  LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
  OriginMap[V] = Origin;
}

Constant *getCleanShadow(Type *OrigTy) {
  Type *ShadowTy = getShadowTy(OrigTy);
  if (!ShadowTy)
    return nullptr;
  return Constant::getNullValue(ShadowTy);
}

/// Create a clean shadow value for a given value.
///
/// Clean shadow (all zeroes) means all bits of the value are defined
/// (initialized).
Constant *getCleanShadow(Value *V) { return getCleanShadow(V->getType()); }

/// Create a dirty shadow of a given shadow type.
Constant *getPoisonedShadow(Type *ShadowTy) {
  assert(ShadowTy);
  if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
    return Constant::getAllOnesValue(ShadowTy);
  if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
    SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                    getPoisonedShadow(AT->getElementType()));
    return ConstantArray::get(AT, Vals);
  }
  if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
    SmallVector<Constant *, 4> Vals;
    for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
      Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
    return ConstantStruct::get(ST, Vals);
  }
  llvm_unreachable("Unexpected shadow type");
}

/// Create a dirty shadow for a given value.
Constant *getPoisonedShadow(Value *V) {
  Type *ShadowTy = getShadowTy(V);
  if (!ShadowTy)
    return nullptr;
  return getPoisonedShadow(ShadowTy);
}

/// Create a clean (zero) origin.
Value *getCleanOrigin() { return Constant::getNullValue(MS.OriginTy); }

/// Get the shadow value for a given Value.
///
/// This function either returns the value set earlier with setShadow,
/// or extracts it from ParamTLS (for function arguments).
Value *getShadow(Value *V) {
  if (!PropagateShadow)
    return getCleanShadow(V);
  if (Instruction *I = dyn_cast<Instruction>(V)) {
    if (I->getMetadata("nosanitize"))
      return getCleanShadow(V);
    // For instructions the shadow is already stored in the map.
    Value *Shadow = ShadowMap[V];
    if (!Shadow) {
      LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
      (void)I;
      assert(Shadow && "No shadow for a value");
    }
    return Shadow;
  }
  if (UndefValue *U = dyn_cast<UndefValue>(V)) {
    Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
    LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
    (void)U;
    return AllOnes;
  }
  if (Argument *A = dyn_cast<Argument>(V)) {
    // For arguments we compute the shadow on demand and store it in the map.
    Value **ShadowPtr = &ShadowMap[V];
    if (*ShadowPtr)
      return *ShadowPtr;
    Function *F = A->getParent();
    IRBuilder<> EntryIRB(FnPrologueEnd);
    unsigned ArgOffset = 0;
    const DataLayout &DL = F->getParent()->getDataLayout();
    for (auto &FArg : F->args()) {
      if (!FArg.getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg is not sized\n");
        continue;
      }

      unsigned Size = FArg.hasByValAttr()
                          ? DL.getTypeAllocSize(FArg.getParamByValType())
                          : DL.getTypeAllocSize(FArg.getType());

      if (A == &FArg) {
        bool Overflow = ArgOffset + Size > kParamTLSSize;
        if (FArg.hasByValAttr()) {
          // ByVal pointer itself has clean shadow. We copy the actual
          // argument shadow to the underlying memory.
          // Figure out maximal valid memcpy alignment.
          const Align ArgAlign = DL.getValueOrABITypeAlignment(
              MaybeAlign(FArg.getParamAlignment()), FArg.getParamByValType());
          Value *CpShadowPtr =
              getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                 /*isStore*/ true)
                  .first;
          // TODO(glider): need to copy origins.
          if (Overflow) {
            // ParamTLS overflow.
            EntryIRB.CreateMemSet(
                CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                Size, ArgAlign);
          } else {
            Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
            const Align CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
            Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                               CopyAlign, Size);
            LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n");
            (void)Cpy;
          }
        }

        if (Overflow || FArg.hasByValAttr() ||
            (MS.EagerChecks && FArg.hasAttribute(Attribute::NoUndef))) {
          *ShadowPtr = getCleanShadow(V);
          setOrigin(A, getCleanOrigin());
        } else {
          // Shadow over TLS
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base,
                                                  kShadowTLSAlignment);
          if (MS.TrackOrigins) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr));
          }
        }
        LLVM_DEBUG(dbgs() << " ARG: " << FArg << " ==> " << **ShadowPtr
                          << "\n");
        break;
      }

      ArgOffset += alignTo(Size, kShadowTLSAlignment);
    }
    assert(*ShadowPtr && "Could not find shadow for an argument");
    return *ShadowPtr;
  }
  // For everything else the shadow is zero.
  return getCleanShadow(V);
}

/// Get the shadow for i-th argument of the instruction I.
Value *getShadow(Instruction *I, int i) { return getShadow(I->getOperand(i)); }
/// Get the origin for a value.
Value *getOrigin(Value *V) {
  if (!MS.TrackOrigins)
    return nullptr;
  if (!PropagateShadow)
    return getCleanOrigin();
  if (isa<Constant>(V))
    return getCleanOrigin();
  assert((isa<Instruction>(V) || isa<Argument>(V)) &&
         "Unexpected value type in getOrigin()");
  if (Instruction *I = dyn_cast<Instruction>(V)) {
    if (I->getMetadata("nosanitize"))
      return getCleanOrigin();
  }
  Value *Origin = OriginMap[V];
  assert(Origin && "Missing origin");
  return Origin;
}

/// Get the origin for i-th argument of the instruction I.
Value *getOrigin(Instruction *I, int i) { return getOrigin(I->getOperand(i)); }

/// Remember the place where a shadow check should be inserted.
///
/// This location will be later instrumented with a check that will print a
/// UMR warning at runtime if the shadow value is not 0.
void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
  assert(Shadow);
  if (!InsertChecks)
    return;
#ifndef NDEBUG
  Type *ShadowTy = Shadow->getType();
  assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy) ||
          isa<StructType>(ShadowTy) || isa<ArrayType>(ShadowTy)) &&
         "Can only insert checks for integer, vector, and aggregate shadow "
         "types");
#endif
  InstrumentationList.push_back(
      ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
}

/// Remember the place where a shadow check should be inserted.
///
/// This location will be later instrumented with a check that will print a
/// UMR warning at runtime if the value is not fully defined.
void insertShadowCheck(Value *Val, Instruction *OrigIns) {
  assert(Val);
  Value *Shadow, *Origin;
  if (ClCheckConstantShadow) {
    Shadow = getShadow(Val);
    if (!Shadow)
      return;
    Origin = getOrigin(Val);
  } else {
    Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
    if (!Shadow)
      return;
    Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
  }
  insertShadowCheck(Shadow, Origin, OrigIns);
}

AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
  switch (a) {
  case AtomicOrdering::NotAtomic:
    return AtomicOrdering::NotAtomic;
  case AtomicOrdering::Unordered:
  case AtomicOrdering::Monotonic:
  case AtomicOrdering::Release:
    return AtomicOrdering::Release;
  case AtomicOrdering::Acquire:
  case AtomicOrdering::AcquireRelease:
    return AtomicOrdering::AcquireRelease;
  case AtomicOrdering::SequentiallyConsistent:
    return AtomicOrdering::SequentiallyConsistent;
  }
  llvm_unreachable("Unknown ordering");
}

Value *makeAddReleaseOrderingTable(IRBuilder<> &IRB) {
  constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1;
  uint32_t OrderingTable[NumOrderings] = {};

  OrderingTable[(int)AtomicOrderingCABI::relaxed] =
      OrderingTable[(int)AtomicOrderingCABI::release] =
          (int)AtomicOrderingCABI::release;
  OrderingTable[(int)AtomicOrderingCABI::consume] =
      OrderingTable[(int)AtomicOrderingCABI::acquire] =
          OrderingTable[(int)AtomicOrderingCABI::acq_rel] =
              (int)AtomicOrderingCABI::acq_rel;
  OrderingTable[(int)AtomicOrderingCABI::seq_cst] =
      (int)AtomicOrderingCABI::seq_cst;

  return ConstantDataVector::get(IRB.getContext(),
                                 makeArrayRef(OrderingTable, NumOrderings));
}

AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
  switch (a) {
  case AtomicOrdering::NotAtomic:
    return AtomicOrdering::NotAtomic;
  case AtomicOrdering::Unordered:
  case AtomicOrdering::Monotonic:
  case AtomicOrdering::Acquire:
    return AtomicOrdering::Acquire;
  case AtomicOrdering::Release:
  case AtomicOrdering::AcquireRelease:
    return AtomicOrdering::AcquireRelease;
  case AtomicOrdering::SequentiallyConsistent:
    return AtomicOrdering::SequentiallyConsistent;
  }
  llvm_unreachable("Unknown ordering");
}

Value *makeAddAcquireOrderingTable(IRBuilder<> &IRB) {
  constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1;
  uint32_t OrderingTable[NumOrderings] = {};

  OrderingTable[(int)AtomicOrderingCABI::relaxed] =
      OrderingTable[(int)AtomicOrderingCABI::acquire] =
          OrderingTable[(int)AtomicOrderingCABI::consume] =
              (int)AtomicOrderingCABI::acquire;
  OrderingTable[(int)AtomicOrderingCABI::release] =
      OrderingTable[(int)AtomicOrderingCABI::acq_rel] =
          (int)AtomicOrderingCABI::acq_rel;
  OrderingTable[(int)AtomicOrderingCABI::seq_cst] =
      (int)AtomicOrderingCABI::seq_cst;

  return ConstantDataVector::get(IRB.getContext(),
                                 makeArrayRef(OrderingTable, NumOrderings));
}

// ------------------- Visitors.
using InstVisitor<MemorySanitizerVisitor>::visit;
void visit(Instruction &I) {
  if (I.getMetadata("nosanitize"))
    return;
  // Don't want to visit if we're in the prologue
  if (isInPrologue(I))
    return;
  InstVisitor<MemorySanitizerVisitor>::visit(I);
}

/// Instrument LoadInst
///
/// Loads the corresponding shadow and (optionally) origin.
/// Optionally, checks that the load address is fully defined.
void visitLoadInst(LoadInst &I) {
  assert(I.getType()->isSized() && "Load type must have size");
  assert(!I.getMetadata("nosanitize"));
  IRBuilder<> IRB(I.getNextNode());
  Type *ShadowTy = getShadowTy(&I);
  Value *Addr = I.getPointerOperand();
  Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
  const Align Alignment = assumeAligned(I.getAlignment());
  if (PropagateShadow) {
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
    setShadow(&I,
              IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
  } else {
    setShadow(&I, getCleanShadow(&I));
  }

  if (ClCheckAccessAddress)
    insertShadowCheck(I.getPointerOperand(), &I);

  if (I.isAtomic())
    I.setOrdering(addAcquireOrdering(I.getOrdering()));

  if (MS.TrackOrigins) {
    if (PropagateShadow) {
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      setOrigin(
          &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
    } else {
      setOrigin(&I, getCleanOrigin());
    }
  }
}
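// Rough sketch of the result (userspace, assuming AndMask == 0 and
// ShadowBase == 0; the exact IR depends on the target mapping): for
//   %v = load i32, i32* %p
// the pass emits, right after the load,
//   %0     = ptrtoint i32* %p to i64
//   %1     = xor i64 %0, XorMask
//   %2     = inttoptr i64 %1 to i32*
//   %_msld = load i32, i32* %2
// plus an aligned origin load when -msan-track-origins is enabled.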
/// Instrument StoreInst
///
/// Stores the corresponding shadow and (optionally) origin.
/// Optionally, checks that the store address is fully defined.
void visitStoreInst(StoreInst &I) {
  StoreList.push_back(&I);
  if (ClCheckAccessAddress)
    insertShadowCheck(I.getPointerOperand(), &I);
}

void handleCASOrRMW(Instruction &I) {
  assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

  IRBuilder<> IRB(&I);
  Value *Addr = I.getOperand(0);
  Value *Val = I.getOperand(1);
  Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, Val->getType(), Align(1),
                                        /*isStore*/ true)
                         .first;

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  // Only test the conditional argument of the cmpxchg instruction.
  // The other argument can potentially be uninitialized, but we cannot
  // detect this situation reliably without possible false positives.
  if (isa<AtomicCmpXchgInst>(I))
    insertShadowCheck(Val, &I);

  IRB.CreateStore(getCleanShadow(Val), ShadowPtr);

  setShadow(&I, getCleanShadow(&I));
  setOrigin(&I, getCleanOrigin());
}

void visitAtomicRMWInst(AtomicRMWInst &I) {
  handleCASOrRMW(I);
  I.setOrdering(addReleaseOrdering(I.getOrdering()));
}

void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
  handleCASOrRMW(I);
  I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
}

// Vector manipulation.
void visitExtractElementInst(ExtractElementInst &I) {
  insertShadowCheck(I.getOperand(1), &I);
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                         "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitInsertElementInst(InsertElementInst &I) {
  insertShadowCheck(I.getOperand(2), &I);
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                        I.getOperand(2), "_msprop"));
  setOriginForNaryOp(I);
}

void visitShuffleVectorInst(ShuffleVectorInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                        I.getShuffleMask(), "_msprop"));
  setOriginForNaryOp(I);
}

// Casts.
void visitSExtInst(SExtInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitZExtInst(ZExtInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitTruncInst(TruncInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitBitCastInst(BitCastInst &I) {
  // Special case: if this is the bitcast (there is exactly 1 allowed) between
  // a musttail call and a ret, don't instrument. New instructions are not
  // allowed after a musttail call.
  if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
    if (CI->isMustTailCall())
      return;
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitPtrToIntInst(PtrToIntInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                  "_msprop_ptrtoint"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitIntToPtrInst(IntToPtrInst &I) {
  IRBuilder<> IRB(&I);
  setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                  "_msprop_inttoptr"));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitFPToSIInst(CastInst &I) { handleShadowOr(I); }
void visitFPToUIInst(CastInst &I) { handleShadowOr(I); }
void visitSIToFPInst(CastInst &I) { handleShadowOr(I); }
void visitUIToFPInst(CastInst &I) { handleShadowOr(I); }
void visitFPExtInst(CastInst &I) { handleShadowOr(I); }
void visitFPTruncInst(CastInst &I) { handleShadowOr(I); }

/// Propagate shadow for bitwise AND.
///
/// This code is exact, i.e. if, for example, a bit in the left argument
/// is defined and 0, then neither the value nor the definedness of the
/// corresponding bit in B affects the resulting shadow.
void visitAnd(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // "And" of 0 and a poisoned value results in unpoisoned value.
  // 1&1 => 1; 0&1 => 0; p&1 => p;
  // 1&0 => 0; 0&0 => 0; p&0 => 0;
  // 1&p => p; 0&p => 0; p&p => p;
  // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *V1 = I.getOperand(0);
  Value *V2 = I.getOperand(1);
  if (V1->getType() != S1->getType()) {
    V1 = IRB.CreateIntCast(V1, S1->getType(), false);
    V2 = IRB.CreateIntCast(V2, S2->getType(), false);
  }
  Value *S1S2 = IRB.CreateAnd(S1, S2);
  Value *V1S2 = IRB.CreateAnd(V1, S2);
  Value *S1V2 = IRB.CreateAnd(S1, V2);
  setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
  setOriginForNaryOp(I);
}

void visitOr(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // "Or" of 1 and a poisoned value results in unpoisoned value.
  // 1|1 => 1; 0|1 => 1; p|1 => 1;
  // 1|0 => 1; 0|0 => 0; p|0 => p;
  // 1|p => 1; 0|p => p; p|p => p;
  // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *V1 = IRB.CreateNot(I.getOperand(0));
  Value *V2 = IRB.CreateNot(I.getOperand(1));
  if (V1->getType() != S1->getType()) {
    V1 = IRB.CreateIntCast(V1, S1->getType(), false);
    V2 = IRB.CreateIntCast(V2, S2->getType(), false);
  }
  Value *S1S2 = IRB.CreateAnd(S1, S2);
  Value *V1S2 = IRB.CreateAnd(V1, S2);
  Value *S1V2 = IRB.CreateAnd(S1, V2);
  setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
  setOriginForNaryOp(I);
}
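// Worked example for visitAnd above: take one bit position where V1 is a
// defined 0 (S1 = 0) and the V2 bit is poisoned (S2 = 1). Then S1S2 = 0,
// V1S2 = V1 & S2 = 0 and S1V2 = 0, so the result bit is clean -- matching
// the 0&p => 0 row of the truth table.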
/// Default propagation of shadow and/or origin.
///
/// This class implements the general case of shadow propagation, used in all
/// cases where we don't know and/or don't care about what the operation
/// actually does. It converts all input shadow values to a common type
/// (extending or truncating as necessary), and bitwise OR's them.
///
/// This is much cheaper than inserting checks (i.e. requiring inputs to be
/// fully initialized), and less prone to false positives.
///
/// This class also implements the general case of origin propagation. For a
/// Nary operation, result origin is set to the origin of an argument that is
/// not entirely initialized. If there is more than one such argument, the
/// rightmost of them is picked. It does not matter which one is picked if all
/// arguments are initialized.
template <bool CombineShadow> class Combiner {
  Value *Shadow = nullptr;
  Value *Origin = nullptr;
  IRBuilder<> &IRB;
  MemorySanitizerVisitor *MSV;

public:
  Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
      : IRB(IRB), MSV(MSV) {}

  /// Add a pair of shadow and origin values to the mix.
  Combiner &Add(Value *OpShadow, Value *OpOrigin) {
    if (CombineShadow) {
      assert(OpShadow);
      if (!Shadow)
        Shadow = OpShadow;
      else {
        OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
        Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
      }
    }

    if (MSV->MS.TrackOrigins) {
      assert(OpOrigin);
      if (!Origin) {
        Origin = OpOrigin;
      } else {
        Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
        // No point in adding something that might result in 0 origin value.
        if (!ConstOrigin || !ConstOrigin->isNullValue()) {
          Value *FlatShadow = MSV->convertShadowToScalar(OpShadow, IRB);
          Value *Cond =
              IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
          Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
        }
      }
    }
    return *this;
  }

  /// Add an application value to the mix.
  Combiner &Add(Value *V) {
    Value *OpShadow = MSV->getShadow(V);
    Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
    return Add(OpShadow, OpOrigin);
  }

  /// Set the current combined values as the given instruction's shadow
  /// and origin.
  void Done(Instruction *I) {
    if (CombineShadow) {
      assert(Shadow);
      Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
      MSV->setShadow(I, Shadow);
    }
    if (MSV->MS.TrackOrigins) {
      assert(Origin);
      MSV->setOrigin(I, Origin);
    }
  }
};

using ShadowAndOriginCombiner = Combiner<true>;
using OriginCombiner = Combiner<false>;

/// Propagate origin for arbitrary operation.
void setOriginForNaryOp(Instruction &I) {
  if (!MS.TrackOrigins)
    return;
  IRBuilder<> IRB(&I);
  OriginCombiner OC(this, IRB);
  for (Use &Op : I.operands())
    OC.Add(Op.get());
  OC.Done(&I);
}

size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
  assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
         "Vector of pointers is not a valid shadow type");
  return Ty->isVectorTy() ? cast<FixedVectorType>(Ty)->getNumElements() *
                                Ty->getScalarSizeInBits()
                          : Ty->getPrimitiveSizeInBits();
}
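// Usage sketch: handleShadowOr() below is the typical client of this
// combiner. For %r = add i32 %a, %b it effectively emits
//   %_msprop = or i32 %Sa, %Sb
// and, with origins enabled, a select that keeps the origin of the
// rightmost operand whose flattened shadow is non-zero.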
/// Cast between two shadow types, extending or truncating as
/// necessary.
Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                        bool Signed = false) {
  Type *srcTy = V->getType();
  size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
  size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
  if (srcSizeInBits > 1 && dstSizeInBits == 1)
    return IRB.CreateICmpNE(V, getCleanShadow(V));

  if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
    return IRB.CreateIntCast(V, dstTy, Signed);
  if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
      cast<FixedVectorType>(dstTy)->getNumElements() ==
          cast<FixedVectorType>(srcTy)->getNumElements())
    return IRB.CreateIntCast(V, dstTy, Signed);
  Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
  Value *V2 =
      IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
  return IRB.CreateBitCast(V2, dstTy);
  // TODO: handle struct types.
}

/// Cast an application value to the type of its own shadow.
Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
  Type *ShadowTy = getShadowTy(V);
  if (V->getType() == ShadowTy)
    return V;
  if (V->getType()->isPtrOrPtrVectorTy())
    return IRB.CreatePtrToInt(V, ShadowTy);
  else
    return IRB.CreateBitCast(V, ShadowTy);
}

/// Propagate shadow for arbitrary operation.
void handleShadowOr(Instruction &I) {
  IRBuilder<> IRB(&I);
  ShadowAndOriginCombiner SC(this, IRB);
  for (Use &Op : I.operands())
    SC.Add(Op.get());
  SC.Done(&I);
}

void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }

// Handle multiplication by constant.
//
// Handle a special case of multiplication by constant that may have one or
// more zeros in the lower bits. This makes the corresponding number of lower
// bits of the result zero as well. We model it by shifting the other operand
// shadow left by the required number of bits. Effectively, we transform
// (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
// We use multiplication by 2**N instead of shift to cover the case of
// multiplication by 0, which may occur in some elements of a vector operand.
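// Worked example: for %r = mul i32 %x, 24, the constant 24 = 3 * 2**3 has
// three trailing zero bits, so handleMulByConstant() below picks
// ShadowMul = 8 and emits
//   %msprop_mul_cst = mul i32 %Sx, 8
// leaving the three low bits of the result shadow provably clean.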
void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                         Value *OtherArg) {
  Constant *ShadowMul;
  Type *Ty = ConstArg->getType();
  if (auto *VTy = dyn_cast<VectorType>(Ty)) {
    unsigned NumElements = cast<FixedVectorType>(VTy)->getNumElements();
    Type *EltTy = VTy->getElementType();
    SmallVector<Constant *, 16> Elements;
    for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
      if (ConstantInt *Elt =
              dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        Elements.push_back(ConstantInt::get(EltTy, V2));
      } else {
        Elements.push_back(ConstantInt::get(EltTy, 1));
      }
    }
    ShadowMul = ConstantVector::get(Elements);
  } else {
    if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
      const APInt &V = Elt->getValue();
      APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
      ShadowMul = ConstantInt::get(Ty, V2);
    } else {
      ShadowMul = ConstantInt::get(Ty, 1);
    }
  }

  IRBuilder<> IRB(&I);
  setShadow(&I,
            IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
  setOrigin(&I, getOrigin(OtherArg));
}

void visitMul(BinaryOperator &I) {
  Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
  Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
  if (constOp0 && !constOp1)
    handleMulByConstant(I, constOp0, I.getOperand(1));
  else if (constOp1 && !constOp0)
    handleMulByConstant(I, constOp1, I.getOperand(0));
  else
    handleShadowOr(I);
}

void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
void visitSub(BinaryOperator &I) { handleShadowOr(I); }
void visitXor(BinaryOperator &I) { handleShadowOr(I); }

void handleIntegerDiv(Instruction &I) {
  IRBuilder<> IRB(&I);
  // Strict on the second argument.
  insertShadowCheck(I.getOperand(1), &I);
  setShadow(&I, getShadow(&I, 0));
  setOrigin(&I, getOrigin(&I, 0));
}

void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

// Floating point division is side-effect free. We cannot require that the
// divisor is fully initialized and must propagate shadow. See PR37523.
void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
void visitFRem(BinaryOperator &I) { handleShadowOr(I); }

/// Instrument == and != comparisons.
///
/// Sometimes the comparison result is known even if some of the bits of the
/// arguments are not.
void handleEqualityComparison(ICmpInst &I) {
  IRBuilder<> IRB(&I);
  Value *A = I.getOperand(0);
  Value *B = I.getOperand(1);
  Value *Sa = getShadow(A);
  Value *Sb = getShadow(B);

  // Get rid of pointers and vectors of pointers.
  // For ints (and vectors of ints), types of A and Sa match,
  // and this is a no-op.
  A = IRB.CreatePointerCast(A, Sa->getType());
  B = IRB.CreatePointerCast(B, Sb->getType());

  // A == B <==> (C = A^B) == 0
  // A != B <==> (C = A^B) != 0
  // Sc = Sa | Sb
  Value *C = IRB.CreateXor(A, B);
  Value *Sc = IRB.CreateOr(Sa, Sb);
  // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
  // Result is defined if one of the following is true
  // * there is a defined 1 bit in C
  // * C is fully defined
  // Si = !(C & ~Sc) && Sc
  Value *Zero = Constant::getNullValue(Sc->getType());
  Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
  Value *Si = IRB.CreateAnd(
      IRB.CreateICmpNE(Sc, Zero),
      IRB.CreateICmpEQ(IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
  Si->setName("_msprop_icmp");
  setShadow(&I, Si);
  setOriginForNaryOp(I);
}

/// Build the lowest possible value of A, taking into account A's
/// uninitialized bits.
Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                              bool isSigned) {
  if (isSigned) {
    // Split shadow into sign bit and other bits.
    Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
    Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
    // Maximize the undefined shadow bit, minimize other undefined bits.
    return IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)),
                        SaSignBit);
  } else {
    // Minimize undefined bits.
    return IRB.CreateAnd(A, IRB.CreateNot(Sa));
  }
}

/// Build the highest possible value of A, taking into account A's
/// uninitialized bits.
Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                               bool isSigned) {
  if (isSigned) {
    // Split shadow into sign bit and other bits.
    Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
    Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
    // Minimize the undefined shadow bit, maximize other undefined bits.
    return IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)),
                        SaOtherBits);
  } else {
    // Maximize undefined bits.
    return IRB.CreateOr(A, Sa);
  }
}

/// Instrument relational comparisons.
///
/// This function does exact shadow propagation for all relational
/// comparisons of integers, pointers and vectors of those.
/// FIXME: output seems suboptimal when one of the operands is a constant
void handleRelationalComparisonExact(ICmpInst &I) {
  IRBuilder<> IRB(&I);
  Value *A = I.getOperand(0);
  Value *B = I.getOperand(1);
  Value *Sa = getShadow(A);
  Value *Sb = getShadow(B);

  // Get rid of pointers and vectors of pointers.
  // For ints (and vectors of ints), types of A and Sa match,
  // and this is a no-op.
  A = IRB.CreatePointerCast(A, Sa->getType());
  B = IRB.CreatePointerCast(B, Sb->getType());

  // Let [a0, a1] be the interval of possible values of A, taking into account
  // its undefined bits. Let [b0, b1] be the interval of possible values of B.
  // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
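  // Worked example: for unsigned A = 0b?10 (Sa = 0b100) we get
  // [a0, a1] = [0b010, 0b110]. Against a fully defined B = 0b001,
  // (a0 ugt b) and (a1 ugt b) are both true, so "A ugt B" is defined
  // even though A itself is partially poisoned.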
  bool IsSigned = I.isSigned();
  Value *S1 = IRB.CreateICmp(I.getPredicate(),
                             getLowestPossibleValue(IRB, A, Sa, IsSigned),
                             getHighestPossibleValue(IRB, B, Sb, IsSigned));
  Value *S2 = IRB.CreateICmp(I.getPredicate(),
                             getHighestPossibleValue(IRB, A, Sa, IsSigned),
                             getLowestPossibleValue(IRB, B, Sb, IsSigned));
  Value *Si = IRB.CreateXor(S1, S2);
  setShadow(&I, Si);
  setOriginForNaryOp(I);
}

/// Instrument signed relational comparisons.
///
/// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
/// bit of the shadow. Everything else is delegated to handleShadowOr().
void handleSignedRelationalComparison(ICmpInst &I) {
  Constant *constOp;
  Value *op = nullptr;
  CmpInst::Predicate pre;
  if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
    op = I.getOperand(0);
    pre = I.getPredicate();
  } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
    op = I.getOperand(1);
    pre = I.getSwappedPredicate();
  } else {
    handleShadowOr(I);
    return;
  }

  if ((constOp->isNullValue() &&
       (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
      (constOp->isAllOnesValue() &&
       (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
    IRBuilder<> IRB(&I);
    Value *Shadow =
        IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op), "_msprop_icmp_s");
    setShadow(&I, Shadow);
    setOrigin(&I, getOrigin(op));
  } else {
    handleShadowOr(I);
  }
}

void visitICmpInst(ICmpInst &I) {
  if (!ClHandleICmp) {
    handleShadowOr(I);
    return;
  }
  if (I.isEquality()) {
    handleEqualityComparison(I);
    return;
  }

  assert(I.isRelational());
  if (ClHandleICmpExact) {
    handleRelationalComparisonExact(I);
    return;
  }
  if (I.isSigned()) {
    handleSignedRelationalComparison(I);
    return;
  }

  assert(I.isUnsigned());
  if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
    handleRelationalComparisonExact(I);
    return;
  }

  handleShadowOr(I);
}

void visitFCmpInst(FCmpInst &I) { handleShadowOr(I); }

void handleShift(BinaryOperator &I) {
  IRBuilder<> IRB(&I);
  // If any of the S2 bits are poisoned, the whole thing is poisoned.
  // Otherwise perform the same shift on S1.
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *S2Conv =
      IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)), S2->getType());
  Value *V2 = I.getOperand(1);
  Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
  setOriginForNaryOp(I);
}

void visitShl(BinaryOperator &I) { handleShift(I); }
void visitAShr(BinaryOperator &I) { handleShift(I); }
void visitLShr(BinaryOperator &I) { handleShift(I); }

void handleFunnelShift(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  // If any of the S2 bits are poisoned, the whole thing is poisoned.
  // Otherwise perform the same shift on S0 and S1.
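  // Sketch: for llvm.fshl.i32(%a, %b, %c) the shadow below becomes
  // llvm.fshl.i32(%Sa, %Sb, %c), OR'ed with an all-ones mask whenever any
  // bit of %Sc is set.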
  Value *S0 = getShadow(&I, 0);
  Value *S1 = getShadow(&I, 1);
  Value *S2 = getShadow(&I, 2);
  Value *S2Conv =
      IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)), S2->getType());
  Value *V2 = I.getOperand(2);
  Function *Intrin = Intrinsic::getDeclaration(
      I.getModule(), I.getIntrinsicID(), S2Conv->getType());
  Value *Shift = IRB.CreateCall(Intrin, {S0, S1, V2});
  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
  setOriginForNaryOp(I);
}

/// Instrument llvm.memmove
///
/// At this point we don't know if llvm.memmove will be inlined or not.
/// If we don't instrument it and it gets inlined,
/// our interceptor will not kick in and we will lose the memmove.
/// If we instrument the call here, but it does not get inlined,
/// we will memmove the shadow twice, which is bad in the case
/// of overlapping regions. So, we simply lower the intrinsic to a call.
///
/// A similar situation exists for memcpy and memset.
void visitMemMoveInst(MemMoveInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemmoveFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

// Similar to memmove: avoid copying shadow twice. This is somewhat
// unfortunate as it may slow down small constant memcpys.
// FIXME: consider doing manual inline for small constant sizes and proper
// alignment.
void visitMemCpyInst(MemCpyInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemcpyFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

// Same as memcpy.
void visitMemSetInst(MemSetInst &I) {
  IRBuilder<> IRB(&I);
  IRB.CreateCall(
      MS.MemsetFn,
      {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
       IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
       IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
  I.eraseFromParent();
}

void visitVAStartInst(VAStartInst &I) { VAHelper->visitVAStartInst(I); }

void visitVACopyInst(VACopyInst &I) { VAHelper->visitVACopyInst(I); }

/// Handle vector store-like intrinsics.
///
/// Instrument intrinsics that look like a simple SIMD store: writes memory,
/// has 1 pointer argument and 1 vector argument, returns void.
bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *Addr = I.getArgOperand(0);
  Value *Shadow = getShadow(&I, 1);
  Value *ShadowPtr, *OriginPtr;

  // We don't know the pointer alignment (could be unaligned SSE store!).
  // Have to assume the worst case.
  std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
      Addr, IRB, Shadow->getType(), Align(1), /*isStore*/ true);
  IRB.CreateAlignedStore(Shadow, ShadowPtr, Align(1));

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  // FIXME: factor out common code from materializeStores
  if (MS.TrackOrigins)
    IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
  return true;
}
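// Lowering sketch: after this pass,
//   call void @llvm.memmove.p0i8.p0i8.i64(i8* %d, i8* %s, i64 %n, i1 false)
// is replaced by a call to the runtime interceptor,
//   call i8* @__msan_memmove(i8* %d, i8* %s, i64 %n)
// which moves the application bytes and their shadow together.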
/// Handle vector load-like intrinsics.
///
/// Instrument intrinsics that look like a simple SIMD load: reads memory,
/// has 1 pointer argument, returns a vector.
bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *Addr = I.getArgOperand(0);

  Type *ShadowTy = getShadowTy(&I);
  Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
  if (PropagateShadow) {
    // We don't know the pointer alignment (could be unaligned SSE load!).
    // Have to assume the worst case.
    const Align Alignment = Align(1);
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
    setShadow(&I,
              IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
  } else {
    setShadow(&I, getCleanShadow(&I));
  }

  if (ClCheckAccessAddress)
    insertShadowCheck(Addr, &I);

  if (MS.TrackOrigins) {
    if (PropagateShadow)
      setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
    else
      setOrigin(&I, getCleanOrigin());
  }
  return true;
}

/// Handle (SIMD arithmetic)-like intrinsics.
///
/// Instrument intrinsics with any number of arguments of the same type,
/// equal to the return type. The type should be simple (no aggregates or
/// pointers; vectors are fine).
/// Caller guarantees that this intrinsic does not access memory.
bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
  Type *RetTy = I.getType();
  if (!(RetTy->isIntOrIntVectorTy() || RetTy->isFPOrFPVectorTy() ||
        RetTy->isX86_MMXTy()))
    return false;

  unsigned NumArgOperands = I.arg_size();
  for (unsigned i = 0; i < NumArgOperands; ++i) {
    Type *Ty = I.getArgOperand(i)->getType();
    if (Ty != RetTy)
      return false;
  }

  IRBuilder<> IRB(&I);
  ShadowAndOriginCombiner SC(this, IRB);
  for (unsigned i = 0; i < NumArgOperands; ++i)
    SC.Add(I.getArgOperand(i));
  SC.Done(&I);

  return true;
}

/// Heuristically instrument unknown intrinsics.
///
/// The main purpose of this code is to do something reasonable with all
/// random intrinsics we might encounter, most importantly - SIMD intrinsics.
/// We recognize several classes of intrinsics by their argument types and
/// ModRefBehaviour and apply special instrumentation when we are reasonably
/// sure that we know what the intrinsic does.
///
/// We special-case intrinsics where this approach fails. See llvm.bswap
/// handling as an example of that.
bool handleUnknownIntrinsic(IntrinsicInst &I) {
  unsigned NumArgOperands = I.arg_size();
  if (NumArgOperands == 0)
    return false;

  if (NumArgOperands == 2 && I.getArgOperand(0)->getType()->isPointerTy() &&
      I.getArgOperand(1)->getType()->isVectorTy() &&
      I.getType()->isVoidTy() && !I.onlyReadsMemory()) {
    // This looks like a vector store.
    return handleVectorStoreIntrinsic(I);
  }

  if (NumArgOperands == 1 && I.getArgOperand(0)->getType()->isPointerTy() &&
      I.getType()->isVectorTy() && I.onlyReadsMemory()) {
    // This looks like a vector load.
    return handleVectorLoadIntrinsic(I);
  }

  if (I.doesNotAccessMemory())
    if (maybeHandleSimpleNomemIntrinsic(I))
      return true;

  // FIXME: detect and handle SSE maskstore/maskload
  return false;
}

void handleInvariantGroup(IntrinsicInst &I) {
  setShadow(&I, getShadow(&I, 0));
  setOrigin(&I, getOrigin(&I, 0));
}

void handleLifetimeStart(IntrinsicInst &I) {
  if (!PoisonStack)
    return;
  AllocaInst *AI = llvm::findAllocaForValue(I.getArgOperand(1));
  if (!AI)
    InstrumentLifetimeStart = false;
  LifetimeStartList.push_back(std::make_pair(&I, AI));
}

void handleBswap(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *Op = I.getArgOperand(0);
  Type *OpType = Op->getType();
  Function *BswapFunc = Intrinsic::getDeclaration(
      F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
  setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
  setOrigin(&I, getOrigin(Op));
}

// Instrument vector convert intrinsic.
//
// This function instruments intrinsics like cvtsi2ss:
// %Out = int_xxx_cvtyyy(%ConvertOp)
// or
// %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
// Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
// number of \p Out elements, and (if it has 2 arguments) copies the rest of
// the elements from \p CopyOp.
// In most cases conversion involves a floating-point value which may trigger
// a hardware exception when not fully initialized. For this reason we require
// \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
// We copy the shadow of \p CopyOp[NumUsedElements:] to \p
// Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
// return a fully initialized value.
void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements,
                                  bool HasRoundingMode = false) {
  IRBuilder<> IRB(&I);
  Value *CopyOp, *ConvertOp;

  assert((!HasRoundingMode ||
          isa<ConstantInt>(I.getArgOperand(I.arg_size() - 1))) &&
         "Invalid rounding mode");

  switch (I.arg_size() - HasRoundingMode) {
  case 2:
    CopyOp = I.getArgOperand(0);
    ConvertOp = I.getArgOperand(1);
    break;
  case 1:
    ConvertOp = I.getArgOperand(0);
    CopyOp = nullptr;
    break;
  default:
    llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
  }

  // The first *NumUsedElements* elements of ConvertOp are converted to the
  // same number of output elements. The rest of the output is copied from
  // CopyOp, or (if not available) filled with zeroes.
  // Combine shadow for elements of ConvertOp that are used in this operation,
  // and insert a check.
  // FIXME: consider propagating shadow of ConvertOp, at least in the case of
  // int->any conversion.
  Value *ConvertShadow = getShadow(ConvertOp);
  Value *AggShadow = nullptr;
  if (ConvertOp->getType()->isVectorTy()) {
    AggShadow = IRB.CreateExtractElement(
        ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
    for (int i = 1; i < NumUsedElements; ++i) {
      Value *MoreShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
      AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
    }
  } else {
    AggShadow = ConvertShadow;
  }
  assert(AggShadow->getType()->isIntegerTy());
  insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

  // Build result shadow by zero-filling parts of CopyOp shadow that come from
  // ConvertOp.
  if (CopyOp) {
    assert(CopyOp->getType() == I.getType());
    assert(CopyOp->getType()->isVectorTy());
    Value *ResultShadow = getShadow(CopyOp);
    Type *EltTy = cast<VectorType>(ResultShadow->getType())->getElementType();
    for (int i = 0; i < NumUsedElements; ++i) {
      ResultShadow = IRB.CreateInsertElement(
          ResultShadow, ConstantInt::getNullValue(EltTy),
          ConstantInt::get(IRB.getInt32Ty(), i));
    }
    setShadow(&I, ResultShadow);
    setOrigin(&I, getOrigin(CopyOp));
  } else {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
}

// Given a scalar or vector, extract lower 64 bits (or less), and return all
// zeroes if it is zero, and all ones otherwise.
Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
  if (S->getType()->isVectorTy())
    S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
  assert(S->getType()->getPrimitiveSizeInBits() <= 64);
  Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
  return CreateShadowCast(IRB, S2, T, /* Signed */ true);
}

// Given a vector, extract its first element, and return all
// zeroes if it is zero, and all ones otherwise.
Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
  Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
  Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
  return CreateShadowCast(IRB, S2, T, /* Signed */ true);
}

Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
  Type *T = S->getType();
  assert(T->isVectorTy());
  Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
  return IRB.CreateSExt(S2, T);
}

// Instrument vector shift intrinsic.
//
// This function instruments intrinsics like int_x86_avx2_psll_w.
// Intrinsic shifts %In by %ShiftSize bits.
// %ShiftSize may be a vector. In that case the lower 64 bits determine shift
// size, and the rest is ignored. Behavior is defined even if shift size is
// greater than register (or field) width.
void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
  assert(I.arg_size() == 2);
  IRBuilder<> IRB(&I);
  // If any of the S2 bits are poisoned, the whole thing is poisoned.
  // Otherwise perform the same shift on S1.
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                           : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
  Value *V1 = I.getOperand(0);
  Value *V2 = I.getOperand(1);
  Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledOperand(),
                                {IRB.CreateBitCast(S1, V1->getType()), V2});
  Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
  setShadow(&I, IRB.CreateOr(Shift, S2Conv));
  setOriginForNaryOp(I);
}

// Get an X86_MMX-sized vector type.
Type *getMMXVectorTy(unsigned EltSizeInBits) {
  const unsigned X86_MMXSizeInBits = 64;
  assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 &&
         "Illegal MMX vector element size");
  return FixedVectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                              X86_MMXSizeInBits / EltSizeInBits);
}

// Returns a signed counterpart for an (un)signed-saturate-and-pack
// intrinsic.
Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
  switch (id) {
  case Intrinsic::x86_sse2_packsswb_128:
  case Intrinsic::x86_sse2_packuswb_128:
    return Intrinsic::x86_sse2_packsswb_128;

  case Intrinsic::x86_sse2_packssdw_128:
  case Intrinsic::x86_sse41_packusdw:
    return Intrinsic::x86_sse2_packssdw_128;

  case Intrinsic::x86_avx2_packsswb:
  case Intrinsic::x86_avx2_packuswb:
    return Intrinsic::x86_avx2_packsswb;

  case Intrinsic::x86_avx2_packssdw:
  case Intrinsic::x86_avx2_packusdw:
    return Intrinsic::x86_avx2_packssdw;

  case Intrinsic::x86_mmx_packsswb:
  case Intrinsic::x86_mmx_packuswb:
    return Intrinsic::x86_mmx_packsswb;

  case Intrinsic::x86_mmx_packssdw:
    return Intrinsic::x86_mmx_packssdw;
  default:
    llvm_unreachable("unexpected intrinsic id");
  }
}

// Instrument vector pack intrinsic.
//
// This function instruments intrinsics like x86_mmx_packsswb, which
// pack elements of 2 input vectors into half as many bits with saturation.
// Shadow is propagated with the signed variant of the same intrinsic applied
// to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
// EltSizeInBits is used only for x86mmx arguments.
void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
  assert(I.arg_size() == 2);
  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
  IRBuilder<> IRB(&I);
  Value *S1 = getShadow(&I, 0);
  Value *S2 = getShadow(&I, 1);
  assert(isX86_MMX || S1->getType()->isVectorTy());

  // SExt and ICmpNE below must apply to individual elements of input vectors.
  // In case of x86mmx arguments, cast them to appropriate vector types and
  // back.
  Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
  if (isX86_MMX) {
    S1 = IRB.CreateBitCast(S1, T);
    S2 = IRB.CreateBitCast(S2, T);
  }
  Value *S1_ext =
      IRB.CreateSExt(IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
  Value *S2_ext =
      IRB.CreateSExt(IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
  if (isX86_MMX) {
    Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
    S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
    S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
  }

  Function *ShadowFn = Intrinsic::getDeclaration(
      F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

  Value *S =
      IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
  if (isX86_MMX)
    S = IRB.CreateBitCast(S, getShadowTy(&I));
  setShadow(&I, S);
  setOriginForNaryOp(I);
}

// Instrument sum-of-absolute-differences intrinsic.
void handleVectorSadIntrinsic(IntrinsicInst &I) {
  const unsigned SignificantBitsPerResultElement = 16;
  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
  Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
  unsigned ZeroBitsPerResultElement =
      ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

  IRBuilder<> IRB(&I);
  Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
  S = IRB.CreateBitCast(S, ResTy);
  S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                     ResTy);
  S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
  S = IRB.CreateBitCast(S, getShadowTy(&I));
  setShadow(&I, S);
  setOriginForNaryOp(I);
}

// Instrument multiply-add intrinsic.
void handleVectorPmaddIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
  bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
  Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
  IRBuilder<> IRB(&I);
  Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
  S = IRB.CreateBitCast(S, ResTy);
  S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                     ResTy);
  S = IRB.CreateBitCast(S, getShadowTy(&I));
  setShadow(&I, S);
  setOriginForNaryOp(I);
}

// Instrument compare-packed intrinsic.
// Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
// all-ones shadow.
void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Type *ResTy = getShadowTy(&I);
  Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
  Value *S = IRB.CreateSExt(
      IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
  setShadow(&I, S);
  setOriginForNaryOp(I);
}

// Instrument compare-scalar intrinsic.
// This handles both cmp* intrinsics which return the result in the first
// element of a vector, and comi* which return the result as i32.
void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
  IRBuilder<> IRB(&I);
  Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
  Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
  setShadow(&I, S);
  setOriginForNaryOp(I);
}

// Instrument generic vector reduction intrinsics
// by ORing together all their fields.
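// Example: for %r = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %v),
// handleVectorReduceIntrinsic() below emits an or-reduction of %v's shadow
// vector, so the scalar result is poisoned iff at least one lane of %v is
// poisoned.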
  void handleVectorReduceIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOrReduce(getShadow(&I, 0));
    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  // Instrument vector.reduce.or intrinsic.
  // Valid (non-poisoned) set bits in the operand pull down the
  // corresponding shadow bits.
  void handleVectorReduceOrIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *OperandShadow = getShadow(&I, 0);
    Value *OperandUnsetBits = IRB.CreateNot(I.getOperand(0));
    Value *OperandUnsetOrPoison = IRB.CreateOr(OperandUnsetBits, OperandShadow);
    // Bit N is clean if any field's bit N is 1 and unpoisoned.
    Value *OutShadowMask = IRB.CreateAndReduce(OperandUnsetOrPoison);
    // Otherwise, it is clean if every field's bit N is unpoisoned.
    Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
    Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  // Instrument vector.reduce.and intrinsic.
  // Valid (non-poisoned) unset bits in the operand pull down the
  // corresponding shadow bits.
  void handleVectorReduceAndIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *OperandShadow = getShadow(&I, 0);
    Value *OperandSetOrPoison = IRB.CreateOr(I.getOperand(0), OperandShadow);
    // Bit N is clean if any field's bit N is 0 and unpoisoned.
    Value *OutShadowMask = IRB.CreateAndReduce(OperandSetOrPoison);
    // Otherwise, it is clean if every field's bit N is unpoisoned.
    Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
    Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, Align(1), /*isStore*/ true).first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }

  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    const Align Alignment = Align(1);
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }

  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    const Align Alignment(
        cast<ConstantInt>(I.getArgOperand(2))->getZExtValue());
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Alignment, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // An uninitialized mask is kind of like an uninitialized address, but
      // not as scary.
3123 insertShadowCheck(Mask, &I); 3124 } 3125 3126 IRB.CreateMaskedStore(Shadow, ShadowPtr, Alignment, Mask); 3127 3128 if (MS.TrackOrigins) { 3129 auto &DL = F.getParent()->getDataLayout(); 3130 paintOrigin(IRB, getOrigin(V), OriginPtr, 3131 DL.getTypeStoreSize(Shadow->getType()), 3132 std::max(Alignment, kMinOriginAlignment)); 3133 } 3134 } 3135 3136 bool handleMaskedLoad(IntrinsicInst &I) { 3137 IRBuilder<> IRB(&I); 3138 Value *Addr = I.getArgOperand(0); 3139 const Align Alignment( 3140 cast<ConstantInt>(I.getArgOperand(1))->getZExtValue()); 3141 Value *Mask = I.getArgOperand(2); 3142 Value *PassThru = I.getArgOperand(3); 3143 3144 Type *ShadowTy = getShadowTy(&I); 3145 Value *ShadowPtr, *OriginPtr; 3146 if (PropagateShadow) { 3147 std::tie(ShadowPtr, OriginPtr) = 3148 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 3149 setShadow(&I, IRB.CreateMaskedLoad(ShadowTy, ShadowPtr, Alignment, Mask, 3150 getShadow(PassThru), "_msmaskedld")); 3151 } else { 3152 setShadow(&I, getCleanShadow(&I)); 3153 } 3154 3155 if (ClCheckAccessAddress) { 3156 insertShadowCheck(Addr, &I); 3157 insertShadowCheck(Mask, &I); 3158 } 3159 3160 if (MS.TrackOrigins) { 3161 if (PropagateShadow) { 3162 // Choose between PassThru's and the loaded value's origins. 3163 Value *MaskedPassThruShadow = IRB.CreateAnd( 3164 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 3165 3166 Value *Acc = IRB.CreateExtractElement( 3167 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 3168 for (int i = 1, N = cast<FixedVectorType>(PassThru->getType()) 3169 ->getNumElements(); 3170 i < N; ++i) { 3171 Value *More = IRB.CreateExtractElement( 3172 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 3173 Acc = IRB.CreateOr(Acc, More); 3174 } 3175 3176 Value *Origin = IRB.CreateSelect( 3177 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 3178 getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr)); 3179 3180 setOrigin(&I, Origin); 3181 } else { 3182 setOrigin(&I, getCleanOrigin()); 3183 } 3184 } 3185 return true; 3186 } 3187 3188 // Instrument BMI / BMI2 intrinsics. 3189 // All of these intrinsics are Z = I(X, Y) 3190 // where the types of all operands and the result match, and are either i32 or i64. 3191 // The following instrumentation happens to work for all of them: 3192 // Sz = I(Sx, Y) | (sext (Sy != 0)) 3193 void handleBmiIntrinsic(IntrinsicInst &I) { 3194 IRBuilder<> IRB(&I); 3195 Type *ShadowTy = getShadowTy(&I); 3196 3197 // If any bit of the mask operand is poisoned, then the whole thing is. 3198 Value *SMask = getShadow(&I, 1); 3199 SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)), 3200 ShadowTy); 3201 // Apply the same intrinsic to the shadow of the first operand. 3202 Value *S = IRB.CreateCall(I.getCalledFunction(), 3203 {getShadow(&I, 0), I.getOperand(1)}); 3204 S = IRB.CreateOr(SMask, S); 3205 setShadow(&I, S); 3206 setOriginForNaryOp(I); 3207 } 3208 3209 SmallVector<int, 8> getPclmulMask(unsigned Width, bool OddElements) { 3210 SmallVector<int, 8> Mask; 3211 for (unsigned X = OddElements ? 1 : 0; X < Width; X += 2) { 3212 Mask.append(2, X); 3213 } 3214 return Mask; 3215 } 3216 3217 // Instrument pclmul intrinsics. 3218 // These intrinsics operate either on odd or on even elements of the input 3219 // vectors, depending on the constant in the 3rd argument, ignoring the rest. 
  // Replace the unused elements with copies of the used ones, e.g.:
  //   (0, 1, 2, 3) -> (0, 0, 2, 2) (even case)
  // or
  //   (0, 1, 2, 3) -> (1, 1, 3, 3) (odd case)
  // and then apply the usual shadow combining logic.
  void handlePclmulIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    unsigned Width =
        cast<FixedVectorType>(I.getArgOperand(0)->getType())->getNumElements();
    assert(isa<ConstantInt>(I.getArgOperand(2)) &&
           "pclmul 3rd operand must be a constant");
    unsigned Imm = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Shuf0 = IRB.CreateShuffleVector(getShadow(&I, 0),
                                           getPclmulMask(Width, Imm & 0x01));
    Value *Shuf1 = IRB.CreateShuffleVector(getShadow(&I, 1),
                                           getPclmulMask(Width, Imm & 0x10));
    ShadowAndOriginCombiner SOC(this, IRB);
    SOC.Add(Shuf0, getOrigin(&I, 0));
    SOC.Add(Shuf1, getOrigin(&I, 1));
    SOC.Done(&I);
  }

  // Instrument _mm_*_sd intrinsics.
  void handleUnarySdIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *First = getShadow(&I, 0);
    Value *Second = getShadow(&I, 1);
    // High word of the first operand, low word of the second.
    Value *Shadow =
        IRB.CreateShuffleVector(First, Second, llvm::makeArrayRef<int>({2, 1}));

    setShadow(&I, Shadow);
    setOriginForNaryOp(I);
  }

  void handleBinarySdIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *First = getShadow(&I, 0);
    Value *Second = getShadow(&I, 1);
    Value *OrShadow = IRB.CreateOr(First, Second);
    // High word of the first operand, low words of both operands OR'd
    // together.
    Value *Shadow = IRB.CreateShuffleVector(First, OrShadow,
                                            llvm::makeArrayRef<int>({2, 1}));

    setShadow(&I, Shadow);
    setOriginForNaryOp(I);
  }

  // Instrument the abs intrinsic.
  // handleUnknownIntrinsic can't handle it because of the last
  // is_int_min_poison argument, which does not match the result type.
  void handleAbsIntrinsic(IntrinsicInst &I) {
    assert(I.getType()->isIntOrIntVectorTy());
    assert(I.getArgOperand(0)->getType() == I.getType());

    // FIXME: Handle is_int_min_poison.
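    // Shadow is propagated as-is (see the FIXME above): e.g. for
    //   %r = call i32 @llvm.abs.i32(i32 %x, i1 false)
    // the shadow of %r is just the shadow of %x.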
3276 IRBuilder<> IRB(&I); 3277 setShadow(&I, getShadow(&I, 0)); 3278 setOrigin(&I, getOrigin(&I, 0)); 3279 } 3280 3281 void visitIntrinsicInst(IntrinsicInst &I) { 3282 switch (I.getIntrinsicID()) { 3283 case Intrinsic::abs: 3284 handleAbsIntrinsic(I); 3285 break; 3286 case Intrinsic::lifetime_start: 3287 handleLifetimeStart(I); 3288 break; 3289 case Intrinsic::launder_invariant_group: 3290 case Intrinsic::strip_invariant_group: 3291 handleInvariantGroup(I); 3292 break; 3293 case Intrinsic::bswap: 3294 handleBswap(I); 3295 break; 3296 case Intrinsic::masked_store: 3297 handleMaskedStore(I); 3298 break; 3299 case Intrinsic::masked_load: 3300 handleMaskedLoad(I); 3301 break; 3302 case Intrinsic::vector_reduce_and: 3303 handleVectorReduceAndIntrinsic(I); 3304 break; 3305 case Intrinsic::vector_reduce_or: 3306 handleVectorReduceOrIntrinsic(I); 3307 break; 3308 case Intrinsic::vector_reduce_add: 3309 case Intrinsic::vector_reduce_xor: 3310 case Intrinsic::vector_reduce_mul: 3311 handleVectorReduceIntrinsic(I); 3312 break; 3313 case Intrinsic::x86_sse_stmxcsr: 3314 handleStmxcsr(I); 3315 break; 3316 case Intrinsic::x86_sse_ldmxcsr: 3317 handleLdmxcsr(I); 3318 break; 3319 case Intrinsic::x86_avx512_vcvtsd2usi64: 3320 case Intrinsic::x86_avx512_vcvtsd2usi32: 3321 case Intrinsic::x86_avx512_vcvtss2usi64: 3322 case Intrinsic::x86_avx512_vcvtss2usi32: 3323 case Intrinsic::x86_avx512_cvttss2usi64: 3324 case Intrinsic::x86_avx512_cvttss2usi: 3325 case Intrinsic::x86_avx512_cvttsd2usi64: 3326 case Intrinsic::x86_avx512_cvttsd2usi: 3327 case Intrinsic::x86_avx512_cvtusi2ss: 3328 case Intrinsic::x86_avx512_cvtusi642sd: 3329 case Intrinsic::x86_avx512_cvtusi642ss: 3330 handleVectorConvertIntrinsic(I, 1, true); 3331 break; 3332 case Intrinsic::x86_sse2_cvtsd2si64: 3333 case Intrinsic::x86_sse2_cvtsd2si: 3334 case Intrinsic::x86_sse2_cvtsd2ss: 3335 case Intrinsic::x86_sse2_cvttsd2si64: 3336 case Intrinsic::x86_sse2_cvttsd2si: 3337 case Intrinsic::x86_sse_cvtss2si64: 3338 case Intrinsic::x86_sse_cvtss2si: 3339 case Intrinsic::x86_sse_cvttss2si64: 3340 case Intrinsic::x86_sse_cvttss2si: 3341 handleVectorConvertIntrinsic(I, 1); 3342 break; 3343 case Intrinsic::x86_sse_cvtps2pi: 3344 case Intrinsic::x86_sse_cvttps2pi: 3345 handleVectorConvertIntrinsic(I, 2); 3346 break; 3347 3348 case Intrinsic::x86_avx512_psll_w_512: 3349 case Intrinsic::x86_avx512_psll_d_512: 3350 case Intrinsic::x86_avx512_psll_q_512: 3351 case Intrinsic::x86_avx512_pslli_w_512: 3352 case Intrinsic::x86_avx512_pslli_d_512: 3353 case Intrinsic::x86_avx512_pslli_q_512: 3354 case Intrinsic::x86_avx512_psrl_w_512: 3355 case Intrinsic::x86_avx512_psrl_d_512: 3356 case Intrinsic::x86_avx512_psrl_q_512: 3357 case Intrinsic::x86_avx512_psra_w_512: 3358 case Intrinsic::x86_avx512_psra_d_512: 3359 case Intrinsic::x86_avx512_psra_q_512: 3360 case Intrinsic::x86_avx512_psrli_w_512: 3361 case Intrinsic::x86_avx512_psrli_d_512: 3362 case Intrinsic::x86_avx512_psrli_q_512: 3363 case Intrinsic::x86_avx512_psrai_w_512: 3364 case Intrinsic::x86_avx512_psrai_d_512: 3365 case Intrinsic::x86_avx512_psrai_q_512: 3366 case Intrinsic::x86_avx512_psra_q_256: 3367 case Intrinsic::x86_avx512_psra_q_128: 3368 case Intrinsic::x86_avx512_psrai_q_256: 3369 case Intrinsic::x86_avx512_psrai_q_128: 3370 case Intrinsic::x86_avx2_psll_w: 3371 case Intrinsic::x86_avx2_psll_d: 3372 case Intrinsic::x86_avx2_psll_q: 3373 case Intrinsic::x86_avx2_pslli_w: 3374 case Intrinsic::x86_avx2_pslli_d: 3375 case Intrinsic::x86_avx2_pslli_q: 3376 case Intrinsic::x86_avx2_psrl_w: 3377 case 
Intrinsic::x86_avx2_psrl_d: 3378 case Intrinsic::x86_avx2_psrl_q: 3379 case Intrinsic::x86_avx2_psra_w: 3380 case Intrinsic::x86_avx2_psra_d: 3381 case Intrinsic::x86_avx2_psrli_w: 3382 case Intrinsic::x86_avx2_psrli_d: 3383 case Intrinsic::x86_avx2_psrli_q: 3384 case Intrinsic::x86_avx2_psrai_w: 3385 case Intrinsic::x86_avx2_psrai_d: 3386 case Intrinsic::x86_sse2_psll_w: 3387 case Intrinsic::x86_sse2_psll_d: 3388 case Intrinsic::x86_sse2_psll_q: 3389 case Intrinsic::x86_sse2_pslli_w: 3390 case Intrinsic::x86_sse2_pslli_d: 3391 case Intrinsic::x86_sse2_pslli_q: 3392 case Intrinsic::x86_sse2_psrl_w: 3393 case Intrinsic::x86_sse2_psrl_d: 3394 case Intrinsic::x86_sse2_psrl_q: 3395 case Intrinsic::x86_sse2_psra_w: 3396 case Intrinsic::x86_sse2_psra_d: 3397 case Intrinsic::x86_sse2_psrli_w: 3398 case Intrinsic::x86_sse2_psrli_d: 3399 case Intrinsic::x86_sse2_psrli_q: 3400 case Intrinsic::x86_sse2_psrai_w: 3401 case Intrinsic::x86_sse2_psrai_d: 3402 case Intrinsic::x86_mmx_psll_w: 3403 case Intrinsic::x86_mmx_psll_d: 3404 case Intrinsic::x86_mmx_psll_q: 3405 case Intrinsic::x86_mmx_pslli_w: 3406 case Intrinsic::x86_mmx_pslli_d: 3407 case Intrinsic::x86_mmx_pslli_q: 3408 case Intrinsic::x86_mmx_psrl_w: 3409 case Intrinsic::x86_mmx_psrl_d: 3410 case Intrinsic::x86_mmx_psrl_q: 3411 case Intrinsic::x86_mmx_psra_w: 3412 case Intrinsic::x86_mmx_psra_d: 3413 case Intrinsic::x86_mmx_psrli_w: 3414 case Intrinsic::x86_mmx_psrli_d: 3415 case Intrinsic::x86_mmx_psrli_q: 3416 case Intrinsic::x86_mmx_psrai_w: 3417 case Intrinsic::x86_mmx_psrai_d: 3418 handleVectorShiftIntrinsic(I, /* Variable */ false); 3419 break; 3420 case Intrinsic::x86_avx2_psllv_d: 3421 case Intrinsic::x86_avx2_psllv_d_256: 3422 case Intrinsic::x86_avx512_psllv_d_512: 3423 case Intrinsic::x86_avx2_psllv_q: 3424 case Intrinsic::x86_avx2_psllv_q_256: 3425 case Intrinsic::x86_avx512_psllv_q_512: 3426 case Intrinsic::x86_avx2_psrlv_d: 3427 case Intrinsic::x86_avx2_psrlv_d_256: 3428 case Intrinsic::x86_avx512_psrlv_d_512: 3429 case Intrinsic::x86_avx2_psrlv_q: 3430 case Intrinsic::x86_avx2_psrlv_q_256: 3431 case Intrinsic::x86_avx512_psrlv_q_512: 3432 case Intrinsic::x86_avx2_psrav_d: 3433 case Intrinsic::x86_avx2_psrav_d_256: 3434 case Intrinsic::x86_avx512_psrav_d_512: 3435 case Intrinsic::x86_avx512_psrav_q_128: 3436 case Intrinsic::x86_avx512_psrav_q_256: 3437 case Intrinsic::x86_avx512_psrav_q_512: 3438 handleVectorShiftIntrinsic(I, /* Variable */ true); 3439 break; 3440 3441 case Intrinsic::x86_sse2_packsswb_128: 3442 case Intrinsic::x86_sse2_packssdw_128: 3443 case Intrinsic::x86_sse2_packuswb_128: 3444 case Intrinsic::x86_sse41_packusdw: 3445 case Intrinsic::x86_avx2_packsswb: 3446 case Intrinsic::x86_avx2_packssdw: 3447 case Intrinsic::x86_avx2_packuswb: 3448 case Intrinsic::x86_avx2_packusdw: 3449 handleVectorPackIntrinsic(I); 3450 break; 3451 3452 case Intrinsic::x86_mmx_packsswb: 3453 case Intrinsic::x86_mmx_packuswb: 3454 handleVectorPackIntrinsic(I, 16); 3455 break; 3456 3457 case Intrinsic::x86_mmx_packssdw: 3458 handleVectorPackIntrinsic(I, 32); 3459 break; 3460 3461 case Intrinsic::x86_mmx_psad_bw: 3462 case Intrinsic::x86_sse2_psad_bw: 3463 case Intrinsic::x86_avx2_psad_bw: 3464 handleVectorSadIntrinsic(I); 3465 break; 3466 3467 case Intrinsic::x86_sse2_pmadd_wd: 3468 case Intrinsic::x86_avx2_pmadd_wd: 3469 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3470 case Intrinsic::x86_avx2_pmadd_ub_sw: 3471 handleVectorPmaddIntrinsic(I); 3472 break; 3473 3474 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3475 handleVectorPmaddIntrinsic(I, 8); 
3476 break; 3477 3478 case Intrinsic::x86_mmx_pmadd_wd: 3479 handleVectorPmaddIntrinsic(I, 16); 3480 break; 3481 3482 case Intrinsic::x86_sse_cmp_ss: 3483 case Intrinsic::x86_sse2_cmp_sd: 3484 case Intrinsic::x86_sse_comieq_ss: 3485 case Intrinsic::x86_sse_comilt_ss: 3486 case Intrinsic::x86_sse_comile_ss: 3487 case Intrinsic::x86_sse_comigt_ss: 3488 case Intrinsic::x86_sse_comige_ss: 3489 case Intrinsic::x86_sse_comineq_ss: 3490 case Intrinsic::x86_sse_ucomieq_ss: 3491 case Intrinsic::x86_sse_ucomilt_ss: 3492 case Intrinsic::x86_sse_ucomile_ss: 3493 case Intrinsic::x86_sse_ucomigt_ss: 3494 case Intrinsic::x86_sse_ucomige_ss: 3495 case Intrinsic::x86_sse_ucomineq_ss: 3496 case Intrinsic::x86_sse2_comieq_sd: 3497 case Intrinsic::x86_sse2_comilt_sd: 3498 case Intrinsic::x86_sse2_comile_sd: 3499 case Intrinsic::x86_sse2_comigt_sd: 3500 case Intrinsic::x86_sse2_comige_sd: 3501 case Intrinsic::x86_sse2_comineq_sd: 3502 case Intrinsic::x86_sse2_ucomieq_sd: 3503 case Intrinsic::x86_sse2_ucomilt_sd: 3504 case Intrinsic::x86_sse2_ucomile_sd: 3505 case Intrinsic::x86_sse2_ucomigt_sd: 3506 case Intrinsic::x86_sse2_ucomige_sd: 3507 case Intrinsic::x86_sse2_ucomineq_sd: 3508 handleVectorCompareScalarIntrinsic(I); 3509 break; 3510 3511 case Intrinsic::x86_sse_cmp_ps: 3512 case Intrinsic::x86_sse2_cmp_pd: 3513 // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function 3514 // generates reasonably looking IR that fails in the backend with "Do not 3515 // know how to split the result of this operator!". 3516 handleVectorComparePackedIntrinsic(I); 3517 break; 3518 3519 case Intrinsic::x86_bmi_bextr_32: 3520 case Intrinsic::x86_bmi_bextr_64: 3521 case Intrinsic::x86_bmi_bzhi_32: 3522 case Intrinsic::x86_bmi_bzhi_64: 3523 case Intrinsic::x86_bmi_pdep_32: 3524 case Intrinsic::x86_bmi_pdep_64: 3525 case Intrinsic::x86_bmi_pext_32: 3526 case Intrinsic::x86_bmi_pext_64: 3527 handleBmiIntrinsic(I); 3528 break; 3529 3530 case Intrinsic::x86_pclmulqdq: 3531 case Intrinsic::x86_pclmulqdq_256: 3532 case Intrinsic::x86_pclmulqdq_512: 3533 handlePclmulIntrinsic(I); 3534 break; 3535 3536 case Intrinsic::x86_sse41_round_sd: 3537 handleUnarySdIntrinsic(I); 3538 break; 3539 case Intrinsic::x86_sse2_max_sd: 3540 case Intrinsic::x86_sse2_min_sd: 3541 handleBinarySdIntrinsic(I); 3542 break; 3543 3544 case Intrinsic::fshl: 3545 case Intrinsic::fshr: 3546 handleFunnelShift(I); 3547 break; 3548 3549 case Intrinsic::is_constant: 3550 // The result of llvm.is.constant() is always defined. 3551 setShadow(&I, getCleanShadow(&I)); 3552 setOrigin(&I, getCleanOrigin()); 3553 break; 3554 3555 default: 3556 if (!handleUnknownIntrinsic(I)) 3557 visitInstruction(I); 3558 break; 3559 } 3560 } 3561 3562 void visitLibAtomicLoad(CallBase &CB) { 3563 // Since we use getNextNode here, we can't have CB terminate the BB. 3564 assert(isa<CallInst>(CB)); 3565 3566 IRBuilder<> IRB(&CB); 3567 Value *Size = CB.getArgOperand(0); 3568 Value *SrcPtr = CB.getArgOperand(1); 3569 Value *DstPtr = CB.getArgOperand(2); 3570 Value *Ordering = CB.getArgOperand(3); 3571 // Convert the call to have at least Acquire ordering to make sure 3572 // the shadow operations aren't reordered before it. 
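    // We assume the libatomic generic call signature here, roughly:
    //   void __atomic_load(size_t size, void *src, void *dst, int ordering);
    // so operand 3 holds the memory ordering that may need strengthening.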
3573 Value *NewOrdering = 3574 IRB.CreateExtractElement(makeAddAcquireOrderingTable(IRB), Ordering); 3575 CB.setArgOperand(3, NewOrdering); 3576 3577 IRBuilder<> NextIRB(CB.getNextNode()); 3578 NextIRB.SetCurrentDebugLocation(CB.getDebugLoc()); 3579 3580 Value *SrcShadowPtr, *SrcOriginPtr; 3581 std::tie(SrcShadowPtr, SrcOriginPtr) = 3582 getShadowOriginPtr(SrcPtr, NextIRB, NextIRB.getInt8Ty(), Align(1), 3583 /*isStore*/ false); 3584 Value *DstShadowPtr = 3585 getShadowOriginPtr(DstPtr, NextIRB, NextIRB.getInt8Ty(), Align(1), 3586 /*isStore*/ true) 3587 .first; 3588 3589 NextIRB.CreateMemCpy(DstShadowPtr, Align(1), SrcShadowPtr, Align(1), Size); 3590 if (MS.TrackOrigins) { 3591 Value *SrcOrigin = NextIRB.CreateAlignedLoad(MS.OriginTy, SrcOriginPtr, 3592 kMinOriginAlignment); 3593 Value *NewOrigin = updateOrigin(SrcOrigin, NextIRB); 3594 NextIRB.CreateCall(MS.MsanSetOriginFn, {DstPtr, Size, NewOrigin}); 3595 } 3596 } 3597 3598 void visitLibAtomicStore(CallBase &CB) { 3599 IRBuilder<> IRB(&CB); 3600 Value *Size = CB.getArgOperand(0); 3601 Value *DstPtr = CB.getArgOperand(2); 3602 Value *Ordering = CB.getArgOperand(3); 3603 // Convert the call to have at least Release ordering to make sure 3604 // the shadow operations aren't reordered after it. 3605 Value *NewOrdering = 3606 IRB.CreateExtractElement(makeAddReleaseOrderingTable(IRB), Ordering); 3607 CB.setArgOperand(3, NewOrdering); 3608 3609 Value *DstShadowPtr = 3610 getShadowOriginPtr(DstPtr, IRB, IRB.getInt8Ty(), Align(1), 3611 /*isStore*/ true) 3612 .first; 3613 3614 // Atomic store always paints clean shadow/origin. See file header. 3615 IRB.CreateMemSet(DstShadowPtr, getCleanShadow(IRB.getInt8Ty()), Size, 3616 Align(1)); 3617 } 3618 3619 void visitCallBase(CallBase &CB) { 3620 assert(!CB.getMetadata("nosanitize")); 3621 if (CB.isInlineAsm()) { 3622 // For inline asm (either a call to asm function, or callbr instruction), 3623 // do the usual thing: check argument shadow and mark all outputs as 3624 // clean. Note that any side effects of the inline asm that are not 3625 // immediately visible in its constraints are not handled. 3626 if (ClHandleAsmConservative && MS.CompileKernel) 3627 visitAsmInstruction(CB); 3628 else 3629 visitInstruction(CB); 3630 return; 3631 } 3632 LibFunc LF; 3633 if (TLI->getLibFunc(CB, LF)) { 3634 // libatomic.a functions need to have special handling because there isn't 3635 // a good way to intercept them or compile the library with 3636 // instrumentation. 3637 switch (LF) { 3638 case LibFunc_atomic_load: 3639 if (!isa<CallInst>(CB)) { 3640 llvm::errs() << "MSAN -- cannot instrument invoke of libatomic load." 3641 "Ignoring!\n"; 3642 break; 3643 } 3644 visitLibAtomicLoad(CB); 3645 return; 3646 case LibFunc_atomic_store: 3647 visitLibAtomicStore(CB); 3648 return; 3649 default: 3650 break; 3651 } 3652 } 3653 3654 if (auto *Call = dyn_cast<CallInst>(&CB)) { 3655 assert(!isa<IntrinsicInst>(Call) && "intrinsics are handled elsewhere"); 3656 3657 // We are going to insert code that relies on the fact that the callee 3658 // will become a non-readonly function after it is instrumented by us. To 3659 // prevent this code from being optimized out, mark that function 3660 // non-readonly in advance. 
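      // For example, if the callee were still marked readnone/readonly, the
      // stores of argument shadow to __msan_param_tls emitted below could be
      // treated as dead and deleted by later passes.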
      AttributeMask B;
      B.addAttribute(Attribute::ReadOnly)
          .addAttribute(Attribute::ReadNone)
          .addAttribute(Attribute::WriteOnly)
          .addAttribute(Attribute::ArgMemOnly)
          .addAttribute(Attribute::Speculatable);

      Call->removeFnAttrs(B);
      if (Function *Func = Call->getCalledFunction()) {
        Func->removeFnAttrs(B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&CB);
    bool MayCheckCall = MS.EagerChecks;
    if (Function *Func = CB.getCalledFunction()) {
      // __sanitizer_unaligned_{load,store} functions may be called by users
      // and always expect shadows in the TLS, so don't eagerly check their
      // arguments.
      MayCheckCall &= !Func->getName().startswith("__sanitizer_unaligned_");
    }

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << CB << "\n");
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CB.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << CB << "\n");
        continue;
      }
      unsigned Size = 0;
      const DataLayout &DL = F.getParent()->getDataLayout();

      bool ByVal = CB.paramHasAttr(i, Attribute::ByVal);
      bool NoUndef = CB.paramHasAttr(i, Attribute::NoUndef);
      bool EagerCheck = MayCheckCall && !ByVal && NoUndef;

      if (EagerCheck) {
        insertShadowCheck(A, &CB);
        Size = DL.getTypeAllocSize(A->getType());
      } else {
        bool ArgIsInitialized = false;
        Value *Store = nullptr;
        // Compute the Shadow for arg even if it is ByVal, because
        // in that case getShadow() will copy the actual arg shadow to
        // __msan_param_tls.
        Value *ArgShadow = getShadow(A);
        Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
        LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                          << " Shadow: " << *ArgShadow << "\n");
        if (ByVal) {
          // ByVal arguments require special handling, as they are too big
          // for a single load.
          assert(A->getType()->isPointerTy() &&
                 "ByVal argument is not a pointer!");
          Size = DL.getTypeAllocSize(CB.getParamByValType(i));
          if (ArgOffset + Size > kParamTLSSize)
            break;
          const MaybeAlign ParamAlignment(CB.getParamAlign(i));
          MaybeAlign Alignment = llvm::None;
          if (ParamAlignment)
            Alignment = std::min(*ParamAlignment, kShadowTLSAlignment);
          Value *AShadowPtr =
              getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                                 /*isStore*/ false)
                  .first;

          Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                   Alignment, Size);
          // TODO(glider): need to copy origins.
3733 } else { 3734 // Any other parameters mean we need bit-grained tracking of uninit 3735 // data 3736 Size = DL.getTypeAllocSize(A->getType()); 3737 if (ArgOffset + Size > kParamTLSSize) 3738 break; 3739 Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase, 3740 kShadowTLSAlignment); 3741 Constant *Cst = dyn_cast<Constant>(ArgShadow); 3742 if (Cst && Cst->isNullValue()) 3743 ArgIsInitialized = true; 3744 } 3745 if (MS.TrackOrigins && !ArgIsInitialized) 3746 IRB.CreateStore(getOrigin(A), 3747 getOriginPtrForArgument(A, IRB, ArgOffset)); 3748 (void)Store; 3749 assert(Store != nullptr); 3750 LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n"); 3751 } 3752 assert(Size != 0); 3753 ArgOffset += alignTo(Size, kShadowTLSAlignment); 3754 } 3755 LLVM_DEBUG(dbgs() << " done with call args\n"); 3756 3757 FunctionType *FT = CB.getFunctionType(); 3758 if (FT->isVarArg()) { 3759 VAHelper->visitCallBase(CB, IRB); 3760 } 3761 3762 // Now, get the shadow for the RetVal. 3763 if (!CB.getType()->isSized()) 3764 return; 3765 // Don't emit the epilogue for musttail call returns. 3766 if (isa<CallInst>(CB) && cast<CallInst>(CB).isMustTailCall()) 3767 return; 3768 3769 if (MayCheckCall && CB.hasRetAttr(Attribute::NoUndef)) { 3770 setShadow(&CB, getCleanShadow(&CB)); 3771 setOrigin(&CB, getCleanOrigin()); 3772 return; 3773 } 3774 3775 IRBuilder<> IRBBefore(&CB); 3776 // Until we have full dynamic coverage, make sure the retval shadow is 0. 3777 Value *Base = getShadowPtrForRetval(&CB, IRBBefore); 3778 IRBBefore.CreateAlignedStore(getCleanShadow(&CB), Base, 3779 kShadowTLSAlignment); 3780 BasicBlock::iterator NextInsn; 3781 if (isa<CallInst>(CB)) { 3782 NextInsn = ++CB.getIterator(); 3783 assert(NextInsn != CB.getParent()->end()); 3784 } else { 3785 BasicBlock *NormalDest = cast<InvokeInst>(CB).getNormalDest(); 3786 if (!NormalDest->getSinglePredecessor()) { 3787 // FIXME: this case is tricky, so we are just conservative here. 3788 // Perhaps we need to split the edge between this BB and NormalDest, 3789 // but a naive attempt to use SplitEdge leads to a crash. 3790 setShadow(&CB, getCleanShadow(&CB)); 3791 setOrigin(&CB, getCleanOrigin()); 3792 return; 3793 } 3794 // FIXME: NextInsn is likely in a basic block that has not been visited yet. 3795 // Anything inserted there will be instrumented by MSan later! 3796 NextInsn = NormalDest->getFirstInsertionPt(); 3797 assert(NextInsn != NormalDest->end() && 3798 "Could not find insertion point for retval shadow load"); 3799 } 3800 IRBuilder<> IRBAfter(&*NextInsn); 3801 Value *RetvalShadow = IRBAfter.CreateAlignedLoad( 3802 getShadowTy(&CB), getShadowPtrForRetval(&CB, IRBAfter), 3803 kShadowTLSAlignment, "_msret"); 3804 setShadow(&CB, RetvalShadow); 3805 if (MS.TrackOrigins) 3806 setOrigin(&CB, IRBAfter.CreateLoad(MS.OriginTy, 3807 getOriginPtrForRetval(IRBAfter))); 3808 } 3809 3810 bool isAMustTailRetVal(Value *RetVal) { 3811 if (auto *I = dyn_cast<BitCastInst>(RetVal)) { 3812 RetVal = I->getOperand(0); 3813 } 3814 if (auto *I = dyn_cast<CallInst>(RetVal)) { 3815 return I->isMustTailCall(); 3816 } 3817 return false; 3818 } 3819 3820 void visitReturnInst(ReturnInst &I) { 3821 IRBuilder<> IRB(&I); 3822 Value *RetVal = I.getReturnValue(); 3823 if (!RetVal) return; 3824 // Don't emit the epilogue for musttail call returns. 
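    // (A musttail call must be immediately followed by the return, so no
    // instructions can be inserted here; the musttail callee has already
    // written the retval TLS slot that our caller will read.)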
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    bool HasNoUndef = F.hasRetAttribute(Attribute::NoUndef);
    bool StoreShadow = !(MS.EagerChecks && HasNoUndef);
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    bool EagerCheck = (MS.EagerChecks && HasNoUndef) || (F.getName() == "main");

    Value *Shadow = getShadow(RetVal);
    bool StoreOrigin = true;
    if (EagerCheck) {
      insertShadowCheck(RetVal, &I);
      Shadow = getCleanShadow(RetVal);
      StoreOrigin = false;
    }

    // The caller may still expect information passed over TLS if we pass our
    // check.
    if (StoreShadow) {
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins && StoreOrigin)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }

  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----'; on the first call,
    // the runtime overwrites them with the ID of the stack origin.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) = getShadowOriginPtr(
          &I, IRB, IRB.getInt8Ty(), Align(1), /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ?
ClPoisonStackPattern : 0); 3890 IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlign()); 3891 } 3892 3893 if (PoisonStack && MS.TrackOrigins) { 3894 Value *Descr = getLocalVarDescription(I); 3895 IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn, 3896 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3897 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()), 3898 IRB.CreatePointerCast(&F, MS.IntptrTy)}); 3899 } 3900 } 3901 3902 void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) { 3903 Value *Descr = getLocalVarDescription(I); 3904 if (PoisonStack) { 3905 IRB.CreateCall(MS.MsanPoisonAllocaFn, 3906 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3907 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())}); 3908 } else { 3909 IRB.CreateCall(MS.MsanUnpoisonAllocaFn, 3910 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len}); 3911 } 3912 } 3913 3914 void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) { 3915 if (!InsPoint) 3916 InsPoint = &I; 3917 IRBuilder<> IRB(InsPoint->getNextNode()); 3918 const DataLayout &DL = F.getParent()->getDataLayout(); 3919 uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType()); 3920 Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize); 3921 if (I.isArrayAllocation()) 3922 Len = IRB.CreateMul(Len, I.getArraySize()); 3923 3924 if (MS.CompileKernel) 3925 poisonAllocaKmsan(I, IRB, Len); 3926 else 3927 poisonAllocaUserspace(I, IRB, Len); 3928 } 3929 3930 void visitAllocaInst(AllocaInst &I) { 3931 setShadow(&I, getCleanShadow(&I)); 3932 setOrigin(&I, getCleanOrigin()); 3933 // We'll get to this alloca later unless it's poisoned at the corresponding 3934 // llvm.lifetime.start. 3935 AllocaSet.insert(&I); 3936 } 3937 3938 void visitSelectInst(SelectInst& I) { 3939 IRBuilder<> IRB(&I); 3940 // a = select b, c, d 3941 Value *B = I.getCondition(); 3942 Value *C = I.getTrueValue(); 3943 Value *D = I.getFalseValue(); 3944 Value *Sb = getShadow(B); 3945 Value *Sc = getShadow(C); 3946 Value *Sd = getShadow(D); 3947 3948 // Result shadow if condition shadow is 0. 3949 Value *Sa0 = IRB.CreateSelect(B, Sc, Sd); 3950 Value *Sa1; 3951 if (I.getType()->isAggregateType()) { 3952 // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do 3953 // an extra "select". This results in much more compact IR. 3954 // Sa = select Sb, poisoned, (select b, Sc, Sd) 3955 Sa1 = getPoisonedShadow(getShadowTy(I.getType())); 3956 } else { 3957 // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ] 3958 // If Sb (condition is poisoned), look for bits in c and d that are equal 3959 // and both unpoisoned. 3960 // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd. 3961 3962 // Cast arguments to shadow-compatible type. 3963 C = CreateAppToShadowCast(IRB, C); 3964 D = CreateAppToShadowCast(IRB, D); 3965 3966 // Result shadow if condition shadow is 1. 3967 Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd}); 3968 } 3969 Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select"); 3970 setShadow(&I, Sa); 3971 if (MS.TrackOrigins) { 3972 // Origins are always i32, so any vector conditions must be flattened. 3973 // FIXME: consider tracking vector origins for app vectors? 3974 if (B->getType()->isVectorTy()) { 3975 Type *FlatTy = getShadowTyNoVec(B->getType()); 3976 B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy), 3977 ConstantInt::getNullValue(FlatTy)); 3978 Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy), 3979 ConstantInt::getNullValue(FlatTy)); 3980 } 3981 // a = select b, c, d 3982 // Oa = Sb ? Ob : (b ? 
Oc : Od) 3983 setOrigin( 3984 &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()), 3985 IRB.CreateSelect(B, getOrigin(I.getTrueValue()), 3986 getOrigin(I.getFalseValue())))); 3987 } 3988 } 3989 3990 void visitLandingPadInst(LandingPadInst &I) { 3991 // Do nothing. 3992 // See https://github.com/google/sanitizers/issues/504 3993 setShadow(&I, getCleanShadow(&I)); 3994 setOrigin(&I, getCleanOrigin()); 3995 } 3996 3997 void visitCatchSwitchInst(CatchSwitchInst &I) { 3998 setShadow(&I, getCleanShadow(&I)); 3999 setOrigin(&I, getCleanOrigin()); 4000 } 4001 4002 void visitFuncletPadInst(FuncletPadInst &I) { 4003 setShadow(&I, getCleanShadow(&I)); 4004 setOrigin(&I, getCleanOrigin()); 4005 } 4006 4007 void visitGetElementPtrInst(GetElementPtrInst &I) { 4008 handleShadowOr(I); 4009 } 4010 4011 void visitExtractValueInst(ExtractValueInst &I) { 4012 IRBuilder<> IRB(&I); 4013 Value *Agg = I.getAggregateOperand(); 4014 LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n"); 4015 Value *AggShadow = getShadow(Agg); 4016 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 4017 Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices()); 4018 LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n"); 4019 setShadow(&I, ResShadow); 4020 setOriginForNaryOp(I); 4021 } 4022 4023 void visitInsertValueInst(InsertValueInst &I) { 4024 IRBuilder<> IRB(&I); 4025 LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n"); 4026 Value *AggShadow = getShadow(I.getAggregateOperand()); 4027 Value *InsShadow = getShadow(I.getInsertedValueOperand()); 4028 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 4029 LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n"); 4030 Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices()); 4031 LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n"); 4032 setShadow(&I, Res); 4033 setOriginForNaryOp(I); 4034 } 4035 4036 void dumpInst(Instruction &I) { 4037 if (CallInst *CI = dyn_cast<CallInst>(&I)) { 4038 errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n"; 4039 } else { 4040 errs() << "ZZZ " << I.getOpcodeName() << "\n"; 4041 } 4042 errs() << "QQQ " << I << "\n"; 4043 } 4044 4045 void visitResumeInst(ResumeInst &I) { 4046 LLVM_DEBUG(dbgs() << "Resume: " << I << "\n"); 4047 // Nothing to do here. 4048 } 4049 4050 void visitCleanupReturnInst(CleanupReturnInst &CRI) { 4051 LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n"); 4052 // Nothing to do here. 4053 } 4054 4055 void visitCatchReturnInst(CatchReturnInst &CRI) { 4056 LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n"); 4057 // Nothing to do here. 4058 } 4059 4060 void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB, 4061 const DataLayout &DL, bool isOutput) { 4062 // For each assembly argument, we check its value for being initialized. 4063 // If the argument is a pointer, we assume it points to a single element 4064 // of the corresponding type (or to a 8-byte word, if the type is unsized). 4065 // Each such pointer is instrumented with a call to the runtime library. 4066 Type *OpType = Operand->getType(); 4067 // Check the operand value itself. 
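    // E.g. for asm("movq %1, %0" : "=m"(out) : "r"(in)), the "r" input is
    // checked here as a value; the "=m" output, being a pointer, is also
    // passed to __msan_instrument_asm_store() below so the runtime can
    // unpoison the memory it points to.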
4068 insertShadowCheck(Operand, &I); 4069 if (!OpType->isPointerTy() || !isOutput) { 4070 assert(!isOutput); 4071 return; 4072 } 4073 Type *ElType = OpType->getPointerElementType(); 4074 if (!ElType->isSized()) 4075 return; 4076 int Size = DL.getTypeStoreSize(ElType); 4077 Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy()); 4078 Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size); 4079 IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal}); 4080 } 4081 4082 /// Get the number of output arguments returned by pointers. 4083 int getNumOutputArgs(InlineAsm *IA, CallBase *CB) { 4084 int NumRetOutputs = 0; 4085 int NumOutputs = 0; 4086 Type *RetTy = cast<Value>(CB)->getType(); 4087 if (!RetTy->isVoidTy()) { 4088 // Register outputs are returned via the CallInst return value. 4089 auto *ST = dyn_cast<StructType>(RetTy); 4090 if (ST) 4091 NumRetOutputs = ST->getNumElements(); 4092 else 4093 NumRetOutputs = 1; 4094 } 4095 InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints(); 4096 for (const InlineAsm::ConstraintInfo &Info : Constraints) { 4097 switch (Info.Type) { 4098 case InlineAsm::isOutput: 4099 NumOutputs++; 4100 break; 4101 default: 4102 break; 4103 } 4104 } 4105 return NumOutputs - NumRetOutputs; 4106 } 4107 4108 void visitAsmInstruction(Instruction &I) { 4109 // Conservative inline assembly handling: check for poisoned shadow of 4110 // asm() arguments, then unpoison the result and all the memory locations 4111 // pointed to by those arguments. 4112 // An inline asm() statement in C++ contains lists of input and output 4113 // arguments used by the assembly code. These are mapped to operands of the 4114 // CallInst as follows: 4115 // - nR register outputs ("=r) are returned by value in a single structure 4116 // (SSA value of the CallInst); 4117 // - nO other outputs ("=m" and others) are returned by pointer as first 4118 // nO operands of the CallInst; 4119 // - nI inputs ("r", "m" and others) are passed to CallInst as the 4120 // remaining nI operands. 4121 // The total number of asm() arguments in the source is nR+nO+nI, and the 4122 // corresponding CallInst has nO+nI+1 operands (the last operand is the 4123 // function to be called). 4124 const DataLayout &DL = F.getParent()->getDataLayout(); 4125 CallBase *CB = cast<CallBase>(&I); 4126 IRBuilder<> IRB(&I); 4127 InlineAsm *IA = cast<InlineAsm>(CB->getCalledOperand()); 4128 int OutputArgs = getNumOutputArgs(IA, CB); 4129 // The last operand of a CallInst is the function itself. 4130 int NumOperands = CB->getNumOperands() - 1; 4131 4132 // Check input arguments. Doing so before unpoisoning output arguments, so 4133 // that we won't overwrite uninit values before checking them. 4134 for (int i = OutputArgs; i < NumOperands; i++) { 4135 Value *Operand = CB->getOperand(i); 4136 instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false); 4137 } 4138 // Unpoison output arguments. This must happen before the actual InlineAsm 4139 // call, so that the shadow for memory published in the asm() statement 4140 // remains valid. 4141 for (int i = 0; i < OutputArgs; i++) { 4142 Value *Operand = CB->getOperand(i); 4143 instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true); 4144 } 4145 4146 setShadow(&I, getCleanShadow(&I)); 4147 setOrigin(&I, getCleanOrigin()); 4148 } 4149 4150 void visitFreezeInst(FreezeInst &I) { 4151 // Freeze always returns a fully defined value. 
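    // (freeze replaces poison/undef with an arbitrary but fixed bit pattern,
    // so MSan treats the result as fully initialized.)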
4152 setShadow(&I, getCleanShadow(&I)); 4153 setOrigin(&I, getCleanOrigin()); 4154 } 4155 4156 void visitInstruction(Instruction &I) { 4157 // Everything else: stop propagating and check for poisoned shadow. 4158 if (ClDumpStrictInstructions) 4159 dumpInst(I); 4160 LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n"); 4161 for (size_t i = 0, n = I.getNumOperands(); i < n; i++) { 4162 Value *Operand = I.getOperand(i); 4163 if (Operand->getType()->isSized()) 4164 insertShadowCheck(Operand, &I); 4165 } 4166 setShadow(&I, getCleanShadow(&I)); 4167 setOrigin(&I, getCleanOrigin()); 4168 } 4169 }; 4170 4171 /// AMD64-specific implementation of VarArgHelper. 4172 struct VarArgAMD64Helper : public VarArgHelper { 4173 // An unfortunate workaround for asymmetric lowering of va_arg stuff. 4174 // See a comment in visitCallBase for more details. 4175 static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7 4176 static const unsigned AMD64FpEndOffsetSSE = 176; 4177 // If SSE is disabled, fp_offset in va_list is zero. 4178 static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset; 4179 4180 unsigned AMD64FpEndOffset; 4181 Function &F; 4182 MemorySanitizer &MS; 4183 MemorySanitizerVisitor &MSV; 4184 Value *VAArgTLSCopy = nullptr; 4185 Value *VAArgTLSOriginCopy = nullptr; 4186 Value *VAArgOverflowSize = nullptr; 4187 4188 SmallVector<CallInst*, 16> VAStartInstrumentationList; 4189 4190 enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory }; 4191 4192 VarArgAMD64Helper(Function &F, MemorySanitizer &MS, 4193 MemorySanitizerVisitor &MSV) 4194 : F(F), MS(MS), MSV(MSV) { 4195 AMD64FpEndOffset = AMD64FpEndOffsetSSE; 4196 for (const auto &Attr : F.getAttributes().getFnAttrs()) { 4197 if (Attr.isStringAttribute() && 4198 (Attr.getKindAsString() == "target-features")) { 4199 if (Attr.getValueAsString().contains("-sse")) 4200 AMD64FpEndOffset = AMD64FpEndOffsetNoSSE; 4201 break; 4202 } 4203 } 4204 } 4205 4206 ArgKind classifyArgument(Value* arg) { 4207 // A very rough approximation of X86_64 argument classification rules. 4208 Type *T = arg->getType(); 4209 if (T->isFPOrFPVectorTy() || T->isX86_MMXTy()) 4210 return AK_FloatingPoint; 4211 if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64) 4212 return AK_GeneralPurpose; 4213 if (T->isPointerTy()) 4214 return AK_GeneralPurpose; 4215 return AK_Memory; 4216 } 4217 4218 // For VarArg functions, store the argument shadow in an ABI-specific format 4219 // that corresponds to va_list layout. 4220 // We do this because Clang lowers va_arg in the frontend, and this pass 4221 // only sees the low level code that deals with va_list internals. 4222 // A much easier alternative (provided that Clang emits va_arg instructions) 4223 // would have been to associate each live instance of va_list with a copy of 4224 // MSanParamTLS, and extract shadow on va_arg() call in the argument list 4225 // order. 4226 void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override { 4227 unsigned GpOffset = 0; 4228 unsigned FpOffset = AMD64GpEndOffset; 4229 unsigned OverflowOffset = AMD64FpEndOffset; 4230 const DataLayout &DL = F.getParent()->getDataLayout(); 4231 for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End; 4232 ++ArgIt) { 4233 Value *A = *ArgIt; 4234 unsigned ArgNo = CB.getArgOperandNo(ArgIt); 4235 bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams(); 4236 bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal); 4237 if (IsByVal) { 4238 // ByVal arguments always go to the overflow area. 
4239 // Fixed arguments passed through the overflow area will be stepped 4240 // over by va_start, so don't count them towards the offset. 4241 if (IsFixed) 4242 continue; 4243 assert(A->getType()->isPointerTy()); 4244 Type *RealTy = CB.getParamByValType(ArgNo); 4245 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 4246 Value *ShadowBase = getShadowPtrForVAArgument( 4247 RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8)); 4248 Value *OriginBase = nullptr; 4249 if (MS.TrackOrigins) 4250 OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset); 4251 OverflowOffset += alignTo(ArgSize, 8); 4252 if (!ShadowBase) 4253 continue; 4254 Value *ShadowPtr, *OriginPtr; 4255 std::tie(ShadowPtr, OriginPtr) = 4256 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, 4257 /*isStore*/ false); 4258 4259 IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr, 4260 kShadowTLSAlignment, ArgSize); 4261 if (MS.TrackOrigins) 4262 IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr, 4263 kShadowTLSAlignment, ArgSize); 4264 } else { 4265 ArgKind AK = classifyArgument(A); 4266 if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset) 4267 AK = AK_Memory; 4268 if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset) 4269 AK = AK_Memory; 4270 Value *ShadowBase, *OriginBase = nullptr; 4271 switch (AK) { 4272 case AK_GeneralPurpose: 4273 ShadowBase = 4274 getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8); 4275 if (MS.TrackOrigins) 4276 OriginBase = 4277 getOriginPtrForVAArgument(A->getType(), IRB, GpOffset); 4278 GpOffset += 8; 4279 break; 4280 case AK_FloatingPoint: 4281 ShadowBase = 4282 getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16); 4283 if (MS.TrackOrigins) 4284 OriginBase = 4285 getOriginPtrForVAArgument(A->getType(), IRB, FpOffset); 4286 FpOffset += 16; 4287 break; 4288 case AK_Memory: 4289 if (IsFixed) 4290 continue; 4291 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 4292 ShadowBase = 4293 getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8); 4294 if (MS.TrackOrigins) 4295 OriginBase = 4296 getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset); 4297 OverflowOffset += alignTo(ArgSize, 8); 4298 } 4299 // Take fixed arguments into account for GpOffset and FpOffset, 4300 // but don't actually store shadows for them. 4301 // TODO(glider): don't call get*PtrForVAArgument() for them. 4302 if (IsFixed) 4303 continue; 4304 if (!ShadowBase) 4305 continue; 4306 Value *Shadow = MSV.getShadow(A); 4307 IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment); 4308 if (MS.TrackOrigins) { 4309 Value *Origin = MSV.getOrigin(A); 4310 unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType()); 4311 MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize, 4312 std::max(kShadowTLSAlignment, kMinOriginAlignment)); 4313 } 4314 } 4315 } 4316 Constant *OverflowSize = 4317 ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset); 4318 IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS); 4319 } 4320 4321 /// Compute the shadow address for a given va_arg. 4322 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 4323 unsigned ArgOffset, unsigned ArgSize) { 4324 // Make sure we don't overflow __msan_va_arg_tls. 
4325 if (ArgOffset + ArgSize > kParamTLSSize) 4326 return nullptr; 4327 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4328 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4329 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4330 "_msarg_va_s"); 4331 } 4332 4333 /// Compute the origin address for a given va_arg. 4334 Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) { 4335 Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy); 4336 // getOriginPtrForVAArgument() is always called after 4337 // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never 4338 // overflow. 4339 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4340 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 4341 "_msarg_va_o"); 4342 } 4343 4344 void unpoisonVAListTagForInst(IntrinsicInst &I) { 4345 IRBuilder<> IRB(&I); 4346 Value *VAListTag = I.getArgOperand(0); 4347 Value *ShadowPtr, *OriginPtr; 4348 const Align Alignment = Align(8); 4349 std::tie(ShadowPtr, OriginPtr) = 4350 MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment, 4351 /*isStore*/ true); 4352 4353 // Unpoison the whole __va_list_tag. 4354 // FIXME: magic ABI constants. 4355 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4356 /* size */ 24, Alignment, false); 4357 // We shouldn't need to zero out the origins, as they're only checked for 4358 // nonzero shadow. 4359 } 4360 4361 void visitVAStartInst(VAStartInst &I) override { 4362 if (F.getCallingConv() == CallingConv::Win64) 4363 return; 4364 VAStartInstrumentationList.push_back(&I); 4365 unpoisonVAListTagForInst(I); 4366 } 4367 4368 void visitVACopyInst(VACopyInst &I) override { 4369 if (F.getCallingConv() == CallingConv::Win64) return; 4370 unpoisonVAListTagForInst(I); 4371 } 4372 4373 void finalizeInstrumentation() override { 4374 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4375 "finalizeInstrumentation called twice"); 4376 if (!VAStartInstrumentationList.empty()) { 4377 // If there is a va_start in this function, make a backup copy of 4378 // va_arg_tls somewhere in the function entry block. 4379 IRBuilder<> IRB(MSV.FnPrologueEnd); 4380 VAArgOverflowSize = 4381 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4382 Value *CopySize = 4383 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset), 4384 VAArgOverflowSize); 4385 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4386 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4387 if (MS.TrackOrigins) { 4388 VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4389 IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS, 4390 Align(8), CopySize); 4391 } 4392 } 4393 4394 // Instrument va_start. 4395 // Copy va_list shadow from the backup copy of the TLS contents. 
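    // For reference, the AMD64 psABI lays out va_list roughly as:
    //   struct __va_list_tag {
    //     unsigned gp_offset;       // offset 0
    //     unsigned fp_offset;       // offset 4
    //     void *overflow_arg_area;  // offset 8
    //     void *reg_save_area;      // offset 16
    //   };
    // which is where the constants 8 and 16 used below come from.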
4396 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4397 CallInst *OrigInst = VAStartInstrumentationList[i]; 4398 IRBuilder<> IRB(OrigInst->getNextNode()); 4399 Value *VAListTag = OrigInst->getArgOperand(0); 4400 4401 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4402 Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr( 4403 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4404 ConstantInt::get(MS.IntptrTy, 16)), 4405 PointerType::get(RegSaveAreaPtrTy, 0)); 4406 Value *RegSaveAreaPtr = 4407 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4408 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4409 const Align Alignment = Align(16); 4410 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4411 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4412 Alignment, /*isStore*/ true); 4413 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4414 AMD64FpEndOffset); 4415 if (MS.TrackOrigins) 4416 IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy, 4417 Alignment, AMD64FpEndOffset); 4418 Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4419 Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr( 4420 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4421 ConstantInt::get(MS.IntptrTy, 8)), 4422 PointerType::get(OverflowArgAreaPtrTy, 0)); 4423 Value *OverflowArgAreaPtr = 4424 IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr); 4425 Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr; 4426 std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) = 4427 MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(), 4428 Alignment, /*isStore*/ true); 4429 Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy, 4430 AMD64FpEndOffset); 4431 IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment, 4432 VAArgOverflowSize); 4433 if (MS.TrackOrigins) { 4434 SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy, 4435 AMD64FpEndOffset); 4436 IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment, 4437 VAArgOverflowSize); 4438 } 4439 } 4440 } 4441 }; 4442 4443 /// MIPS64-specific implementation of VarArgHelper. 
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin() + CB.getFunctionType()->getNumParams(),
              End = CB.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow offset for arguments smaller than 8 bytes to
        // match the placement of bits in a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating another
    // class member; here it holds the total size of all varargs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
4490 if (ArgOffset + ArgSize > kParamTLSSize) 4491 return nullptr; 4492 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4493 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4494 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4495 "_msarg"); 4496 } 4497 4498 void visitVAStartInst(VAStartInst &I) override { 4499 IRBuilder<> IRB(&I); 4500 VAStartInstrumentationList.push_back(&I); 4501 Value *VAListTag = I.getArgOperand(0); 4502 Value *ShadowPtr, *OriginPtr; 4503 const Align Alignment = Align(8); 4504 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4505 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4506 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4507 /* size */ 8, Alignment, false); 4508 } 4509 4510 void visitVACopyInst(VACopyInst &I) override { 4511 IRBuilder<> IRB(&I); 4512 VAStartInstrumentationList.push_back(&I); 4513 Value *VAListTag = I.getArgOperand(0); 4514 Value *ShadowPtr, *OriginPtr; 4515 const Align Alignment = Align(8); 4516 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4517 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4518 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4519 /* size */ 8, Alignment, false); 4520 } 4521 4522 void finalizeInstrumentation() override { 4523 assert(!VAArgSize && !VAArgTLSCopy && 4524 "finalizeInstrumentation called twice"); 4525 IRBuilder<> IRB(MSV.FnPrologueEnd); 4526 VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4527 Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), 4528 VAArgSize); 4529 4530 if (!VAStartInstrumentationList.empty()) { 4531 // If there is a va_start in this function, make a backup copy of 4532 // va_arg_tls somewhere in the function entry block. 4533 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4534 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4535 } 4536 4537 // Instrument va_start. 4538 // Copy va_list shadow from the backup copy of the TLS contents. 4539 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4540 CallInst *OrigInst = VAStartInstrumentationList[i]; 4541 IRBuilder<> IRB(OrigInst->getNextNode()); 4542 Value *VAListTag = OrigInst->getArgOperand(0); 4543 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4544 Value *RegSaveAreaPtrPtr = 4545 IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4546 PointerType::get(RegSaveAreaPtrTy, 0)); 4547 Value *RegSaveAreaPtr = 4548 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4549 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4550 const Align Alignment = Align(8); 4551 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4552 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4553 Alignment, /*isStore*/ true); 4554 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4555 CopySize); 4556 } 4557 } 4558 }; 4559 4560 /// AArch64-specific implementation of VarArgHelper. 4561 struct VarArgAArch64Helper : public VarArgHelper { 4562 static const unsigned kAArch64GrArgSize = 64; 4563 static const unsigned kAArch64VrArgSize = 128; 4564 4565 static const unsigned AArch64GrBegOffset = 0; 4566 static const unsigned AArch64GrEndOffset = kAArch64GrArgSize; 4567 // Make VR space aligned to 16 bytes. 
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which arguments are named (Clang, as in
  // the x86_64 case, lowers va_args in the frontend, so this pass only sees
  // the low-level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers (128
  // bytes), and then the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows a fast copy
  // in finalizeInstrumentation().
  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Fixed arguments still advance the Gr/Vr offsets above, but we don't
      // bother to actually store a shadow for them.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
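  /// Returns null if ArgOffset + ArgSize would run past kParamTLSSize, in
  /// which case no shadow is stored for that argument.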
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
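    // For reference, the loads below assume the AAPCS64 va_list layout
    // (a sketch; the offsets 0/8/16/24/28 are exactly what getVAField64()
    // and getVAField32() read, and 32 is the size unpoisoned in
    // visitVAStartInst()):
    //   typedef struct {
    //     void *__stack;   // offset 0: next stacked argument
    //     void *__gr_top;  // offset 8: end of the GR save area
    //     void *__vr_top;  // offset 16: end of the VR save area
    //     int __gr_offs;   // offset 24: negative offset from __gr_top
    //     int __vr_offs;   // offset 28: negative offset from __vr_top
    //   } va_list;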
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas in which incoming
      // argument registers are saved (one for the 64-bit general registers
      // x0-x7 and another for the 128-bit FP/SIMD registers v0-v7).
      // We then need to propagate the shadow arguments to both regions,
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments get their shadow propagated to 'va::__stack'.
      // One caveat is that only the non-named (variadic) arguments need to
      // be propagated, whereas the call-site instrumentation saved *all* of
      // the arguments. So, when copying the shadow values from the va_arg
      // TLS array, we adjust the offsets of both the GR and VR regions by
      // the __{gr,vr}_offs values, since those encode the space taken by the
      // incoming named arguments.

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // The instrumentation does not know how many named arguments there
      // are; at the call site, all of them were saved. Since __gr_offs is
      // defined as '0 - ((8 - named_gr) * 8)', the idea is to propagate only
      // the variadic arguments' shadow by skipping the bytes that belong to
      // named arguments. For example, with two named GR arguments,
      // __gr_offs == -48, so the copy starts at offset 64 + (-48) == 16 in
      // the TLS copy, right past the two named registers' shadow.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, Align(8), GrSrcPtr, Align(8),
                       GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, Align(8), VrSrcPtr, Align(8),
                       VrCopySize);

      // And finally for the remaining (stacked) arguments.
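      // (Their shadow was staged starting at AArch64VAEndOffset == 192 in
      // the TLS copy; VAArgOverflowSize bytes of it are copied over the
      // shadow of the 'va::__stack' area below.)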
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(16), /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, Align(16), StackSrcPtr,
                       Align(16), VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays are
    // aligned to 16 bytes, and byvals can be aligned to 8 or 16 bytes.
    // For that reason, we compute the current offset from the stack pointer
    // (which is always properly aligned) and the offset of the first vararg,
    // then subtract them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // The parameter save area starts 48 bytes from the frame pointer under
    // ABIv1 and 32 bytes under ABIv2. This is usually determined by the
    // target endianness, but in theory it could be overridden by a function
    // attribute.
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = CB.getParamByValType(ArgNo);
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        MaybeAlign ArgAlign = CB.getParamAlign(ArgNo);
        if (!ArgAlign || *ArgAlign < Align(8))
          ArgAlign = Align(8);
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to the element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
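          // (e.g. a <4 x i32> argument has a 16-byte alloc size and is
          // therefore aligned to 16 bytes here.)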
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow offset for arguments smaller than 8 bytes to
          // match the placement of the value's bits on a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS to hold the total size of all varargs;
    // this avoids creating a separate VAArgSizeTLS member.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.FnPrologueEnd);
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
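    // (On PowerPC64 the va_list is effectively a single pointer into the
    // parameter save area, so the loop below reinterprets the va_list tag as
    // a pointer-to-pointer, loads the save-area address from it, and copies
    // CopySize bytes of shadow over that area - a sketch of the assumed ABI.)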
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};

/// SystemZ-specific implementation of VarArgHelper.
struct VarArgSystemZHelper : public VarArgHelper {
  static const unsigned SystemZGpOffset = 16;
  static const unsigned SystemZGpEndOffset = 56;
  static const unsigned SystemZFpOffset = 128;
  static const unsigned SystemZFpEndOffset = 160;
  static const unsigned SystemZMaxVrArgs = 8;
  static const unsigned SystemZRegSaveAreaSize = 160;
  static const unsigned SystemZOverflowOffset = 160;
  static const unsigned SystemZVAListTagSize = 32;
  static const unsigned SystemZOverflowArgAreaPtrOffset = 16;
  static const unsigned SystemZRegSaveAreaPtrOffset = 24;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum class ArgKind {
    GeneralPurpose,
    FloatingPoint,
    Vector,
    Memory,
    Indirect,
  };

  enum class ShadowExtension { None, Zero, Sign };

  VarArgSystemZHelper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Type *T, bool IsSoftFloatABI) {
    // T is a SystemZABIInfo::classifyArgumentType() output, and there are
    // only a few possibilities of what it can be. In particular, enums,
    // single-element structs, and large types have already been taken care
    // of.

    // Some i128 and fp128 arguments are converted to pointers only in the
    // back end.
    if (T->isIntegerTy(128) || T->isFP128Ty())
      return ArgKind::Indirect;
    if (T->isFloatingPointTy())
      return IsSoftFloatABI ? ArgKind::GeneralPurpose : ArgKind::FloatingPoint;
    if (T->isIntegerTy() || T->isPointerTy())
      return ArgKind::GeneralPurpose;
    if (T->isVectorTy())
      return ArgKind::Vector;
    return ArgKind::Memory;
  }

  ShadowExtension getShadowExtension(const CallBase &CB, unsigned ArgNo) {
    // The ABI says: "One of the simple integer types no more than 64 bits
    // wide. ... If such an argument is shorter than 64 bits, replace it by a
    // full 64-bit integer representing the same number, using sign or zero
    // extension". Shadow for an integer argument has the same type as the
    // argument itself, so it can be sign- or zero-extended as well.
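    // For example (illustrative): an i32 vararg passed with the 'signext'
    // attribute yields ShadowExtension::Sign, and visitCallBase() then
    // sign-extends its i32 shadow to i64 (via CreateShadowCast) before
    // storing it in the 8-byte TLS slot.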
    bool ZExt = CB.paramHasAttr(ArgNo, Attribute::ZExt);
    bool SExt = CB.paramHasAttr(ArgNo, Attribute::SExt);
    if (ZExt) {
      assert(!SExt);
      return ShadowExtension::Zero;
    }
    if (SExt) {
      assert(!ZExt);
      return ShadowExtension::Sign;
    }
    return ShadowExtension::None;
  }

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    bool IsSoftFloatABI = CB.getCalledFunction()
                              ->getFnAttribute("use-soft-float")
                              .getValueAsBool();
    unsigned GpOffset = SystemZGpOffset;
    unsigned FpOffset = SystemZFpOffset;
    unsigned VrIndex = 0;
    unsigned OverflowOffset = SystemZOverflowOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      // SystemZABIInfo does not produce ByVal parameters.
      assert(!CB.paramHasAttr(ArgNo, Attribute::ByVal));
      Type *T = A->getType();
      ArgKind AK = classifyArgument(T, IsSoftFloatABI);
      if (AK == ArgKind::Indirect) {
        T = PointerType::get(T, 0);
        AK = ArgKind::GeneralPurpose;
      }
      if (AK == ArgKind::GeneralPurpose && GpOffset >= SystemZGpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::FloatingPoint && FpOffset >= SystemZFpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::Vector && (VrIndex >= SystemZMaxVrArgs || !IsFixed))
        AK = ArgKind::Memory;
      Value *ShadowBase = nullptr;
      Value *OriginBase = nullptr;
      ShadowExtension SE = ShadowExtension::None;
      switch (AK) {
      case ArgKind::GeneralPurpose: {
        // Always keep track of GpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (GpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize = 0;
            if (SE == ShadowExtension::None) {
              uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
              assert(ArgAllocSize <= ArgSize);
              GapSize = ArgSize - ArgAllocSize;
            }
            ShadowBase = getShadowAddrForVAArgument(IRB, GpOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, GpOffset + GapSize);
          }
          GpOffset += ArgSize;
        } else {
          GpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::FloatingPoint: {
        // Always keep track of FpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (FpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            // The PoP (Principles of Operation) says: "A short floating-point
            // datum requires only the left-most 32 bit positions of a
            // floating-point register". Therefore, in contrast to
            // ArgKind::GeneralPurpose and ArgKind::Memory, don't extend
            // shadow and don't mind the gap.
            ShadowBase = getShadowAddrForVAArgument(IRB, FpOffset);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, FpOffset);
          }
          FpOffset += ArgSize;
        } else {
          FpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::Vector: {
        // Keep track of VrIndex. No need to store shadow, since vector
        // varargs go through ArgKind::Memory.
        assert(IsFixed);
        VrIndex++;
        break;
      }
      case ArgKind::Memory: {
        // Keep track of OverflowOffset and store shadow only for varargs.
        // Ignore fixed args, since we need to copy only the vararg portion
        // of the overflow area shadow.
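        // For example (illustrative): an i32 vararg with neither zext nor
        // sext has ArgAllocSize == 4 and ArgSize == 8, so GapSize == 4 and
        // the shadow is stored right-aligned within the 8-byte slot,
        // matching big-endian argument placement.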
        if (!IsFixed) {
          uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
          uint64_t ArgSize = alignTo(ArgAllocSize, 8);
          if (OverflowOffset + ArgSize <= kParamTLSSize) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize =
                SE == ShadowExtension::None ? ArgSize - ArgAllocSize : 0;
            ShadowBase =
                getShadowAddrForVAArgument(IRB, OverflowOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(IRB, OverflowOffset + GapSize);
            OverflowOffset += ArgSize;
          } else {
            OverflowOffset = kParamTLSSize;
          }
        }
        break;
      }
      case ArgKind::Indirect:
        llvm_unreachable("Indirect must be converted to GeneralPurpose");
      }
      if (ShadowBase == nullptr)
        continue;
      Value *Shadow = MSV.getShadow(A);
      if (SE != ShadowExtension::None)
        Shadow = MSV.CreateShadowCast(IRB, Shadow, IRB.getInt64Ty(),
                                      /*Signed*/ SE == ShadowExtension::Sign);
      ShadowBase = IRB.CreateIntToPtr(
          ShadowBase, PointerType::get(Shadow->getType(), 0), "_msarg_va_s");
      IRB.CreateStore(Shadow, ShadowBase);
      if (MS.TrackOrigins) {
        Value *Origin = MSV.getOrigin(A);
        unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
        MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                        kMinOriginAlignment);
      }
    }
    Constant *OverflowSize = ConstantInt::get(
        IRB.getInt64Ty(), OverflowOffset - SystemZOverflowOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  Value *getShadowAddrForVAArgument(IRBuilder<> &IRB, unsigned ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    return IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  }

  Value *getOriginPtrForVAArgument(IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     SystemZVAListTagSize, Alignment, false);
  }

  void visitVAStartInst(VAStartInst &I) override {
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override { unpoisonVAListTagForInst(I); }

  void copyRegSaveArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZRegSaveAreaPtrOffset)),
        PointerType::get(RegSaveAreaPtrTy, 0));
    Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
    Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
        MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    // TODO(iii): copy only fragments filled by visitCallBase()
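    // (Note: this copies the full 160-byte register save area shadow,
    // including slots for named arguments whose shadow was never staged by
    // visitCallBase(); hence the TODO above.)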
    IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                     SystemZRegSaveAreaSize);
    if (MS.TrackOrigins)
      IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                       Alignment, SystemZRegSaveAreaSize);
  }

  void copyOverflowArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZOverflowArgAreaPtrOffset)),
        PointerType::get(OverflowArgAreaPtrTy, 0));
    Value *OverflowArgAreaPtr =
        IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
    Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
        MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                               Alignment, /*isStore*/ true);
    Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                           SystemZOverflowOffset);
    IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                     VAArgOverflowSize);
    if (MS.TrackOrigins) {
      SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                      SystemZOverflowOffset);
      IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
    }
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, SystemZOverflowOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t VaStartNo = 0, VaStartNum = VAStartInstrumentationList.size();
         VaStartNo < VaStartNum; VaStartNo++) {
      CallInst *OrigInst = VAStartInstrumentationList[VaStartNo];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      copyRegSaveArea(IRB, VAListTag);
      copyOverflowArea(IRB, VAListTag);
    }
  }
};

/// A no-op implementation of VarArgHelper.
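/// Used as a fallback on targets without a dedicated helper; vararg shadow
/// is simply not tracked there (see CreateVarArgHelper() below).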
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented only for the targets dispatched below.
  // On any other platform the no-op helper is used, and false positives are
  // possible for functions that consume va_args.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::systemz)
    return new VarArgSystemZHelper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  if (F.hasFnAttribute(Attribute::DisableSanitizerInstrumentation))
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out memory attributes (readonly/readnone and friends): the
  // instrumented function will read and write shadow memory, so these
  // attributes no longer hold.
  AttributeMask B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone)
      .addAttribute(Attribute::WriteOnly)
      .addAttribute(Attribute::ArgMemOnly)
      .addAttribute(Attribute::Speculatable);
  F.removeFnAttrs(B);

  return Visitor.runOnFunction();
}