//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
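///
/// For example (an illustrative sketch, not the literal pass output), for an
/// integer add the shadow of the result is conceptually the OR of the
/// argument shadows:
/// \code
///   %c = add i32 %a, %b
///   ; shadow propagation:
///   %c_shadow = or i32 %a_shadow, %b_shadow
/// \endcode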
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origins to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1-byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store to two disjoint locations can not be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, the shadow store and load are
/// correctly ordered such that the load will get either the value that was
/// stored, or some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store of a clean
/// shadow.
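///
/// For an atomic store this yields, conceptually (a sketch, not the literal
/// output):
/// \code
///   store i32 0, i32* %shadow_ptr          ; clean shadow, stored first
///   store atomic i32 %v, i32* %p release   ; the application store follows
/// \endcode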
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can be only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach by generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with the main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
/// - KMSAN always tracks the origins and implies msan-keep-going=true;
/// - KMSAN allocates shadow and origin memory for each page separately, so
///   there are no explicit accesses to shadow and origin in the
///   instrumentation.
///   Shadow and origin values for a particular X-byte memory location
///   (X=1,2,4,8) are accessed through pointers obtained via the
///     __msan_metadata_ptr_for_load_X(ptr)
///     __msan_metadata_ptr_for_store_X(ptr)
///   functions. These functions check that the X-byte accesses are possible
///   and return the pointers to shadow and origin memory (see the sketch
///   after this list).
///   Arbitrary sized accesses are handled with:
///     __msan_metadata_ptr_for_load_n(ptr, size)
///     __msan_metadata_ptr_for_store_n(ptr, size);
/// - TLS variables are stored in a single per-task struct. A call to a
///   function __msan_get_context_state() returning a pointer to that struct
///   is inserted into every instrumented function before the entry block;
/// - __msan_warning() takes a 32-bit origin parameter;
/// - local variables are poisoned with __msan_poison_alloca() upon function
///   entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///   function;
/// - the pass doesn't declare any global variables or add global constructors
///   to the translation unit.
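///
/// For instance, under KMSAN a 4-byte load of %p is conceptually preceded by
/// (a sketch; the callbacks return a { shadow, origin } pointer pair):
/// \code
///   %pair = call { i8*, i32* } @__msan_metadata_ptr_for_load_4(i8* %p)
///   ; shadow and origin are then accessed through the extracted pointers
/// \endcode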
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/IntrinsicsX86.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const Align kMinOriginAlignment = Align(4);
static const Align kShadowTLSAlignment = Align(8);

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer reports that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));

// This flag controls whether we check the shadow of the address operand of a
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds a 20% slowdown.
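// For example, with this flag a load such as
//   %v = load i32, i32* %p
// is additionally preceded by a check that the shadow of %p itself is clean
// (a conceptual sketch; the check is materialized like any other shadow
// check).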
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool>
    ClCheckConstantShadow("msan-check-constant-shadow",
                          cl::desc("Insert checks for constant shadow values"),
                          cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
//   Offset = (Addr & ~AndMask) ^ XorMask
//   Shadow = ShadowBase + Offset
//   Origin = OriginBase + Offset
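// As a worked example (using the default x86_64 Linux parameters defined
// below: AndMask = 0, XorMask = 0x500000000000, ShadowBase = 0,
// OriginBase = 0x100000000000):
//   Shadow(Addr) = Addr ^ 0x500000000000
//   Origin(Addr) = 0x100000000000 + (Addr ^ 0x500000000000)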
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// s390x Linux
static const MemoryMapParams Linux_S390X_MemoryMapParams = {
    0xC00000000000, // AndMask
    0,              // XorMask (not used)
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_S390_MemoryMapParams = {
    nullptr,
    &Linux_S390X_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// also ensures that the __msan_init function is in the list of global
/// constructors for the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options)
      : CompileKernel(Options.Kernel), TrackOrigins(Options.TrackOrigins),
        Recover(Options.Recover) {
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;
  friend struct VarArgSystemZHelper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and the per-task
  // state in KMSAN.
  // For userspace these point to thread-local globals. In kernel land they
  // point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().
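  //
  // For example, around a call "%r = call i32 @f(i32 %x)" the caller
  // conceptually emits (a sketch; pointer casts elided):
  //   store i32 %x_shadow, i32* @__msan_param_tls
  //   %r = call i32 @f(i32 %x)
  //   %r_shadow = load i32, i32* @__msan_retval_tls
  // and the callee reads its parameter shadow from the same TLS slots.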
  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;
};

void insertModuleCtor(Module &M) {
  getOrCreateSanitizerCtorAndInitFunctions(
      M, kMsanModuleCtorName, kMsanInitName,
      /*InitArgTypes=*/{},
      /*InitArgs=*/{},
      // This callback is invoked when the functions are created the first
      // time. Hook them into the global ctors list in that case:
      [&](Function *Ctor, FunctionCallee) {
        if (!ClWithComdat) {
          appendToGlobalCtors(M, Ctor, 0);
          return;
        }
        Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
        Ctor->setComdat(MsanCtorComdat);
        appendToGlobalCtors(M, Ctor, 0, Ctor);
      });
}

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {}
  StringRef getPassName() const override {
    return "MemorySanitizerLegacyPass";
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }

  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

template <class T> T getOptOrDefault(const cl::opt<T> &Opt, T Default) {
  return (Opt.getNumOccurrences() > 0) ? Opt : Default;
}

} // end anonymous namespace

MemorySanitizerOptions::MemorySanitizerOptions(int TO, bool R, bool K)
    : Kernel(getOptOrDefault(ClEnableKmsan, K)),
      TrackOrigins(getOptOrDefault(ClTrackOrigins, Kernel ? 2 : TO)),
      Recover(getOptOrDefault(ClKeepGoing, Kernel || R)) {}
PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

PreservedAnalyses MemorySanitizerPass::run(Module &M,
                                           ModuleAnalysisManager &AM) {
  if (Options.Kernel)
    return PreservedAnalyses::all();
  insertModuleCtor(M);
  return PreservedAnalyses::none();
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName =
      Recover ? "__msan_warning" : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());
  OriginTLS = getOrInsertGlobal(M, "__msan_origin_tls", IRB.getInt32Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeWarningFnAttrs;
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 1, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeWarningFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeStoreOriginFnAttrs;
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 2, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeStoreOriginFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt8PtrTy(),
        IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn = M.getOrInsertFunction(
      "__msan_poison_stack", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert extern declarations of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction("__msan_chain_origin",
                                            IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);
  // We insert an empty inline asm after __msan_report* to avoid callback
  // merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::systemz:
        MapParams = Linux_S390_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  if (!Options.Kernel)
    insertModuleCtor(M);
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
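// The function above maps a bit size to the index used for the
// MaybeWarningFn/MaybeStoreOriginFn arrays, e.g. TypeSizeToSizeIndex(8) == 0,
// (16) == 1, (32) == 2 and (64) == 3, matching the 1-, 2-, 4- and 8-byte
// callbacks.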
namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
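  // E.g. on a 64-bit target originToIntptr() above widens a 4-byte origin O
  // to (O << 32) | O, so a single intptr-sized store paints two consecutive
  // origin slots at once.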
  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, Align Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align IntptrAlignment = Align(DL.getABITypeAlignment(MS.IntptrTy));
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    Align CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, Align Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
        FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      const Align Alignment = assumeAligned(SI->getAlignment());
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }
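  // For a plain (non-atomic) store, the loop above conceptually emits
  // (a sketch; origin handling omitted):
  //   store i32 %v_shadow, i32* %shadow_ptr   ; mirrors the application store
  //   store i32 %v, i32* %p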
  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.CompileKernel) {
      IRB.CreateCall(MS.WarningFn, Origin);
    } else {
      if (MS.TrackOrigins) {
        IRB.CreateStore(Origin, MS.OriginTLS);
      }
      IRB.CreateCall(MS.WarningFn, {});
    }
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }
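  // In the non-callback case, the check materialized above is conceptually
  // (a sketch):
  //   %_mscmp = icmp ne i32 %shadow, 0
  //   br i1 %_mscmp, label %warn, label %cont   ; "warn" is the cold path
  // with the __msan_warning*() call emitted on the "warn" path.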
  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
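  // Examples (illustrative): i32 maps to i32, <4 x float> to <4 x i32>,
  // {i8, double} to {i8, i64}, and any other sized type to an integer of
  // equal bit width (e.g. float to i32).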
  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *>
  getShadowOriginPtrUserspace(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                              MaybeAlign Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong, ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (!Alignment || *Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment.value() - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr =
          IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtrKernel(Value *Addr,
                                                       IRBuilder<> &IRB,
                                                       Type *ShadowTy,
                                                       bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }
MS.MsanMetadataPtrForStoreN 1481 : MS.MsanMetadataPtrForLoadN, 1482 {AddrCast, SizeVal}); 1483 } 1484 Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0); 1485 ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0)); 1486 Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1); 1487 1488 return std::make_pair(ShadowPtr, OriginPtr); 1489 } 1490 1491 std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB, 1492 Type *ShadowTy, 1493 MaybeAlign Alignment, 1494 bool isStore) { 1495 if (MS.CompileKernel) 1496 return getShadowOriginPtrKernel(Addr, IRB, ShadowTy, isStore); 1497 return getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment); 1498 } 1499 1500 /// Compute the shadow address for a given function argument. 1501 /// 1502 /// Shadow = ParamTLS+ArgOffset. 1503 Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB, 1504 int ArgOffset) { 1505 Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy); 1506 if (ArgOffset) 1507 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 1508 return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0), 1509 "_msarg"); 1510 } 1511 1512 /// Compute the origin address for a given function argument. 1513 Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB, 1514 int ArgOffset) { 1515 if (!MS.TrackOrigins) 1516 return nullptr; 1517 Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy); 1518 if (ArgOffset) 1519 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 1520 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 1521 "_msarg_o"); 1522 } 1523 1524 /// Compute the shadow address for a retval. 1525 Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) { 1526 return IRB.CreatePointerCast(MS.RetvalTLS, 1527 PointerType::get(getShadowTy(A), 0), 1528 "_msret"); 1529 } 1530 1531 /// Compute the origin address for a retval. 1532 Value *getOriginPtrForRetval(IRBuilder<> &IRB) { 1533 // We keep a single origin for the entire retval. Might be too optimistic. 1534 return MS.RetvalOriginTLS; 1535 } 1536 1537 /// Set SV to be the shadow value for V. 1538 void setShadow(Value *V, Value *SV) { 1539 assert(!ShadowMap.count(V) && "Values may only have one shadow"); 1540 ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V); 1541 } 1542 1543 /// Set Origin to be the origin value for V. 1544 void setOrigin(Value *V, Value *Origin) { 1545 if (!MS.TrackOrigins) return; 1546 assert(!OriginMap.count(V) && "Values may only have one origin"); 1547 LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n"); 1548 OriginMap[V] = Origin; 1549 } 1550 1551 Constant *getCleanShadow(Type *OrigTy) { 1552 Type *ShadowTy = getShadowTy(OrigTy); 1553 if (!ShadowTy) 1554 return nullptr; 1555 return Constant::getNullValue(ShadowTy); 1556 } 1557 1558 /// Create a clean shadow value for a given value. 1559 /// 1560 /// Clean shadow (all zeroes) means all bits of the value are defined 1561 /// (initialized). 1562 Constant *getCleanShadow(Value *V) { 1563 return getCleanShadow(V->getType()); 1564 } 1565 1566 /// Create a dirty shadow of a given shadow type. 
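  /// E.g. for the type { i64, [2 x i8] } the poisoned shadow is the constant
  /// { i64 -1, [2 x i8] [i8 -1, i8 -1] }, i.e. all-ones in every leaf member.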
1567   Constant *getPoisonedShadow(Type *ShadowTy) {
1568     assert(ShadowTy);
1569     if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
1570       return Constant::getAllOnesValue(ShadowTy);
1571     if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
1572       SmallVector<Constant *, 4> Vals(AT->getNumElements(),
1573                                       getPoisonedShadow(AT->getElementType()));
1574       return ConstantArray::get(AT, Vals);
1575     }
1576     if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
1577       SmallVector<Constant *, 4> Vals;
1578       for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
1579         Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
1580       return ConstantStruct::get(ST, Vals);
1581     }
1582     llvm_unreachable("Unexpected shadow type");
1583   }
1584
1585   /// Create a dirty shadow for a given value.
1586   Constant *getPoisonedShadow(Value *V) {
1587     Type *ShadowTy = getShadowTy(V);
1588     if (!ShadowTy)
1589       return nullptr;
1590     return getPoisonedShadow(ShadowTy);
1591   }
1592
1593   /// Create a clean (zero) origin.
1594   Value *getCleanOrigin() {
1595     return Constant::getNullValue(MS.OriginTy);
1596   }
1597
1598   /// Get the shadow value for a given Value.
1599   ///
1600   /// This function either returns the value set earlier with setShadow,
1601   /// or extracts it from ParamTLS (for function arguments).
1602   Value *getShadow(Value *V) {
1603     if (!PropagateShadow) return getCleanShadow(V);
1604     if (Instruction *I = dyn_cast<Instruction>(V)) {
1605       if (I->getMetadata("nosanitize"))
1606         return getCleanShadow(V);
1607       // For instructions the shadow is already stored in the map.
1608       Value *Shadow = ShadowMap[V];
1609       if (!Shadow) {
1610         LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
1611         (void)I;
1612         assert(Shadow && "No shadow for a value");
1613       }
1614       return Shadow;
1615     }
1616     if (UndefValue *U = dyn_cast<UndefValue>(V)) {
1617       Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
1618       LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
1619       (void)U;
1620       return AllOnes;
1621     }
1622     if (Argument *A = dyn_cast<Argument>(V)) {
1623       // For arguments we compute the shadow on demand and store it in the map.
1624       Value **ShadowPtr = &ShadowMap[V];
1625       if (*ShadowPtr)
1626         return *ShadowPtr;
1627       Function *F = A->getParent();
1628       IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
1629       unsigned ArgOffset = 0;
1630       const DataLayout &DL = F->getParent()->getDataLayout();
1631       for (auto &FArg : F->args()) {
1632         if (!FArg.getType()->isSized()) {
1633           LLVM_DEBUG(dbgs() << "Arg is not sized\n");
1634           continue;
1635         }
1636         unsigned Size =
1637             FArg.hasByValAttr()
1638                 ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
1639                 : DL.getTypeAllocSize(FArg.getType());
1640         if (A == &FArg) {
1641           bool Overflow = ArgOffset + Size > kParamTLSSize;
1642           Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
1643           if (FArg.hasByValAttr()) {
1644             // ByVal pointer itself has clean shadow. We copy the actual
1645             // argument shadow to the underlying memory.
1646             // Figure out maximal valid memcpy alignment.
1647             const Align ArgAlign = DL.getValueOrABITypeAlignment(
1648                 MaybeAlign(FArg.getParamAlignment()),
1649                 A->getType()->getPointerElementType());
1650             Value *CpShadowPtr =
1651                 getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
1652                                    /*isStore*/ true)
1653                     .first;
1654             // TODO(glider): need to copy origins.
1655             if (Overflow) {
1656               // ParamTLS overflow.
1657 EntryIRB.CreateMemSet( 1658 CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()), 1659 Size, ArgAlign); 1660 } else { 1661 const Align CopyAlign = std::min(ArgAlign, kShadowTLSAlignment); 1662 Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base, 1663 CopyAlign, Size); 1664 LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n"); 1665 (void)Cpy; 1666 } 1667 *ShadowPtr = getCleanShadow(V); 1668 } else { 1669 if (Overflow) { 1670 // ParamTLS overflow. 1671 *ShadowPtr = getCleanShadow(V); 1672 } else { 1673 *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base, 1674 kShadowTLSAlignment); 1675 } 1676 } 1677 LLVM_DEBUG(dbgs() 1678 << " ARG: " << FArg << " ==> " << **ShadowPtr << "\n"); 1679 if (MS.TrackOrigins && !Overflow) { 1680 Value *OriginPtr = 1681 getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset); 1682 setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr)); 1683 } else { 1684 setOrigin(A, getCleanOrigin()); 1685 } 1686 } 1687 ArgOffset += alignTo(Size, kShadowTLSAlignment); 1688 } 1689 assert(*ShadowPtr && "Could not find shadow for an argument"); 1690 return *ShadowPtr; 1691 } 1692 // For everything else the shadow is zero. 1693 return getCleanShadow(V); 1694 } 1695 1696 /// Get the shadow for i-th argument of the instruction I. 1697 Value *getShadow(Instruction *I, int i) { 1698 return getShadow(I->getOperand(i)); 1699 } 1700 1701 /// Get the origin for a value. 1702 Value *getOrigin(Value *V) { 1703 if (!MS.TrackOrigins) return nullptr; 1704 if (!PropagateShadow) return getCleanOrigin(); 1705 if (isa<Constant>(V)) return getCleanOrigin(); 1706 assert((isa<Instruction>(V) || isa<Argument>(V)) && 1707 "Unexpected value type in getOrigin()"); 1708 if (Instruction *I = dyn_cast<Instruction>(V)) { 1709 if (I->getMetadata("nosanitize")) 1710 return getCleanOrigin(); 1711 } 1712 Value *Origin = OriginMap[V]; 1713 assert(Origin && "Missing origin"); 1714 return Origin; 1715 } 1716 1717 /// Get the origin for i-th argument of the instruction I. 1718 Value *getOrigin(Instruction *I, int i) { 1719 return getOrigin(I->getOperand(i)); 1720 } 1721 1722 /// Remember the place where a shadow check should be inserted. 1723 /// 1724 /// This location will be later instrumented with a check that will print a 1725 /// UMR warning in runtime if the shadow value is not 0. 1726 void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) { 1727 assert(Shadow); 1728 if (!InsertChecks) return; 1729 #ifndef NDEBUG 1730 Type *ShadowTy = Shadow->getType(); 1731 assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) && 1732 "Can only insert checks for integer and vector shadow types"); 1733 #endif 1734 InstrumentationList.push_back( 1735 ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns)); 1736 } 1737 1738 /// Remember the place where a shadow check should be inserted. 1739 /// 1740 /// This location will be later instrumented with a check that will print a 1741 /// UMR warning in runtime if the value is not fully defined. 
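  /// E.g. a check queued for the condition of a conditional branch is later
  /// materialized, roughly as follows (an illustrative sketch, simplified
  /// from what materializeChecks() emits):
  ///   %poisoned = icmp ne i1 %cond_shadow, 0
  ///   br i1 %poisoned, label %warn, label %cont  ; %warn calls __msan_warning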
1742 void insertShadowCheck(Value *Val, Instruction *OrigIns) { 1743 assert(Val); 1744 Value *Shadow, *Origin; 1745 if (ClCheckConstantShadow) { 1746 Shadow = getShadow(Val); 1747 if (!Shadow) return; 1748 Origin = getOrigin(Val); 1749 } else { 1750 Shadow = dyn_cast_or_null<Instruction>(getShadow(Val)); 1751 if (!Shadow) return; 1752 Origin = dyn_cast_or_null<Instruction>(getOrigin(Val)); 1753 } 1754 insertShadowCheck(Shadow, Origin, OrigIns); 1755 } 1756 1757 AtomicOrdering addReleaseOrdering(AtomicOrdering a) { 1758 switch (a) { 1759 case AtomicOrdering::NotAtomic: 1760 return AtomicOrdering::NotAtomic; 1761 case AtomicOrdering::Unordered: 1762 case AtomicOrdering::Monotonic: 1763 case AtomicOrdering::Release: 1764 return AtomicOrdering::Release; 1765 case AtomicOrdering::Acquire: 1766 case AtomicOrdering::AcquireRelease: 1767 return AtomicOrdering::AcquireRelease; 1768 case AtomicOrdering::SequentiallyConsistent: 1769 return AtomicOrdering::SequentiallyConsistent; 1770 } 1771 llvm_unreachable("Unknown ordering"); 1772 } 1773 1774 AtomicOrdering addAcquireOrdering(AtomicOrdering a) { 1775 switch (a) { 1776 case AtomicOrdering::NotAtomic: 1777 return AtomicOrdering::NotAtomic; 1778 case AtomicOrdering::Unordered: 1779 case AtomicOrdering::Monotonic: 1780 case AtomicOrdering::Acquire: 1781 return AtomicOrdering::Acquire; 1782 case AtomicOrdering::Release: 1783 case AtomicOrdering::AcquireRelease: 1784 return AtomicOrdering::AcquireRelease; 1785 case AtomicOrdering::SequentiallyConsistent: 1786 return AtomicOrdering::SequentiallyConsistent; 1787 } 1788 llvm_unreachable("Unknown ordering"); 1789 } 1790 1791 // ------------------- Visitors. 1792 using InstVisitor<MemorySanitizerVisitor>::visit; 1793 void visit(Instruction &I) { 1794 if (!I.getMetadata("nosanitize")) 1795 InstVisitor<MemorySanitizerVisitor>::visit(I); 1796 } 1797 1798 /// Instrument LoadInst 1799 /// 1800 /// Loads the corresponding shadow and (optionally) origin. 1801 /// Optionally, checks that the load address is fully defined. 1802 void visitLoadInst(LoadInst &I) { 1803 assert(I.getType()->isSized() && "Load type must have size"); 1804 assert(!I.getMetadata("nosanitize")); 1805 IRBuilder<> IRB(I.getNextNode()); 1806 Type *ShadowTy = getShadowTy(&I); 1807 Value *Addr = I.getPointerOperand(); 1808 Value *ShadowPtr = nullptr, *OriginPtr = nullptr; 1809 const Align Alignment = assumeAligned(I.getAlignment()); 1810 if (PropagateShadow) { 1811 std::tie(ShadowPtr, OriginPtr) = 1812 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 1813 setShadow(&I, 1814 IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld")); 1815 } else { 1816 setShadow(&I, getCleanShadow(&I)); 1817 } 1818 1819 if (ClCheckAccessAddress) 1820 insertShadowCheck(I.getPointerOperand(), &I); 1821 1822 if (I.isAtomic()) 1823 I.setOrdering(addAcquireOrdering(I.getOrdering())); 1824 1825 if (MS.TrackOrigins) { 1826 if (PropagateShadow) { 1827 const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment); 1828 setOrigin( 1829 &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment)); 1830 } else { 1831 setOrigin(&I, getCleanOrigin()); 1832 } 1833 } 1834 } 1835 1836 /// Instrument StoreInst 1837 /// 1838 /// Stores the corresponding shadow and (optionally) origin. 1839 /// Optionally, checks that the store address is fully defined. 
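  /// Note that the shadow/origin stores themselves are not emitted here: the
  /// store is only queued in StoreList and materialized by materializeStores()
  /// at the end of the function, once it is known whether instrumentation
  /// should use runtime calls (see ClInstrumentationWithCallThreshold).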
1840 void visitStoreInst(StoreInst &I) { 1841 StoreList.push_back(&I); 1842 if (ClCheckAccessAddress) 1843 insertShadowCheck(I.getPointerOperand(), &I); 1844 } 1845 1846 void handleCASOrRMW(Instruction &I) { 1847 assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I)); 1848 1849 IRBuilder<> IRB(&I); 1850 Value *Addr = I.getOperand(0); 1851 Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(), Align(1), 1852 /*isStore*/ true) 1853 .first; 1854 1855 if (ClCheckAccessAddress) 1856 insertShadowCheck(Addr, &I); 1857 1858 // Only test the conditional argument of cmpxchg instruction. 1859 // The other argument can potentially be uninitialized, but we can not 1860 // detect this situation reliably without possible false positives. 1861 if (isa<AtomicCmpXchgInst>(I)) 1862 insertShadowCheck(I.getOperand(1), &I); 1863 1864 IRB.CreateStore(getCleanShadow(&I), ShadowPtr); 1865 1866 setShadow(&I, getCleanShadow(&I)); 1867 setOrigin(&I, getCleanOrigin()); 1868 } 1869 1870 void visitAtomicRMWInst(AtomicRMWInst &I) { 1871 handleCASOrRMW(I); 1872 I.setOrdering(addReleaseOrdering(I.getOrdering())); 1873 } 1874 1875 void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) { 1876 handleCASOrRMW(I); 1877 I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering())); 1878 } 1879 1880 // Vector manipulation. 1881 void visitExtractElementInst(ExtractElementInst &I) { 1882 insertShadowCheck(I.getOperand(1), &I); 1883 IRBuilder<> IRB(&I); 1884 setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1), 1885 "_msprop")); 1886 setOrigin(&I, getOrigin(&I, 0)); 1887 } 1888 1889 void visitInsertElementInst(InsertElementInst &I) { 1890 insertShadowCheck(I.getOperand(2), &I); 1891 IRBuilder<> IRB(&I); 1892 setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1), 1893 I.getOperand(2), "_msprop")); 1894 setOriginForNaryOp(I); 1895 } 1896 1897 void visitShuffleVectorInst(ShuffleVectorInst &I) { 1898 IRBuilder<> IRB(&I); 1899 setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1), 1900 I.getShuffleMask(), "_msprop")); 1901 setOriginForNaryOp(I); 1902 } 1903 1904 // Casts. 1905 void visitSExtInst(SExtInst &I) { 1906 IRBuilder<> IRB(&I); 1907 setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop")); 1908 setOrigin(&I, getOrigin(&I, 0)); 1909 } 1910 1911 void visitZExtInst(ZExtInst &I) { 1912 IRBuilder<> IRB(&I); 1913 setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop")); 1914 setOrigin(&I, getOrigin(&I, 0)); 1915 } 1916 1917 void visitTruncInst(TruncInst &I) { 1918 IRBuilder<> IRB(&I); 1919 setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop")); 1920 setOrigin(&I, getOrigin(&I, 0)); 1921 } 1922 1923 void visitBitCastInst(BitCastInst &I) { 1924 // Special case: if this is the bitcast (there is exactly 1 allowed) between 1925 // a musttail call and a ret, don't instrument. New instructions are not 1926 // allowed after a musttail call. 
1927     if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
1928       if (CI->isMustTailCall())
1929         return;
1930     IRBuilder<> IRB(&I);
1931     setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
1932     setOrigin(&I, getOrigin(&I, 0));
1933   }
1934
1935   void visitPtrToIntInst(PtrToIntInst &I) {
1936     IRBuilder<> IRB(&I);
1937     setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
1938                                     "_msprop_ptrtoint"));
1939     setOrigin(&I, getOrigin(&I, 0));
1940   }
1941
1942   void visitIntToPtrInst(IntToPtrInst &I) {
1943     IRBuilder<> IRB(&I);
1944     setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
1945                                     "_msprop_inttoptr"));
1946     setOrigin(&I, getOrigin(&I, 0));
1947   }
1948
1949   void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
1950   void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
1951   void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
1952   void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
1953   void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
1954   void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }
1955
1956   /// Propagate shadow for bitwise AND.
1957   ///
1958   /// This code is exact, i.e. if, for example, a bit in the left argument
1959   /// is defined and 0, then neither the value nor the definedness of the
1960   /// corresponding bit in B affects the resulting shadow.
1961   void visitAnd(BinaryOperator &I) {
1962     IRBuilder<> IRB(&I);
1963     // "And" of 0 and a poisoned value results in unpoisoned value.
1964     // 1&1 => 1; 0&1 => 0; p&1 => p;
1965     // 1&0 => 0; 0&0 => 0; p&0 => 0;
1966     // 1&p => p; 0&p => 0; p&p => p;
1967     // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
1968     Value *S1 = getShadow(&I, 0);
1969     Value *S2 = getShadow(&I, 1);
1970     Value *V1 = I.getOperand(0);
1971     Value *V2 = I.getOperand(1);
1972     if (V1->getType() != S1->getType()) {
1973       V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1974       V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1975     }
1976     Value *S1S2 = IRB.CreateAnd(S1, S2);
1977     Value *V1S2 = IRB.CreateAnd(V1, S2);
1978     Value *S1V2 = IRB.CreateAnd(S1, V2);
1979     setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
1980     setOriginForNaryOp(I);
1981   }
1982
1983   void visitOr(BinaryOperator &I) {
1984     IRBuilder<> IRB(&I);
1985     // "Or" of 1 and a poisoned value results in unpoisoned value.
1986     // 1|1 => 1; 0|1 => 1; p|1 => 1;
1987     // 1|0 => 1; 0|0 => 0; p|0 => p;
1988     // 1|p => 1; 0|p => p; p|p => p;
1989     // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
1990     Value *S1 = getShadow(&I, 0);
1991     Value *S2 = getShadow(&I, 1);
1992     Value *V1 = IRB.CreateNot(I.getOperand(0));
1993     Value *V2 = IRB.CreateNot(I.getOperand(1));
1994     if (V1->getType() != S1->getType()) {
1995       V1 = IRB.CreateIntCast(V1, S1->getType(), false);
1996       V2 = IRB.CreateIntCast(V2, S2->getType(), false);
1997     }
1998     Value *S1S2 = IRB.CreateAnd(S1, S2);
1999     Value *V1S2 = IRB.CreateAnd(V1, S2);
2000     Value *S1V2 = IRB.CreateAnd(S1, V2);
2001     setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
2002     setOriginForNaryOp(I);
2003   }
2004
2005   /// Default propagation of shadow and/or origin.
2006   ///
2007   /// This class implements the general case of shadow propagation, used in all
2008   /// cases where we don't know and/or don't care about what the operation
2009   /// actually does. It converts all input shadow values to a common type
2010   /// (extending or truncating as necessary), and bitwise OR's them.
2011   ///
2012   /// This is much cheaper than inserting checks (i.e. requiring inputs to be
2013   /// fully initialized), and less prone to false positives.
2014   ///
2015   /// This class also implements the general case of origin propagation. For a
2016   /// Nary operation, result origin is set to the origin of an argument that is
2017   /// not entirely initialized. If there is more than one such argument, the
2018   /// rightmost of them is picked. It does not matter which one is picked if all
2019   /// arguments are initialized.
2020   template <bool CombineShadow>
2021   class Combiner {
2022     Value *Shadow = nullptr;
2023     Value *Origin = nullptr;
2024     IRBuilder<> &IRB;
2025     MemorySanitizerVisitor *MSV;
2026
2027   public:
2028     Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
2029         : IRB(IRB), MSV(MSV) {}
2030
2031     /// Add a pair of shadow and origin values to the mix.
2032     Combiner &Add(Value *OpShadow, Value *OpOrigin) {
2033       if (CombineShadow) {
2034         assert(OpShadow);
2035         if (!Shadow)
2036           Shadow = OpShadow;
2037         else {
2038           OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
2039           Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
2040         }
2041       }
2042
2043       if (MSV->MS.TrackOrigins) {
2044         assert(OpOrigin);
2045         if (!Origin) {
2046           Origin = OpOrigin;
2047         } else {
2048           Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
2049           // No point in adding something that might result in 0 origin value.
2050           if (!ConstOrigin || !ConstOrigin->isNullValue()) {
2051             Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
2052             Value *Cond =
2053                 IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
2054             Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
2055           }
2056         }
2057       }
2058       return *this;
2059     }
2060
2061     /// Add an application value to the mix.
2062     Combiner &Add(Value *V) {
2063       Value *OpShadow = MSV->getShadow(V);
2064       Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
2065       return Add(OpShadow, OpOrigin);
2066     }
2067
2068     /// Set the current combined values as the given instruction's shadow
2069     /// and origin.
2070     void Done(Instruction *I) {
2071       if (CombineShadow) {
2072         assert(Shadow);
2073         Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
2074         MSV->setShadow(I, Shadow);
2075       }
2076       if (MSV->MS.TrackOrigins) {
2077         assert(Origin);
2078         MSV->setOrigin(I, Origin);
2079       }
2080     }
2081   };
2082
2083   using ShadowAndOriginCombiner = Combiner<true>;
2084   using OriginCombiner = Combiner<false>;
2085
2086   /// Propagate origin for arbitrary operation.
2087   void setOriginForNaryOp(Instruction &I) {
2088     if (!MS.TrackOrigins) return;
2089     IRBuilder<> IRB(&I);
2090     OriginCombiner OC(this, IRB);
2091     for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
2092       OC.Add(OI->get());
2093     OC.Done(&I);
2094   }
2095
2096   size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
2097     assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
2098            "Vector of pointers is not a valid shadow type");
2099     return Ty->isVectorTy() ? cast<VectorType>(Ty)->getNumElements() *
2100                                   Ty->getScalarSizeInBits()
2101                             : Ty->getPrimitiveSizeInBits();
2102   }
2103
2104   /// Cast between two shadow types, extending or truncating as
2105   /// necessary.
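  /// E.g. casting an i32 shadow to <4 x i16> goes through the common integer
  /// form (an illustrative sketch; the same-type bitcast created by the code
  /// below folds away):
  ///   %v2 = zext i32 %s to i64
  ///   %v3 = bitcast i64 %v2 to <4 x i16>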
2106   Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
2107                           bool Signed = false) {
2108     Type *srcTy = V->getType();
2109     size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
2110     size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
2111     if (srcSizeInBits > 1 && dstSizeInBits == 1)
2112       return IRB.CreateICmpNE(V, getCleanShadow(V));
2113
2114     if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
2115       return IRB.CreateIntCast(V, dstTy, Signed);
2116     if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
2117         cast<VectorType>(dstTy)->getNumElements() ==
2118             cast<VectorType>(srcTy)->getNumElements())
2119       return IRB.CreateIntCast(V, dstTy, Signed);
2120     Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
2121     Value *V2 =
2122         IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
2123     return IRB.CreateBitCast(V2, dstTy);
2124     // TODO: handle struct types.
2125   }
2126
2127   /// Cast an application value to the type of its own shadow.
2128   Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
2129     Type *ShadowTy = getShadowTy(V);
2130     if (V->getType() == ShadowTy)
2131       return V;
2132     if (V->getType()->isPtrOrPtrVectorTy())
2133       return IRB.CreatePtrToInt(V, ShadowTy);
2134     else
2135       return IRB.CreateBitCast(V, ShadowTy);
2136   }
2137
2138   /// Propagate shadow for arbitrary operation.
2139   void handleShadowOr(Instruction &I) {
2140     IRBuilder<> IRB(&I);
2141     ShadowAndOriginCombiner SC(this, IRB);
2142     for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
2143       SC.Add(OI->get());
2144     SC.Done(&I);
2145   }
2146
2147   void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }
2148
2149   // Handle multiplication by constant.
2150   //
2151   // Handle a special case of multiplication by constant that may have one or
2152   // more zeros in the lower bits. This makes the corresponding number of lower
2153   // bits of the result zero as well. We model it by shifting the other operand
2154   // shadow left by the required number of bits. Effectively, we transform
2155   // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
2156   // We use multiplication by 2**N instead of shift to cover the case of
2157   // multiplication by 0, which may occur in some elements of a vector operand.
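  // E.g. for X * 24 (24 == 3 * 2**3) the three low bits of the result are
  // always zero, so ShadowMul is 2**3 == 8 and the result shadow is Sx * 8;
  // a constant multiplier of 0 likewise yields a fully clean result shadow.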
2158 void handleMulByConstant(BinaryOperator &I, Constant *ConstArg, 2159 Value *OtherArg) { 2160 Constant *ShadowMul; 2161 Type *Ty = ConstArg->getType(); 2162 if (auto *VTy = dyn_cast<VectorType>(Ty)) { 2163 unsigned NumElements = VTy->getNumElements(); 2164 Type *EltTy = VTy->getElementType(); 2165 SmallVector<Constant *, 16> Elements; 2166 for (unsigned Idx = 0; Idx < NumElements; ++Idx) { 2167 if (ConstantInt *Elt = 2168 dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) { 2169 const APInt &V = Elt->getValue(); 2170 APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros(); 2171 Elements.push_back(ConstantInt::get(EltTy, V2)); 2172 } else { 2173 Elements.push_back(ConstantInt::get(EltTy, 1)); 2174 } 2175 } 2176 ShadowMul = ConstantVector::get(Elements); 2177 } else { 2178 if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) { 2179 const APInt &V = Elt->getValue(); 2180 APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros(); 2181 ShadowMul = ConstantInt::get(Ty, V2); 2182 } else { 2183 ShadowMul = ConstantInt::get(Ty, 1); 2184 } 2185 } 2186 2187 IRBuilder<> IRB(&I); 2188 setShadow(&I, 2189 IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst")); 2190 setOrigin(&I, getOrigin(OtherArg)); 2191 } 2192 2193 void visitMul(BinaryOperator &I) { 2194 Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0)); 2195 Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1)); 2196 if (constOp0 && !constOp1) 2197 handleMulByConstant(I, constOp0, I.getOperand(1)); 2198 else if (constOp1 && !constOp0) 2199 handleMulByConstant(I, constOp1, I.getOperand(0)); 2200 else 2201 handleShadowOr(I); 2202 } 2203 2204 void visitFAdd(BinaryOperator &I) { handleShadowOr(I); } 2205 void visitFSub(BinaryOperator &I) { handleShadowOr(I); } 2206 void visitFMul(BinaryOperator &I) { handleShadowOr(I); } 2207 void visitAdd(BinaryOperator &I) { handleShadowOr(I); } 2208 void visitSub(BinaryOperator &I) { handleShadowOr(I); } 2209 void visitXor(BinaryOperator &I) { handleShadowOr(I); } 2210 2211 void handleIntegerDiv(Instruction &I) { 2212 IRBuilder<> IRB(&I); 2213 // Strict on the second argument. 2214 insertShadowCheck(I.getOperand(1), &I); 2215 setShadow(&I, getShadow(&I, 0)); 2216 setOrigin(&I, getOrigin(&I, 0)); 2217 } 2218 2219 void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); } 2220 void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); } 2221 void visitURem(BinaryOperator &I) { handleIntegerDiv(I); } 2222 void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); } 2223 2224 // Floating point division is side-effect free. We can not require that the 2225 // divisor is fully initialized and must propagate shadow. See PR37523. 2226 void visitFDiv(BinaryOperator &I) { handleShadowOr(I); } 2227 void visitFRem(BinaryOperator &I) { handleShadowOr(I); } 2228 2229 /// Instrument == and != comparisons. 2230 /// 2231 /// Sometimes the comparison result is known even if some of the bits of the 2232 /// arguments are not. 2233 void handleEqualityComparison(ICmpInst &I) { 2234 IRBuilder<> IRB(&I); 2235 Value *A = I.getOperand(0); 2236 Value *B = I.getOperand(1); 2237 Value *Sa = getShadow(A); 2238 Value *Sb = getShadow(B); 2239 2240 // Get rid of pointers and vectors of pointers. 2241 // For ints (and vectors of ints), types of A and Sa match, 2242 // and this is a no-op. 
2243 A = IRB.CreatePointerCast(A, Sa->getType()); 2244 B = IRB.CreatePointerCast(B, Sb->getType()); 2245 2246 // A == B <==> (C = A^B) == 0 2247 // A != B <==> (C = A^B) != 0 2248 // Sc = Sa | Sb 2249 Value *C = IRB.CreateXor(A, B); 2250 Value *Sc = IRB.CreateOr(Sa, Sb); 2251 // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now) 2252 // Result is defined if one of the following is true 2253 // * there is a defined 1 bit in C 2254 // * C is fully defined 2255 // Si = !(C & ~Sc) && Sc 2256 Value *Zero = Constant::getNullValue(Sc->getType()); 2257 Value *MinusOne = Constant::getAllOnesValue(Sc->getType()); 2258 Value *Si = 2259 IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero), 2260 IRB.CreateICmpEQ( 2261 IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero)); 2262 Si->setName("_msprop_icmp"); 2263 setShadow(&I, Si); 2264 setOriginForNaryOp(I); 2265 } 2266 2267 /// Build the lowest possible value of V, taking into account V's 2268 /// uninitialized bits. 2269 Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2270 bool isSigned) { 2271 if (isSigned) { 2272 // Split shadow into sign bit and other bits. 2273 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2274 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2275 // Maximise the undefined shadow bit, minimize other undefined bits. 2276 return 2277 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit); 2278 } else { 2279 // Minimize undefined bits. 2280 return IRB.CreateAnd(A, IRB.CreateNot(Sa)); 2281 } 2282 } 2283 2284 /// Build the highest possible value of V, taking into account V's 2285 /// uninitialized bits. 2286 Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2287 bool isSigned) { 2288 if (isSigned) { 2289 // Split shadow into sign bit and other bits. 2290 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2291 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2292 // Minimise the undefined shadow bit, maximise other undefined bits. 2293 return 2294 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits); 2295 } else { 2296 // Maximize undefined bits. 2297 return IRB.CreateOr(A, Sa); 2298 } 2299 } 2300 2301 /// Instrument relational comparisons. 2302 /// 2303 /// This function does exact shadow propagation for all relational 2304 /// comparisons of integers, pointers and vectors of those. 2305 /// FIXME: output seems suboptimal when one of the operands is a constant 2306 void handleRelationalComparisonExact(ICmpInst &I) { 2307 IRBuilder<> IRB(&I); 2308 Value *A = I.getOperand(0); 2309 Value *B = I.getOperand(1); 2310 Value *Sa = getShadow(A); 2311 Value *Sb = getShadow(B); 2312 2313 // Get rid of pointers and vectors of pointers. 2314 // For ints (and vectors of ints), types of A and Sa match, 2315 // and this is a no-op. 2316 A = IRB.CreatePointerCast(A, Sa->getType()); 2317 B = IRB.CreatePointerCast(B, Sb->getType()); 2318 2319 // Let [a0, a1] be the interval of possible values of A, taking into account 2320 // its undefined bits. Let [b0, b1] be the interval of possible values of B. 2321 // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0). 
2322     bool IsSigned = I.isSigned();
2323     Value *S1 = IRB.CreateICmp(I.getPredicate(),
2324                                getLowestPossibleValue(IRB, A, Sa, IsSigned),
2325                                getHighestPossibleValue(IRB, B, Sb, IsSigned));
2326     Value *S2 = IRB.CreateICmp(I.getPredicate(),
2327                                getHighestPossibleValue(IRB, A, Sa, IsSigned),
2328                                getLowestPossibleValue(IRB, B, Sb, IsSigned));
2329     Value *Si = IRB.CreateXor(S1, S2);
2330     setShadow(&I, Si);
2331     setOriginForNaryOp(I);
2332   }
2333
2334   /// Instrument signed relational comparisons.
2335   ///
2336   /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
2337   /// bit of the shadow. Everything else is delegated to handleShadowOr().
2338   void handleSignedRelationalComparison(ICmpInst &I) {
2339     Constant *constOp;
2340     Value *op = nullptr;
2341     CmpInst::Predicate pre;
2342     if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
2343       op = I.getOperand(0);
2344       pre = I.getPredicate();
2345     } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
2346       op = I.getOperand(1);
2347       pre = I.getSwappedPredicate();
2348     } else {
2349       handleShadowOr(I);
2350       return;
2351     }
2352
2353     if ((constOp->isNullValue() &&
2354          (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
2355         (constOp->isAllOnesValue() &&
2356          (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
2357       IRBuilder<> IRB(&I);
2358       Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
2359                                         "_msprop_icmp_s");
2360       setShadow(&I, Shadow);
2361       setOrigin(&I, getOrigin(op));
2362     } else {
2363       handleShadowOr(I);
2364     }
2365   }
2366
2367   void visitICmpInst(ICmpInst &I) {
2368     if (!ClHandleICmp) {
2369       handleShadowOr(I);
2370       return;
2371     }
2372     if (I.isEquality()) {
2373       handleEqualityComparison(I);
2374       return;
2375     }
2376
2377     assert(I.isRelational());
2378     if (ClHandleICmpExact) {
2379       handleRelationalComparisonExact(I);
2380       return;
2381     }
2382     if (I.isSigned()) {
2383       handleSignedRelationalComparison(I);
2384       return;
2385     }
2386
2387     assert(I.isUnsigned());
2388     if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
2389       handleRelationalComparisonExact(I);
2390       return;
2391     }
2392
2393     handleShadowOr(I);
2394   }
2395
2396   void visitFCmpInst(FCmpInst &I) {
2397     handleShadowOr(I);
2398   }
2399
2400   void handleShift(BinaryOperator &I) {
2401     IRBuilder<> IRB(&I);
2402     // If any of the S2 bits are poisoned, the whole thing is poisoned.
2403     // Otherwise perform the same shift on S1.
2404     Value *S1 = getShadow(&I, 0);
2405     Value *S2 = getShadow(&I, 1);
2406     Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
2407                                    S2->getType());
2408     Value *V2 = I.getOperand(1);
2409     Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
2410     setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2411     setOriginForNaryOp(I);
2412   }
2413
2414   void visitShl(BinaryOperator &I) { handleShift(I); }
2415   void visitAShr(BinaryOperator &I) { handleShift(I); }
2416   void visitLShr(BinaryOperator &I) { handleShift(I); }
2417
2418   /// Instrument llvm.memmove
2419   ///
2420   /// At this point we don't know if llvm.memmove will be inlined or not.
2421   /// If we don't instrument it and it gets inlined,
2422   /// our interceptor will not kick in and we will lose the memmove.
2423   /// If we instrument the call here, but it does not get inlined,
2424   /// we will memmove the shadow twice, which is bad in the case
2425   /// of overlapping regions. So, we simply lower the intrinsic to a call.
2426   ///
2427   /// Similar situation exists for memcpy and memset.
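  /// E.g. a call to @llvm.memmove.p0i8.p0i8.i64(%dst, %src, i64 16, i1 false)
  /// is replaced with a call to the runtime's __msan_memmove(%dst, %src, 16)
  /// (MS.MemmoveFn), whose interceptor moves the data together with its
  /// shadow and origin.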
2428   void visitMemMoveInst(MemMoveInst &I) {
2429     IRBuilder<> IRB(&I);
2430     IRB.CreateCall(
2431         MS.MemmoveFn,
2432         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2433          IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2434          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2435     I.eraseFromParent();
2436   }
2437
2438   // Similar to memmove: avoid copying shadow twice.
2439   // This is somewhat unfortunate as it may slow down small constant memcpys.
2440   // FIXME: consider doing manual inline for small constant sizes and proper
2441   // alignment.
2442   void visitMemCpyInst(MemCpyInst &I) {
2443     IRBuilder<> IRB(&I);
2444     IRB.CreateCall(
2445         MS.MemcpyFn,
2446         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2447          IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2448          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2449     I.eraseFromParent();
2450   }
2451
2452   // Same as memcpy.
2453   void visitMemSetInst(MemSetInst &I) {
2454     IRBuilder<> IRB(&I);
2455     IRB.CreateCall(
2456         MS.MemsetFn,
2457         {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2458          IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
2459          IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2460     I.eraseFromParent();
2461   }
2462
2463   void visitVAStartInst(VAStartInst &I) {
2464     VAHelper->visitVAStartInst(I);
2465   }
2466
2467   void visitVACopyInst(VACopyInst &I) {
2468     VAHelper->visitVACopyInst(I);
2469   }
2470
2471   /// Handle vector store-like intrinsics.
2472   ///
2473   /// Instrument intrinsics that look like a simple SIMD store: writes memory,
2474   /// has 1 pointer argument and 1 vector argument, returns void.
2475   bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
2476     IRBuilder<> IRB(&I);
2477     Value* Addr = I.getArgOperand(0);
2478     Value *Shadow = getShadow(&I, 1);
2479     Value *ShadowPtr, *OriginPtr;
2480
2481     // We don't know the pointer alignment (could be unaligned SSE store!).
2482     // Have to assume the worst case.
2483     std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
2484         Addr, IRB, Shadow->getType(), Align(1), /*isStore*/ true);
2485     IRB.CreateAlignedStore(Shadow, ShadowPtr, Align(1));
2486
2487     if (ClCheckAccessAddress)
2488       insertShadowCheck(Addr, &I);
2489
2490     // FIXME: factor out common code from materializeStores
2491     if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
2492     return true;
2493   }
2494
2495   /// Handle vector load-like intrinsics.
2496   ///
2497   /// Instrument intrinsics that look like a simple SIMD load: reads memory,
2498   /// has 1 pointer argument, returns a vector.
2499   bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
2500     IRBuilder<> IRB(&I);
2501     Value *Addr = I.getArgOperand(0);
2502
2503     Type *ShadowTy = getShadowTy(&I);
2504     Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
2505     if (PropagateShadow) {
2506       // We don't know the pointer alignment (could be unaligned SSE load!).
2507       // Have to assume the worst case.
2508 const Align Alignment = Align(1); 2509 std::tie(ShadowPtr, OriginPtr) = 2510 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 2511 setShadow(&I, 2512 IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld")); 2513 } else { 2514 setShadow(&I, getCleanShadow(&I)); 2515 } 2516 2517 if (ClCheckAccessAddress) 2518 insertShadowCheck(Addr, &I); 2519 2520 if (MS.TrackOrigins) { 2521 if (PropagateShadow) 2522 setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr)); 2523 else 2524 setOrigin(&I, getCleanOrigin()); 2525 } 2526 return true; 2527 } 2528 2529 /// Handle (SIMD arithmetic)-like intrinsics. 2530 /// 2531 /// Instrument intrinsics with any number of arguments of the same type, 2532 /// equal to the return type. The type should be simple (no aggregates or 2533 /// pointers; vectors are fine). 2534 /// Caller guarantees that this intrinsic does not access memory. 2535 bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) { 2536 Type *RetTy = I.getType(); 2537 if (!(RetTy->isIntOrIntVectorTy() || 2538 RetTy->isFPOrFPVectorTy() || 2539 RetTy->isX86_MMXTy())) 2540 return false; 2541 2542 unsigned NumArgOperands = I.getNumArgOperands(); 2543 2544 for (unsigned i = 0; i < NumArgOperands; ++i) { 2545 Type *Ty = I.getArgOperand(i)->getType(); 2546 if (Ty != RetTy) 2547 return false; 2548 } 2549 2550 IRBuilder<> IRB(&I); 2551 ShadowAndOriginCombiner SC(this, IRB); 2552 for (unsigned i = 0; i < NumArgOperands; ++i) 2553 SC.Add(I.getArgOperand(i)); 2554 SC.Done(&I); 2555 2556 return true; 2557 } 2558 2559 /// Heuristically instrument unknown intrinsics. 2560 /// 2561 /// The main purpose of this code is to do something reasonable with all 2562 /// random intrinsics we might encounter, most importantly - SIMD intrinsics. 2563 /// We recognize several classes of intrinsics by their argument types and 2564 /// ModRefBehaviour and apply special instrumentation when we are reasonably 2565 /// sure that we know what the intrinsic does. 2566 /// 2567 /// We special-case intrinsics where this approach fails. See llvm.bswap 2568 /// handling as an example of that. 2569 bool handleUnknownIntrinsic(IntrinsicInst &I) { 2570 unsigned NumArgOperands = I.getNumArgOperands(); 2571 if (NumArgOperands == 0) 2572 return false; 2573 2574 if (NumArgOperands == 2 && 2575 I.getArgOperand(0)->getType()->isPointerTy() && 2576 I.getArgOperand(1)->getType()->isVectorTy() && 2577 I.getType()->isVoidTy() && 2578 !I.onlyReadsMemory()) { 2579 // This looks like a vector store. 2580 return handleVectorStoreIntrinsic(I); 2581 } 2582 2583 if (NumArgOperands == 1 && 2584 I.getArgOperand(0)->getType()->isPointerTy() && 2585 I.getType()->isVectorTy() && 2586 I.onlyReadsMemory()) { 2587 // This looks like a vector load. 
2588       return handleVectorLoadIntrinsic(I);
2589     }
2590
2591     if (I.doesNotAccessMemory())
2592       if (maybeHandleSimpleNomemIntrinsic(I))
2593         return true;
2594
2595     // FIXME: detect and handle SSE maskstore/maskload
2596     return false;
2597   }
2598
2599   void handleInvariantGroup(IntrinsicInst &I) {
2600     setShadow(&I, getShadow(&I, 0));
2601     setOrigin(&I, getOrigin(&I, 0));
2602   }
2603
2604   void handleLifetimeStart(IntrinsicInst &I) {
2605     if (!PoisonStack)
2606       return;
2607     DenseMap<Value *, AllocaInst *> AllocaForValue;
2608     AllocaInst *AI =
2609         llvm::findAllocaForValue(I.getArgOperand(1), AllocaForValue);
2610     if (!AI)
2611       InstrumentLifetimeStart = false;
2612     LifetimeStartList.push_back(std::make_pair(&I, AI));
2613   }
2614
2615   void handleBswap(IntrinsicInst &I) {
2616     IRBuilder<> IRB(&I);
2617     Value *Op = I.getArgOperand(0);
2618     Type *OpType = Op->getType();
2619     Function *BswapFunc = Intrinsic::getDeclaration(
2620         F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
2621     setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
2622     setOrigin(&I, getOrigin(Op));
2623   }
2624
2625   // Instrument vector convert intrinsic.
2626   //
2627   // This function instruments intrinsics like cvtsi2ss:
2628   //   %Out = int_xxx_cvtyyy(%ConvertOp)
2629   // or
2630   //   %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
2631   // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
2632   // number of \p Out elements, and (if it has 2 arguments) copies the rest of
2633   // the elements from \p CopyOp.
2634   // In most cases conversion involves a floating-point value which may trigger
2635   // a hardware exception when not fully initialized. For this reason we
2636   // require \p ConvertOp[0:NumUsedElements] to be fully initialized and trap
2637   // otherwise. We copy the shadow of \p CopyOp[NumUsedElements:] to \p
2638   // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
2639   // return a fully initialized value.
2640   void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
2641     IRBuilder<> IRB(&I);
2642     Value *CopyOp, *ConvertOp;
2643
2644     switch (I.getNumArgOperands()) {
2645     case 3:
2646       assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
2647       LLVM_FALLTHROUGH;
2648     case 2:
2649       CopyOp = I.getArgOperand(0);
2650       ConvertOp = I.getArgOperand(1);
2651       break;
2652     case 1:
2653       ConvertOp = I.getArgOperand(0);
2654       CopyOp = nullptr;
2655       break;
2656     default:
2657       llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
2658     }
2659
2660     // The first *NumUsedElements* elements of ConvertOp are converted to the
2661     // same number of output elements. The rest of the output is copied from
2662     // CopyOp, or (if not available) filled with zeroes.
2663     // Combine shadow for elements of ConvertOp that are used in this operation,
2664     // and insert a check.
2665     // FIXME: consider propagating shadow of ConvertOp, at least in the case of
2666     // int->any conversion.
2667 Value *ConvertShadow = getShadow(ConvertOp); 2668 Value *AggShadow = nullptr; 2669 if (ConvertOp->getType()->isVectorTy()) { 2670 AggShadow = IRB.CreateExtractElement( 2671 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2672 for (int i = 1; i < NumUsedElements; ++i) { 2673 Value *MoreShadow = IRB.CreateExtractElement( 2674 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2675 AggShadow = IRB.CreateOr(AggShadow, MoreShadow); 2676 } 2677 } else { 2678 AggShadow = ConvertShadow; 2679 } 2680 assert(AggShadow->getType()->isIntegerTy()); 2681 insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I); 2682 2683 // Build result shadow by zero-filling parts of CopyOp shadow that come from 2684 // ConvertOp. 2685 if (CopyOp) { 2686 assert(CopyOp->getType() == I.getType()); 2687 assert(CopyOp->getType()->isVectorTy()); 2688 Value *ResultShadow = getShadow(CopyOp); 2689 Type *EltTy = cast<VectorType>(ResultShadow->getType())->getElementType(); 2690 for (int i = 0; i < NumUsedElements; ++i) { 2691 ResultShadow = IRB.CreateInsertElement( 2692 ResultShadow, ConstantInt::getNullValue(EltTy), 2693 ConstantInt::get(IRB.getInt32Ty(), i)); 2694 } 2695 setShadow(&I, ResultShadow); 2696 setOrigin(&I, getOrigin(CopyOp)); 2697 } else { 2698 setShadow(&I, getCleanShadow(&I)); 2699 setOrigin(&I, getCleanOrigin()); 2700 } 2701 } 2702 2703 // Given a scalar or vector, extract lower 64 bits (or less), and return all 2704 // zeroes if it is zero, and all ones otherwise. 2705 Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2706 if (S->getType()->isVectorTy()) 2707 S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true); 2708 assert(S->getType()->getPrimitiveSizeInBits() <= 64); 2709 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2710 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2711 } 2712 2713 // Given a vector, extract its first element, and return all 2714 // zeroes if it is zero, and all ones otherwise. 2715 Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2716 Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0); 2717 Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1)); 2718 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2719 } 2720 2721 Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) { 2722 Type *T = S->getType(); 2723 assert(T->isVectorTy()); 2724 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2725 return IRB.CreateSExt(S2, T); 2726 } 2727 2728 // Instrument vector shift intrinsic. 2729 // 2730 // This function instruments intrinsics like int_x86_avx2_psll_w. 2731 // Intrinsic shifts %In by %ShiftSize bits. 2732 // %ShiftSize may be a vector. In that case the lower 64 bits determine shift 2733 // size, and the rest is ignored. Behavior is defined even if shift size is 2734 // greater than register (or field) width. 2735 void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) { 2736 assert(I.getNumArgOperands() == 2); 2737 IRBuilder<> IRB(&I); 2738 // If any of the S2 bits are poisoned, the whole thing is poisoned. 2739 // Otherwise perform the same shift on S1. 2740 Value *S1 = getShadow(&I, 0); 2741 Value *S2 = getShadow(&I, 1); 2742 Value *S2Conv = Variable ? 
VariableShadowExtend(IRB, S2) 2743 : Lower64ShadowExtend(IRB, S2, getShadowTy(&I)); 2744 Value *V1 = I.getOperand(0); 2745 Value *V2 = I.getOperand(1); 2746 Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledValue(), 2747 {IRB.CreateBitCast(S1, V1->getType()), V2}); 2748 Shift = IRB.CreateBitCast(Shift, getShadowTy(&I)); 2749 setShadow(&I, IRB.CreateOr(Shift, S2Conv)); 2750 setOriginForNaryOp(I); 2751 } 2752 2753 // Get an X86_MMX-sized vector type. 2754 Type *getMMXVectorTy(unsigned EltSizeInBits) { 2755 const unsigned X86_MMXSizeInBits = 64; 2756 assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 && 2757 "Illegal MMX vector element size"); 2758 return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits), 2759 X86_MMXSizeInBits / EltSizeInBits); 2760 } 2761 2762 // Returns a signed counterpart for an (un)signed-saturate-and-pack 2763 // intrinsic. 2764 Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) { 2765 switch (id) { 2766 case Intrinsic::x86_sse2_packsswb_128: 2767 case Intrinsic::x86_sse2_packuswb_128: 2768 return Intrinsic::x86_sse2_packsswb_128; 2769 2770 case Intrinsic::x86_sse2_packssdw_128: 2771 case Intrinsic::x86_sse41_packusdw: 2772 return Intrinsic::x86_sse2_packssdw_128; 2773 2774 case Intrinsic::x86_avx2_packsswb: 2775 case Intrinsic::x86_avx2_packuswb: 2776 return Intrinsic::x86_avx2_packsswb; 2777 2778 case Intrinsic::x86_avx2_packssdw: 2779 case Intrinsic::x86_avx2_packusdw: 2780 return Intrinsic::x86_avx2_packssdw; 2781 2782 case Intrinsic::x86_mmx_packsswb: 2783 case Intrinsic::x86_mmx_packuswb: 2784 return Intrinsic::x86_mmx_packsswb; 2785 2786 case Intrinsic::x86_mmx_packssdw: 2787 return Intrinsic::x86_mmx_packssdw; 2788 default: 2789 llvm_unreachable("unexpected intrinsic id"); 2790 } 2791 } 2792 2793 // Instrument vector pack intrinsic. 2794 // 2795 // This function instruments intrinsics like x86_mmx_packsswb, that 2796 // packs elements of 2 input vectors into half as many bits with saturation. 2797 // Shadow is propagated with the signed variant of the same intrinsic applied 2798 // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer). 2799 // EltSizeInBits is used only for x86mmx arguments. 2800 void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) { 2801 assert(I.getNumArgOperands() == 2); 2802 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2803 IRBuilder<> IRB(&I); 2804 Value *S1 = getShadow(&I, 0); 2805 Value *S2 = getShadow(&I, 1); 2806 assert(isX86_MMX || S1->getType()->isVectorTy()); 2807 2808 // SExt and ICmpNE below must apply to individual elements of input vectors. 2809 // In case of x86mmx arguments, cast them to appropriate vector types and 2810 // back. 2811 Type *T = isX86_MMX ? 
getMMXVectorTy(EltSizeInBits) : S1->getType(); 2812 if (isX86_MMX) { 2813 S1 = IRB.CreateBitCast(S1, T); 2814 S2 = IRB.CreateBitCast(S2, T); 2815 } 2816 Value *S1_ext = IRB.CreateSExt( 2817 IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T); 2818 Value *S2_ext = IRB.CreateSExt( 2819 IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T); 2820 if (isX86_MMX) { 2821 Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C); 2822 S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy); 2823 S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy); 2824 } 2825 2826 Function *ShadowFn = Intrinsic::getDeclaration( 2827 F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID())); 2828 2829 Value *S = 2830 IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack"); 2831 if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I)); 2832 setShadow(&I, S); 2833 setOriginForNaryOp(I); 2834 } 2835 2836 // Instrument sum-of-absolute-differences intrinsic. 2837 void handleVectorSadIntrinsic(IntrinsicInst &I) { 2838 const unsigned SignificantBitsPerResultElement = 16; 2839 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2840 Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType(); 2841 unsigned ZeroBitsPerResultElement = 2842 ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement; 2843 2844 IRBuilder<> IRB(&I); 2845 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2846 S = IRB.CreateBitCast(S, ResTy); 2847 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2848 ResTy); 2849 S = IRB.CreateLShr(S, ZeroBitsPerResultElement); 2850 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2851 setShadow(&I, S); 2852 setOriginForNaryOp(I); 2853 } 2854 2855 // Instrument multiply-add intrinsic. 2856 void handleVectorPmaddIntrinsic(IntrinsicInst &I, 2857 unsigned EltSizeInBits = 0) { 2858 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2859 Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType(); 2860 IRBuilder<> IRB(&I); 2861 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2862 S = IRB.CreateBitCast(S, ResTy); 2863 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2864 ResTy); 2865 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2866 setShadow(&I, S); 2867 setOriginForNaryOp(I); 2868 } 2869 2870 // Instrument compare-packed intrinsic. 2871 // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or 2872 // all-ones shadow. 2873 void handleVectorComparePackedIntrinsic(IntrinsicInst &I) { 2874 IRBuilder<> IRB(&I); 2875 Type *ResTy = getShadowTy(&I); 2876 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2877 Value *S = IRB.CreateSExt( 2878 IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy); 2879 setShadow(&I, S); 2880 setOriginForNaryOp(I); 2881 } 2882 2883 // Instrument compare-scalar intrinsic. 2884 // This handles both cmp* intrinsics which return the result in the first 2885 // element of a vector, and comi* which return the result as i32. 
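  // E.g. for llvm.x86.sse.comieq.ss the two operand shadows are OR'ed, only
  // element 0 of the combination is tested, and the i32 result shadow becomes
  // either all zeroes or all ones (see LowerElementShadowExtend above).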
2886 void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) { 2887 IRBuilder<> IRB(&I); 2888 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2889 Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I)); 2890 setShadow(&I, S); 2891 setOriginForNaryOp(I); 2892 } 2893 2894 void handleStmxcsr(IntrinsicInst &I) { 2895 IRBuilder<> IRB(&I); 2896 Value* Addr = I.getArgOperand(0); 2897 Type *Ty = IRB.getInt32Ty(); 2898 Value *ShadowPtr = 2899 getShadowOriginPtr(Addr, IRB, Ty, Align(1), /*isStore*/ true).first; 2900 2901 IRB.CreateStore(getCleanShadow(Ty), 2902 IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo())); 2903 2904 if (ClCheckAccessAddress) 2905 insertShadowCheck(Addr, &I); 2906 } 2907 2908 void handleLdmxcsr(IntrinsicInst &I) { 2909 if (!InsertChecks) return; 2910 2911 IRBuilder<> IRB(&I); 2912 Value *Addr = I.getArgOperand(0); 2913 Type *Ty = IRB.getInt32Ty(); 2914 const Align Alignment = Align(1); 2915 Value *ShadowPtr, *OriginPtr; 2916 std::tie(ShadowPtr, OriginPtr) = 2917 getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false); 2918 2919 if (ClCheckAccessAddress) 2920 insertShadowCheck(Addr, &I); 2921 2922 Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr"); 2923 Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr) 2924 : getCleanOrigin(); 2925 insertShadowCheck(Shadow, Origin, &I); 2926 } 2927 2928 void handleMaskedStore(IntrinsicInst &I) { 2929 IRBuilder<> IRB(&I); 2930 Value *V = I.getArgOperand(0); 2931 Value *Addr = I.getArgOperand(1); 2932 const Align Alignment( 2933 cast<ConstantInt>(I.getArgOperand(2))->getZExtValue()); 2934 Value *Mask = I.getArgOperand(3); 2935 Value *Shadow = getShadow(V); 2936 2937 Value *ShadowPtr; 2938 Value *OriginPtr; 2939 std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr( 2940 Addr, IRB, Shadow->getType(), Alignment, /*isStore*/ true); 2941 2942 if (ClCheckAccessAddress) { 2943 insertShadowCheck(Addr, &I); 2944 // Uninitialized mask is kind of like uninitialized address, but not as 2945 // scary. 2946 insertShadowCheck(Mask, &I); 2947 } 2948 2949 IRB.CreateMaskedStore(Shadow, ShadowPtr, Alignment, Mask); 2950 2951 if (MS.TrackOrigins) { 2952 auto &DL = F.getParent()->getDataLayout(); 2953 paintOrigin(IRB, getOrigin(V), OriginPtr, 2954 DL.getTypeStoreSize(Shadow->getType()), 2955 std::max(Alignment, kMinOriginAlignment)); 2956 } 2957 } 2958 2959 bool handleMaskedLoad(IntrinsicInst &I) { 2960 IRBuilder<> IRB(&I); 2961 Value *Addr = I.getArgOperand(0); 2962 const Align Alignment( 2963 cast<ConstantInt>(I.getArgOperand(1))->getZExtValue()); 2964 Value *Mask = I.getArgOperand(2); 2965 Value *PassThru = I.getArgOperand(3); 2966 2967 Type *ShadowTy = getShadowTy(&I); 2968 Value *ShadowPtr, *OriginPtr; 2969 if (PropagateShadow) { 2970 std::tie(ShadowPtr, OriginPtr) = 2971 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 2972 setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Alignment, Mask, 2973 getShadow(PassThru), "_msmaskedld")); 2974 } else { 2975 setShadow(&I, getCleanShadow(&I)); 2976 } 2977 2978 if (ClCheckAccessAddress) { 2979 insertShadowCheck(Addr, &I); 2980 insertShadowCheck(Mask, &I); 2981 } 2982 2983 if (MS.TrackOrigins) { 2984 if (PropagateShadow) { 2985 // Choose between PassThru's and the loaded value's origins. 
2986 Value *MaskedPassThruShadow = IRB.CreateAnd( 2987 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 2988 2989 Value *Acc = IRB.CreateExtractElement( 2990 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2991 for (int i = 1, 2992 N = cast<VectorType>(PassThru->getType())->getNumElements(); 2993 i < N; ++i) { 2994 Value *More = IRB.CreateExtractElement( 2995 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2996 Acc = IRB.CreateOr(Acc, More); 2997 } 2998 2999 Value *Origin = IRB.CreateSelect( 3000 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 3001 getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr)); 3002 3003 setOrigin(&I, Origin); 3004 } else { 3005 setOrigin(&I, getCleanOrigin()); 3006 } 3007 } 3008 return true; 3009 } 3010 3011 // Instrument BMI / BMI2 intrinsics. 3012 // All of these intrinsics are Z = I(X, Y) 3013 // where the types of all operands and the result match, and are either i32 or i64. 3014 // The following instrumentation happens to work for all of them: 3015 // Sz = I(Sx, Y) | (sext (Sy != 0)) 3016 void handleBmiIntrinsic(IntrinsicInst &I) { 3017 IRBuilder<> IRB(&I); 3018 Type *ShadowTy = getShadowTy(&I); 3019 3020 // If any bit of the mask operand is poisoned, then the whole thing is. 3021 Value *SMask = getShadow(&I, 1); 3022 SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)), 3023 ShadowTy); 3024 // Apply the same intrinsic to the shadow of the first operand. 3025 Value *S = IRB.CreateCall(I.getCalledFunction(), 3026 {getShadow(&I, 0), I.getOperand(1)}); 3027 S = IRB.CreateOr(SMask, S); 3028 setShadow(&I, S); 3029 setOriginForNaryOp(I); 3030 } 3031 3032 Constant *getPclmulMask(IRBuilder<> &IRB, unsigned Width, bool OddElements) { 3033 SmallVector<Constant *, 8> Mask; 3034 for (unsigned X = OddElements ? 1 : 0; X < Width; X += 2) { 3035 Constant *C = ConstantInt::get(IRB.getInt32Ty(), X); 3036 Mask.push_back(C); 3037 Mask.push_back(C); 3038 } 3039 return ConstantVector::get(Mask); 3040 } 3041 3042 // Instrument pclmul intrinsics. 3043 // These intrinsics operate either on odd or on even elements of the input 3044 // vectors, depending on the constant in the 3rd argument, ignoring the rest. 3045 // Replace the unused elements with copies of the used ones, ex: 3046 // (0, 1, 2, 3) -> (0, 0, 2, 2) (even case) 3047 // or 3048 // (0, 1, 2, 3) -> (1, 1, 3, 3) (odd case) 3049 // and then apply the usual shadow combining logic. 
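  // E.g. for a <2 x i64> pclmulqdq with Imm == 0x00 (even/even), both shadow
  // operands are shuffled with mask <0, 0>, so lane 1 of either input cannot
  // poison the result.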
3050 void handlePclmulIntrinsic(IntrinsicInst &I) { 3051 IRBuilder<> IRB(&I); 3052 Type *ShadowTy = getShadowTy(&I); 3053 unsigned Width = 3054 cast<VectorType>(I.getArgOperand(0)->getType())->getNumElements(); 3055 assert(isa<ConstantInt>(I.getArgOperand(2)) && 3056 "pclmul 3rd operand must be a constant"); 3057 unsigned Imm = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue(); 3058 Value *Shuf0 = 3059 IRB.CreateShuffleVector(getShadow(&I, 0), UndefValue::get(ShadowTy), 3060 getPclmulMask(IRB, Width, Imm & 0x01)); 3061 Value *Shuf1 = 3062 IRB.CreateShuffleVector(getShadow(&I, 1), UndefValue::get(ShadowTy), 3063 getPclmulMask(IRB, Width, Imm & 0x10)); 3064 ShadowAndOriginCombiner SOC(this, IRB); 3065 SOC.Add(Shuf0, getOrigin(&I, 0)); 3066 SOC.Add(Shuf1, getOrigin(&I, 1)); 3067 SOC.Done(&I); 3068 } 3069 3070 void visitIntrinsicInst(IntrinsicInst &I) { 3071 switch (I.getIntrinsicID()) { 3072 case Intrinsic::lifetime_start: 3073 handleLifetimeStart(I); 3074 break; 3075 case Intrinsic::launder_invariant_group: 3076 case Intrinsic::strip_invariant_group: 3077 handleInvariantGroup(I); 3078 break; 3079 case Intrinsic::bswap: 3080 handleBswap(I); 3081 break; 3082 case Intrinsic::masked_store: 3083 handleMaskedStore(I); 3084 break; 3085 case Intrinsic::masked_load: 3086 handleMaskedLoad(I); 3087 break; 3088 case Intrinsic::x86_sse_stmxcsr: 3089 handleStmxcsr(I); 3090 break; 3091 case Intrinsic::x86_sse_ldmxcsr: 3092 handleLdmxcsr(I); 3093 break; 3094 case Intrinsic::x86_avx512_vcvtsd2usi64: 3095 case Intrinsic::x86_avx512_vcvtsd2usi32: 3096 case Intrinsic::x86_avx512_vcvtss2usi64: 3097 case Intrinsic::x86_avx512_vcvtss2usi32: 3098 case Intrinsic::x86_avx512_cvttss2usi64: 3099 case Intrinsic::x86_avx512_cvttss2usi: 3100 case Intrinsic::x86_avx512_cvttsd2usi64: 3101 case Intrinsic::x86_avx512_cvttsd2usi: 3102 case Intrinsic::x86_avx512_cvtusi2ss: 3103 case Intrinsic::x86_avx512_cvtusi642sd: 3104 case Intrinsic::x86_avx512_cvtusi642ss: 3105 case Intrinsic::x86_sse2_cvtsd2si64: 3106 case Intrinsic::x86_sse2_cvtsd2si: 3107 case Intrinsic::x86_sse2_cvtsd2ss: 3108 case Intrinsic::x86_sse2_cvttsd2si64: 3109 case Intrinsic::x86_sse2_cvttsd2si: 3110 case Intrinsic::x86_sse_cvtss2si64: 3111 case Intrinsic::x86_sse_cvtss2si: 3112 case Intrinsic::x86_sse_cvttss2si64: 3113 case Intrinsic::x86_sse_cvttss2si: 3114 handleVectorConvertIntrinsic(I, 1); 3115 break; 3116 case Intrinsic::x86_sse_cvtps2pi: 3117 case Intrinsic::x86_sse_cvttps2pi: 3118 handleVectorConvertIntrinsic(I, 2); 3119 break; 3120 3121 case Intrinsic::x86_avx512_psll_w_512: 3122 case Intrinsic::x86_avx512_psll_d_512: 3123 case Intrinsic::x86_avx512_psll_q_512: 3124 case Intrinsic::x86_avx512_pslli_w_512: 3125 case Intrinsic::x86_avx512_pslli_d_512: 3126 case Intrinsic::x86_avx512_pslli_q_512: 3127 case Intrinsic::x86_avx512_psrl_w_512: 3128 case Intrinsic::x86_avx512_psrl_d_512: 3129 case Intrinsic::x86_avx512_psrl_q_512: 3130 case Intrinsic::x86_avx512_psra_w_512: 3131 case Intrinsic::x86_avx512_psra_d_512: 3132 case Intrinsic::x86_avx512_psra_q_512: 3133 case Intrinsic::x86_avx512_psrli_w_512: 3134 case Intrinsic::x86_avx512_psrli_d_512: 3135 case Intrinsic::x86_avx512_psrli_q_512: 3136 case Intrinsic::x86_avx512_psrai_w_512: 3137 case Intrinsic::x86_avx512_psrai_d_512: 3138 case Intrinsic::x86_avx512_psrai_q_512: 3139 case Intrinsic::x86_avx512_psra_q_256: 3140 case Intrinsic::x86_avx512_psra_q_128: 3141 case Intrinsic::x86_avx512_psrai_q_256: 3142 case Intrinsic::x86_avx512_psrai_q_128: 3143 case Intrinsic::x86_avx2_psll_w: 3144 case 
Intrinsic::x86_avx2_psll_d: 3145 case Intrinsic::x86_avx2_psll_q: 3146 case Intrinsic::x86_avx2_pslli_w: 3147 case Intrinsic::x86_avx2_pslli_d: 3148 case Intrinsic::x86_avx2_pslli_q: 3149 case Intrinsic::x86_avx2_psrl_w: 3150 case Intrinsic::x86_avx2_psrl_d: 3151 case Intrinsic::x86_avx2_psrl_q: 3152 case Intrinsic::x86_avx2_psra_w: 3153 case Intrinsic::x86_avx2_psra_d: 3154 case Intrinsic::x86_avx2_psrli_w: 3155 case Intrinsic::x86_avx2_psrli_d: 3156 case Intrinsic::x86_avx2_psrli_q: 3157 case Intrinsic::x86_avx2_psrai_w: 3158 case Intrinsic::x86_avx2_psrai_d: 3159 case Intrinsic::x86_sse2_psll_w: 3160 case Intrinsic::x86_sse2_psll_d: 3161 case Intrinsic::x86_sse2_psll_q: 3162 case Intrinsic::x86_sse2_pslli_w: 3163 case Intrinsic::x86_sse2_pslli_d: 3164 case Intrinsic::x86_sse2_pslli_q: 3165 case Intrinsic::x86_sse2_psrl_w: 3166 case Intrinsic::x86_sse2_psrl_d: 3167 case Intrinsic::x86_sse2_psrl_q: 3168 case Intrinsic::x86_sse2_psra_w: 3169 case Intrinsic::x86_sse2_psra_d: 3170 case Intrinsic::x86_sse2_psrli_w: 3171 case Intrinsic::x86_sse2_psrli_d: 3172 case Intrinsic::x86_sse2_psrli_q: 3173 case Intrinsic::x86_sse2_psrai_w: 3174 case Intrinsic::x86_sse2_psrai_d: 3175 case Intrinsic::x86_mmx_psll_w: 3176 case Intrinsic::x86_mmx_psll_d: 3177 case Intrinsic::x86_mmx_psll_q: 3178 case Intrinsic::x86_mmx_pslli_w: 3179 case Intrinsic::x86_mmx_pslli_d: 3180 case Intrinsic::x86_mmx_pslli_q: 3181 case Intrinsic::x86_mmx_psrl_w: 3182 case Intrinsic::x86_mmx_psrl_d: 3183 case Intrinsic::x86_mmx_psrl_q: 3184 case Intrinsic::x86_mmx_psra_w: 3185 case Intrinsic::x86_mmx_psra_d: 3186 case Intrinsic::x86_mmx_psrli_w: 3187 case Intrinsic::x86_mmx_psrli_d: 3188 case Intrinsic::x86_mmx_psrli_q: 3189 case Intrinsic::x86_mmx_psrai_w: 3190 case Intrinsic::x86_mmx_psrai_d: 3191 handleVectorShiftIntrinsic(I, /* Variable */ false); 3192 break; 3193 case Intrinsic::x86_avx2_psllv_d: 3194 case Intrinsic::x86_avx2_psllv_d_256: 3195 case Intrinsic::x86_avx512_psllv_d_512: 3196 case Intrinsic::x86_avx2_psllv_q: 3197 case Intrinsic::x86_avx2_psllv_q_256: 3198 case Intrinsic::x86_avx512_psllv_q_512: 3199 case Intrinsic::x86_avx2_psrlv_d: 3200 case Intrinsic::x86_avx2_psrlv_d_256: 3201 case Intrinsic::x86_avx512_psrlv_d_512: 3202 case Intrinsic::x86_avx2_psrlv_q: 3203 case Intrinsic::x86_avx2_psrlv_q_256: 3204 case Intrinsic::x86_avx512_psrlv_q_512: 3205 case Intrinsic::x86_avx2_psrav_d: 3206 case Intrinsic::x86_avx2_psrav_d_256: 3207 case Intrinsic::x86_avx512_psrav_d_512: 3208 case Intrinsic::x86_avx512_psrav_q_128: 3209 case Intrinsic::x86_avx512_psrav_q_256: 3210 case Intrinsic::x86_avx512_psrav_q_512: 3211 handleVectorShiftIntrinsic(I, /* Variable */ true); 3212 break; 3213 3214 case Intrinsic::x86_sse2_packsswb_128: 3215 case Intrinsic::x86_sse2_packssdw_128: 3216 case Intrinsic::x86_sse2_packuswb_128: 3217 case Intrinsic::x86_sse41_packusdw: 3218 case Intrinsic::x86_avx2_packsswb: 3219 case Intrinsic::x86_avx2_packssdw: 3220 case Intrinsic::x86_avx2_packuswb: 3221 case Intrinsic::x86_avx2_packusdw: 3222 handleVectorPackIntrinsic(I); 3223 break; 3224 3225 case Intrinsic::x86_mmx_packsswb: 3226 case Intrinsic::x86_mmx_packuswb: 3227 handleVectorPackIntrinsic(I, 16); 3228 break; 3229 3230 case Intrinsic::x86_mmx_packssdw: 3231 handleVectorPackIntrinsic(I, 32); 3232 break; 3233 3234 case Intrinsic::x86_mmx_psad_bw: 3235 case Intrinsic::x86_sse2_psad_bw: 3236 case Intrinsic::x86_avx2_psad_bw: 3237 handleVectorSadIntrinsic(I); 3238 break; 3239 3240 case Intrinsic::x86_sse2_pmadd_wd: 3241 case 
Intrinsic::x86_avx2_pmadd_wd: 3242 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3243 case Intrinsic::x86_avx2_pmadd_ub_sw: 3244 handleVectorPmaddIntrinsic(I); 3245 break; 3246 3247 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3248 handleVectorPmaddIntrinsic(I, 8); 3249 break; 3250 3251 case Intrinsic::x86_mmx_pmadd_wd: 3252 handleVectorPmaddIntrinsic(I, 16); 3253 break; 3254 3255 case Intrinsic::x86_sse_cmp_ss: 3256 case Intrinsic::x86_sse2_cmp_sd: 3257 case Intrinsic::x86_sse_comieq_ss: 3258 case Intrinsic::x86_sse_comilt_ss: 3259 case Intrinsic::x86_sse_comile_ss: 3260 case Intrinsic::x86_sse_comigt_ss: 3261 case Intrinsic::x86_sse_comige_ss: 3262 case Intrinsic::x86_sse_comineq_ss: 3263 case Intrinsic::x86_sse_ucomieq_ss: 3264 case Intrinsic::x86_sse_ucomilt_ss: 3265 case Intrinsic::x86_sse_ucomile_ss: 3266 case Intrinsic::x86_sse_ucomigt_ss: 3267 case Intrinsic::x86_sse_ucomige_ss: 3268 case Intrinsic::x86_sse_ucomineq_ss: 3269 case Intrinsic::x86_sse2_comieq_sd: 3270 case Intrinsic::x86_sse2_comilt_sd: 3271 case Intrinsic::x86_sse2_comile_sd: 3272 case Intrinsic::x86_sse2_comigt_sd: 3273 case Intrinsic::x86_sse2_comige_sd: 3274 case Intrinsic::x86_sse2_comineq_sd: 3275 case Intrinsic::x86_sse2_ucomieq_sd: 3276 case Intrinsic::x86_sse2_ucomilt_sd: 3277 case Intrinsic::x86_sse2_ucomile_sd: 3278 case Intrinsic::x86_sse2_ucomigt_sd: 3279 case Intrinsic::x86_sse2_ucomige_sd: 3280 case Intrinsic::x86_sse2_ucomineq_sd: 3281 handleVectorCompareScalarIntrinsic(I); 3282 break; 3283 3284 case Intrinsic::x86_sse_cmp_ps: 3285 case Intrinsic::x86_sse2_cmp_pd: 3286 // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function 3287 // generates reasonably looking IR that fails in the backend with "Do not 3288 // know how to split the result of this operator!". 3289 handleVectorComparePackedIntrinsic(I); 3290 break; 3291 3292 case Intrinsic::x86_bmi_bextr_32: 3293 case Intrinsic::x86_bmi_bextr_64: 3294 case Intrinsic::x86_bmi_bzhi_32: 3295 case Intrinsic::x86_bmi_bzhi_64: 3296 case Intrinsic::x86_bmi_pdep_32: 3297 case Intrinsic::x86_bmi_pdep_64: 3298 case Intrinsic::x86_bmi_pext_32: 3299 case Intrinsic::x86_bmi_pext_64: 3300 handleBmiIntrinsic(I); 3301 break; 3302 3303 case Intrinsic::x86_pclmulqdq: 3304 case Intrinsic::x86_pclmulqdq_256: 3305 case Intrinsic::x86_pclmulqdq_512: 3306 handlePclmulIntrinsic(I); 3307 break; 3308 3309 case Intrinsic::is_constant: 3310 // The result of llvm.is.constant() is always defined. 3311 setShadow(&I, getCleanShadow(&I)); 3312 setOrigin(&I, getCleanOrigin()); 3313 break; 3314 3315 default: 3316 if (!handleUnknownIntrinsic(I)) 3317 visitInstruction(I); 3318 break; 3319 } 3320 } 3321 3322 void visitCallSite(CallSite CS) { 3323 Instruction &I = *CS.getInstruction(); 3324 assert(!I.getMetadata("nosanitize")); 3325 assert((CS.isCall() || CS.isInvoke() || CS.isCallBr()) && 3326 "Unknown type of CallSite"); 3327 if (CS.isCallBr() || (CS.isCall() && cast<CallInst>(&I)->isInlineAsm())) { 3328 // For inline asm (either a call to asm function, or callbr instruction), 3329 // do the usual thing: check argument shadow and mark all outputs as 3330 // clean. Note that any side effects of the inline asm that are not 3331 // immediately visible in its constraints are not handled. 
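    // For example, for asm("movl $1, %0" : "=m"(x)) the write to x is
    // reflected in shadow only when visitAsmInstruction() runs below; the
    // fallback visitInstruction() path checks the operands and marks the
    // result clean, but leaves the shadow of x itself untouched.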
3332 if (ClHandleAsmConservative && MS.CompileKernel) 3333 visitAsmInstruction(I); 3334 else 3335 visitInstruction(I); 3336 return; 3337 } 3338 if (CS.isCall()) { 3339 CallInst *Call = cast<CallInst>(&I); 3340 assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere"); 3341 3342 // We are going to insert code that relies on the fact that the callee 3343 // will become a non-readonly function after it is instrumented by us. To 3344 // prevent this code from being optimized out, mark that function 3345 // non-readonly in advance. 3346 if (Function *Func = Call->getCalledFunction()) { 3347 // Clear out readonly/readnone attributes. 3348 AttrBuilder B; 3349 B.addAttribute(Attribute::ReadOnly) 3350 .addAttribute(Attribute::ReadNone) 3351 .addAttribute(Attribute::WriteOnly) 3352 .addAttribute(Attribute::ArgMemOnly) 3353 .addAttribute(Attribute::Speculatable); 3354 Func->removeAttributes(AttributeList::FunctionIndex, B); 3355 } 3356 3357 maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI); 3358 } 3359 IRBuilder<> IRB(&I); 3360 3361 unsigned ArgOffset = 0; 3362 LLVM_DEBUG(dbgs() << " CallSite: " << I << "\n"); 3363 for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end(); 3364 ArgIt != End; ++ArgIt) { 3365 Value *A = *ArgIt; 3366 unsigned i = ArgIt - CS.arg_begin(); 3367 if (!A->getType()->isSized()) { 3368 LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n"); 3369 continue; 3370 } 3371 unsigned Size = 0; 3372 Value *Store = nullptr; 3373 // Compute the Shadow for arg even if it is ByVal, because 3374 // in that case getShadow() will copy the actual arg shadow to 3375 // __msan_param_tls. 3376 Value *ArgShadow = getShadow(A); 3377 Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset); 3378 LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A 3379 << " Shadow: " << *ArgShadow << "\n"); 3380 bool ArgIsInitialized = false; 3381 const DataLayout &DL = F.getParent()->getDataLayout(); 3382 if (CS.paramHasAttr(i, Attribute::ByVal)) { 3383 assert(A->getType()->isPointerTy() && 3384 "ByVal argument is not a pointer!"); 3385 Size = DL.getTypeAllocSize(A->getType()->getPointerElementType()); 3386 if (ArgOffset + Size > kParamTLSSize) break; 3387 const MaybeAlign ParamAlignment(CS.getParamAlignment(i)); 3388 MaybeAlign Alignment = llvm::None; 3389 if (ParamAlignment) 3390 Alignment = std::min(*ParamAlignment, kShadowTLSAlignment); 3391 Value *AShadowPtr = 3392 getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment, 3393 /*isStore*/ false) 3394 .first; 3395 3396 Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr, 3397 Alignment, Size); 3398 // TODO(glider): need to copy origins. 3399 } else { 3400 Size = DL.getTypeAllocSize(A->getType()); 3401 if (ArgOffset + Size > kParamTLSSize) break; 3402 Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase, 3403 kShadowTLSAlignment); 3404 Constant *Cst = dyn_cast<Constant>(ArgShadow); 3405 if (Cst && Cst->isNullValue()) ArgIsInitialized = true; 3406 } 3407 if (MS.TrackOrigins && !ArgIsInitialized) 3408 IRB.CreateStore(getOrigin(A), 3409 getOriginPtrForArgument(A, IRB, ArgOffset)); 3410 (void)Store; 3411 assert(Size != 0 && Store != nullptr); 3412 LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n"); 3413 ArgOffset += alignTo(Size, 8); 3414 } 3415 LLVM_DEBUG(dbgs() << " done with call args\n"); 3416 3417 FunctionType *FT = CS.getFunctionType(); 3418 if (FT->isVarArg()) { 3419 VAHelper->visitCallSite(CS, IRB); 3420 } 3421 3422 // Now, get the shadow for the RetVal. 
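    // The resulting instrumentation looks roughly like this (a sketch for a
    // simple CallInst):
    //   store <clean shadow>, <retval TLS>          ; IRBBefore, see below
    //   %r = call i32 @f(...)                       ; original call
    //   %_msret = load <shadow type>, <retval TLS>  ; IRBAfter, see below
    // An instrumented callee overwrites the retval TLS slot from its
    // visitReturnInst() before returning.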
3423 if (!I.getType()->isSized()) return; 3424 // Don't emit the epilogue for musttail call returns. 3425 if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return; 3426 IRBuilder<> IRBBefore(&I); 3427 // Until we have full dynamic coverage, make sure the retval shadow is 0. 3428 Value *Base = getShadowPtrForRetval(&I, IRBBefore); 3429 IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment); 3430 BasicBlock::iterator NextInsn; 3431 if (CS.isCall()) { 3432 NextInsn = ++I.getIterator(); 3433 assert(NextInsn != I.getParent()->end()); 3434 } else { 3435 BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest(); 3436 if (!NormalDest->getSinglePredecessor()) { 3437 // FIXME: this case is tricky, so we are just conservative here. 3438 // Perhaps we need to split the edge between this BB and NormalDest, 3439 // but a naive attempt to use SplitEdge leads to a crash. 3440 setShadow(&I, getCleanShadow(&I)); 3441 setOrigin(&I, getCleanOrigin()); 3442 return; 3443 } 3444 // FIXME: NextInsn is likely in a basic block that has not been visited yet. 3445 // Anything inserted there will be instrumented by MSan later! 3446 NextInsn = NormalDest->getFirstInsertionPt(); 3447 assert(NextInsn != NormalDest->end() && 3448 "Could not find insertion point for retval shadow load"); 3449 } 3450 IRBuilder<> IRBAfter(&*NextInsn); 3451 Value *RetvalShadow = IRBAfter.CreateAlignedLoad( 3452 getShadowTy(&I), getShadowPtrForRetval(&I, IRBAfter), 3453 kShadowTLSAlignment, "_msret"); 3454 setShadow(&I, RetvalShadow); 3455 if (MS.TrackOrigins) 3456 setOrigin(&I, IRBAfter.CreateLoad(MS.OriginTy, 3457 getOriginPtrForRetval(IRBAfter))); 3458 } 3459 3460 bool isAMustTailRetVal(Value *RetVal) { 3461 if (auto *I = dyn_cast<BitCastInst>(RetVal)) { 3462 RetVal = I->getOperand(0); 3463 } 3464 if (auto *I = dyn_cast<CallInst>(RetVal)) { 3465 return I->isMustTailCall(); 3466 } 3467 return false; 3468 } 3469 3470 void visitReturnInst(ReturnInst &I) { 3471 IRBuilder<> IRB(&I); 3472 Value *RetVal = I.getReturnValue(); 3473 if (!RetVal) return; 3474 // Don't emit the epilogue for musttail call returns. 3475 if (isAMustTailRetVal(RetVal)) return; 3476 Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB); 3477 if (CheckReturnValue) { 3478 insertShadowCheck(RetVal, &I); 3479 Value *Shadow = getCleanShadow(RetVal); 3480 IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment); 3481 } else { 3482 Value *Shadow = getShadow(RetVal); 3483 IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment); 3484 if (MS.TrackOrigins) 3485 IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB)); 3486 } 3487 } 3488 3489 void visitPHINode(PHINode &I) { 3490 IRBuilder<> IRB(&I); 3491 if (!PropagateShadow) { 3492 setShadow(&I, getCleanShadow(&I)); 3493 setOrigin(&I, getCleanOrigin()); 3494 return; 3495 } 3496 3497 ShadowPHINodes.push_back(&I); 3498 setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(), 3499 "_msphi_s")); 3500 if (MS.TrackOrigins) 3501 setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(), 3502 "_msphi_o")); 3503 } 3504 3505 Value *getLocalVarDescription(AllocaInst &I) { 3506 SmallString<2048> StackDescriptionStorage; 3507 raw_svector_ostream StackDescription(StackDescriptionStorage); 3508 // We create a string with a description of the stack allocation and 3509 // pass it into __msan_set_alloca_origin. 3510 // It will be printed by the run-time if stack-originated UMR is found. 
    // The first 4 bytes of the string are set to '----' and will be replaced
    // with the stack origin ID by the runtime on the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) = getShadowOriginPtr(
          &I, IRB, IRB.getInt8Ty(), Align(1), /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len,
                       MaybeAlign(I.getAlignment()));
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }

  void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }

  void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) {
    if (!InsPoint)
      InsPoint = &I;
    IRBuilder<> IRB(InsPoint->getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      poisonAllocaKmsan(I, IRB, Len);
    else
      poisonAllocaUserspace(I, IRB, Len);
  }

  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    // We'll get to this alloca later unless it's poisoned at the corresponding
    // llvm.lifetime.start.
    AllocaSet.insert(&I);
  }

  void visitSelectInst(SelectInst &I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just
      // do an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (the condition is poisoned), look for bits in c and d that are
      // equal and both unpoisoned.
      // If !Sb (the condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
3602 C = CreateAppToShadowCast(IRB, C); 3603 D = CreateAppToShadowCast(IRB, D); 3604 3605 // Result shadow if condition shadow is 1. 3606 Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd}); 3607 } 3608 Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select"); 3609 setShadow(&I, Sa); 3610 if (MS.TrackOrigins) { 3611 // Origins are always i32, so any vector conditions must be flattened. 3612 // FIXME: consider tracking vector origins for app vectors? 3613 if (B->getType()->isVectorTy()) { 3614 Type *FlatTy = getShadowTyNoVec(B->getType()); 3615 B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy), 3616 ConstantInt::getNullValue(FlatTy)); 3617 Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy), 3618 ConstantInt::getNullValue(FlatTy)); 3619 } 3620 // a = select b, c, d 3621 // Oa = Sb ? Ob : (b ? Oc : Od) 3622 setOrigin( 3623 &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()), 3624 IRB.CreateSelect(B, getOrigin(I.getTrueValue()), 3625 getOrigin(I.getFalseValue())))); 3626 } 3627 } 3628 3629 void visitLandingPadInst(LandingPadInst &I) { 3630 // Do nothing. 3631 // See https://github.com/google/sanitizers/issues/504 3632 setShadow(&I, getCleanShadow(&I)); 3633 setOrigin(&I, getCleanOrigin()); 3634 } 3635 3636 void visitCatchSwitchInst(CatchSwitchInst &I) { 3637 setShadow(&I, getCleanShadow(&I)); 3638 setOrigin(&I, getCleanOrigin()); 3639 } 3640 3641 void visitFuncletPadInst(FuncletPadInst &I) { 3642 setShadow(&I, getCleanShadow(&I)); 3643 setOrigin(&I, getCleanOrigin()); 3644 } 3645 3646 void visitGetElementPtrInst(GetElementPtrInst &I) { 3647 handleShadowOr(I); 3648 } 3649 3650 void visitExtractValueInst(ExtractValueInst &I) { 3651 IRBuilder<> IRB(&I); 3652 Value *Agg = I.getAggregateOperand(); 3653 LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n"); 3654 Value *AggShadow = getShadow(Agg); 3655 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 3656 Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices()); 3657 LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n"); 3658 setShadow(&I, ResShadow); 3659 setOriginForNaryOp(I); 3660 } 3661 3662 void visitInsertValueInst(InsertValueInst &I) { 3663 IRBuilder<> IRB(&I); 3664 LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n"); 3665 Value *AggShadow = getShadow(I.getAggregateOperand()); 3666 Value *InsShadow = getShadow(I.getInsertedValueOperand()); 3667 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 3668 LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n"); 3669 Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices()); 3670 LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n"); 3671 setShadow(&I, Res); 3672 setOriginForNaryOp(I); 3673 } 3674 3675 void dumpInst(Instruction &I) { 3676 if (CallInst *CI = dyn_cast<CallInst>(&I)) { 3677 errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n"; 3678 } else { 3679 errs() << "ZZZ " << I.getOpcodeName() << "\n"; 3680 } 3681 errs() << "QQQ " << I << "\n"; 3682 } 3683 3684 void visitResumeInst(ResumeInst &I) { 3685 LLVM_DEBUG(dbgs() << "Resume: " << I << "\n"); 3686 // Nothing to do here. 3687 } 3688 3689 void visitCleanupReturnInst(CleanupReturnInst &CRI) { 3690 LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n"); 3691 // Nothing to do here. 3692 } 3693 3694 void visitCatchReturnInst(CatchReturnInst &CRI) { 3695 LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n"); 3696 // Nothing to do here. 
  }

  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check that its value is initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is
    // unsized). Each such pointer is instrumented with a call to the runtime
    // library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      auto *ST = dyn_cast<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of
    // the CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. We do this before unpoisoning the output
    // arguments, so that uninitialized values are not overwritten before
    // they are checked.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments.
    // This must happen before the actual InlineAsm call, so that the shadow
    // for memory published in the asm() statement remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};

/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value* arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
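  // For example, given a call printf(fmt, i, d) with an i32 i and a double d
  // (and the SysV constants above), the fixed fmt pointer takes GP slot 0 of
  // __msan_va_arg_tls but gets no shadow store, i's shadow is stored at
  // offset 8 (the next GP slot), and d's shadow at offset 48, the first FP
  // slot; finalizeInstrumentation() then copies these bytes over the shadow
  // of the va_list register save area.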
3860 void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override { 3861 unsigned GpOffset = 0; 3862 unsigned FpOffset = AMD64GpEndOffset; 3863 unsigned OverflowOffset = AMD64FpEndOffset; 3864 const DataLayout &DL = F.getParent()->getDataLayout(); 3865 for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end(); 3866 ArgIt != End; ++ArgIt) { 3867 Value *A = *ArgIt; 3868 unsigned ArgNo = CS.getArgumentNo(ArgIt); 3869 bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams(); 3870 bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal); 3871 if (IsByVal) { 3872 // ByVal arguments always go to the overflow area. 3873 // Fixed arguments passed through the overflow area will be stepped 3874 // over by va_start, so don't count them towards the offset. 3875 if (IsFixed) 3876 continue; 3877 assert(A->getType()->isPointerTy()); 3878 Type *RealTy = A->getType()->getPointerElementType(); 3879 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 3880 Value *ShadowBase = getShadowPtrForVAArgument( 3881 RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8)); 3882 Value *OriginBase = nullptr; 3883 if (MS.TrackOrigins) 3884 OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset); 3885 OverflowOffset += alignTo(ArgSize, 8); 3886 if (!ShadowBase) 3887 continue; 3888 Value *ShadowPtr, *OriginPtr; 3889 std::tie(ShadowPtr, OriginPtr) = 3890 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, 3891 /*isStore*/ false); 3892 3893 IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr, 3894 kShadowTLSAlignment, ArgSize); 3895 if (MS.TrackOrigins) 3896 IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr, 3897 kShadowTLSAlignment, ArgSize); 3898 } else { 3899 ArgKind AK = classifyArgument(A); 3900 if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset) 3901 AK = AK_Memory; 3902 if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset) 3903 AK = AK_Memory; 3904 Value *ShadowBase, *OriginBase = nullptr; 3905 switch (AK) { 3906 case AK_GeneralPurpose: 3907 ShadowBase = 3908 getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8); 3909 if (MS.TrackOrigins) 3910 OriginBase = 3911 getOriginPtrForVAArgument(A->getType(), IRB, GpOffset); 3912 GpOffset += 8; 3913 break; 3914 case AK_FloatingPoint: 3915 ShadowBase = 3916 getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16); 3917 if (MS.TrackOrigins) 3918 OriginBase = 3919 getOriginPtrForVAArgument(A->getType(), IRB, FpOffset); 3920 FpOffset += 16; 3921 break; 3922 case AK_Memory: 3923 if (IsFixed) 3924 continue; 3925 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 3926 ShadowBase = 3927 getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8); 3928 if (MS.TrackOrigins) 3929 OriginBase = 3930 getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset); 3931 OverflowOffset += alignTo(ArgSize, 8); 3932 } 3933 // Take fixed arguments into account for GpOffset and FpOffset, 3934 // but don't actually store shadows for them. 3935 // TODO(glider): don't call get*PtrForVAArgument() for them. 
3936 if (IsFixed) 3937 continue; 3938 if (!ShadowBase) 3939 continue; 3940 Value *Shadow = MSV.getShadow(A); 3941 IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment); 3942 if (MS.TrackOrigins) { 3943 Value *Origin = MSV.getOrigin(A); 3944 unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType()); 3945 MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize, 3946 std::max(kShadowTLSAlignment, kMinOriginAlignment)); 3947 } 3948 } 3949 } 3950 Constant *OverflowSize = 3951 ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset); 3952 IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS); 3953 } 3954 3955 /// Compute the shadow address for a given va_arg. 3956 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 3957 unsigned ArgOffset, unsigned ArgSize) { 3958 // Make sure we don't overflow __msan_va_arg_tls. 3959 if (ArgOffset + ArgSize > kParamTLSSize) 3960 return nullptr; 3961 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 3962 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 3963 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 3964 "_msarg_va_s"); 3965 } 3966 3967 /// Compute the origin address for a given va_arg. 3968 Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) { 3969 Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy); 3970 // getOriginPtrForVAArgument() is always called after 3971 // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never 3972 // overflow. 3973 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 3974 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 3975 "_msarg_va_o"); 3976 } 3977 3978 void unpoisonVAListTagForInst(IntrinsicInst &I) { 3979 IRBuilder<> IRB(&I); 3980 Value *VAListTag = I.getArgOperand(0); 3981 Value *ShadowPtr, *OriginPtr; 3982 const Align Alignment = Align(8); 3983 std::tie(ShadowPtr, OriginPtr) = 3984 MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment, 3985 /*isStore*/ true); 3986 3987 // Unpoison the whole __va_list_tag. 3988 // FIXME: magic ABI constants. 3989 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 3990 /* size */ 24, Alignment, false); 3991 // We shouldn't need to zero out the origins, as they're only checked for 3992 // nonzero shadow. 3993 } 3994 3995 void visitVAStartInst(VAStartInst &I) override { 3996 if (F.getCallingConv() == CallingConv::Win64) 3997 return; 3998 VAStartInstrumentationList.push_back(&I); 3999 unpoisonVAListTagForInst(I); 4000 } 4001 4002 void visitVACopyInst(VACopyInst &I) override { 4003 if (F.getCallingConv() == CallingConv::Win64) return; 4004 unpoisonVAListTagForInst(I); 4005 } 4006 4007 void finalizeInstrumentation() override { 4008 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4009 "finalizeInstrumentation called twice"); 4010 if (!VAStartInstrumentationList.empty()) { 4011 // If there is a va_start in this function, make a backup copy of 4012 // va_arg_tls somewhere in the function entry block. 
4013 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 4014 VAArgOverflowSize = 4015 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4016 Value *CopySize = 4017 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset), 4018 VAArgOverflowSize); 4019 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4020 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4021 if (MS.TrackOrigins) { 4022 VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4023 IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS, 4024 Align(8), CopySize); 4025 } 4026 } 4027 4028 // Instrument va_start. 4029 // Copy va_list shadow from the backup copy of the TLS contents. 4030 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4031 CallInst *OrigInst = VAStartInstrumentationList[i]; 4032 IRBuilder<> IRB(OrigInst->getNextNode()); 4033 Value *VAListTag = OrigInst->getArgOperand(0); 4034 4035 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4036 Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr( 4037 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4038 ConstantInt::get(MS.IntptrTy, 16)), 4039 PointerType::get(RegSaveAreaPtrTy, 0)); 4040 Value *RegSaveAreaPtr = 4041 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4042 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4043 const Align Alignment = Align(16); 4044 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4045 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4046 Alignment, /*isStore*/ true); 4047 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4048 AMD64FpEndOffset); 4049 if (MS.TrackOrigins) 4050 IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy, 4051 Alignment, AMD64FpEndOffset); 4052 Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4053 Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr( 4054 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4055 ConstantInt::get(MS.IntptrTy, 8)), 4056 PointerType::get(OverflowArgAreaPtrTy, 0)); 4057 Value *OverflowArgAreaPtr = 4058 IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr); 4059 Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr; 4060 std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) = 4061 MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(), 4062 Alignment, /*isStore*/ true); 4063 Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy, 4064 AMD64FpEndOffset); 4065 IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment, 4066 VAArgOverflowSize); 4067 if (MS.TrackOrigins) { 4068 SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy, 4069 AMD64FpEndOffset); 4070 IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment, 4071 VAArgOverflowSize); 4072 } 4073 } 4074 } 4075 }; 4076 4077 /// MIPS64-specific implementation of VarArgHelper. 
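/// The MIPS n64 calling convention passes the first eight arguments in
/// registers and the rest on the stack, and its va_list is a plain pointer,
/// so a single flat TLS region plus a total size is enough bookkeeping here
/// (in contrast to the split GP/FP areas of the AMD64 helper above).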
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments with size < 8 to match the
        // placement of bits on a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset,
                                       ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here we use VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating
    // a new class member; it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
4124 if (ArgOffset + ArgSize > kParamTLSSize) 4125 return nullptr; 4126 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4127 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4128 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4129 "_msarg"); 4130 } 4131 4132 void visitVAStartInst(VAStartInst &I) override { 4133 IRBuilder<> IRB(&I); 4134 VAStartInstrumentationList.push_back(&I); 4135 Value *VAListTag = I.getArgOperand(0); 4136 Value *ShadowPtr, *OriginPtr; 4137 const Align Alignment = Align(8); 4138 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4139 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4140 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4141 /* size */ 8, Alignment, false); 4142 } 4143 4144 void visitVACopyInst(VACopyInst &I) override { 4145 IRBuilder<> IRB(&I); 4146 VAStartInstrumentationList.push_back(&I); 4147 Value *VAListTag = I.getArgOperand(0); 4148 Value *ShadowPtr, *OriginPtr; 4149 const Align Alignment = Align(8); 4150 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4151 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4152 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4153 /* size */ 8, Alignment, false); 4154 } 4155 4156 void finalizeInstrumentation() override { 4157 assert(!VAArgSize && !VAArgTLSCopy && 4158 "finalizeInstrumentation called twice"); 4159 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 4160 VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4161 Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), 4162 VAArgSize); 4163 4164 if (!VAStartInstrumentationList.empty()) { 4165 // If there is a va_start in this function, make a backup copy of 4166 // va_arg_tls somewhere in the function entry block. 4167 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4168 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4169 } 4170 4171 // Instrument va_start. 4172 // Copy va_list shadow from the backup copy of the TLS contents. 4173 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4174 CallInst *OrigInst = VAStartInstrumentationList[i]; 4175 IRBuilder<> IRB(OrigInst->getNextNode()); 4176 Value *VAListTag = OrigInst->getArgOperand(0); 4177 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4178 Value *RegSaveAreaPtrPtr = 4179 IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4180 PointerType::get(RegSaveAreaPtrTy, 0)); 4181 Value *RegSaveAreaPtr = 4182 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4183 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4184 const Align Alignment = Align(8); 4185 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4186 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4187 Alignment, /*isStore*/ true); 4188 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4189 CopySize); 4190 } 4191 } 4192 }; 4193 4194 /// AArch64-specific implementation of VarArgHelper. 4195 struct VarArgAArch64Helper : public VarArgHelper { 4196 static const unsigned kAArch64GrArgSize = 64; 4197 static const unsigned kAArch64VrArgSize = 128; 4198 4199 static const unsigned AArch64GrBegOffset = 0; 4200 static const unsigned AArch64GrEndOffset = kAArch64GrArgSize; 4201 // Make VR space aligned to 16 bytes. 
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which argument is named (since Clang,
  // as in the x86_64 case, lowers va_arg in the frontend and this pass only
  // sees the low level code that deals with va_list internals).
  // The first eight GR registers (x0-x7) are saved in the first 64 bytes of
  // the va_arg TLS array, followed by the first eight FP/SIMD registers
  // (v0-v7), and then the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows a fast copy
  // in finalizeInstrumentation().
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
4290 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 4291 unsigned ArgOffset, unsigned ArgSize) { 4292 // Make sure we don't overflow __msan_va_arg_tls. 4293 if (ArgOffset + ArgSize > kParamTLSSize) 4294 return nullptr; 4295 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4296 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4297 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4298 "_msarg"); 4299 } 4300 4301 void visitVAStartInst(VAStartInst &I) override { 4302 IRBuilder<> IRB(&I); 4303 VAStartInstrumentationList.push_back(&I); 4304 Value *VAListTag = I.getArgOperand(0); 4305 Value *ShadowPtr, *OriginPtr; 4306 const Align Alignment = Align(8); 4307 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4308 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4309 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4310 /* size */ 32, Alignment, false); 4311 } 4312 4313 void visitVACopyInst(VACopyInst &I) override { 4314 IRBuilder<> IRB(&I); 4315 VAStartInstrumentationList.push_back(&I); 4316 Value *VAListTag = I.getArgOperand(0); 4317 Value *ShadowPtr, *OriginPtr; 4318 const Align Alignment = Align(8); 4319 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4320 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4321 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4322 /* size */ 32, Alignment, false); 4323 } 4324 4325 // Retrieve a va_list field of 'void*' size. 4326 Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) { 4327 Value *SaveAreaPtrPtr = 4328 IRB.CreateIntToPtr( 4329 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4330 ConstantInt::get(MS.IntptrTy, offset)), 4331 Type::getInt64PtrTy(*MS.C)); 4332 return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr); 4333 } 4334 4335 // Retrieve a va_list field of 'int' size. 4336 Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) { 4337 Value *SaveAreaPtr = 4338 IRB.CreateIntToPtr( 4339 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4340 ConstantInt::get(MS.IntptrTy, offset)), 4341 Type::getInt32PtrTy(*MS.C)); 4342 Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr); 4343 return IRB.CreateSExt(SaveArea32, MS.IntptrTy); 4344 } 4345 4346 void finalizeInstrumentation() override { 4347 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4348 "finalizeInstrumentation called twice"); 4349 if (!VAStartInstrumentationList.empty()) { 4350 // If there is a va_start in this function, make a backup copy of 4351 // va_arg_tls somewhere in the function entry block. 4352 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 4353 VAArgOverflowSize = 4354 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4355 Value *CopySize = 4356 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset), 4357 VAArgOverflowSize); 4358 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4359 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4360 } 4361 4362 Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize); 4363 Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize); 4364 4365 // Instrument va_start, copy va_list shadow from the backup copy of 4366 // the TLS contents. 
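    // For reference, the AAPCS64 va_list layout implied by the field offsets
    // used below is:
    //   struct va_list { void *__stack; void *__gr_top; void *__vr_top;
    //                    int __gr_offs; int __vr_offs; };
    // i.e. __stack at offset 0, __gr_top at 8, __vr_top at 16, __gr_offs at
    // 24 and __vr_offs at 28, matching the getVAField{64,32}() calls below.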
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general-purpose registers
      // x0-x7 and another for the 128-bit FP/SIMD registers v0-v7).
      // We then need to propagate the shadow arguments for both regions,
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // Shadow for the remaining arguments is saved in the 'va::stack' area.
      // One caveat: only the unnamed arguments need to be propagated, but
      // the call site instrumentation saved shadow for all the arguments.
      // So, to copy the shadow values from the va_arg TLS array, we need to
      // adjust the offset for both the GR and VR fields based on the
      // __{gr,vr}_offs value (since those are set based on the incoming
      // named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // We do not know how many named arguments were used, and at the call
      // site shadow for all the arguments was saved. Since __gr_offs is
      // defined as '0 - ((8 - named_gr) * 8)', the idea is to propagate only
      // the variadic arguments' shadow by skipping the bytes of shadow that
      // correspond to named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, Align(8), GrSrcPtr, Align(8),
                       GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, Align(8), VrSrcPtr, Align(8),
                       VrCopySize);

      // And finally for the remaining arguments.
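      // (These overflowed to the 'va::stack' area; their shadow lives at
      // offset AArch64VAEndOffset (192 with the constants above) in the
      // TLS copy.)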
4440 Value *StackSaveAreaShadowPtr = 4441 MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(), 4442 Align(16), /*isStore*/ true) 4443 .first; 4444 4445 Value *StackSrcPtr = 4446 IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy, 4447 IRB.getInt32(AArch64VAEndOffset)); 4448 4449 IRB.CreateMemCpy(StackSaveAreaShadowPtr, Align(16), StackSrcPtr, 4450 Align(16), VAArgOverflowSize); 4451 } 4452 } 4453 }; 4454 4455 /// PowerPC64-specific implementation of VarArgHelper. 4456 struct VarArgPowerPC64Helper : public VarArgHelper { 4457 Function &F; 4458 MemorySanitizer &MS; 4459 MemorySanitizerVisitor &MSV; 4460 Value *VAArgTLSCopy = nullptr; 4461 Value *VAArgSize = nullptr; 4462 4463 SmallVector<CallInst*, 16> VAStartInstrumentationList; 4464 4465 VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS, 4466 MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {} 4467 4468 void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override { 4469 // For PowerPC, we need to deal with alignment of stack arguments - 4470 // they are mostly aligned to 8 bytes, but vectors and i128 arrays 4471 // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes, 4472 // and QPX vectors are aligned to 32 bytes. For that reason, we 4473 // compute current offset from stack pointer (which is always properly 4474 // aligned), and offset for the first vararg, then subtract them. 4475 unsigned VAArgBase; 4476 Triple TargetTriple(F.getParent()->getTargetTriple()); 4477 // Parameter save area starts at 48 bytes from frame pointer for ABIv1, 4478 // and 32 bytes for ABIv2. This is usually determined by target 4479 // endianness, but in theory could be overridden by function attribute. 4480 // For simplicity, we ignore it here (it'd only matter for QPX vectors). 4481 if (TargetTriple.getArch() == Triple::ppc64) 4482 VAArgBase = 48; 4483 else 4484 VAArgBase = 32; 4485 unsigned VAArgOffset = VAArgBase; 4486 const DataLayout &DL = F.getParent()->getDataLayout(); 4487 for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end(); 4488 ArgIt != End; ++ArgIt) { 4489 Value *A = *ArgIt; 4490 unsigned ArgNo = CS.getArgumentNo(ArgIt); 4491 bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams(); 4492 bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal); 4493 if (IsByVal) { 4494 assert(A->getType()->isPointerTy()); 4495 Type *RealTy = A->getType()->getPointerElementType(); 4496 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 4497 uint64_t ArgAlign = CS.getParamAlignment(ArgNo); 4498 if (ArgAlign < 8) 4499 ArgAlign = 8; 4500 VAArgOffset = alignTo(VAArgOffset, ArgAlign); 4501 if (!IsFixed) { 4502 Value *Base = getShadowPtrForVAArgument( 4503 RealTy, IRB, VAArgOffset - VAArgBase, ArgSize); 4504 if (Base) { 4505 Value *AShadowPtr, *AOriginPtr; 4506 std::tie(AShadowPtr, AOriginPtr) = 4507 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), 4508 kShadowTLSAlignment, /*isStore*/ false); 4509 4510 IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr, 4511 kShadowTLSAlignment, ArgSize); 4512 } 4513 } 4514 VAArgOffset += alignTo(ArgSize, 8); 4515 } else { 4516 Value *Base; 4517 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 4518 uint64_t ArgAlign = 8; 4519 if (A->getType()->isArrayTy()) { 4520 // Arrays are aligned to element size, except for long double 4521 // arrays, which are aligned to 8 bytes. 
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments with size < 8 to match the
          // placement of bits on big-endian systems.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base,
                                   kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS here to avoid creating a
    // new class member, i.e. it holds the total size of all varargs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
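      // (The backup is needed because any calls this function makes before
      // reaching va_start write their own argument shadow into the same
      // __msan_va_arg_tls slot, clobbering the incoming varargs' shadow.)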
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// SystemZ-specific implementation of VarArgHelper.
struct VarArgSystemZHelper : public VarArgHelper {
  static const unsigned SystemZGpOffset = 16;
  static const unsigned SystemZGpEndOffset = 56;
  static const unsigned SystemZFpOffset = 128;
  static const unsigned SystemZFpEndOffset = 160;
  static const unsigned SystemZMaxVrArgs = 8;
  static const unsigned SystemZRegSaveAreaSize = 160;
  static const unsigned SystemZOverflowOffset = 160;
  static const unsigned SystemZVAListTagSize = 32;
  static const unsigned SystemZOverflowArgAreaPtrOffset = 16;
  static const unsigned SystemZRegSaveAreaPtrOffset = 24;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum class ArgKind {
    GeneralPurpose,
    FloatingPoint,
    Vector,
    Memory,
    Indirect,
  };

  enum class ShadowExtension { None, Zero, Sign };

  VarArgSystemZHelper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Type *T, bool IsSoftFloatABI) {
    // T is a SystemZABIInfo::classifyArgumentType() output, and there are
    // only a few possibilities of what it can be. In particular, enums, single
    // element structs and large types have already been taken care of.

    // Some i128 and fp128 arguments are converted to pointers only in the
    // back end.
    if (T->isIntegerTy(128) || T->isFP128Ty())
      return ArgKind::Indirect;
    if (T->isFloatingPointTy())
      return IsSoftFloatABI ? ArgKind::GeneralPurpose : ArgKind::FloatingPoint;
    if (T->isIntegerTy() || T->isPointerTy())
      return ArgKind::GeneralPurpose;
    if (T->isVectorTy())
      return ArgKind::Vector;
    return ArgKind::Memory;
  }

  ShadowExtension getShadowExtension(const CallSite &CS, unsigned ArgNo) {
    // ABI says: "One of the simple integer types no more than 64 bits wide.
    // ... If such an argument is shorter than 64 bits, replace it by a full
    // 64-bit integer representing the same number, using sign or zero
    // extension". Shadow for an integer argument has the same type as the
    // argument itself, so it can be sign or zero extended as well.
    bool ZExt = CS.paramHasAttr(ArgNo, Attribute::ZExt);
    bool SExt = CS.paramHasAttr(ArgNo, Attribute::SExt);
    if (ZExt) {
      assert(!SExt);
      return ShadowExtension::Zero;
    }
    if (SExt) {
      assert(!ZExt);
      return ShadowExtension::Sign;
    }
    return ShadowExtension::None;
  }

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    bool IsSoftFloatABI = CS.getCalledFunction()
                              ->getFnAttribute("use-soft-float")
                              .getValueAsString() == "true";
    unsigned GpOffset = SystemZGpOffset;
    unsigned FpOffset = SystemZFpOffset;
    unsigned VrIndex = 0;
    unsigned OverflowOffset = SystemZOverflowOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      // SystemZABIInfo does not produce ByVal parameters.
      assert(!CS.paramHasAttr(ArgNo, Attribute::ByVal));
      Type *T = A->getType();
      ArgKind AK = classifyArgument(T, IsSoftFloatABI);
      if (AK == ArgKind::Indirect) {
        T = PointerType::get(T, 0);
        AK = ArgKind::GeneralPurpose;
      }
      if (AK == ArgKind::GeneralPurpose && GpOffset >= SystemZGpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::FloatingPoint && FpOffset >= SystemZFpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::Vector && (VrIndex >= SystemZMaxVrArgs || !IsFixed))
        AK = ArgKind::Memory;
      Value *ShadowBase = nullptr;
      Value *OriginBase = nullptr;
      ShadowExtension SE = ShadowExtension::None;
      switch (AK) {
      case ArgKind::GeneralPurpose: {
        // Always keep track of GpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (GpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            SE = getShadowExtension(CS, ArgNo);
            uint64_t GapSize = 0;
            if (SE == ShadowExtension::None) {
              uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
              assert(ArgAllocSize <= ArgSize);
              GapSize = ArgSize - ArgAllocSize;
            }
            ShadowBase = getShadowAddrForVAArgument(IRB, GpOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, GpOffset + GapSize);
          }
          GpOffset += ArgSize;
        } else {
          GpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::FloatingPoint: {
        // Always keep track of FpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (FpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            // PoP says: "A short floating-point datum requires only the
            // left-most 32 bit positions of a floating-point register".
            // Therefore, in contrast to AK_GeneralPurpose and AK_Memory,
            // don't extend shadow and don't mind the gap.
            ShadowBase = getShadowAddrForVAArgument(IRB, FpOffset);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, FpOffset);
          }
          FpOffset += ArgSize;
        } else {
          FpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::Vector: {
        // Keep track of VrIndex. No need to store shadow, since vector varargs
        // go through AK_Memory.
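        // (Only fixed vector arguments can reach this point: the
        // classification above reroutes non-fixed vector arguments to
        // ArgKind::Memory, which is what the assert below relies on.)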
        assert(IsFixed);
        VrIndex++;
        break;
      }
      case ArgKind::Memory: {
        // Keep track of OverflowOffset and store shadow only for varargs.
        // Ignore fixed args, since we need to copy only the vararg portion of
        // the overflow area shadow.
        if (!IsFixed) {
          uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
          uint64_t ArgSize = alignTo(ArgAllocSize, 8);
          if (OverflowOffset + ArgSize <= kParamTLSSize) {
            SE = getShadowExtension(CS, ArgNo);
            uint64_t GapSize =
                SE == ShadowExtension::None ? ArgSize - ArgAllocSize : 0;
            ShadowBase =
                getShadowAddrForVAArgument(IRB, OverflowOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(IRB, OverflowOffset + GapSize);
            OverflowOffset += ArgSize;
          } else {
            OverflowOffset = kParamTLSSize;
          }
        }
        break;
      }
      case ArgKind::Indirect:
        llvm_unreachable("Indirect must be converted to GeneralPurpose");
      }
      if (ShadowBase == nullptr)
        continue;
      Value *Shadow = MSV.getShadow(A);
      if (SE != ShadowExtension::None)
        Shadow = MSV.CreateShadowCast(IRB, Shadow, IRB.getInt64Ty(),
                                      /*Signed*/ SE == ShadowExtension::Sign);
      ShadowBase = IRB.CreateIntToPtr(
          ShadowBase, PointerType::get(Shadow->getType(), 0), "_msarg_va_s");
      IRB.CreateStore(Shadow, ShadowBase);
      if (MS.TrackOrigins) {
        Value *Origin = MSV.getOrigin(A);
        unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
        MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                        kMinOriginAlignment);
      }
    }
    Constant *OverflowSize = ConstantInt::get(
        IRB.getInt64Ty(), OverflowOffset - SystemZOverflowOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  Value *getShadowAddrForVAArgument(IRBuilder<> &IRB, unsigned ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    return IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  }

  Value *getOriginPtrForVAArgument(IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     SystemZVAListTagSize, Alignment, false);
  }

  void visitVAStartInst(VAStartInst &I) override {
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override { unpoisonVAListTagForInst(I); }
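
  // The two helpers below rely on the s390x __va_list_tag layout: the
  // overflow_arg_area pointer lives at byte offset 16 and the reg_save_area
  // pointer at byte offset 24 (SystemZOverflowArgAreaPtrOffset and
  // SystemZRegSaveAreaPtrOffset above). Each helper loads the respective
  // pointer and copies the matching shadow (and origin) fragment over the
  // area it points to.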
  void copyRegSaveArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZRegSaveAreaPtrOffset)),
        PointerType::get(RegSaveAreaPtrTy, 0));
    Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
    Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
        MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    // TODO(iii): copy only fragments filled by visitCallSite()
    IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                     SystemZRegSaveAreaSize);
    if (MS.TrackOrigins)
      IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                       Alignment, SystemZRegSaveAreaSize);
  }

  void copyOverflowArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZOverflowArgAreaPtrOffset)),
        PointerType::get(OverflowArgAreaPtrTy, 0));
    Value *OverflowArgAreaPtr =
        IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
    Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
        MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                               Alignment, /*isStore*/ true);
    Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                           SystemZOverflowOffset);
    IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                     VAArgOverflowSize);
    if (MS.TrackOrigins) {
      SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                      SystemZOverflowOffset);
      IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
    }
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, SystemZOverflowOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy =
            IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t VaStartNo = 0, VaStartNum = VAStartInstrumentationList.size();
         VaStartNo < VaStartNum; VaStartNo++) {
      CallInst *OrigInst = VAStartInstrumentationList[VaStartNo];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      copyRegSaveArea(IRB, VAListTag);
      copyOverflowArea(IRB, VAListTag);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is only implemented on AMD64, MIPS64, AArch64, PowerPC64,
  // and SystemZ. Other platforms fall back to the no-op helper, where false
  // positives are possible.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::systemz)
    return new VarArgSystemZHelper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone)
      .addAttribute(Attribute::WriteOnly)
      .addAttribute(Attribute::ArgMemOnly)
      .addAttribute(Attribute::Speculatable);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}