//===- MemorySanitizer.cpp - detector of uninitialized reads -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
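///
/// For example, shadow propagation for integer arithmetic is (roughly) a
/// bitwise OR of the argument shadows. A sketch of the instrumented IR for
/// an addition might look like this (illustrative only; value names are
/// invented and the pass chooses the exact propagation per instruction):
///
///   %c  = add i32 %a, %b    ; original instruction
///   %sc = or i32 %sa, %sb   ; %sa, %sb are the shadows of %a, %b
///   ; %sc becomes the shadow of %c; no check is emitted for the add itself.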
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1 byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store to two disjoint locations can not be done without severe slowdown.
///
/// Therefore, we implement an approximation that errs on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
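///
/// A sketch of the resulting ordering (illustrative pseudo-code, not the IR
/// the pass emits):
///
///   // Instrumented atomic store of V to P:
///   store_shadow(shadow_of(P), 0);    // clean shadow first ...
///   atomic_store_release(P, V);       // ... then the application value.
///
///   // Instrumented atomic load from P:
///   V = atomic_load_acquire(P);       // application value first ...
///   S = load_shadow(shadow_of(P));    // ... then its shadow.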
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It is sometimes possible to
/// figure out which arguments are meant to point to inputs and outputs, but
/// the actual semantics are only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach, generating calls to
///   __msan_instrument_asm_store(ptr, size),
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
/// - KMSAN always tracks the origins and implies msan-keep-going=true;
/// - KMSAN allocates shadow and origin memory for each page separately, so
///   there are no explicit accesses to shadow and origin in the
///   instrumentation.
///   Shadow and origin values for a particular X-byte memory location
///   (X=1,2,4,8) are accessed through pointers obtained via the
///     __msan_metadata_ptr_for_load_X(ptr)
///     __msan_metadata_ptr_for_store_X(ptr)
///   functions. The corresponding functions check that the X-byte accesses
///   are possible and return the pointers to shadow and origin memory.
///   Arbitrary sized accesses are handled with:
///     __msan_metadata_ptr_for_load_n(ptr, size)
///     __msan_metadata_ptr_for_store_n(ptr, size);
/// - TLS variables are stored in a single per-task struct. A call to a
///   function __msan_get_context_state() returning a pointer to that struct
///   is inserted into every instrumented function before the entry block;
/// - __msan_warning() takes a 32-bit origin parameter;
/// - local variables are poisoned with __msan_poison_alloca() upon function
///   entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///   function;
/// - the pass doesn't declare any global variables or add global constructors
///   to the translation unit.
///
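/// As an illustration (a sketch, not IR the pass emits verbatim), a 4-byte
/// KMSAN load of `p` obtains its metadata roughly as:
///
///   %pair   = call { i8*, i32* } @__msan_metadata_ptr_for_load_4(i8* %p)
///   %shadow = extractvalue { i8*, i32* } %pair, 0
///   %origin = extractvalue { i8*, i32* } %pair, 1
///   ; the shadow and origin values are then accessed through these pointers.
///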
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));

// This flag controls whether we check the shadow of the address operand of a
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool>
    ClCheckConstantShadow("msan-check-constant-shadow",
                          cl::desc("Insert checks for constant shadow values"),
                          cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<unsigned long long>
    ClAndMask("msan-and-mask", cl::desc("Define custom MSan AndMask"),
              cl::Hidden, cl::init(0));

static cl::opt<unsigned long long>
    ClXorMask("msan-xor-mask", cl::desc("Define custom MSan XorMask"),
              cl::Hidden, cl::init(0));

static cl::opt<unsigned long long>
    ClShadowBase("msan-shadow-base", cl::desc("Define custom MSan ShadowBase"),
                 cl::Hidden, cl::init(0));

static cl::opt<unsigned long long>
    ClOriginBase("msan-origin-base", cl::desc("Define custom MSan OriginBase"),
                 cl::Hidden, cl::init(0));

static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};
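
// For example, with the default x86_64 Linux parameters below (AndMask and
// ShadowBase unused, XorMask 0x500000000000, OriginBase 0x100000000000):
//   Offset = Addr ^ 0x500000000000
//   Shadow = Offset
//   Origin = 0x100000000000 + Offset
// so application address 0x700000001000 maps to shadow 0x200000001000 and
// origin 0x300000001000 (a worked example, not code).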

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options) {
    this->CompileKernel =
        ClEnableKmsan.getNumOccurrences() > 0 ? ClEnableKmsan : Options.Kernel;
    if (ClTrackOrigins.getNumOccurrences() > 0)
      this->TrackOrigins = ClTrackOrigins;
    else
      this->TrackOrigins = this->CompileKernel ? 2 : Options.TrackOrigins;
    this->Recover = ClKeepGoing.getNumOccurrences() > 0
                        ? ClKeepGoing
                        : (this->CompileKernel | Options.Recover);
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and the per-task
  // state in KMSAN.
  // In userspace these point to thread-local globals. In the kernel they
  // point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {}
  StringRef getPassName() const override { return "MemorySanitizerLegacyPass"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI());
  }

  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

} // end anonymous namespace

PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}
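
// Usage note (an assumption for illustration, not part of this file's API):
// with the new pass manager the pass is typically invoked by the name "msan",
// e.g.
//   opt -passes=msan -msan-track-origins=1 in.ll -S -o out.ll
// while the legacy pass above registers under the same "msan" name via
// INITIALIZE_PASS_BEGIN/END.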

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. Runtime uses the first 4 bytes of the string to store the
/// frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
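
// For reference, MsanContextStateTy above must mirror the runtime-side
// per-task struct; a hedged C sketch (field names invented here, the
// authoritative definition lives in the KMSAN runtime):
//
//   struct kmsan_context_state {
//     uint64_t param_tls[800 / 8];
//     uint64_t retval_tls[800 / 8];
//     uint64_t va_arg_tls[800 / 8];
//     uint64_t va_arg_origin_tls[800 / 8];
//     uint64_t va_arg_overflow_size;
//     uint32_t param_origin_tls[800 / 4];
//     uint32_t retval_origin_tls;
//     uint32_t origin_tls;
//   };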
"__msan_warning" 729 : "__msan_warning_noreturn"; 730 WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy()); 731 732 // Create the global TLS variables. 733 RetvalTLS = 734 getOrInsertGlobal(M, "__msan_retval_tls", 735 ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8)); 736 737 RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy); 738 739 ParamTLS = 740 getOrInsertGlobal(M, "__msan_param_tls", 741 ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8)); 742 743 ParamOriginTLS = 744 getOrInsertGlobal(M, "__msan_param_origin_tls", 745 ArrayType::get(OriginTy, kParamTLSSize / 4)); 746 747 VAArgTLS = 748 getOrInsertGlobal(M, "__msan_va_arg_tls", 749 ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8)); 750 751 VAArgOriginTLS = 752 getOrInsertGlobal(M, "__msan_va_arg_origin_tls", 753 ArrayType::get(OriginTy, kParamTLSSize / 4)); 754 755 VAArgOverflowSizeTLS = 756 getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty()); 757 OriginTLS = getOrInsertGlobal(M, "__msan_origin_tls", IRB.getInt32Ty()); 758 759 for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes; 760 AccessSizeIndex++) { 761 unsigned AccessSize = 1 << AccessSizeIndex; 762 std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize); 763 MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction( 764 FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), 765 IRB.getInt32Ty()); 766 767 FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize); 768 MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction( 769 FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), 770 IRB.getInt8PtrTy(), IRB.getInt32Ty()); 771 } 772 773 MsanSetAllocaOrigin4Fn = M.getOrInsertFunction( 774 "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy, 775 IRB.getInt8PtrTy(), IntptrTy); 776 MsanPoisonStackFn = 777 M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(), 778 IRB.getInt8PtrTy(), IntptrTy); 779 } 780 781 /// Insert extern declaration of runtime-provided functions and globals. 782 void MemorySanitizer::initializeCallbacks(Module &M) { 783 // Only do this once. 784 if (CallbacksInitialized) 785 return; 786 787 IRBuilder<> IRB(*C); 788 // Initialize callbacks that are common for kernel and userspace 789 // instrumentation. 790 MsanChainOriginFn = M.getOrInsertFunction( 791 "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty()); 792 MemmoveFn = M.getOrInsertFunction( 793 "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), 794 IRB.getInt8PtrTy(), IntptrTy); 795 MemcpyFn = M.getOrInsertFunction( 796 "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), 797 IntptrTy); 798 MemsetFn = M.getOrInsertFunction( 799 "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt32Ty(), 800 IntptrTy); 801 // We insert an empty inline asm after __msan_report* to avoid callback merge. 802 EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false), 803 StringRef(""), StringRef(""), 804 /*hasSideEffects=*/true); 805 806 MsanInstrumentAsmStoreFn = 807 M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(), 808 PointerType::get(IRB.getInt8Ty(), 0), IntptrTy); 809 810 if (CompileKernel) { 811 createKernelApi(M); 812 } else { 813 createUserspaceApi(M); 814 } 815 CallbacksInitialized = true; 816 } 817 818 FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore, 819 int size) { 820 FunctionCallee *Fns = 821 isStore ? 

/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);
  // We insert an empty inline asm after __msan_report* to avoid callback
  // merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    std::tie(MsanCtorFunction, std::ignore) =
        getOrCreateSanitizerCtorAndInitFunctions(
            M, kMsanModuleCtorName, kMsanInitName,
            /*InitArgTypes=*/{},
            /*InitArgs=*/{},
            // This callback is invoked when the functions are created the
            // first time. Hook them into the global ctors list in that case:
            [&](Function *Ctor, FunctionCallee) {
              if (!ClWithComdat) {
                appendToGlobalCtors(M, Ctor, 0);
                return;
              }
              Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
              Ctor->setComdat(MsanCtorComdat);
              appendToGlobalCtors(M, Ctor, 0, Ctor);
            });

    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
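
// Worked values for TypeSizeToSizeIndex (TypeSize is in bits):
//   8 -> 0, 16 -> 1, 32 -> 2, 64 -> 3, 128 -> 4.
// Indices 0..3 select the size-specialized __msan_maybe_* callbacks; wider
// shadows fall back to inline checks, since kNumberOfAccessSizes is 4.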

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions
    // that must always return fully initialized values. For now, we hardcode
    // "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
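
  // For example (a worked value, assuming 64-bit pointers): origin 0x00000042
  // becomes 0x0000004200000042 after originToIntptr, so one intptr-sized
  // store paints two adjacent 4-byte origin slots at once.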

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, unsigned Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrAlignment = DL.getABITypeAlignment(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    unsigned CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, unsigned Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
      if (ConstantShadow) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
        FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      unsigned Alignment = SI->getAlignment();
      unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }
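
  // A sketch of what materializeStores produces for a simple non-atomic
  // `store i32 %v, i32* %p` under the userspace mapping (illustrative IR
  // only; value names invented):
  //
  //   %sp = inttoptr i64 <shadow address of %p> to i32*
  //   store i32 %v_shadow, i32* %sp   ; inserted shadow store
  //   store i32 %v, i32* %p           ; original store stays in place
  //   ; if origins are tracked and the shadow may be non-zero, the origin of
  //   ; %v is also written via storeOrigin() above.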

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.CompileKernel) {
      IRB.CreateCall(MS.WarningFn, Origin);
    } else {
      if (MS.TrackOrigins) {
        IRB.CreateStore(Origin, MS.OriginTLS);
      }
      IRB.CreateCall(MS.WarningFn, {});
    }
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
    if (ConstantShadow) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }
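
  // The inline (non-callback) branch above produces IR of roughly this shape
  // (a sketch; block labels invented):
  //
  //   %_mscmp = icmp ne i64 %converted_shadow, 0
  //   br i1 %_mscmp, label %warn, label %cont, !prof !cold_weights
  // warn:
  //   call void @__msan_warning_noreturn()
  //   unreachable                       ; only when recovery is disabled
  // cont:
  //   ; the instruction being checked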

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
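
  // Examples of this mapping: i32 -> i32, float -> i32 (via the trailing
  // sized-type case), <4 x float> -> <4 x i32>, [2 x i64] -> [2 x i64],
  // { i8, i32 } -> { i8, i32 }.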

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *> getShadowOriginPtrUserspace(Value *Addr,
                                                          IRBuilder<> &IRB,
                                                          Type *ShadowTy,
                                                          unsigned Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong, ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr =
          IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *>
  getShadowOriginPtrKernel(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                           unsigned Alignment, bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 unsigned Alignment,
                                                 bool isStore) {
    std::pair<Value *, Value *> ret;
    if (MS.CompileKernel)
      ret = getShadowOriginPtrKernel(Addr, IRB, ShadowTy, Alignment, isStore);
    else
      ret = getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
    return ret;
  }

  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS + ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB, int ArgOffset) {
    if (!MS.TrackOrigins)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0), "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }

  /// Set SV to be the shadow value for V.
  void setShadow(Value *V, Value *SV) {
    assert(!ShadowMap.count(V) && "Values may only have one shadow");
    ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
  }

  /// Set Origin to be the origin value for V.
  void setOrigin(Value *V, Value *Origin) {
    if (!MS.TrackOrigins)
      return;
    assert(!OriginMap.count(V) && "Values may only have one origin");
    LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
    OriginMap[V] = Origin;
  }

  Constant *getCleanShadow(Type *OrigTy) {
    Type *ShadowTy = getShadowTy(OrigTy);
    if (!ShadowTy)
      return nullptr;
    return Constant::getNullValue(ShadowTy);
  }

  /// Create a clean shadow value for a given value.
  ///
  /// Clean shadow (all zeroes) means all bits of the value are defined
  /// (initialized).
  Constant *getCleanShadow(Value *V) { return getCleanShadow(V->getType()); }

  /// Create a dirty shadow of a given shadow type.
  Constant *getPoisonedShadow(Type *ShadowTy) {
    assert(ShadowTy);
    if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
      return Constant::getAllOnesValue(ShadowTy);
    if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                      getPoisonedShadow(AT->getElementType()));
      return ConstantArray::get(AT, Vals);
    }
    if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
      return ConstantStruct::get(ST, Vals);
    }
    llvm_unreachable("Unexpected shadow type");
  }
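
  // For instance, getPoisonedShadow(i32) is `i32 -1` (all bits set), and for
  // { i8, [2 x i16] } it is the all-ones struct built recursively above.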
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
                : DL.getTypeAllocSize(FArg.getType());
        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          if (FArg.hasByValAttr()) {
            // ByVal pointer itself has clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out maximal valid memcpy alignment.
            unsigned ArgAlign = FArg.getParamAlignment();
            if (ArgAlign == 0) {
              Type *EltType = A->getType()->getPointerElementType();
              ArgAlign = DL.getABITypeAlignment(EltType);
            }
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            // TODO(glider): need to copy origins.
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base,
                                                      kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << " ARG: " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }
        }
        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
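    // (This includes constants and global values, which MSan considers
    // fully initialized.)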
    return getCleanShadow(V);
  }

  /// Get the shadow for i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }

  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will later be instrumented with a check that will print a
  /// UMR warning at runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will later be instrumented with a check that will print a
  /// UMR warning at runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }

  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Release:
        return AtomicOrdering::Release;
      case AtomicOrdering::Acquire:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Acquire:
        return AtomicOrdering::Acquire;
      case AtomicOrdering::Release:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  // ------------------- Visitors.
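  //
  // As a rough illustration of what the visitors below emit: for application
  // code like
  //   %c = add i32 %a, %b
  // the default propagation strategy (see handleShadowOr and the Combiner
  // class below) produces approximately
  //   %_msprop = or i32 %shadow_a, %shadow_b
  // i.e. a result bit is poisoned if the corresponding bit of either input
  // shadow is poisoned. (Illustrative only; the exact IR depends on operand
  // types and on whether origin tracking is enabled.)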
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }

  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(
            &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }

  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }

  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of the cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we cannot
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }

  // Vector manipulation.
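  //
  // The shadow of a vector value is a vector of the same shape, so the
  // element-manipulation instructions below apply the same operation to the
  // shadow, reusing the application's index or mask operand. Since an
  // uninitialized index or mask would make the result unpredictable, those
  // operands are checked rather than propagated.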
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  // Casts.
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed) between
    // a musttail call and a ret, don't instrument. New instructions are not
    // allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }

  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in B affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "And" of 0 and a poisoned value results in unpoisoned value.
    //  1&1 => 1;  0&1 => 0;  p&1 => p;
    //  1&0 => 0;  0&0 => 0;  p&0 => 0;
    //  1&p => p;  0&p => 0;  p&p => p;
    //  S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
    setOriginForNaryOp(I);
  }

  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "Or" of 1 and a poisoned value results in unpoisoned value.
    //  1|1 => 1;  0|1 => 1;  p|1 => 1;
    //  1|0 => 1;  0|0 => 0;  p|0 => p;
    //  1|p => 1;  0|p => p;  p|p => p;
    //  S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr(S1S2, IRB.CreateOr(V1S2, S1V2)));
    setOriginForNaryOp(I);
  }

  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For a
  /// Nary operation, result origin is set to the origin of an argument that is
  /// not entirely initialized. If there is more than one such argument, the
  /// rightmost of them is picked. It does not matter which one is picked if all
  /// arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in 0 origin value.
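          // E.g. when combining origins for (%a + %b) where only %b may be
          // uninitialized, the select below overwrites the current origin
          // with %b's whenever %b's flattened shadow is non-zero.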
          if (!ConstOrigin || !ConstOrigin->isNullValue()) {
            Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
            Value *Cond =
                IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
            Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
          }
        }
      }
      return *this;
    }

    /// Add an application value to the mix.
    Combiner &Add(Value *V) {
      Value *OpShadow = MSV->getShadow(V);
      Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
      return Add(OpShadow, OpOrigin);
    }

    /// Set the current combined values as the given instruction's shadow
    /// and origin.
    void Done(Instruction *I) {
      if (CombineShadow) {
        assert(Shadow);
        Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
        MSV->setShadow(I, Shadow);
      }
      if (MSV->MS.TrackOrigins) {
        assert(Origin);
        MSV->setOrigin(I, Origin);
      }
    }
  };

  using ShadowAndOriginCombiner = Combiner<true>;
  using OriginCombiner = Combiner<false>;

  /// Propagate origin for arbitrary operation.
  void setOriginForNaryOp(Instruction &I) {
    if (!MS.TrackOrigins) return;
    IRBuilder<> IRB(&I);
    OriginCombiner OC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      OC.Add(OI->get());
    OC.Done(&I);
  }

  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
    assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
           "Vector of pointers is not a valid shadow type");
    return Ty->isVectorTy() ?
      Ty->getVectorNumElements() * Ty->getScalarSizeInBits() :
      Ty->getPrimitiveSizeInBits();
  }

  /// Cast between two shadow types, extending or truncating as
  /// necessary.
  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                          bool Signed = false) {
    Type *srcTy = V->getType();
    size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
    size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
    if (srcSizeInBits > 1 && dstSizeInBits == 1)
      return IRB.CreateICmpNE(V, getCleanShadow(V));

    if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
      return IRB.CreateIntCast(V, dstTy, Signed);
    if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
        dstTy->getVectorNumElements() == srcTy->getVectorNumElements())
      return IRB.CreateIntCast(V, dstTy, Signed);
    Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
    Value *V2 =
        IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
    return IRB.CreateBitCast(V2, dstTy);
    // TODO: handle struct types.
  }

  /// Cast an application value to the type of its own shadow.
  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (V->getType() == ShadowTy)
      return V;
    if (V->getType()->isPtrOrPtrVectorTy())
      return IRB.CreatePtrToInt(V, ShadowTy);
    else
      return IRB.CreateBitCast(V, ShadowTy);
  }

  /// Propagate shadow for arbitrary operation.
  void handleShadowOr(Instruction &I) {
    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      SC.Add(OI->get());
    SC.Done(&I);
  }

  // Handle multiplication by constant.
  //
  // Handle a special case of multiplication by constant that may have one or
  // more zeros in the lower bits. This makes the corresponding number of lower
  // bits of the result zero as well. We model it by shifting the other operand
  // shadow left by the required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (Ty->isVectorTy()) {
      unsigned NumElements = Ty->getVectorNumElements();
      Type *EltTy = Ty->getSequentialElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }

  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }

  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }

  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We cannot require that the
  // divisor be fully initialized, so we must propagate shadow instead.
  // See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }

  /// Instrument == and != comparisons.
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
  void handleEqualityComparison(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // A == B  <==>  (C = A^B) == 0
    // A != B  <==>  (C = A^B) != 0
    // Sc = Sa | Sb
    Value *C = IRB.CreateXor(A, B);
    Value *Sc = IRB.CreateOr(Sa, Sb);
    // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
    // Result is defined if one of the following is true:
    //   * there is a defined 1 bit in C
    //   * C is fully defined
    // Si = !(C & ~Sc) && Sc
    Value *Zero = Constant::getNullValue(Sc->getType());
    Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
    Value *Si =
        IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                      IRB.CreateICmpEQ(
                          IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
    Si->setName("_msprop_icmp");
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }

  /// Build the lowest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Maximize the undefined shadow bit, minimize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
    } else {
      // Minimize undefined bits.
      return IRB.CreateAnd(A, IRB.CreateNot(Sa));
    }
  }

  /// Build the highest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                 bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Minimize the undefined shadow bit, maximize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
    } else {
      // Maximize undefined bits.
      return IRB.CreateOr(A, Sa);
    }
  }

  /// Instrument relational comparisons.
  ///
  /// This function does exact shadow propagation for all relational
  /// comparisons of integers, pointers and vectors of those.
  /// FIXME: output seems suboptimal when one of the operands is a constant
  void handleRelationalComparisonExact(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // Let [a0, a1] be the interval of possible values of A, taking into account
    // its undefined bits. Let [b0, b1] be the interval of possible values of B.
    // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
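    // E.g. for unsigned i4 values: if A is 1??? (shadow 0111, so A is in
    // [8, 15]) and B is a fully defined 0010 (i.e. [2, 2]), then "A ugt B"
    // holds at both interval endpoints, so the result is defined (and true)
    // despite the uninitialized low bits of A.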
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }

  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }

  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }

  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }

  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice, which is bad in case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// A similar situation exists for memcpy and memset.
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slow down small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Same as memcpy.
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }

  /// Handle vector store-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
  /// has 1 pointer argument and 1 vector argument, returns void.
  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value* Addr = I.getArgOperand(0);
    Value *Shadow = getShadow(&I, 1);
    Value *ShadowPtr, *OriginPtr;

    // We don't know the pointer alignment (could be unaligned SSE store!).
    // Have to assume the worst case.
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
    IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // FIXME: factor out common code from materializeStores
    if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
    return true;
  }

  /// Handle vector load-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // Have to assume the worst case.
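      // (Assuming alignment 1 for the shadow access is always correct; it
      // merely forgoes any stronger alignment the application pointer might
      // actually have.)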
      unsigned Alignment = 1;
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }

  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.getNumArgOperands();

    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }

  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehaviour and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.getNumArgOperands();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }

  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
        F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }

  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  // %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number of \p Out elements, and (if it has 2 arguments) copies the rest of
  // the elements from \p CopyOp.
  // In most cases conversion involves a floating-point value which may trigger
  // a hardware exception when not fully initialized. For this reason we require
  // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
  // return a fully initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    switch (I.getNumArgOperands()) {
    case 3:
      assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
      LLVM_FALLTHROUGH;
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this operation,
    // and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come from
    // ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = ResultShadow->getType()->getVectorElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }

  // Given a scalar or vector, extract lower 64 bits (or less), and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }

  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.getNumArgOperands() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                             : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledValue(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                           X86_MMXSizeInBits / EltSizeInBits);
  }

  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
      case Intrinsic::x86_sse2_packsswb_128:
      case Intrinsic::x86_sse2_packuswb_128:
        return Intrinsic::x86_sse2_packsswb_128;

      case Intrinsic::x86_sse2_packssdw_128:
      case Intrinsic::x86_sse41_packusdw:
        return Intrinsic::x86_sse2_packssdw_128;

      case Intrinsic::x86_avx2_packsswb:
      case Intrinsic::x86_avx2_packuswb:
        return Intrinsic::x86_avx2_packsswb;

      case Intrinsic::x86_avx2_packssdw:
      case Intrinsic::x86_avx2_packusdw:
        return Intrinsic::x86_avx2_packssdw;

      case Intrinsic::x86_mmx_packsswb:
      case Intrinsic::x86_mmx_packuswb:
        return Intrinsic::x86_mmx_packsswb;

      case Intrinsic::x86_mmx_packssdw:
        return Intrinsic::x86_mmx_packssdw;
      default:
        llvm_unreachable("unexpected intrinsic id");
    }
  }

  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, that
  // pack elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.getNumArgOperands() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input vectors.
    // In case of x86mmx arguments, cast them to appropriate vector types and
    // back.
    Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
  // all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value* Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true)
            .first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }

  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    unsigned Alignment = 1;
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }

  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Align, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // Uninitialized mask is kind of like uninitialized address, but not as
      // scary.
      insertShadowCheck(Mask, &I);
    }

    IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask);

    if (MS.TrackOrigins) {
      auto &DL = F.getParent()->getDataLayout();
      paintOrigin(IRB, getOrigin(V), OriginPtr,
                  DL.getTypeStoreSize(Shadow->getType()),
                  std::max(Align, kMinOriginAlignment));
    }
  }

  bool handleMaskedLoad(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
    Value *Mask = I.getArgOperand(2);
    Value *PassThru = I.getArgOperand(3);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false);
      setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask,
                                         getShadow(PassThru), "_msmaskedld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      insertShadowCheck(Mask, &I);
    }

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        // Choose between PassThru's and the loaded value's origins.
        Value *MaskedPassThruShadow = IRB.CreateAnd(
            getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));

        Value *Acc = IRB.CreateExtractElement(
            MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
        for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N;
             ++i) {
          Value *More = IRB.CreateExtractElement(
              MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
          Acc = IRB.CreateOr(Acc, More);
        }

        Value *Origin = IRB.CreateSelect(
            IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
            getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr));

        setOrigin(&I, Origin);
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
    return true;
  }


  void visitIntrinsicInst(IntrinsicInst &I) {
    switch (I.getIntrinsicID()) {
    case Intrinsic::bswap:
      handleBswap(I);
      break;
    case Intrinsic::masked_store:
      handleMaskedStore(I);
      break;
    case Intrinsic::masked_load:
      handleMaskedLoad(I);
      break;
    case Intrinsic::x86_sse_stmxcsr:
      handleStmxcsr(I);
      break;
    case Intrinsic::x86_sse_ldmxcsr:
      handleLdmxcsr(I);
      break;
    case Intrinsic::x86_avx512_vcvtsd2usi64:
    case Intrinsic::x86_avx512_vcvtsd2usi32:
    case Intrinsic::x86_avx512_vcvtss2usi64:
    case Intrinsic::x86_avx512_vcvtss2usi32:
    case Intrinsic::x86_avx512_cvttss2usi64:
    case Intrinsic::x86_avx512_cvttss2usi:
    case Intrinsic::x86_avx512_cvttsd2usi64:
    case Intrinsic::x86_avx512_cvttsd2usi:
    case Intrinsic::x86_avx512_cvtusi2ss:
    case Intrinsic::x86_avx512_cvtusi642sd:
    case Intrinsic::x86_avx512_cvtusi642ss:
    case Intrinsic::x86_sse2_cvtsd2si64:
    case Intrinsic::x86_sse2_cvtsd2si:
    case Intrinsic::x86_sse2_cvtsd2ss:
    case Intrinsic::x86_sse2_cvttsd2si64:
    case Intrinsic::x86_sse2_cvttsd2si:
    case Intrinsic::x86_sse_cvtss2si64:
    case Intrinsic::x86_sse_cvtss2si:
    case Intrinsic::x86_sse_cvttss2si64:
    case Intrinsic::x86_sse_cvttss2si:
      handleVectorConvertIntrinsic(I, 1);
      break;
    case Intrinsic::x86_sse_cvtps2pi:
    case Intrinsic::x86_sse_cvttps2pi:
      handleVectorConvertIntrinsic(I, 2);
      break;

    case Intrinsic::x86_avx512_psll_w_512:
    case Intrinsic::x86_avx512_psll_d_512:
    case Intrinsic::x86_avx512_psll_q_512:
    case Intrinsic::x86_avx512_pslli_w_512:
    case Intrinsic::x86_avx512_pslli_d_512:
    case Intrinsic::x86_avx512_pslli_q_512:
    case Intrinsic::x86_avx512_psrl_w_512:
    case Intrinsic::x86_avx512_psrl_d_512:
    case Intrinsic::x86_avx512_psrl_q_512:
    case Intrinsic::x86_avx512_psra_w_512:
    case Intrinsic::x86_avx512_psra_d_512:
    case Intrinsic::x86_avx512_psra_q_512:
    case Intrinsic::x86_avx512_psrli_w_512:
    case Intrinsic::x86_avx512_psrli_d_512:
    case Intrinsic::x86_avx512_psrli_q_512:
    case Intrinsic::x86_avx512_psrai_w_512:
    case Intrinsic::x86_avx512_psrai_d_512:
    case Intrinsic::x86_avx512_psrai_q_512:
    case Intrinsic::x86_avx512_psra_q_256:
    case Intrinsic::x86_avx512_psra_q_128:
    case Intrinsic::x86_avx512_psrai_q_256:
    case Intrinsic::x86_avx512_psrai_q_128:
    case Intrinsic::x86_avx2_psll_w:
    case Intrinsic::x86_avx2_psll_d:
    case Intrinsic::x86_avx2_psll_q:
    case Intrinsic::x86_avx2_pslli_w:
    case Intrinsic::x86_avx2_pslli_d:
    case Intrinsic::x86_avx2_pslli_q:
    case Intrinsic::x86_avx2_psrl_w:
    case Intrinsic::x86_avx2_psrl_d:
    case Intrinsic::x86_avx2_psrl_q:
    case Intrinsic::x86_avx2_psra_w:
    case Intrinsic::x86_avx2_psra_d:
    case Intrinsic::x86_avx2_psrli_w:
    case Intrinsic::x86_avx2_psrli_d:
    case Intrinsic::x86_avx2_psrli_q:
    case Intrinsic::x86_avx2_psrai_w:
    case Intrinsic::x86_avx2_psrai_d:
    case Intrinsic::x86_sse2_psll_w:
    case Intrinsic::x86_sse2_psll_d:
    case Intrinsic::x86_sse2_psll_q:
    case Intrinsic::x86_sse2_pslli_w:
    case Intrinsic::x86_sse2_pslli_d:
    case Intrinsic::x86_sse2_pslli_q:
    case Intrinsic::x86_sse2_psrl_w:
    case Intrinsic::x86_sse2_psrl_d:
    case Intrinsic::x86_sse2_psrl_q:
    case Intrinsic::x86_sse2_psra_w:
    case Intrinsic::x86_sse2_psra_d:
    case Intrinsic::x86_sse2_psrli_w:
    case Intrinsic::x86_sse2_psrli_d:
    case Intrinsic::x86_sse2_psrli_q:
    case Intrinsic::x86_sse2_psrai_w:
    case Intrinsic::x86_sse2_psrai_d:
    case Intrinsic::x86_mmx_psll_w:
    case Intrinsic::x86_mmx_psll_d:
    case Intrinsic::x86_mmx_psll_q:
    case Intrinsic::x86_mmx_pslli_w:
    case Intrinsic::x86_mmx_pslli_d:
    case Intrinsic::x86_mmx_pslli_q:
    case Intrinsic::x86_mmx_psrl_w:
    case Intrinsic::x86_mmx_psrl_d:
    case Intrinsic::x86_mmx_psrl_q:
    case Intrinsic::x86_mmx_psra_w:
    case Intrinsic::x86_mmx_psra_d:
    case Intrinsic::x86_mmx_psrli_w:
    case Intrinsic::x86_mmx_psrli_d:
    case Intrinsic::x86_mmx_psrli_q:
    case Intrinsic::x86_mmx_psrai_w:
    case Intrinsic::x86_mmx_psrai_d:
      handleVectorShiftIntrinsic(I, /* Variable */ false);
      break;
    case Intrinsic::x86_avx2_psllv_d:
    case Intrinsic::x86_avx2_psllv_d_256:
    case Intrinsic::x86_avx512_psllv_d_512:
    case Intrinsic::x86_avx2_psllv_q:
    case Intrinsic::x86_avx2_psllv_q_256:
    case Intrinsic::x86_avx512_psllv_q_512:
    case Intrinsic::x86_avx2_psrlv_d:
    case Intrinsic::x86_avx2_psrlv_d_256:
    case Intrinsic::x86_avx512_psrlv_d_512:
    case Intrinsic::x86_avx2_psrlv_q:
    case Intrinsic::x86_avx2_psrlv_q_256:
    case Intrinsic::x86_avx512_psrlv_q_512:
    case Intrinsic::x86_avx2_psrav_d:
    case Intrinsic::x86_avx2_psrav_d_256:
    case Intrinsic::x86_avx512_psrav_d_512:
    case Intrinsic::x86_avx512_psrav_q_128:
3064 case Intrinsic::x86_avx512_psrav_q_256: 3065 case Intrinsic::x86_avx512_psrav_q_512: 3066 handleVectorShiftIntrinsic(I, /* Variable */ true); 3067 break; 3068 3069 case Intrinsic::x86_sse2_packsswb_128: 3070 case Intrinsic::x86_sse2_packssdw_128: 3071 case Intrinsic::x86_sse2_packuswb_128: 3072 case Intrinsic::x86_sse41_packusdw: 3073 case Intrinsic::x86_avx2_packsswb: 3074 case Intrinsic::x86_avx2_packssdw: 3075 case Intrinsic::x86_avx2_packuswb: 3076 case Intrinsic::x86_avx2_packusdw: 3077 handleVectorPackIntrinsic(I); 3078 break; 3079 3080 case Intrinsic::x86_mmx_packsswb: 3081 case Intrinsic::x86_mmx_packuswb: 3082 handleVectorPackIntrinsic(I, 16); 3083 break; 3084 3085 case Intrinsic::x86_mmx_packssdw: 3086 handleVectorPackIntrinsic(I, 32); 3087 break; 3088 3089 case Intrinsic::x86_mmx_psad_bw: 3090 case Intrinsic::x86_sse2_psad_bw: 3091 case Intrinsic::x86_avx2_psad_bw: 3092 handleVectorSadIntrinsic(I); 3093 break; 3094 3095 case Intrinsic::x86_sse2_pmadd_wd: 3096 case Intrinsic::x86_avx2_pmadd_wd: 3097 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3098 case Intrinsic::x86_avx2_pmadd_ub_sw: 3099 handleVectorPmaddIntrinsic(I); 3100 break; 3101 3102 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3103 handleVectorPmaddIntrinsic(I, 8); 3104 break; 3105 3106 case Intrinsic::x86_mmx_pmadd_wd: 3107 handleVectorPmaddIntrinsic(I, 16); 3108 break; 3109 3110 case Intrinsic::x86_sse_cmp_ss: 3111 case Intrinsic::x86_sse2_cmp_sd: 3112 case Intrinsic::x86_sse_comieq_ss: 3113 case Intrinsic::x86_sse_comilt_ss: 3114 case Intrinsic::x86_sse_comile_ss: 3115 case Intrinsic::x86_sse_comigt_ss: 3116 case Intrinsic::x86_sse_comige_ss: 3117 case Intrinsic::x86_sse_comineq_ss: 3118 case Intrinsic::x86_sse_ucomieq_ss: 3119 case Intrinsic::x86_sse_ucomilt_ss: 3120 case Intrinsic::x86_sse_ucomile_ss: 3121 case Intrinsic::x86_sse_ucomigt_ss: 3122 case Intrinsic::x86_sse_ucomige_ss: 3123 case Intrinsic::x86_sse_ucomineq_ss: 3124 case Intrinsic::x86_sse2_comieq_sd: 3125 case Intrinsic::x86_sse2_comilt_sd: 3126 case Intrinsic::x86_sse2_comile_sd: 3127 case Intrinsic::x86_sse2_comigt_sd: 3128 case Intrinsic::x86_sse2_comige_sd: 3129 case Intrinsic::x86_sse2_comineq_sd: 3130 case Intrinsic::x86_sse2_ucomieq_sd: 3131 case Intrinsic::x86_sse2_ucomilt_sd: 3132 case Intrinsic::x86_sse2_ucomile_sd: 3133 case Intrinsic::x86_sse2_ucomigt_sd: 3134 case Intrinsic::x86_sse2_ucomige_sd: 3135 case Intrinsic::x86_sse2_ucomineq_sd: 3136 handleVectorCompareScalarIntrinsic(I); 3137 break; 3138 3139 case Intrinsic::x86_sse_cmp_ps: 3140 case Intrinsic::x86_sse2_cmp_pd: 3141 // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function 3142 // generates reasonably looking IR that fails in the backend with "Do not 3143 // know how to split the result of this operator!". 3144 handleVectorComparePackedIntrinsic(I); 3145 break; 3146 3147 case Intrinsic::is_constant: 3148 // The result of llvm.is.constant() is always defined. 3149 setShadow(&I, getCleanShadow(&I)); 3150 setOrigin(&I, getCleanOrigin()); 3151 break; 3152 3153 default: 3154 if (!handleUnknownIntrinsic(I)) 3155 visitInstruction(I); 3156 break; 3157 } 3158 } 3159 3160 void visitCallSite(CallSite CS) { 3161 Instruction &I = *CS.getInstruction(); 3162 assert(!I.getMetadata("nosanitize")); 3163 assert((CS.isCall() || CS.isInvoke()) && "Unknown type of CallSite"); 3164 if (CS.isCall()) { 3165 CallInst *Call = cast<CallInst>(&I); 3166 3167 // For inline asm, do the usual thing: check argument shadow and mark all 3168 // outputs as clean. 
Note that any side effects of the inline asm that are 3169 // not immediately visible in its constraints are not handled. 3170 if (Call->isInlineAsm()) { 3171 if (ClHandleAsmConservative && MS.CompileKernel) 3172 visitAsmInstruction(I); 3173 else 3174 visitInstruction(I); 3175 return; 3176 } 3177 3178 assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere"); 3179 3180 // We are going to insert code that relies on the fact that the callee 3181 // will become a non-readonly function after it is instrumented by us. To 3182 // prevent this code from being optimized out, mark that function 3183 // non-readonly in advance. 3184 if (Function *Func = Call->getCalledFunction()) { 3185 // Clear out readonly/readnone attributes. 3186 AttrBuilder B; 3187 B.addAttribute(Attribute::ReadOnly) 3188 .addAttribute(Attribute::ReadNone); 3189 Func->removeAttributes(AttributeList::FunctionIndex, B); 3190 } 3191 3192 maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI); 3193 } 3194 IRBuilder<> IRB(&I); 3195 3196 unsigned ArgOffset = 0; 3197 LLVM_DEBUG(dbgs() << " CallSite: " << I << "\n"); 3198 for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end(); 3199 ArgIt != End; ++ArgIt) { 3200 Value *A = *ArgIt; 3201 unsigned i = ArgIt - CS.arg_begin(); 3202 if (!A->getType()->isSized()) { 3203 LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n"); 3204 continue; 3205 } 3206 unsigned Size = 0; 3207 Value *Store = nullptr; 3208 // Compute the Shadow for arg even if it is ByVal, because 3209 // in that case getShadow() will copy the actual arg shadow to 3210 // __msan_param_tls. 3211 Value *ArgShadow = getShadow(A); 3212 Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset); 3213 LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A 3214 << " Shadow: " << *ArgShadow << "\n"); 3215 bool ArgIsInitialized = false; 3216 const DataLayout &DL = F.getParent()->getDataLayout(); 3217 if (CS.paramHasAttr(i, Attribute::ByVal)) { 3218 assert(A->getType()->isPointerTy() && 3219 "ByVal argument is not a pointer!"); 3220 Size = DL.getTypeAllocSize(A->getType()->getPointerElementType()); 3221 if (ArgOffset + Size > kParamTLSSize) break; 3222 unsigned ParamAlignment = CS.getParamAlignment(i); 3223 unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment); 3224 Value *AShadowPtr = 3225 getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment, 3226 /*isStore*/ false) 3227 .first; 3228 3229 Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr, 3230 Alignment, Size); 3231 // TODO(glider): need to copy origins. 3232 } else { 3233 Size = DL.getTypeAllocSize(A->getType()); 3234 if (ArgOffset + Size > kParamTLSSize) break; 3235 Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase, 3236 kShadowTLSAlignment); 3237 Constant *Cst = dyn_cast<Constant>(ArgShadow); 3238 if (Cst && Cst->isNullValue()) ArgIsInitialized = true; 3239 } 3240 if (MS.TrackOrigins && !ArgIsInitialized) 3241 IRB.CreateStore(getOrigin(A), 3242 getOriginPtrForArgument(A, IRB, ArgOffset)); 3243 (void)Store; 3244 assert(Size != 0 && Store != nullptr); 3245 LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n"); 3246 ArgOffset += alignTo(Size, 8); 3247 } 3248 LLVM_DEBUG(dbgs() << " done with call args\n"); 3249 3250 FunctionType *FT = CS.getFunctionType(); 3251 if (FT->isVarArg()) { 3252 VAHelper->visitCallSite(CS, IRB); 3253 } 3254 3255 // Now, get the shadow for the RetVal. 3256 if (!I.getType()->isSized()) return; 3257 // Don't emit the epilogue for musttail call returns. 
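    // (For a musttail call, LLVM requires the call to be immediately followed
    // by the return, so there is no room to insert a retval shadow load here;
    // the shadow the callee left in the return-value TLS slot is exactly what
    // this function's own caller will read.)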
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow = IRBAfter.CreateAlignedLoad(
        getShadowTy(&I), getShadowPtrForRetval(&I, IRBAfter),
        kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(MS.OriginTy,
                                        getOriginPtrForRetval(IRBAfter)));
  }

  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }

  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }

  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by the run-time with an origin id at the first call to
    // __msan_set_alloca_origin.
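    // For example, an alloca named "x" in function "foo" produces the
    // description string "----x@foo".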
3346 StackDescription << "----" << I.getName() << "@" << F.getName(); 3347 return createPrivateNonConstGlobalForString(*F.getParent(), 3348 StackDescription.str()); 3349 } 3350 3351 void instrumentAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) { 3352 if (PoisonStack && ClPoisonStackWithCall) { 3353 IRB.CreateCall(MS.MsanPoisonStackFn, 3354 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len}); 3355 } else { 3356 Value *ShadowBase, *OriginBase; 3357 std::tie(ShadowBase, OriginBase) = 3358 getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true); 3359 3360 Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0); 3361 IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment()); 3362 } 3363 3364 if (PoisonStack && MS.TrackOrigins) { 3365 Value *Descr = getLocalVarDescription(I); 3366 IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn, 3367 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3368 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()), 3369 IRB.CreatePointerCast(&F, MS.IntptrTy)}); 3370 } 3371 } 3372 3373 void instrumentAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) { 3374 Value *Descr = getLocalVarDescription(I); 3375 if (PoisonStack) { 3376 IRB.CreateCall(MS.MsanPoisonAllocaFn, 3377 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3378 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())}); 3379 } else { 3380 IRB.CreateCall(MS.MsanUnpoisonAllocaFn, 3381 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len}); 3382 } 3383 } 3384 3385 void visitAllocaInst(AllocaInst &I) { 3386 setShadow(&I, getCleanShadow(&I)); 3387 setOrigin(&I, getCleanOrigin()); 3388 IRBuilder<> IRB(I.getNextNode()); 3389 const DataLayout &DL = F.getParent()->getDataLayout(); 3390 uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType()); 3391 Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize); 3392 if (I.isArrayAllocation()) 3393 Len = IRB.CreateMul(Len, I.getArraySize()); 3394 3395 if (MS.CompileKernel) 3396 instrumentAllocaKmsan(I, IRB, Len); 3397 else 3398 instrumentAllocaUserspace(I, IRB, Len); 3399 } 3400 3401 void visitSelectInst(SelectInst& I) { 3402 IRBuilder<> IRB(&I); 3403 // a = select b, c, d 3404 Value *B = I.getCondition(); 3405 Value *C = I.getTrueValue(); 3406 Value *D = I.getFalseValue(); 3407 Value *Sb = getShadow(B); 3408 Value *Sc = getShadow(C); 3409 Value *Sd = getShadow(D); 3410 3411 // Result shadow if condition shadow is 0. 3412 Value *Sa0 = IRB.CreateSelect(B, Sc, Sd); 3413 Value *Sa1; 3414 if (I.getType()->isAggregateType()) { 3415 // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do 3416 // an extra "select". This results in much more compact IR. 3417 // Sa = select Sb, poisoned, (select b, Sc, Sd) 3418 Sa1 = getPoisonedShadow(getShadowTy(I.getType())); 3419 } else { 3420 // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ] 3421 // If Sb (condition is poisoned), look for bits in c and d that are equal 3422 // and both unpoisoned. 3423 // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd. 3424 3425 // Cast arguments to shadow-compatible type. 3426 C = CreateAppToShadowCast(IRB, C); 3427 D = CreateAppToShadowCast(IRB, D); 3428 3429 // Result shadow if condition shadow is 1. 3430 Sa1 = IRB.CreateOr(IRB.CreateXor(C, D), IRB.CreateOr(Sc, Sd)); 3431 } 3432 Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select"); 3433 setShadow(&I, Sa); 3434 if (MS.TrackOrigins) { 3435 // Origins are always i32, so any vector conditions must be flattened. 3436 // FIXME: consider tracking vector origins for app vectors? 
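      // For example, a <4 x i1> condition is bitcast to i4 and compared
      // against zero, collapsing "any lane set" into a single i1.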
3437 if (B->getType()->isVectorTy()) { 3438 Type *FlatTy = getShadowTyNoVec(B->getType()); 3439 B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy), 3440 ConstantInt::getNullValue(FlatTy)); 3441 Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy), 3442 ConstantInt::getNullValue(FlatTy)); 3443 } 3444 // a = select b, c, d 3445 // Oa = Sb ? Ob : (b ? Oc : Od) 3446 setOrigin( 3447 &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()), 3448 IRB.CreateSelect(B, getOrigin(I.getTrueValue()), 3449 getOrigin(I.getFalseValue())))); 3450 } 3451 } 3452 3453 void visitLandingPadInst(LandingPadInst &I) { 3454 // Do nothing. 3455 // See https://github.com/google/sanitizers/issues/504 3456 setShadow(&I, getCleanShadow(&I)); 3457 setOrigin(&I, getCleanOrigin()); 3458 } 3459 3460 void visitCatchSwitchInst(CatchSwitchInst &I) { 3461 setShadow(&I, getCleanShadow(&I)); 3462 setOrigin(&I, getCleanOrigin()); 3463 } 3464 3465 void visitFuncletPadInst(FuncletPadInst &I) { 3466 setShadow(&I, getCleanShadow(&I)); 3467 setOrigin(&I, getCleanOrigin()); 3468 } 3469 3470 void visitGetElementPtrInst(GetElementPtrInst &I) { 3471 handleShadowOr(I); 3472 } 3473 3474 void visitExtractValueInst(ExtractValueInst &I) { 3475 IRBuilder<> IRB(&I); 3476 Value *Agg = I.getAggregateOperand(); 3477 LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n"); 3478 Value *AggShadow = getShadow(Agg); 3479 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 3480 Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices()); 3481 LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n"); 3482 setShadow(&I, ResShadow); 3483 setOriginForNaryOp(I); 3484 } 3485 3486 void visitInsertValueInst(InsertValueInst &I) { 3487 IRBuilder<> IRB(&I); 3488 LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n"); 3489 Value *AggShadow = getShadow(I.getAggregateOperand()); 3490 Value *InsShadow = getShadow(I.getInsertedValueOperand()); 3491 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n"); 3492 LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n"); 3493 Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices()); 3494 LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n"); 3495 setShadow(&I, Res); 3496 setOriginForNaryOp(I); 3497 } 3498 3499 void dumpInst(Instruction &I) { 3500 if (CallInst *CI = dyn_cast<CallInst>(&I)) { 3501 errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n"; 3502 } else { 3503 errs() << "ZZZ " << I.getOpcodeName() << "\n"; 3504 } 3505 errs() << "QQQ " << I << "\n"; 3506 } 3507 3508 void visitResumeInst(ResumeInst &I) { 3509 LLVM_DEBUG(dbgs() << "Resume: " << I << "\n"); 3510 // Nothing to do here. 3511 } 3512 3513 void visitCleanupReturnInst(CleanupReturnInst &CRI) { 3514 LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n"); 3515 // Nothing to do here. 3516 } 3517 3518 void visitCatchReturnInst(CatchReturnInst &CRI) { 3519 LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n"); 3520 // Nothing to do here. 3521 } 3522 3523 void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB, 3524 const DataLayout &DL, bool isOutput) { 3525 // For each assembly argument, we check its value for being initialized. 3526 // If the argument is a pointer, we assume it points to a single element 3527 // of the corresponding type (or to a 8-byte word, if the type is unsized). 3528 // Each such pointer is instrumented with a call to the runtime library. 3529 Type *OpType = Operand->getType(); 3530 // Check the operand value itself. 
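    // (If the operand is a pointer to an output location, the pointed-to
    // memory is additionally unpoisoned below via
    // __msan_instrument_asm_store.)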
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallInst *CI) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = CI->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      StructType *ST = dyn_cast_or_null<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallInst *CI = dyn_cast<CallInst>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CI->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CI);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CI->getNumOperands() - 1;

    // Check input arguments. Do this before unpoisoning the output arguments,
    // so that we don't overwrite uninitialized values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CI->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
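    // This is the conservative default: every sized operand is checked (so a
    // poisoned input triggers a report here), and the result is assumed to be
    // fully initialized.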
3616 if (ClDumpStrictInstructions) 3617 dumpInst(I); 3618 LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n"); 3619 for (size_t i = 0, n = I.getNumOperands(); i < n; i++) { 3620 Value *Operand = I.getOperand(i); 3621 if (Operand->getType()->isSized()) 3622 insertShadowCheck(Operand, &I); 3623 } 3624 setShadow(&I, getCleanShadow(&I)); 3625 setOrigin(&I, getCleanOrigin()); 3626 } 3627 }; 3628 3629 /// AMD64-specific implementation of VarArgHelper. 3630 struct VarArgAMD64Helper : public VarArgHelper { 3631 // An unfortunate workaround for asymmetric lowering of va_arg stuff. 3632 // See a comment in visitCallSite for more details. 3633 static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7 3634 static const unsigned AMD64FpEndOffsetSSE = 176; 3635 // If SSE is disabled, fp_offset in va_list is zero. 3636 static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset; 3637 3638 unsigned AMD64FpEndOffset; 3639 Function &F; 3640 MemorySanitizer &MS; 3641 MemorySanitizerVisitor &MSV; 3642 Value *VAArgTLSCopy = nullptr; 3643 Value *VAArgTLSOriginCopy = nullptr; 3644 Value *VAArgOverflowSize = nullptr; 3645 3646 SmallVector<CallInst*, 16> VAStartInstrumentationList; 3647 3648 enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory }; 3649 3650 VarArgAMD64Helper(Function &F, MemorySanitizer &MS, 3651 MemorySanitizerVisitor &MSV) 3652 : F(F), MS(MS), MSV(MSV) { 3653 AMD64FpEndOffset = AMD64FpEndOffsetSSE; 3654 for (const auto &Attr : F.getAttributes().getFnAttributes()) { 3655 if (Attr.isStringAttribute() && 3656 (Attr.getKindAsString() == "target-features")) { 3657 if (Attr.getValueAsString().contains("-sse")) 3658 AMD64FpEndOffset = AMD64FpEndOffsetNoSSE; 3659 break; 3660 } 3661 } 3662 } 3663 3664 ArgKind classifyArgument(Value* arg) { 3665 // A very rough approximation of X86_64 argument classification rules. 3666 Type *T = arg->getType(); 3667 if (T->isFPOrFPVectorTy() || T->isX86_MMXTy()) 3668 return AK_FloatingPoint; 3669 if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64) 3670 return AK_GeneralPurpose; 3671 if (T->isPointerTy()) 3672 return AK_GeneralPurpose; 3673 return AK_Memory; 3674 } 3675 3676 // For VarArg functions, store the argument shadow in an ABI-specific format 3677 // that corresponds to va_list layout. 3678 // We do this because Clang lowers va_arg in the frontend, and this pass 3679 // only sees the low level code that deals with va_list internals. 3680 // A much easier alternative (provided that Clang emits va_arg instructions) 3681 // would have been to associate each live instance of va_list with a copy of 3682 // MSanParamTLS, and extract shadow on va_arg() call in the argument list 3683 // order. 3684 void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override { 3685 unsigned GpOffset = 0; 3686 unsigned FpOffset = AMD64GpEndOffset; 3687 unsigned OverflowOffset = AMD64FpEndOffset; 3688 const DataLayout &DL = F.getParent()->getDataLayout(); 3689 for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end(); 3690 ArgIt != End; ++ArgIt) { 3691 Value *A = *ArgIt; 3692 unsigned ArgNo = CS.getArgumentNo(ArgIt); 3693 bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams(); 3694 bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal); 3695 if (IsByVal) { 3696 // ByVal arguments always go to the overflow area. 3697 // Fixed arguments passed through the overflow area will be stepped 3698 // over by va_start, so don't count them towards the offset. 
3699 if (IsFixed) 3700 continue; 3701 assert(A->getType()->isPointerTy()); 3702 Type *RealTy = A->getType()->getPointerElementType(); 3703 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 3704 Value *ShadowBase = getShadowPtrForVAArgument( 3705 RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8)); 3706 Value *OriginBase = nullptr; 3707 if (MS.TrackOrigins) 3708 OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset); 3709 OverflowOffset += alignTo(ArgSize, 8); 3710 if (!ShadowBase) 3711 continue; 3712 Value *ShadowPtr, *OriginPtr; 3713 std::tie(ShadowPtr, OriginPtr) = 3714 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, 3715 /*isStore*/ false); 3716 3717 IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr, 3718 kShadowTLSAlignment, ArgSize); 3719 if (MS.TrackOrigins) 3720 IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr, 3721 kShadowTLSAlignment, ArgSize); 3722 } else { 3723 ArgKind AK = classifyArgument(A); 3724 if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset) 3725 AK = AK_Memory; 3726 if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset) 3727 AK = AK_Memory; 3728 Value *ShadowBase, *OriginBase = nullptr; 3729 switch (AK) { 3730 case AK_GeneralPurpose: 3731 ShadowBase = 3732 getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8); 3733 if (MS.TrackOrigins) 3734 OriginBase = 3735 getOriginPtrForVAArgument(A->getType(), IRB, GpOffset); 3736 GpOffset += 8; 3737 break; 3738 case AK_FloatingPoint: 3739 ShadowBase = 3740 getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16); 3741 if (MS.TrackOrigins) 3742 OriginBase = 3743 getOriginPtrForVAArgument(A->getType(), IRB, FpOffset); 3744 FpOffset += 16; 3745 break; 3746 case AK_Memory: 3747 if (IsFixed) 3748 continue; 3749 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 3750 ShadowBase = 3751 getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8); 3752 if (MS.TrackOrigins) 3753 OriginBase = 3754 getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset); 3755 OverflowOffset += alignTo(ArgSize, 8); 3756 } 3757 // Take fixed arguments into account for GpOffset and FpOffset, 3758 // but don't actually store shadows for them. 3759 // TODO(glider): don't call get*PtrForVAArgument() for them. 3760 if (IsFixed) 3761 continue; 3762 if (!ShadowBase) 3763 continue; 3764 Value *Shadow = MSV.getShadow(A); 3765 IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment); 3766 if (MS.TrackOrigins) { 3767 Value *Origin = MSV.getOrigin(A); 3768 unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType()); 3769 MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize, 3770 std::max(kShadowTLSAlignment, kMinOriginAlignment)); 3771 } 3772 } 3773 } 3774 Constant *OverflowSize = 3775 ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset); 3776 IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS); 3777 } 3778 3779 /// Compute the shadow address for a given va_arg. 3780 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 3781 unsigned ArgOffset, unsigned ArgSize) { 3782 // Make sure we don't overflow __msan_va_arg_tls. 3783 if (ArgOffset + ArgSize > kParamTLSSize) 3784 return nullptr; 3785 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 3786 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 3787 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 3788 "_msarg_va_s"); 3789 } 3790 3791 /// Compute the origin address for a given va_arg. 
3792 Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) { 3793 Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy); 3794 // getOriginPtrForVAArgument() is always called after 3795 // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never 3796 // overflow. 3797 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 3798 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 3799 "_msarg_va_o"); 3800 } 3801 3802 void unpoisonVAListTagForInst(IntrinsicInst &I) { 3803 IRBuilder<> IRB(&I); 3804 Value *VAListTag = I.getArgOperand(0); 3805 Value *ShadowPtr, *OriginPtr; 3806 unsigned Alignment = 8; 3807 std::tie(ShadowPtr, OriginPtr) = 3808 MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment, 3809 /*isStore*/ true); 3810 3811 // Unpoison the whole __va_list_tag. 3812 // FIXME: magic ABI constants. 3813 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 3814 /* size */ 24, Alignment, false); 3815 // We shouldn't need to zero out the origins, as they're only checked for 3816 // nonzero shadow. 3817 } 3818 3819 void visitVAStartInst(VAStartInst &I) override { 3820 if (F.getCallingConv() == CallingConv::Win64) 3821 return; 3822 VAStartInstrumentationList.push_back(&I); 3823 unpoisonVAListTagForInst(I); 3824 } 3825 3826 void visitVACopyInst(VACopyInst &I) override { 3827 if (F.getCallingConv() == CallingConv::Win64) return; 3828 unpoisonVAListTagForInst(I); 3829 } 3830 3831 void finalizeInstrumentation() override { 3832 assert(!VAArgOverflowSize && !VAArgTLSCopy && 3833 "finalizeInstrumentation called twice"); 3834 if (!VAStartInstrumentationList.empty()) { 3835 // If there is a va_start in this function, make a backup copy of 3836 // va_arg_tls somewhere in the function entry block. 3837 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 3838 VAArgOverflowSize = 3839 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 3840 Value *CopySize = 3841 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset), 3842 VAArgOverflowSize); 3843 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 3844 IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize); 3845 if (MS.TrackOrigins) { 3846 VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 3847 IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8, CopySize); 3848 } 3849 } 3850 3851 // Instrument va_start. 3852 // Copy va_list shadow from the backup copy of the TLS contents. 
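    // For reference, the System V x86-64 va_list layout that the magic
    // offsets below (16 for reg_save_area, 8 for overflow_arg_area) refer to:
    //   struct __va_list_tag {
    //     unsigned gp_offset;       // offset 0
    //     unsigned fp_offset;       // offset 4
    //     void *overflow_arg_area;  // offset 8
    //     void *reg_save_area;      // offset 16
    //   };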
3853 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 3854 CallInst *OrigInst = VAStartInstrumentationList[i]; 3855 IRBuilder<> IRB(OrigInst->getNextNode()); 3856 Value *VAListTag = OrigInst->getArgOperand(0); 3857 3858 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 3859 Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr( 3860 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 3861 ConstantInt::get(MS.IntptrTy, 16)), 3862 PointerType::get(RegSaveAreaPtrTy, 0)); 3863 Value *RegSaveAreaPtr = 3864 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 3865 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 3866 unsigned Alignment = 16; 3867 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 3868 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 3869 Alignment, /*isStore*/ true); 3870 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 3871 AMD64FpEndOffset); 3872 if (MS.TrackOrigins) 3873 IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy, 3874 Alignment, AMD64FpEndOffset); 3875 Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C); 3876 Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr( 3877 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 3878 ConstantInt::get(MS.IntptrTy, 8)), 3879 PointerType::get(OverflowArgAreaPtrTy, 0)); 3880 Value *OverflowArgAreaPtr = 3881 IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr); 3882 Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr; 3883 std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) = 3884 MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(), 3885 Alignment, /*isStore*/ true); 3886 Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy, 3887 AMD64FpEndOffset); 3888 IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment, 3889 VAArgOverflowSize); 3890 if (MS.TrackOrigins) { 3891 SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy, 3892 AMD64FpEndOffset); 3893 IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment, 3894 VAArgOverflowSize); 3895 } 3896 } 3897 } 3898 }; 3899 3900 /// MIPS64-specific implementation of VarArgHelper. 
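/// Unlike on x86_64, the va_list here is essentially a single pointer that
/// walks one contiguous argument save area, so the shadow for all varargs is
/// laid out linearly in the va_arg TLS array.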
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments with size < 8 to match the
        // placement of bits in a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating another
    // class member; here it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
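    // If it would overflow, return nullptr; callers then simply skip storing
    // shadow for this argument rather than writing out of bounds.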
3947 if (ArgOffset + ArgSize > kParamTLSSize) 3948 return nullptr; 3949 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 3950 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 3951 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 3952 "_msarg"); 3953 } 3954 3955 void visitVAStartInst(VAStartInst &I) override { 3956 IRBuilder<> IRB(&I); 3957 VAStartInstrumentationList.push_back(&I); 3958 Value *VAListTag = I.getArgOperand(0); 3959 Value *ShadowPtr, *OriginPtr; 3960 unsigned Alignment = 8; 3961 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 3962 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 3963 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 3964 /* size */ 8, Alignment, false); 3965 } 3966 3967 void visitVACopyInst(VACopyInst &I) override { 3968 IRBuilder<> IRB(&I); 3969 VAStartInstrumentationList.push_back(&I); 3970 Value *VAListTag = I.getArgOperand(0); 3971 Value *ShadowPtr, *OriginPtr; 3972 unsigned Alignment = 8; 3973 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 3974 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 3975 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 3976 /* size */ 8, Alignment, false); 3977 } 3978 3979 void finalizeInstrumentation() override { 3980 assert(!VAArgSize && !VAArgTLSCopy && 3981 "finalizeInstrumentation called twice"); 3982 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 3983 VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 3984 Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), 3985 VAArgSize); 3986 3987 if (!VAStartInstrumentationList.empty()) { 3988 // If there is a va_start in this function, make a backup copy of 3989 // va_arg_tls somewhere in the function entry block. 3990 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 3991 IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize); 3992 } 3993 3994 // Instrument va_start. 3995 // Copy va_list shadow from the backup copy of the TLS contents. 3996 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 3997 CallInst *OrigInst = VAStartInstrumentationList[i]; 3998 IRBuilder<> IRB(OrigInst->getNextNode()); 3999 Value *VAListTag = OrigInst->getArgOperand(0); 4000 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4001 Value *RegSaveAreaPtrPtr = 4002 IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4003 PointerType::get(RegSaveAreaPtrTy, 0)); 4004 Value *RegSaveAreaPtr = 4005 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4006 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4007 unsigned Alignment = 8; 4008 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4009 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4010 Alignment, /*isStore*/ true); 4011 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4012 CopySize); 4013 } 4014 } 4015 }; 4016 4017 /// AArch64-specific implementation of VarArgHelper. 4018 struct VarArgAArch64Helper : public VarArgHelper { 4019 static const unsigned kAArch64GrArgSize = 64; 4020 static const unsigned kAArch64VrArgSize = 128; 4021 4022 static const unsigned AArch64GrBegOffset = 0; 4023 static const unsigned AArch64GrEndOffset = kAArch64GrArgSize; 4024 // Make VR space aligned to 16 bytes. 
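  // Resulting layout of the va_arg TLS array:
  //   [0, 64)    shadow for GR argument registers (x0 - x7)
  //   [64, 192)  shadow for VR argument registers (v0 - v7, 16 bytes each)
  //   [192, ...) shadow for arguments passed on the stack (overflow area)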
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which arguments are named (Clang, as in
  // the x86_64 case, lowers va_arg in the frontend, and this pass only sees
  // the low-level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers, and
  // then the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows a fast copy in
  // finalizeInstrumentation().
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
4113 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 4114 unsigned ArgOffset, unsigned ArgSize) { 4115 // Make sure we don't overflow __msan_va_arg_tls. 4116 if (ArgOffset + ArgSize > kParamTLSSize) 4117 return nullptr; 4118 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4119 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4120 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4121 "_msarg"); 4122 } 4123 4124 void visitVAStartInst(VAStartInst &I) override { 4125 IRBuilder<> IRB(&I); 4126 VAStartInstrumentationList.push_back(&I); 4127 Value *VAListTag = I.getArgOperand(0); 4128 Value *ShadowPtr, *OriginPtr; 4129 unsigned Alignment = 8; 4130 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4131 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4132 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4133 /* size */ 32, Alignment, false); 4134 } 4135 4136 void visitVACopyInst(VACopyInst &I) override { 4137 IRBuilder<> IRB(&I); 4138 VAStartInstrumentationList.push_back(&I); 4139 Value *VAListTag = I.getArgOperand(0); 4140 Value *ShadowPtr, *OriginPtr; 4141 unsigned Alignment = 8; 4142 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4143 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4144 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4145 /* size */ 32, Alignment, false); 4146 } 4147 4148 // Retrieve a va_list field of 'void*' size. 4149 Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) { 4150 Value *SaveAreaPtrPtr = 4151 IRB.CreateIntToPtr( 4152 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4153 ConstantInt::get(MS.IntptrTy, offset)), 4154 Type::getInt64PtrTy(*MS.C)); 4155 return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr); 4156 } 4157 4158 // Retrieve a va_list field of 'int' size. 4159 Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) { 4160 Value *SaveAreaPtr = 4161 IRB.CreateIntToPtr( 4162 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4163 ConstantInt::get(MS.IntptrTy, offset)), 4164 Type::getInt32PtrTy(*MS.C)); 4165 Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr); 4166 return IRB.CreateSExt(SaveArea32, MS.IntptrTy); 4167 } 4168 4169 void finalizeInstrumentation() override { 4170 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4171 "finalizeInstrumentation called twice"); 4172 if (!VAStartInstrumentationList.empty()) { 4173 // If there is a va_start in this function, make a backup copy of 4174 // va_arg_tls somewhere in the function entry block. 4175 IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI()); 4176 VAArgOverflowSize = 4177 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4178 Value *CopySize = 4179 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset), 4180 VAArgOverflowSize); 4181 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4182 IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize); 4183 } 4184 4185 Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize); 4186 Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize); 4187 4188 // Instrument va_start, copy va_list shadow from the backup copy of 4189 // the TLS contents. 
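    // For reference, the AAPCS64 va_list layout that the field offsets below
    // (0, 8, 16, 24, 28) refer to:
    //   struct __va_list {
    //     void *__stack;   // offset 0: next stack argument
    //     void *__gr_top;  // offset 8: end of the GR register save area
    //     void *__vr_top;  // offset 16: end of the VR register save area
    //     int __gr_offs;   // offset 24: negative offset from __gr_top
    //     int __vr_offs;   // offset 28: negative offset from __vr_top
    //   };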
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general-purpose registers
      // x0 - x7 and another for the 128-bit FP/SIMD registers v0 - v7).
      // We then need to propagate the shadow arguments to both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved in the shadow for 'va::stack'.
      // One caveat is that only the unnamed (variadic) arguments need to be
      // propagated, while the call site instrumentation saved 'all' the
      // arguments. So to copy the shadow values from the va_arg TLS array,
      // we need to adjust the offset for both the GR and VR fields based on
      // the __{gr,vr}_offs values (which are set according to the incoming
      // named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // The instrumentation does not know how many named arguments are in
      // use, and at the call site 'all' the arguments were saved. Since
      // __gr_offs is defined as '0 - ((8 - named_gr) * 8)', the idea is to
      // propagate only the variadic arguments by skipping the bytes of
      // shadow that correspond to named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for the remaining arguments.
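      // "Remaining" here means the overflow area: its shadow was stored at
      // offset AArch64VAEndOffset in the TLS copy, and its size was recorded
      // in VAArgOverflowSize at the call site.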
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute the current offset from the stack pointer (which is always
    // properly aligned) and the offset of the first vararg, then subtract
    // them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // Parameter save area starts at 48 bytes from frame pointer for ABIv1,
    // and 32 bytes for ABIv2. This is usually determined by target
    // endianness, but in theory could be overridden by a function attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
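          // For example, a [2 x i128] argument is 16-byte aligned, while a
          // [2 x ppc_fp128] argument stays 8-byte aligned.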
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments with size < 8 to match the
          // placement of bits in a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base,
                                   kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating another
    // class member; here it holds the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented only for the targets below (x86_64,
  // MIPS64, AArch64, and PowerPC64); everything else gets the no-op helper,
  // so false positives are possible with va_arg on other platforms.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && (&F == MsanCtorFunction))
    return false;
  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}