1 //===- InstrRefBasedImpl.cpp - Tracking Debug Value MIs -------------------===// 2 // 3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. 4 // See https://llvm.org/LICENSE.txt for license information. 5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception 6 // 7 //===----------------------------------------------------------------------===// 8 /// \file InstrRefBasedImpl.cpp 9 /// 10 /// This is a separate implementation of LiveDebugValues, see 11 /// LiveDebugValues.cpp and VarLocBasedImpl.cpp for more information. 12 /// 13 /// This pass propagates variable locations between basic blocks, resolving 14 /// control flow conflicts between them. The problem is much like SSA 15 /// construction, where each DBG_VALUE instruction assigns the *value* that 16 /// a variable has, and every instruction where the variable is in scope uses 17 /// that variable. The resulting map of instruction-to-value is then translated 18 /// into a register (or spill) location for each variable over each instruction. 19 /// 20 /// This pass determines which DBG_VALUE dominates which instructions, or if 21 /// none do, where values must be merged (like PHI nodes). The added 22 /// complication is that because codegen has already finished, a PHI node may 23 /// be needed for a variable location to be correct, but no register or spill 24 /// slot merges the necessary values. In these circumstances, the variable 25 /// location is dropped. 26 /// 27 /// What makes this analysis non-trivial is loops: we cannot tell in advance 28 /// whether a variable location is live throughout a loop, or whether its 29 /// location is clobbered (or redefined by another DBG_VALUE), without 30 /// exploring all the way through. 31 /// 32 /// To make this simpler we perform two kinds of analysis. First, we identify 33 /// every value defined by every instruction (ignoring those that only move 34 /// another value), then compute a map of which values are available for each 35 /// instruction. This is stronger than a reaching-def analysis, as we create 36 /// PHI values where other values merge. 37 /// 38 /// Secondly, for each variable, we effectively re-construct SSA using each 39 /// DBG_VALUE as a def. The DBG_VALUEs read a value-number computed by the 40 /// first analysis from the location they refer to. We can then compute the 41 /// dominance frontiers of where a variable has a value, and create PHI nodes 42 /// where they merge. 43 /// This isn't precisely SSA-construction though, because the function shape 44 /// is pre-defined. If a variable location requires a PHI node, but no 45 /// PHI for the relevant values is present in the function (as computed by the 46 /// first analysis), the location must be dropped. 47 /// 48 /// Once both are complete, we can pass back over all instructions knowing: 49 /// * What _value_ each variable should contain, either defined by an 50 /// instruction or where control flow merges 51 /// * What the location of that value is (if any). 52 /// Allowing us to create appropriate live-in DBG_VALUEs, and DBG_VALUEs when 53 /// a value moves location. After this pass runs, all variable locations within 54 /// a block should be specified by DBG_VALUEs within that block, allowing 55 /// DbgEntityHistoryCalculator to focus on individual blocks. 56 /// 57 /// This pass is able to go fast because the size of the first 58 /// reaching-definition analysis is proportional to the working-set size of 59 /// the function, which the compiler tries to keep small. 
(It's also
60 /// proportional to the number of blocks). Additionally, we repeatedly perform
61 /// the second reaching-definition analysis with only the variables and blocks
62 /// in a single lexical scope, exploiting their locality.
63 ///
64 /// Determining where PHIs happen is trickier with this approach, and it comes
65 /// to a head in the major problem for LiveDebugValues: is a value live-through
66 /// a loop, or not? Your garden-variety dataflow analysis aims to build a set of
67 /// facts about a function; this analysis, however, needs to generate new value
68 /// numbers at joins.
69 ///
70 /// To do this, consider a lattice of all definition values, from instructions
71 /// and from PHIs. Each PHI is characterised by the RPO number of the block it
72 /// occurs in. Each value pair A, B can be ordered by RPO(A) < RPO(B),
73 /// with non-PHI values at the top, and any PHI value in the last block (by RPO
74 /// order) at the bottom.
75 ///
76 /// (Awkwardly: lower down the _lattice_ means a greater RPO _number_. Below,
77 /// "rank" always refers to the former).
78 ///
79 /// At any join, for each register, we consider:
80 /// * All incoming values, and
81 /// * The PREVIOUS live-in value at this join.
82 /// If all incoming values agree: that's the live-in value. If they do not, the
83 /// incoming values are ranked according to the partial order, and the NEXT
84 /// LOWEST rank after the PREVIOUS live-in value is picked (multiple values of
85 /// the same rank are ignored as conflicting). If there are no candidate values,
86 /// or if the rank of the live-in would be lower than the rank of the current
87 /// block's PHIs, create a new PHI value.
88 ///
89 /// Intuitively: if it's not immediately obvious what value a join should result
90 /// in, we iteratively descend from instruction-definitions down through PHI
91 /// values, getting closer to the current block each time. If the current block
92 /// is a loop head, this ordering is effectively searching outer levels of
93 /// loops, to find a value that's live-through the current loop.
94 ///
95 /// If there is no value that's live-through this loop, a PHI is created for
96 /// this location instead. We can't use a lower-ranked PHI because by definition
97 /// it doesn't dominate the current block. We can't create a PHI value any
98 /// earlier, because we risk creating a PHI value at a location where values do
99 /// not in fact merge, thus misrepresenting the truth and failing to find the
100 /// true live-through value for variable locations.
101 ///
102 /// This algorithm applies to both calculating the availability of values in
103 /// the first analysis, and the location of variables in the second. However,
104 /// for the second we add an extra dimension of pain: creating a variable
105 /// location PHI is only valid if, for each incoming edge,
106 /// * There is a value for the variable on the incoming edge, and
107 /// * All the edges have that value in the same register.
108 /// Or put another way: we can only create a variable-location PHI if there is
109 /// a matching machine-location PHI, each input to which is the variable's value
110 /// in the predecessor block.
111 ///
112 /// To accommodate this difference, each point on the lattice is split in
113 /// two: a "proposed" PHI and a "definite" PHI. Any PHI whose location can be
114 /// determined immediately is a "definite" PHI, and no further work is
115 /// needed.
Otherwise, a location that all non-backedge predecessors agree 116 /// on is picked and propagated as a "proposed" PHI value. If that PHI value 117 /// is truly live-through, it'll appear on the loop backedges on the next 118 /// dataflow iteration, after which the block live-in moves to be a "definite" 119 /// PHI. If it's not truly live-through, the variable value will be downgraded 120 /// further as we explore the lattice, or remains "proposed" and is considered 121 /// invalid once dataflow completes. 122 /// 123 /// ### Terminology 124 /// 125 /// A machine location is a register or spill slot, a value is something that's 126 /// defined by an instruction or PHI node, while a variable value is the value 127 /// assigned to a variable. A variable location is a machine location, that must 128 /// contain the appropriate variable value. A value that is a PHI node is 129 /// occasionally called an mphi. 130 /// 131 /// The first dataflow problem is the "machine value location" problem, 132 /// because we're determining which machine locations contain which values. 133 /// The "locations" are constant: what's unknown is what value they contain. 134 /// 135 /// The second dataflow problem (the one for variables) is the "variable value 136 /// problem", because it's determining what values a variable has, rather than 137 /// what location those values are placed in. Unfortunately, it's not that 138 /// simple, because producing a PHI value always involves picking a location. 139 /// This is an imperfection that we just have to accept, at least for now. 140 /// 141 /// TODO: 142 /// Overlapping fragments 143 /// Entry values 144 /// Add back DEBUG statements for debugging this 145 /// Collect statistics 146 /// 147 //===----------------------------------------------------------------------===// 148 149 #include "llvm/ADT/DenseMap.h" 150 #include "llvm/ADT/PostOrderIterator.h" 151 #include "llvm/ADT/SmallPtrSet.h" 152 #include "llvm/ADT/SmallSet.h" 153 #include "llvm/ADT/SmallVector.h" 154 #include "llvm/ADT/Statistic.h" 155 #include "llvm/ADT/UniqueVector.h" 156 #include "llvm/CodeGen/LexicalScopes.h" 157 #include "llvm/CodeGen/MachineBasicBlock.h" 158 #include "llvm/CodeGen/MachineFrameInfo.h" 159 #include "llvm/CodeGen/MachineFunction.h" 160 #include "llvm/CodeGen/MachineFunctionPass.h" 161 #include "llvm/CodeGen/MachineInstr.h" 162 #include "llvm/CodeGen/MachineInstrBuilder.h" 163 #include "llvm/CodeGen/MachineMemOperand.h" 164 #include "llvm/CodeGen/MachineOperand.h" 165 #include "llvm/CodeGen/PseudoSourceValue.h" 166 #include "llvm/CodeGen/RegisterScavenging.h" 167 #include "llvm/CodeGen/TargetFrameLowering.h" 168 #include "llvm/CodeGen/TargetInstrInfo.h" 169 #include "llvm/CodeGen/TargetLowering.h" 170 #include "llvm/CodeGen/TargetPassConfig.h" 171 #include "llvm/CodeGen/TargetRegisterInfo.h" 172 #include "llvm/CodeGen/TargetSubtargetInfo.h" 173 #include "llvm/Config/llvm-config.h" 174 #include "llvm/IR/DIBuilder.h" 175 #include "llvm/IR/DebugInfoMetadata.h" 176 #include "llvm/IR/DebugLoc.h" 177 #include "llvm/IR/Function.h" 178 #include "llvm/IR/Module.h" 179 #include "llvm/InitializePasses.h" 180 #include "llvm/MC/MCRegisterInfo.h" 181 #include "llvm/Pass.h" 182 #include "llvm/Support/Casting.h" 183 #include "llvm/Support/Compiler.h" 184 #include "llvm/Support/Debug.h" 185 #include "llvm/Support/raw_ostream.h" 186 #include <algorithm> 187 #include <cassert> 188 #include <cstdint> 189 #include <functional> 190 #include <queue> 191 #include <tuple> 192 #include <utility> 193 #include 
<vector> 194 #include <limits.h> 195 #include <limits> 196 197 #include "LiveDebugValues.h" 198 199 using namespace llvm; 200 201 #define DEBUG_TYPE "livedebugvalues" 202 203 STATISTIC(NumInserted, "Number of DBG_VALUE instructions inserted"); 204 STATISTIC(NumRemoved, "Number of DBG_VALUE instructions removed"); 205 206 // Act more like the VarLoc implementation, by propagating some locations too 207 // far and ignoring some transfers. 208 static cl::opt<bool> EmulateOldLDV("emulate-old-livedebugvalues", cl::Hidden, 209 cl::desc("Act like old LiveDebugValues did"), 210 cl::init(false)); 211 212 // Rely on isStoreToStackSlotPostFE and similar to observe all stack spills. 213 static cl::opt<bool> 214 ObserveAllStackops("observe-all-stack-ops", cl::Hidden, 215 cl::desc("Allow non-kill spill and restores"), 216 cl::init(false)); 217 218 namespace { 219 220 // The location at which a spilled value resides. It consists of a register and 221 // an offset. 222 struct SpillLoc { 223 unsigned SpillBase; 224 int SpillOffset; 225 bool operator==(const SpillLoc &Other) const { 226 return std::tie(SpillBase, SpillOffset) == 227 std::tie(Other.SpillBase, Other.SpillOffset); 228 } 229 bool operator<(const SpillLoc &Other) const { 230 return std::tie(SpillBase, SpillOffset) < 231 std::tie(Other.SpillBase, Other.SpillOffset); 232 } 233 }; 234 235 class LocIdx { 236 unsigned Location; 237 238 // Default constructor is private, initializing to an illegal location number. 239 // Use only for "not an entry" elements in IndexedMaps. 240 LocIdx() : Location(UINT_MAX) { } 241 242 public: 243 #define NUM_LOC_BITS 24 244 LocIdx(unsigned L) : Location(L) { 245 assert(L < (1 << NUM_LOC_BITS) && "Machine locations must fit in 24 bits"); 246 } 247 248 static LocIdx MakeIllegalLoc() { 249 return LocIdx(); 250 } 251 252 bool isIllegal() const { 253 return Location == UINT_MAX; 254 } 255 256 uint64_t asU64() const { 257 return Location; 258 } 259 260 bool operator==(unsigned L) const { 261 return Location == L; 262 } 263 264 bool operator==(const LocIdx &L) const { 265 return Location == L.Location; 266 } 267 268 bool operator!=(unsigned L) const { 269 return !(*this == L); 270 } 271 272 bool operator!=(const LocIdx &L) const { 273 return !(*this == L); 274 } 275 276 bool operator<(const LocIdx &Other) const { 277 return Location < Other.Location; 278 } 279 }; 280 281 class LocIdxToIndexFunctor { 282 public: 283 using argument_type = LocIdx; 284 unsigned operator()(const LocIdx &L) const { 285 return L.asU64(); 286 } 287 }; 288 289 /// Unique identifier for a value defined by an instruction, as a value type. 290 /// Casts back and forth to a uint64_t. Probably replacable with something less 291 /// bit-constrained. Each value identifies the instruction and machine location 292 /// where the value is defined, although there may be no corresponding machine 293 /// operand for it (ex: regmasks clobbering values). The instructions are 294 /// one-based, and definitions that are PHIs have instruction number zero. 295 /// 296 /// The obvious limits of a 1M block function or 1M instruction blocks are 297 /// problematic; but by that point we should probably have bailed out of 298 /// trying to analyse the function. 299 class ValueIDNum { 300 uint64_t BlockNo : 20; /// The block where the def happens. 301 uint64_t InstNo : 20; /// The Instruction where the def happens. 302 /// One based, is distance from start of block. 303 uint64_t LocNo : NUM_LOC_BITS; /// The machine location where the def happens. 
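/// As an illustration of the packing (an assumed example, not code used by
/// this pass): the value defined by instruction 7 of block 2, living in
/// machine location 3, round-trips through the 64-bit encoding as:
///   ValueIDNum V(2, 7, LocIdx(3));
///   uint64_t U = V.asU64();              // (2ULL << 44) | (7ULL << 24) | 3
///   assert(ValueIDNum::fromU64(U) == V); // Recovered by shifting and masking.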
304
305 public:
306 // XXX -- temporarily enabled while the live-in / live-out tables are moved
307 // to something more type-y
308 ValueIDNum() : BlockNo(0xFFFFF),
309 InstNo(0xFFFFF),
310 LocNo(0xFFFFFF) { }
311
312 ValueIDNum(uint64_t Block, uint64_t Inst, uint64_t Loc)
313 : BlockNo(Block), InstNo(Inst), LocNo(Loc) { }
314
315 ValueIDNum(uint64_t Block, uint64_t Inst, LocIdx Loc)
316 : BlockNo(Block), InstNo(Inst), LocNo(Loc.asU64()) { }
317
318 uint64_t getBlock() const { return BlockNo; }
319 uint64_t getInst() const { return InstNo; }
320 uint64_t getLoc() const { return LocNo; }
321 bool isPHI() const { return InstNo == 0; }
322
323 uint64_t asU64() const {
324 uint64_t TmpBlock = BlockNo;
325 uint64_t TmpInst = InstNo;
326 return TmpBlock << 44ull | TmpInst << NUM_LOC_BITS | LocNo;
327 }
328
329 static ValueIDNum fromU64(uint64_t v) {
330 uint64_t L = (v & 0xFFFFFF); // Mask covers all NUM_LOC_BITS (24) bits.
331 return {v >> 44ull, ((v >> NUM_LOC_BITS) & 0xFFFFF), L};
332 }
333
334 bool operator<(const ValueIDNum &Other) const {
335 return asU64() < Other.asU64();
336 }
337
338 bool operator==(const ValueIDNum &Other) const {
339 return std::tie(BlockNo, InstNo, LocNo) ==
340 std::tie(Other.BlockNo, Other.InstNo, Other.LocNo);
341 }
342
343 bool operator!=(const ValueIDNum &Other) const { return !(*this == Other); }
344
345 std::string asString(const std::string &mlocname) const {
346 return Twine("bb ")
347 .concat(Twine(BlockNo).concat(Twine(" inst ").concat(
348 Twine(InstNo).concat(Twine(" loc ").concat(Twine(mlocname))))))
349 .str();
350 }
351
352 static ValueIDNum EmptyValue;
353 };
354
355 } // end anonymous namespace
356
357 namespace {
358
359 /// Meta qualifiers for a value. Pair of whatever expression is used to qualify
360 /// the value, and Boolean of whether or not it's indirect.
361 class DbgValueProperties {
362 public:
363 DbgValueProperties(const DIExpression *DIExpr, bool Indirect)
364 : DIExpr(DIExpr), Indirect(Indirect) {}
365
366 DbgValueProperties(const DbgValueProperties &Cpy)
367 : DIExpr(Cpy.DIExpr), Indirect(Cpy.Indirect) {}
368
369 /// Extract properties from an existing DBG_VALUE instruction.
370 DbgValueProperties(const MachineInstr &MI) {
371 assert(MI.isDebugValue());
372 DIExpr = MI.getDebugExpression();
373 Indirect = MI.getOperand(1).isImm();
374 }
375
376 bool operator==(const DbgValueProperties &Other) const {
377 return std::tie(DIExpr, Indirect) == std::tie(Other.DIExpr, Other.Indirect);
378 }
379
380 bool operator!=(const DbgValueProperties &Other) const {
381 return !(*this == Other);
382 }
383
384 const DIExpression *DIExpr;
385 bool Indirect;
386 };
387
388 /// Tracker for what values are in machine locations. Listens to the Things
389 /// being Done by various instructions, and maintains a table of what machine
390 /// locations have what values (as defined by a ValueIDNum).
391 ///
392 /// There are potentially a much larger number of machine locations on the
393 /// target machine than the actual working-set size of the function. On x86 for
394 /// example, we're extremely unlikely to want to track values through control
395 /// or debug registers. To avoid doing so, MLocTracker has several layers of
396 /// indirection going on, with two kinds of ``location'':
397 /// * A LocID uniquely identifies a register or spill location, with a
398 /// predictable value.
399 /// * A LocIdx is a key (in the database sense) for a LocID and a ValueIDNum.
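/// For example (an illustrative sketch; the register number is assumed): if
/// the target numbers $rax as 50, then getLocID(50, /*isSpill=*/false) == 50
/// is its LocID permanently, but $rax is only assigned a LocIdx once an
/// instruction reads or writes it, after which LocIdxToIDNum[ThatIdx] names
/// the ValueIDNum it currently contains.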
400 /// Whenever a location is def'd or used by a MachineInstr, we automagically 401 /// create a new LocIdx for a location, but not otherwise. This ensures we only 402 /// account for locations that are actually used or defined. The cost is another 403 /// vector lookup (of LocID -> LocIdx) over any other implementation. This is 404 /// fairly cheap, and the compiler tries to reduce the working-set at any one 405 /// time in the function anyway. 406 /// 407 /// Register mask operands completely blow this out of the water; I've just 408 /// piled hacks on top of hacks to get around that. 409 class MLocTracker { 410 public: 411 MachineFunction &MF; 412 const TargetInstrInfo &TII; 413 const TargetRegisterInfo &TRI; 414 const TargetLowering &TLI; 415 416 /// IndexedMap type, mapping from LocIdx to ValueIDNum. 417 typedef IndexedMap<ValueIDNum, LocIdxToIndexFunctor> LocToValueType; 418 419 /// Map of LocIdxes to the ValueIDNums that they store. This is tightly 420 /// packed, entries only exist for locations that are being tracked. 421 LocToValueType LocIdxToIDNum; 422 423 /// "Map" of machine location IDs (i.e., raw register or spill number) to the 424 /// LocIdx key / number for that location. There are always at least as many 425 /// as the number of registers on the target -- if the value in the register 426 /// is not being tracked, then the LocIdx value will be zero. New entries are 427 /// appended if a new spill slot begins being tracked. 428 /// This, and the corresponding reverse map persist for the analysis of the 429 /// whole function, and is necessarying for decoding various vectors of 430 /// values. 431 std::vector<LocIdx> LocIDToLocIdx; 432 433 /// Inverse map of LocIDToLocIdx. 434 IndexedMap<unsigned, LocIdxToIndexFunctor> LocIdxToLocID; 435 436 /// Unique-ification of spill slots. Used to number them -- their LocID 437 /// number is the index in SpillLocs minus one plus NumRegs. 438 UniqueVector<SpillLoc> SpillLocs; 439 440 // If we discover a new machine location, assign it an mphi with this 441 // block number. 442 unsigned CurBB; 443 444 /// Cached local copy of the number of registers the target has. 445 unsigned NumRegs; 446 447 /// Collection of register mask operands that have been observed. Second part 448 /// of pair indicates the instruction that they happened in. Used to 449 /// reconstruct where defs happened if we start tracking a location later 450 /// on. 451 SmallVector<std::pair<const MachineOperand *, unsigned>, 32> Masks; 452 453 /// Iterator for locations and the values they contain. Dereferencing 454 /// produces a struct/pair containing the LocIdx key for this location, 455 /// and a reference to the value currently stored. Simplifies the process 456 /// of seeking a particular location. 457 class MLocIterator { 458 LocToValueType &ValueMap; 459 LocIdx Idx; 460 461 public: 462 class value_type { 463 public: 464 value_type(LocIdx Idx, ValueIDNum &Value) : Idx(Idx), Value(Value) { } 465 const LocIdx Idx; /// Read-only index of this location. 466 ValueIDNum &Value; /// Reference to the stored value at this location. 
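// How this reference is typically used (a sketch mirroring setMPhis below):
//   for (auto Location : locations())
//     Location.Value = {CurBB, 0, Location.Idx};
// i.e., writing through value_type::Value updates the tracked location
// in-place via the iterator.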
467 };
468
469 MLocIterator(LocToValueType &ValueMap, LocIdx Idx)
470 : ValueMap(ValueMap), Idx(Idx) { }
471
472 bool operator==(const MLocIterator &Other) const {
473 assert(&ValueMap == &Other.ValueMap);
474 return Idx == Other.Idx;
475 }
476
477 bool operator!=(const MLocIterator &Other) const {
478 return !(*this == Other);
479 }
480
481 void operator++() {
482 Idx = LocIdx(Idx.asU64() + 1);
483 }
484
485 value_type operator*() {
486 return value_type(Idx, ValueMap[LocIdx(Idx)]);
487 }
488 };
489
490 MLocTracker(MachineFunction &MF, const TargetInstrInfo &TII,
491 const TargetRegisterInfo &TRI, const TargetLowering &TLI)
492 : MF(MF), TII(TII), TRI(TRI), TLI(TLI),
493 LocIdxToIDNum(ValueIDNum::EmptyValue),
494 LocIdxToLocID(0) {
495 NumRegs = TRI.getNumRegs();
496 reset();
497 LocIDToLocIdx.resize(NumRegs, LocIdx::MakeIllegalLoc());
498 assert(NumRegs < (1u << NUM_LOC_BITS)); // Detect bit packing failure
499
500 // Always track SP. This avoids the implicit clobbering caused by regmasks
501 // from affecting its values. (LiveDebugValues disbelieves calls and
502 // regmasks that claim to clobber SP).
503 unsigned SP = TLI.getStackPointerRegisterToSaveRestore();
504 if (SP) {
505 unsigned ID = getLocID(SP, false);
506 (void)lookupOrTrackRegister(ID);
507 }
508 }
509
510 /// Produce location ID number for indexing LocIDToLocIdx. Takes the register
511 /// or spill number, and flag for whether it's a spill or not.
512 unsigned getLocID(unsigned RegOrSpill, bool isSpill) {
513 return (isSpill) ? RegOrSpill + NumRegs - 1 : RegOrSpill;
514 }
515
516 /// Accessor for reading the value at Idx.
517 ValueIDNum getNumAtPos(LocIdx Idx) const {
518 assert(Idx.asU64() < LocIdxToIDNum.size());
519 return LocIdxToIDNum[Idx];
520 }
521
522 unsigned getNumLocs(void) const { return LocIdxToIDNum.size(); }
523
524 /// Reset all locations to contain a PHI value at the designated block. Used
525 /// sometimes for actual PHI values, other times to indicate the block entry
526 /// value (before any more information is known).
527 void setMPhis(unsigned NewCurBB) {
528 CurBB = NewCurBB;
529 for (auto Location : locations())
530 Location.Value = {CurBB, 0, Location.Idx};
531 }
532
533 /// Load values for each location from array of ValueIDNums. Take current
534 /// bbnum just in case we read a value from a hitherto untouched register.
535 void loadFromArray(ValueIDNum *Locs, unsigned NewCurBB) {
536 CurBB = NewCurBB;
537 // Iterate over all tracked locations, and load each location's live-in
538 // value into our local index.
539 for (auto Location : locations())
540 Location.Value = Locs[Location.Idx.asU64()];
541 }
542
543 /// Wipe any unnecessary location records after traversing a block.
544 void reset(void) {
545 // We could reset all the location values too; however either loadFromArray
546 // or setMPhis should be called before this object is re-used. Just
547 // clear Masks, they're definitely not needed.
548 Masks.clear();
549 }
550
551 /// Clear all data. Destroys the LocID <=> LocIdx map, which makes most of
552 /// the information in this pass uninterpretable.
553 void clear(void) {
554 reset();
555 LocIDToLocIdx.clear();
556 LocIdxToLocID.clear();
557 LocIdxToIDNum.clear();
558 //SpillLocs.reset(); XXX UniqueVector::reset assumes a SpillLoc casts from 0
559 SpillLocs = decltype(SpillLocs)();
560
561 LocIDToLocIdx.resize(NumRegs, LocIdx::MakeIllegalLoc());
562 }
563
564 /// Set a location to a certain value.
565 void setMLoc(LocIdx L, ValueIDNum Num) { 566 assert(L.asU64() < LocIdxToIDNum.size()); 567 LocIdxToIDNum[L] = Num; 568 } 569 570 /// Create a LocIdx for an untracked register ID. Initialize it to either an 571 /// mphi value representing a live-in, or a recent register mask clobber. 572 LocIdx trackRegister(unsigned ID) { 573 assert(ID != 0); 574 LocIdx NewIdx = LocIdx(LocIdxToIDNum.size()); 575 LocIdxToIDNum.grow(NewIdx); 576 LocIdxToLocID.grow(NewIdx); 577 578 // Default: it's an mphi. 579 ValueIDNum ValNum = {CurBB, 0, NewIdx}; 580 // Was this reg ever touched by a regmask? 581 for (const auto &MaskPair : reverse(Masks)) { 582 if (MaskPair.first->clobbersPhysReg(ID)) { 583 // There was an earlier def we skipped. 584 ValNum = {CurBB, MaskPair.second, NewIdx}; 585 break; 586 } 587 } 588 589 LocIdxToIDNum[NewIdx] = ValNum; 590 LocIdxToLocID[NewIdx] = ID; 591 return NewIdx; 592 } 593 594 LocIdx lookupOrTrackRegister(unsigned ID) { 595 LocIdx &Index = LocIDToLocIdx[ID]; 596 if (Index.isIllegal()) 597 Index = trackRegister(ID); 598 return Index; 599 } 600 601 /// Record a definition of the specified register at the given block / inst. 602 /// This doesn't take a ValueIDNum, because the definition and its location 603 /// are synonymous. 604 void defReg(Register R, unsigned BB, unsigned Inst) { 605 unsigned ID = getLocID(R, false); 606 LocIdx Idx = lookupOrTrackRegister(ID); 607 ValueIDNum ValueID = {BB, Inst, Idx}; 608 LocIdxToIDNum[Idx] = ValueID; 609 } 610 611 /// Set a register to a value number. To be used if the value number is 612 /// known in advance. 613 void setReg(Register R, ValueIDNum ValueID) { 614 unsigned ID = getLocID(R, false); 615 LocIdx Idx = lookupOrTrackRegister(ID); 616 LocIdxToIDNum[Idx] = ValueID; 617 } 618 619 ValueIDNum readReg(Register R) { 620 unsigned ID = getLocID(R, false); 621 LocIdx Idx = lookupOrTrackRegister(ID); 622 return LocIdxToIDNum[Idx]; 623 } 624 625 /// Reset a register value to zero / empty. Needed to replicate the 626 /// VarLoc implementation where a copy to/from a register effectively 627 /// clears the contents of the source register. (Values can only have one 628 /// machine location in VarLocBasedImpl). 629 void wipeRegister(Register R) { 630 unsigned ID = getLocID(R, false); 631 LocIdx Idx = LocIDToLocIdx[ID]; 632 LocIdxToIDNum[Idx] = ValueIDNum::EmptyValue; 633 } 634 635 /// Determine the LocIdx of an existing register. 636 LocIdx getRegMLoc(Register R) { 637 unsigned ID = getLocID(R, false); 638 return LocIDToLocIdx[ID]; 639 } 640 641 /// Record a RegMask operand being executed. Defs any register we currently 642 /// track, stores a pointer to the mask in case we have to account for it 643 /// later. 644 void writeRegMask(const MachineOperand *MO, unsigned CurBB, unsigned InstID) { 645 // Ensure SP exists, so that we don't override it later. 646 unsigned SP = TLI.getStackPointerRegisterToSaveRestore(); 647 648 // Def any register we track have that isn't preserved. The regmask 649 // terminates the liveness of a register, meaning its value can't be 650 // relied upon -- we represent this by giving it a new value. 651 for (auto Location : locations()) { 652 unsigned ID = LocIdxToLocID[Location.Idx]; 653 // Don't clobber SP, even if the mask says it's clobbered. 654 if (ID < NumRegs && ID != SP && MO->clobbersPhysReg(ID)) 655 defReg(ID, CurBB, InstID); 656 } 657 Masks.push_back(std::make_pair(MO, InstID)); 658 } 659 660 /// Find LocIdx for SpillLoc \p L, creating a new one if it's not tracked. 
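/// Spill slots are numbered above the register range: the first SpillLoc
/// inserted gets SpillID 1, so with an assumed NumRegs of 100 its LocID is
///   getLocID(/*SpillID=*/1, /*isSpill=*/true) == 1 + 100 - 1 == 100,
/// one past the last register LocID. This is why isSpill() below simply
/// tests LocIdxToLocID[Idx] >= NumRegs.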
661 LocIdx getOrTrackSpillLoc(SpillLoc L) { 662 unsigned SpillID = SpillLocs.idFor(L); 663 if (SpillID == 0) { 664 SpillID = SpillLocs.insert(L); 665 unsigned L = getLocID(SpillID, true); 666 LocIdx Idx = LocIdx(LocIdxToIDNum.size()); // New idx 667 LocIdxToIDNum.grow(Idx); 668 LocIdxToLocID.grow(Idx); 669 LocIDToLocIdx.push_back(Idx); 670 LocIdxToLocID[Idx] = L; 671 return Idx; 672 } else { 673 unsigned L = getLocID(SpillID, true); 674 LocIdx Idx = LocIDToLocIdx[L]; 675 return Idx; 676 } 677 } 678 679 /// Set the value stored in a spill slot. 680 void setSpill(SpillLoc L, ValueIDNum ValueID) { 681 LocIdx Idx = getOrTrackSpillLoc(L); 682 LocIdxToIDNum[Idx] = ValueID; 683 } 684 685 /// Read whatever value is in a spill slot, or None if it isn't tracked. 686 Optional<ValueIDNum> readSpill(SpillLoc L) { 687 unsigned SpillID = SpillLocs.idFor(L); 688 if (SpillID == 0) 689 return None; 690 691 unsigned LocID = getLocID(SpillID, true); 692 LocIdx Idx = LocIDToLocIdx[LocID]; 693 return LocIdxToIDNum[Idx]; 694 } 695 696 /// Determine the LocIdx of a spill slot. Return None if it previously 697 /// hasn't had a value assigned. 698 Optional<LocIdx> getSpillMLoc(SpillLoc L) { 699 unsigned SpillID = SpillLocs.idFor(L); 700 if (SpillID == 0) 701 return None; 702 unsigned LocNo = getLocID(SpillID, true); 703 return LocIDToLocIdx[LocNo]; 704 } 705 706 /// Return true if Idx is a spill machine location. 707 bool isSpill(LocIdx Idx) const { 708 return LocIdxToLocID[Idx] >= NumRegs; 709 } 710 711 MLocIterator begin() { 712 return MLocIterator(LocIdxToIDNum, 0); 713 } 714 715 MLocIterator end() { 716 return MLocIterator(LocIdxToIDNum, LocIdxToIDNum.size()); 717 } 718 719 /// Return a range over all locations currently tracked. 720 iterator_range<MLocIterator> locations() { 721 return llvm::make_range(begin(), end()); 722 } 723 724 std::string LocIdxToName(LocIdx Idx) const { 725 unsigned ID = LocIdxToLocID[Idx]; 726 if (ID >= NumRegs) 727 return Twine("slot ").concat(Twine(ID - NumRegs)).str(); 728 else 729 return TRI.getRegAsmName(ID).str(); 730 } 731 732 std::string IDAsString(const ValueIDNum &Num) const { 733 std::string DefName = LocIdxToName(Num.getLoc()); 734 return Num.asString(DefName); 735 } 736 737 LLVM_DUMP_METHOD 738 void dump() { 739 for (auto Location : locations()) { 740 std::string MLocName = LocIdxToName(Location.Value.getLoc()); 741 std::string DefName = Location.Value.asString(MLocName); 742 dbgs() << LocIdxToName(Location.Idx) << " --> " << DefName << "\n"; 743 } 744 } 745 746 LLVM_DUMP_METHOD 747 void dump_mloc_map() { 748 for (auto Location : locations()) { 749 std::string foo = LocIdxToName(Location.Idx); 750 dbgs() << "Idx " << Location.Idx.asU64() << " " << foo << "\n"; 751 } 752 } 753 754 /// Create a DBG_VALUE based on machine location \p MLoc. Qualify it with the 755 /// information in \pProperties, for variable Var. Don't insert it anywhere, 756 /// just return the builder for it. 
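/// Schematically (a sketch of the three cases, not verbatim MIR), the
/// resulting instruction is one of:
///   DBG_VALUE $noreg, $noreg, !var, !expr          (no location known)
///   DBG_VALUE $spillbase, 0, !var, !expr+offset    (value is on the stack)
///   DBG_VALUE $reg, 0 or $noreg, !var, !expr       (value is in a register)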
757 MachineInstrBuilder emitLoc(Optional<LocIdx> MLoc, const DebugVariable &Var, 758 const DbgValueProperties &Properties) { 759 DebugLoc DL = 760 DebugLoc::get(0, 0, Var.getVariable()->getScope(), Var.getInlinedAt()); 761 auto MIB = BuildMI(MF, DL, TII.get(TargetOpcode::DBG_VALUE)); 762 763 const DIExpression *Expr = Properties.DIExpr; 764 if (!MLoc) { 765 // No location -> DBG_VALUE $noreg 766 MIB.addReg(0, RegState::Debug); 767 MIB.addReg(0, RegState::Debug); 768 } else if (LocIdxToLocID[*MLoc] >= NumRegs) { 769 unsigned LocID = LocIdxToLocID[*MLoc]; 770 const SpillLoc &Spill = SpillLocs[LocID - NumRegs + 1]; 771 Expr = DIExpression::prepend(Expr, DIExpression::ApplyOffset, 772 Spill.SpillOffset); 773 unsigned Base = Spill.SpillBase; 774 MIB.addReg(Base, RegState::Debug); 775 MIB.addImm(0); 776 } else { 777 unsigned LocID = LocIdxToLocID[*MLoc]; 778 MIB.addReg(LocID, RegState::Debug); 779 if (Properties.Indirect) 780 MIB.addImm(0); 781 else 782 MIB.addReg(0, RegState::Debug); 783 } 784 785 MIB.addMetadata(Var.getVariable()); 786 MIB.addMetadata(Expr); 787 return MIB; 788 } 789 }; 790 791 /// Class recording the (high level) _value_ of a variable. Identifies either 792 /// the value of the variable as a ValueIDNum, or a constant MachineOperand. 793 /// This class also stores meta-information about how the value is qualified. 794 /// Used to reason about variable values when performing the second 795 /// (DebugVariable specific) dataflow analysis. 796 class DbgValue { 797 public: 798 union { 799 /// If Kind is Def, the value number that this value is based on. 800 ValueIDNum ID; 801 /// If Kind is Const, the MachineOperand defining this value. 802 MachineOperand MO; 803 /// For a NoVal DbgValue, which block it was generated in. 804 unsigned BlockNo; 805 }; 806 /// Qualifiers for the ValueIDNum above. 807 DbgValueProperties Properties; 808 809 typedef enum { 810 Undef, // Represents a DBG_VALUE $noreg in the transfer function only. 811 Def, // This value is defined by an inst, or is a PHI value. 812 Const, // A constant value contained in the MachineOperand field. 813 Proposed, // This is a tentative PHI value, which may be confirmed or 814 // invalidated later. 815 NoVal // Empty DbgValue, generated during dataflow. BlockNo stores 816 // which block this was generated in. 817 } KindT; 818 /// Discriminator for whether this is a constant or an in-program value. 
819 KindT Kind; 820 821 DbgValue(const ValueIDNum &Val, const DbgValueProperties &Prop, KindT Kind) 822 : ID(Val), Properties(Prop), Kind(Kind) { 823 assert(Kind == Def || Kind == Proposed); 824 } 825 826 DbgValue(unsigned BlockNo, const DbgValueProperties &Prop, KindT Kind) 827 : BlockNo(BlockNo), Properties(Prop), Kind(Kind) { 828 assert(Kind == NoVal); 829 } 830 831 DbgValue(const MachineOperand &MO, const DbgValueProperties &Prop, KindT Kind) 832 : MO(MO), Properties(Prop), Kind(Kind) { 833 assert(Kind == Const); 834 } 835 836 DbgValue(const DbgValueProperties &Prop, KindT Kind) 837 : Properties(Prop), Kind(Kind) { 838 assert(Kind == Undef && 839 "Empty DbgValue constructor must pass in Undef kind"); 840 } 841 842 void dump(const MLocTracker *MTrack) const { 843 if (Kind == Const) { 844 MO.dump(); 845 } else if (Kind == NoVal) { 846 dbgs() << "NoVal(" << BlockNo << ")"; 847 } else if (Kind == Proposed) { 848 dbgs() << "VPHI(" << MTrack->IDAsString(ID) << ")"; 849 } else { 850 assert(Kind == Def); 851 dbgs() << MTrack->IDAsString(ID); 852 } 853 if (Properties.Indirect) 854 dbgs() << " indir"; 855 if (Properties.DIExpr) 856 dbgs() << " " << *Properties.DIExpr; 857 } 858 859 bool operator==(const DbgValue &Other) const { 860 if (std::tie(Kind, Properties) != std::tie(Other.Kind, Other.Properties)) 861 return false; 862 else if (Kind == Proposed && ID != Other.ID) 863 return false; 864 else if (Kind == Def && ID != Other.ID) 865 return false; 866 else if (Kind == NoVal && BlockNo != Other.BlockNo) 867 return false; 868 else if (Kind == Const) 869 return MO.isIdenticalTo(Other.MO); 870 871 return true; 872 } 873 874 bool operator!=(const DbgValue &Other) const { return !(*this == Other); } 875 }; 876 877 /// Types for recording sets of variable fragments that overlap. For a given 878 /// local variable, we record all other fragments of that variable that could 879 /// overlap it, to reduce search time. 880 using FragmentOfVar = 881 std::pair<const DILocalVariable *, DIExpression::FragmentInfo>; 882 using OverlapMap = 883 DenseMap<FragmentOfVar, SmallVector<DIExpression::FragmentInfo, 1>>; 884 885 /// Collection of DBG_VALUEs observed when traversing a block. Records each 886 /// variable and the value the DBG_VALUE refers to. Requires the machine value 887 /// location dataflow algorithm to have run already, so that values can be 888 /// identified. 889 class VLocTracker { 890 public: 891 /// Map DebugVariable to the latest Value it's defined to have. 892 /// Needs to be a MapVector because we determine order-in-the-input-MIR from 893 /// the order in this container. 894 /// We only retain the last DbgValue in each block for each variable, to 895 /// determine the blocks live-out variable value. The Vars container forms the 896 /// transfer function for this block, as part of the dataflow analysis. The 897 /// movement of values between locations inside of a block is handled at a 898 /// much later stage, in the TransferTracker class. 899 MapVector<DebugVariable, DbgValue> Vars; 900 DenseMap<DebugVariable, const DILocation *> Scopes; 901 MachineBasicBlock *MBB; 902 903 public: 904 VLocTracker() {} 905 906 void defVar(const MachineInstr &MI, Optional<ValueIDNum> ID) { 907 // XXX skipping overlapping fragments for now. 908 assert(MI.isDebugValue()); 909 DebugVariable Var(MI.getDebugVariable(), MI.getDebugExpression(), 910 MI.getDebugLoc()->getInlinedAt()); 911 DbgValueProperties Properties(MI); 912 DbgValue Rec = (ID) ? 
DbgValue(*ID, Properties, DbgValue::Def) 913 : DbgValue(Properties, DbgValue::Undef); 914 915 // Attempt insertion; overwrite if it's already mapped. 916 auto Result = Vars.insert(std::make_pair(Var, Rec)); 917 if (!Result.second) 918 Result.first->second = Rec; 919 Scopes[Var] = MI.getDebugLoc().get(); 920 } 921 922 void defVar(const MachineInstr &MI, const MachineOperand &MO) { 923 // XXX skipping overlapping fragments for now. 924 assert(MI.isDebugValue()); 925 DebugVariable Var(MI.getDebugVariable(), MI.getDebugExpression(), 926 MI.getDebugLoc()->getInlinedAt()); 927 DbgValueProperties Properties(MI); 928 DbgValue Rec = DbgValue(MO, Properties, DbgValue::Const); 929 930 // Attempt insertion; overwrite if it's already mapped. 931 auto Result = Vars.insert(std::make_pair(Var, Rec)); 932 if (!Result.second) 933 Result.first->second = Rec; 934 Scopes[Var] = MI.getDebugLoc().get(); 935 } 936 }; 937 938 /// Tracker for converting machine value locations and variable values into 939 /// variable locations (the output of LiveDebugValues), recorded as DBG_VALUEs 940 /// specifying block live-in locations and transfers within blocks. 941 /// 942 /// Operating on a per-block basis, this class takes a (pre-loaded) MLocTracker 943 /// and must be initialized with the set of variable values that are live-in to 944 /// the block. The caller then repeatedly calls process(). TransferTracker picks 945 /// out variable locations for the live-in variable values (if there _is_ a 946 /// location) and creates the corresponding DBG_VALUEs. Then, as the block is 947 /// stepped through, transfers of values between machine locations are 948 /// identified and if profitable, a DBG_VALUE created. 949 /// 950 /// This is where debug use-before-defs would be resolved: a variable with an 951 /// unavailable value could materialize in the middle of a block, when the 952 /// value becomes available. Or, we could detect clobbers and re-specify the 953 /// variable in a backup location. (XXX these are unimplemented). 954 class TransferTracker { 955 public: 956 const TargetInstrInfo *TII; 957 /// This machine location tracker is assumed to always contain the up-to-date 958 /// value mapping for all machine locations. TransferTracker only reads 959 /// information from it. (XXX make it const?) 960 MLocTracker *MTracker; 961 MachineFunction &MF; 962 963 /// Record of all changes in variable locations at a block position. Awkwardly 964 /// we allow inserting either before or after the point: MBB != nullptr 965 /// indicates it's before, otherwise after. 966 struct Transfer { 967 MachineBasicBlock::iterator Pos; /// Position to insert DBG_VALUes 968 MachineBasicBlock *MBB; /// non-null if we should insert after. 969 SmallVector<MachineInstr *, 4> Insts; /// Vector of DBG_VALUEs to insert. 970 }; 971 972 typedef struct { 973 LocIdx Loc; 974 DbgValueProperties Properties; 975 } LocAndProperties; 976 977 /// Collection of transfers (DBG_VALUEs) to be inserted. 978 SmallVector<Transfer, 32> Transfers; 979 980 /// Local cache of what-value-is-in-what-LocIdx. Used to identify differences 981 /// between TransferTrackers view of variable locations and MLocTrackers. For 982 /// example, MLocTracker observes all clobbers, but TransferTracker lazily 983 /// does not. 984 std::vector<ValueIDNum> VarLocs; 985 986 /// Map from LocIdxes to which DebugVariables are based that location. 987 /// Mantained while stepping through the block. Not accurate if 988 /// VarLocs[Idx] != MTracker->LocIdxToIDNum[Idx]. 
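/// Together with ActiveVLocs below, this forms a two-way mapping kept in
/// sync: as an invariant sketch (not checked by an assertion), if variable
/// V is currently located in LocIdx L, then ActiveVLocs[V].Loc == L and
/// ActiveMLocs[L] contains V. A clobber of L can therefore find every
/// variable that needs a new DBG_VALUE, and a redefinition of V can find the
/// old location entry to remove itself from.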
989 std::map<LocIdx, SmallSet<DebugVariable, 4>> ActiveMLocs; 990 991 /// Map from DebugVariable to it's current location and qualifying meta 992 /// information. To be used in conjunction with ActiveMLocs to construct 993 /// enough information for the DBG_VALUEs for a particular LocIdx. 994 DenseMap<DebugVariable, LocAndProperties> ActiveVLocs; 995 996 /// Temporary cache of DBG_VALUEs to be entered into the Transfers collection. 997 SmallVector<MachineInstr *, 4> PendingDbgValues; 998 999 const TargetRegisterInfo &TRI; 1000 const BitVector &CalleeSavedRegs; 1001 1002 TransferTracker(const TargetInstrInfo *TII, MLocTracker *MTracker, 1003 MachineFunction &MF, const TargetRegisterInfo &TRI, 1004 const BitVector &CalleeSavedRegs) 1005 : TII(TII), MTracker(MTracker), MF(MF), TRI(TRI), 1006 CalleeSavedRegs(CalleeSavedRegs) {} 1007 1008 /// Load object with live-in variable values. \p mlocs contains the live-in 1009 /// values in each machine location, while \p vlocs the live-in variable 1010 /// values. This method picks variable locations for the live-in variables, 1011 /// creates DBG_VALUEs and puts them in #Transfers, then prepares the other 1012 /// object fields to track variable locations as we step through the block. 1013 /// FIXME: could just examine mloctracker instead of passing in \p mlocs? 1014 void loadInlocs(MachineBasicBlock &MBB, ValueIDNum *MLocs, 1015 SmallVectorImpl<std::pair<DebugVariable, DbgValue>> &VLocs, 1016 unsigned NumLocs) { 1017 ActiveMLocs.clear(); 1018 ActiveVLocs.clear(); 1019 VarLocs.clear(); 1020 VarLocs.reserve(NumLocs); 1021 1022 auto isCalleeSaved = [&](LocIdx L) { 1023 unsigned Reg = MTracker->LocIdxToLocID[L]; 1024 if (Reg >= MTracker->NumRegs) 1025 return false; 1026 for (MCRegAliasIterator RAI(Reg, &TRI, true); RAI.isValid(); ++RAI) 1027 if (CalleeSavedRegs.test(*RAI)) 1028 return true; 1029 return false; 1030 }; 1031 1032 // Map of the preferred location for each value. 1033 std::map<ValueIDNum, LocIdx> ValueToLoc; 1034 1035 // Produce a map of value numbers to the current machine locs they live 1036 // in. When emulating VarLocBasedImpl, there should only be one 1037 // location; when not, we get to pick. 1038 for (auto Location : MTracker->locations()) { 1039 LocIdx Idx = Location.Idx; 1040 ValueIDNum &VNum = MLocs[Idx.asU64()]; 1041 VarLocs.push_back(VNum); 1042 auto it = ValueToLoc.find(VNum); 1043 // In order of preference, pick: 1044 // * Callee saved registers, 1045 // * Other registers, 1046 // * Spill slots. 1047 if (it == ValueToLoc.end() || MTracker->isSpill(it->second) || 1048 (!isCalleeSaved(it->second) && isCalleeSaved(Idx.asU64()))) { 1049 // Insert, or overwrite if insertion failed. 1050 auto PrefLocRes = ValueToLoc.insert(std::make_pair(VNum, Idx)); 1051 if (!PrefLocRes.second) 1052 PrefLocRes.first->second = Idx; 1053 } 1054 } 1055 1056 // Now map variables to their picked LocIdxes. 1057 for (auto Var : VLocs) { 1058 if (Var.second.Kind == DbgValue::Const) { 1059 PendingDbgValues.push_back( 1060 emitMOLoc(Var.second.MO, Var.first, Var.second.Properties)); 1061 continue; 1062 } 1063 1064 // If the value has no location, we can't make a variable location. 
1065 auto ValuesPreferredLoc = ValueToLoc.find(Var.second.ID); 1066 if (ValuesPreferredLoc == ValueToLoc.end()) 1067 continue; 1068 1069 LocIdx M = ValuesPreferredLoc->second; 1070 auto NewValue = LocAndProperties{M, Var.second.Properties}; 1071 auto Result = ActiveVLocs.insert(std::make_pair(Var.first, NewValue)); 1072 if (!Result.second) 1073 Result.first->second = NewValue; 1074 ActiveMLocs[M].insert(Var.first); 1075 PendingDbgValues.push_back( 1076 MTracker->emitLoc(M, Var.first, Var.second.Properties)); 1077 } 1078 flushDbgValues(MBB.begin(), &MBB); 1079 } 1080 1081 /// Helper to move created DBG_VALUEs into Transfers collection. 1082 void flushDbgValues(MachineBasicBlock::iterator Pos, MachineBasicBlock *MBB) { 1083 if (PendingDbgValues.size() > 0) { 1084 Transfers.push_back({Pos, MBB, PendingDbgValues}); 1085 PendingDbgValues.clear(); 1086 } 1087 } 1088 1089 /// Handle a DBG_VALUE within a block. Terminate the variables current 1090 /// location, and record the value its DBG_VALUE refers to, so that we can 1091 /// detect location transfers later on. 1092 void redefVar(const MachineInstr &MI) { 1093 DebugVariable Var(MI.getDebugVariable(), MI.getDebugExpression(), 1094 MI.getDebugLoc()->getInlinedAt()); 1095 const MachineOperand &MO = MI.getOperand(0); 1096 1097 // Erase any previous location, 1098 auto It = ActiveVLocs.find(Var); 1099 if (It != ActiveVLocs.end()) { 1100 ActiveMLocs[It->second.Loc].erase(Var); 1101 } 1102 1103 // Insert a new variable location. Ignore non-register locations, we don't 1104 // transfer those, and can't currently describe spill locs independently of 1105 // regs. 1106 // (This is because a spill location is a DBG_VALUE of the stack pointer). 1107 if (!MO.isReg() || MO.getReg() == 0) { 1108 if (It != ActiveVLocs.end()) 1109 ActiveVLocs.erase(It); 1110 return; 1111 } 1112 1113 Register Reg = MO.getReg(); 1114 LocIdx MLoc = MTracker->getRegMLoc(Reg); 1115 DbgValueProperties Properties(MI); 1116 1117 // Check whether our local copy of values-by-location in #VarLocs is out of 1118 // date. Wipe old tracking data for the location if it's been clobbered in 1119 // the meantime. 1120 if (MTracker->getNumAtPos(MLoc) != VarLocs[MLoc.asU64()]) { 1121 for (auto &P : ActiveMLocs[MLoc.asU64()]) { 1122 ActiveVLocs.erase(P); 1123 } 1124 ActiveMLocs[MLoc].clear(); 1125 VarLocs[MLoc.asU64()] = MTracker->getNumAtPos(MLoc); 1126 } 1127 1128 ActiveMLocs[MLoc].insert(Var); 1129 if (It == ActiveVLocs.end()) { 1130 ActiveVLocs.insert(std::make_pair(Var, LocAndProperties{MLoc, Properties})); 1131 } else { 1132 It->second.Loc = MLoc; 1133 It->second.Properties = Properties; 1134 } 1135 } 1136 1137 /// Explicitly terminate variable locations based on \p mloc. Creates undef 1138 /// DBG_VALUEs for any variables that were located there, and clears 1139 /// #ActiveMLoc / #ActiveVLoc tracking information for that location. 1140 void clobberMloc(LocIdx MLoc, MachineBasicBlock::iterator Pos) { 1141 assert(MTracker->isSpill(MLoc)); 1142 auto ActiveMLocIt = ActiveMLocs.find(MLoc); 1143 if (ActiveMLocIt == ActiveMLocs.end()) 1144 return; 1145 1146 VarLocs[MLoc.asU64()] = ValueIDNum::EmptyValue; 1147 1148 for (auto &Var : ActiveMLocIt->second) { 1149 auto ActiveVLocIt = ActiveVLocs.find(Var); 1150 // Create an undef. We can't feed in a nullptr DIExpression alas, 1151 // so use the variables last expression. Pass None as the location. 
1152 const DIExpression *Expr = ActiveVLocIt->second.Properties.DIExpr; 1153 DbgValueProperties Properties(Expr, false); 1154 PendingDbgValues.push_back(MTracker->emitLoc(None, Var, Properties)); 1155 ActiveVLocs.erase(ActiveVLocIt); 1156 } 1157 flushDbgValues(Pos, nullptr); 1158 1159 ActiveMLocIt->second.clear(); 1160 } 1161 1162 /// Transfer variables based on \p Src to be based on \p Dst. This handles 1163 /// both register copies as well as spills and restores. Creates DBG_VALUEs 1164 /// describing the movement. 1165 void transferMlocs(LocIdx Src, LocIdx Dst, MachineBasicBlock::iterator Pos) { 1166 // Does Src still contain the value num we expect? If not, it's been 1167 // clobbered in the meantime, and our variable locations are stale. 1168 if (VarLocs[Src.asU64()] != MTracker->getNumAtPos(Src)) 1169 return; 1170 1171 // assert(ActiveMLocs[Dst].size() == 0); 1172 //^^^ Legitimate scenario on account of un-clobbered slot being assigned to? 1173 ActiveMLocs[Dst] = ActiveMLocs[Src]; 1174 VarLocs[Dst.asU64()] = VarLocs[Src.asU64()]; 1175 1176 // For each variable based on Src; create a location at Dst. 1177 for (auto &Var : ActiveMLocs[Src]) { 1178 auto ActiveVLocIt = ActiveVLocs.find(Var); 1179 assert(ActiveVLocIt != ActiveVLocs.end()); 1180 ActiveVLocIt->second.Loc = Dst; 1181 1182 assert(Dst != 0); 1183 MachineInstr *MI = 1184 MTracker->emitLoc(Dst, Var, ActiveVLocIt->second.Properties); 1185 PendingDbgValues.push_back(MI); 1186 } 1187 ActiveMLocs[Src].clear(); 1188 flushDbgValues(Pos, nullptr); 1189 1190 // XXX XXX XXX "pretend to be old LDV" means dropping all tracking data 1191 // about the old location. 1192 if (EmulateOldLDV) 1193 VarLocs[Src.asU64()] = ValueIDNum::EmptyValue; 1194 } 1195 1196 MachineInstrBuilder emitMOLoc(const MachineOperand &MO, 1197 const DebugVariable &Var, 1198 const DbgValueProperties &Properties) { 1199 DebugLoc DL = 1200 DebugLoc::get(0, 0, Var.getVariable()->getScope(), Var.getInlinedAt()); 1201 auto MIB = BuildMI(MF, DL, TII->get(TargetOpcode::DBG_VALUE)); 1202 MIB.add(MO); 1203 if (Properties.Indirect) 1204 MIB.addImm(0); 1205 else 1206 MIB.addReg(0); 1207 MIB.addMetadata(Var.getVariable()); 1208 MIB.addMetadata(Properties.DIExpr); 1209 return MIB; 1210 } 1211 }; 1212 1213 class InstrRefBasedLDV : public LDVImpl { 1214 private: 1215 using FragmentInfo = DIExpression::FragmentInfo; 1216 using OptFragmentInfo = Optional<DIExpression::FragmentInfo>; 1217 1218 // Helper while building OverlapMap, a map of all fragments seen for a given 1219 // DILocalVariable. 1220 using VarToFragments = 1221 DenseMap<const DILocalVariable *, SmallSet<FragmentInfo, 4>>; 1222 1223 /// Machine location/value transfer function, a mapping of which locations 1224 // are assigned which new values. 1225 typedef std::map<LocIdx, ValueIDNum> MLocTransferMap; 1226 1227 /// Live in/out structure for the variable values: a per-block map of 1228 /// variables to their values. XXX, better name? 1229 typedef DenseMap<const MachineBasicBlock *, 1230 DenseMap<DebugVariable, DbgValue> *> 1231 LiveIdxT; 1232 1233 typedef std::pair<DebugVariable, DbgValue> VarAndLoc; 1234 1235 /// Type for a live-in value: the predecessor block, and its value. 1236 typedef std::pair<MachineBasicBlock *, DbgValue *> InValueT; 1237 1238 /// Vector (per block) of a collection (inner smallvector) of live-ins. 1239 /// Used as the result type for the variable value dataflow problem. 
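/// Indexing sketch (illustrative): Result[block number] holds the
/// (DebugVariable, DbgValue) pairs that are live-in to that block,
/// accumulated by vlocDataflow one lexical scope at a time and later
/// consumed by emitLocations.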
1240 typedef SmallVector<SmallVector<VarAndLoc, 8>, 8> LiveInsT; 1241 1242 const TargetRegisterInfo *TRI; 1243 const TargetInstrInfo *TII; 1244 const TargetFrameLowering *TFI; 1245 BitVector CalleeSavedRegs; 1246 LexicalScopes LS; 1247 TargetPassConfig *TPC; 1248 1249 /// Object to track machine locations as we step through a block. Could 1250 /// probably be a field rather than a pointer, as it's always used. 1251 MLocTracker *MTracker; 1252 1253 /// Number of the current block LiveDebugValues is stepping through. 1254 unsigned CurBB; 1255 1256 /// Number of the current instruction LiveDebugValues is evaluating. 1257 unsigned CurInst; 1258 1259 /// Variable tracker -- listens to DBG_VALUEs occurring as InstrRefBasedImpl 1260 /// steps through a block. Reads the values at each location from the 1261 /// MLocTracker object. 1262 VLocTracker *VTracker; 1263 1264 /// Tracker for transfers, listens to DBG_VALUEs and transfers of values 1265 /// between locations during stepping, creates new DBG_VALUEs when values move 1266 /// location. 1267 TransferTracker *TTracker; 1268 1269 /// Blocks which are artificial, i.e. blocks which exclusively contain 1270 /// instructions without DebugLocs, or with line 0 locations. 1271 SmallPtrSet<const MachineBasicBlock *, 16> ArtificialBlocks; 1272 1273 // Mapping of blocks to and from their RPOT order. 1274 DenseMap<unsigned int, MachineBasicBlock *> OrderToBB; 1275 DenseMap<MachineBasicBlock *, unsigned int> BBToOrder; 1276 DenseMap<unsigned, unsigned> BBNumToRPO; 1277 1278 // Map of overlapping variable fragments. 1279 OverlapMap OverlapFragments; 1280 VarToFragments SeenFragments; 1281 1282 /// Tests whether this instruction is a spill to a stack slot. 1283 bool isSpillInstruction(const MachineInstr &MI, MachineFunction *MF); 1284 1285 /// Decide if @MI is a spill instruction and return true if it is. We use 2 1286 /// criteria to make this decision: 1287 /// - Is this instruction a store to a spill slot? 1288 /// - Is there a register operand that is both used and killed? 1289 /// TODO: Store optimization can fold spills into other stores (including 1290 /// other spills). We do not handle this yet (more than one memory operand). 1291 bool isLocationSpill(const MachineInstr &MI, MachineFunction *MF, 1292 unsigned &Reg); 1293 1294 /// If a given instruction is identified as a spill, return the spill slot 1295 /// and set \p Reg to the spilled register. 1296 Optional<SpillLoc> isRestoreInstruction(const MachineInstr &MI, 1297 MachineFunction *MF, unsigned &Reg); 1298 1299 /// Given a spill instruction, extract the register and offset used to 1300 /// address the spill slot in a target independent way. 1301 SpillLoc extractSpillBaseRegAndOffset(const MachineInstr &MI); 1302 1303 /// Observe a single instruction while stepping through a block. 1304 void process(MachineInstr &MI); 1305 1306 /// Examines whether \p MI is a DBG_VALUE and notifies trackers. 1307 /// \returns true if MI was recognized and processed. 1308 bool transferDebugValue(const MachineInstr &MI); 1309 1310 /// Examines whether \p MI is copy instruction, and notifies trackers. 1311 /// \returns true if MI was recognized and processed. 1312 bool transferRegisterCopy(MachineInstr &MI); 1313 1314 /// Examines whether \p MI is stack spill or restore instruction, and 1315 /// notifies trackers. \returns true if MI was recognized and processed. 1316 bool transferSpillOrRestoreInst(MachineInstr &MI); 1317 1318 /// Examines \p MI for any registers that it defines, and notifies trackers. 
1319 /// \returns true if MI was recognized and processed. 1320 void transferRegisterDef(MachineInstr &MI); 1321 1322 /// Copy one location to the other, accounting for movement of subregisters 1323 /// too. 1324 void performCopy(Register Src, Register Dst); 1325 1326 void accumulateFragmentMap(MachineInstr &MI); 1327 1328 /// Step through the function, recording register definitions and movements 1329 /// in an MLocTracker. Convert the observations into a per-block transfer 1330 /// function in \p MLocTransfer, suitable for using with the machine value 1331 /// location dataflow problem. Do the same with VLoc trackers in \p VLocs, 1332 /// although the precise machine value numbers can't be known until after 1333 /// the machine value number problem is solved. 1334 void produceTransferFunctions(MachineFunction &MF, 1335 SmallVectorImpl<MLocTransferMap> &MLocTransfer, 1336 unsigned MaxNumBlocks, 1337 SmallVectorImpl<VLocTracker> &VLocs); 1338 1339 /// Solve the machine value location dataflow problem. Takes as input the 1340 /// transfer functions in \p MLocTransfer. Writes the output live-in and 1341 /// live-out arrays to the (initialized to zero) multidimensional arrays in 1342 /// \p MInLocs and \p MOutLocs. The outer dimension is indexed by block 1343 /// number, the inner by LocIdx. 1344 void mlocDataflow(ValueIDNum **MInLocs, ValueIDNum **MOutLocs, 1345 SmallVectorImpl<MLocTransferMap> &MLocTransfer); 1346 1347 /// Perform a control flow join (lattice value meet) of the values in machine 1348 /// locations at \p MBB. Follows the algorithm described in the file-comment, 1349 /// reading live-outs of predecessors from \p OutLocs, the current live ins 1350 /// from \p InLocs, and assigning the newly computed live ins back into 1351 /// \p InLocs. \returns two bools -- the first indicates whether a change 1352 /// was made, the second whether a lattice downgrade occurred. If the latter 1353 /// is true, revisiting this block is necessary. 1354 std::tuple<bool, bool> 1355 mlocJoin(MachineBasicBlock &MBB, 1356 SmallPtrSet<const MachineBasicBlock *, 16> &Visited, 1357 ValueIDNum **OutLocs, ValueIDNum *InLocs); 1358 1359 /// Solve the variable value dataflow problem, for a single lexical scope. 1360 /// Uses the algorithm from the file comment to resolve control flow joins, 1361 /// although there are extra hacks, see vlocJoin. Reads the 1362 /// locations of values from the \p MInLocs and \p MOutLocs arrays (see 1363 /// mlocDataflow) and reads the variable values transfer function from 1364 /// \p AllTheVlocs. Live-in and Live-out variable values are stored locally, 1365 /// with the live-ins permanently stored to \p Output once the fixedpoint is 1366 /// reached. 1367 /// \p VarsWeCareAbout contains a collection of the variables in \p Scope 1368 /// that we should be tracking. 1369 /// \p AssignBlocks contains the set of blocks that aren't in \p Scope, but 1370 /// which do contain DBG_VALUEs, which VarLocBasedImpl tracks locations 1371 /// through. 1372 void vlocDataflow(const LexicalScope *Scope, const DILocation *DILoc, 1373 const SmallSet<DebugVariable, 4> &VarsWeCareAbout, 1374 SmallPtrSetImpl<MachineBasicBlock *> &AssignBlocks, 1375 LiveInsT &Output, ValueIDNum **MOutLocs, 1376 ValueIDNum **MInLocs, 1377 SmallVectorImpl<VLocTracker> &AllTheVLocs); 1378 1379 /// Compute the live-ins to a block, considering control flow merges according 1380 /// to the method in the file comment. Live out and live in variable values 1381 /// are stored in \p VLOCOutLocs and \p VLOCInLocs. 
The live-ins for \p MBB
1382 /// are computed and stored into \p VLOCInLocs, overwriting the previous
1383 /// live-ins only if they have changed.
1384 /// \p InLocsT Output argument, storage for calculated live-ins.
1385 /// \returns two bools -- the first indicates whether a change
1386 /// was made, the second whether a lattice downgrade occurred. If the latter
1387 /// is true, revisiting this block is necessary.
1388 std::tuple<bool, bool>
1389 vlocJoin(MachineBasicBlock &MBB, LiveIdxT &VLOCOutLocs, LiveIdxT &VLOCInLocs,
1390 SmallPtrSet<const MachineBasicBlock *, 16> *VLOCVisited,
1391 unsigned BBNum, const SmallSet<DebugVariable, 4> &AllVars,
1392 ValueIDNum **MOutLocs, ValueIDNum **MInLocs,
1393 SmallPtrSet<const MachineBasicBlock *, 8> &InScopeBlocks,
1394 SmallPtrSet<const MachineBasicBlock *, 8> &BlocksToExplore,
1395 DenseMap<DebugVariable, DbgValue> &InLocsT);
1396
1397 /// Continue exploration of the variable-value lattice, as explained in the
1398 /// file-level comment. \p OldLiveInLocation contains the current
1399 /// exploration position, from which we need to descend further. \p Values
1400 /// contains the set of incoming live-out values from the predecessors, and
1401 /// \p CurBlockRPONum the RPO number of the current block.
1402 /// \returns true if the live-in value should be downgraded, i.e. replaced
1403 /// by the value that the non-backedge predecessors agree on, which sits
1404 /// further down the lattice than the old live-in.
1405 bool vlocDowngradeLattice(const MachineBasicBlock &MBB,
1406 const DbgValue &OldLiveInLocation,
1407 const SmallVectorImpl<InValueT> &Values,
1408 unsigned CurBlockRPONum);
1409
1410 /// For the given block and live-outs feeding into it, try to find a
1411 /// machine location where they all join. If a solution for all predecessors
1412 /// can't be found, a location where all non-backedge-predecessors join
1413 /// will be returned instead. While this method finds a join location, this
1414 /// says nothing as to whether it should be used.
1415 /// \returns Pair of value ID if found, and true when the correct value
1416 /// is available on all predecessor edges, or false if it's only available
1417 /// for non-backedge predecessors.
1418 std::tuple<Optional<ValueIDNum>, bool>
1419 pickVPHILoc(MachineBasicBlock &MBB, const DebugVariable &Var,
1420 const LiveIdxT &LiveOuts, ValueIDNum **MOutLocs,
1421 ValueIDNum **MInLocs,
1422 const SmallVectorImpl<MachineBasicBlock *> &BlockOrders);
1423
1424 /// Given the solutions to the two dataflow problems, machine value locations
1425 /// in \p MInLocs and live-in variable values in \p SavedLiveIns, runs the
1426 /// TransferTracker class over the function to produce live-in and transfer
1427 /// DBG_VALUEs, then inserts them. Groups of DBG_VALUEs are inserted in the
1428 /// order given by AllVarsNumbering -- this could be any stable order, but
1429 /// right now "order of appearance in function, when explored in RPO", so
1430 /// that we can compare explicitly against VarLocBasedImpl.
1431 void emitLocations(MachineFunction &MF, LiveInsT SavedLiveIns,
1432 ValueIDNum **MInLocs,
1433 DenseMap<DebugVariable, unsigned> &AllVarsNumbering);
1434
1435 /// Boilerplate computation of some initial sets, artificial blocks and
1436 /// RPOT block ordering.
1437 void initialSetup(MachineFunction &MF);
1438
1439 bool ExtendRanges(MachineFunction &MF, TargetPassConfig *TPC) override;
1440
1441 public:
1442 /// Default construct and initialize the pass.
1443 InstrRefBasedLDV(); 1444 1445 LLVM_DUMP_METHOD 1446 void dump_mloc_transfer(const MLocTransferMap &mloc_transfer) const; 1447 1448 bool isCalleeSaved(LocIdx L) { 1449 unsigned Reg = MTracker->LocIdxToLocID[L]; 1450 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI) 1451 if (CalleeSavedRegs.test(*RAI)) 1452 return true; 1453 return false; 1454 } 1455 }; 1456 1457 } // end anonymous namespace 1458 1459 //===----------------------------------------------------------------------===// 1460 // Implementation 1461 //===----------------------------------------------------------------------===// 1462 1463 ValueIDNum ValueIDNum::EmptyValue = {UINT_MAX, UINT_MAX, UINT_MAX}; 1464 1465 /// Default construct and initialize the pass. 1466 InstrRefBasedLDV::InstrRefBasedLDV() {} 1467 1468 //===----------------------------------------------------------------------===// 1469 // Debug Range Extension Implementation 1470 //===----------------------------------------------------------------------===// 1471 1472 #ifndef NDEBUG 1473 // Something to restore in the future. 1474 // void InstrRefBasedLDV::printVarLocInMBB(..) 1475 #endif 1476 1477 SpillLoc 1478 InstrRefBasedLDV::extractSpillBaseRegAndOffset(const MachineInstr &MI) { 1479 assert(MI.hasOneMemOperand() && 1480 "Spill instruction does not have exactly one memory operand?"); 1481 auto MMOI = MI.memoperands_begin(); 1482 const PseudoSourceValue *PVal = (*MMOI)->getPseudoValue(); 1483 assert(PVal->kind() == PseudoSourceValue::FixedStack && 1484 "Inconsistent memory operand in spill instruction"); 1485 int FI = cast<FixedStackPseudoSourceValue>(PVal)->getFrameIndex(); 1486 const MachineBasicBlock *MBB = MI.getParent(); 1487 Register Reg; 1488 int Offset = TFI->getFrameIndexReference(*MBB->getParent(), FI, Reg); 1489 return {Reg, Offset}; 1490 } 1491 1492 /// End all previous ranges related to @MI and start a new range from @MI 1493 /// if it is a DBG_VALUE instr. 1494 bool InstrRefBasedLDV::transferDebugValue(const MachineInstr &MI) { 1495 if (!MI.isDebugValue()) 1496 return false; 1497 1498 const DILocalVariable *Var = MI.getDebugVariable(); 1499 const DIExpression *Expr = MI.getDebugExpression(); 1500 const DILocation *DebugLoc = MI.getDebugLoc(); 1501 const DILocation *InlinedAt = DebugLoc->getInlinedAt(); 1502 assert(Var->isValidLocationForIntrinsic(DebugLoc) && 1503 "Expected inlined-at fields to agree"); 1504 1505 DebugVariable V(Var, Expr, InlinedAt); 1506 1507 // If there are no instructions in this lexical scope, do no location tracking 1508 // at all, this variable shouldn't get a legitimate location range. 1509 auto *Scope = LS.findLexicalScope(MI.getDebugLoc().get()); 1510 if (Scope == nullptr) 1511 return true; // handled it; by doing nothing 1512 1513 const MachineOperand &MO = MI.getOperand(0); 1514 1515 // MLocTracker needs to know that this register is read, even if it's only 1516 // read by a debug inst. 1517 if (MO.isReg() && MO.getReg() != 0) 1518 (void)MTracker->readReg(MO.getReg()); 1519 1520 // If we're preparing for the second analysis (variables), the machine value 1521 // locations are already solved, and we report this DBG_VALUE and the value 1522 // it refers to to VLocTracker. 1523 if (VTracker) { 1524 if (MO.isReg()) { 1525 // Feed defVar the new variable location, or if this is a 1526 // DBG_VALUE $noreg, feed defVar None. 
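      // (Reading the register via MTracker->readReg also forces it to be
      //  tracked, if it is not already.)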
1527 if (MO.getReg()) 1528 VTracker->defVar(MI, MTracker->readReg(MO.getReg())); 1529 else 1530 VTracker->defVar(MI, None); 1531 } else if (MI.getOperand(0).isImm() || MI.getOperand(0).isFPImm() || 1532 MI.getOperand(0).isCImm()) { 1533 VTracker->defVar(MI, MI.getOperand(0)); 1534 } 1535 } 1536 1537 // If performing final tracking of transfers, report this variable definition 1538 // to the TransferTracker too. 1539 if (TTracker) 1540 TTracker->redefVar(MI); 1541 return true; 1542 } 1543 1544 void InstrRefBasedLDV::transferRegisterDef(MachineInstr &MI) { 1545 // Meta Instructions do not affect the debug liveness of any register they 1546 // define. 1547 if (MI.isImplicitDef()) { 1548 // Except when there's an implicit def, and the location it's defining has 1549 // no value number. The whole point of an implicit def is to announce that 1550 // the register is live, without be specific about it's value. So define 1551 // a value if there isn't one already. 1552 ValueIDNum Num = MTracker->readReg(MI.getOperand(0).getReg()); 1553 // Has a legitimate value -> ignore the implicit def. 1554 if (Num.getLoc() != 0) 1555 return; 1556 // Otherwise, def it here. 1557 } else if (MI.isMetaInstruction()) 1558 return; 1559 1560 MachineFunction *MF = MI.getMF(); 1561 const TargetLowering *TLI = MF->getSubtarget().getTargetLowering(); 1562 unsigned SP = TLI->getStackPointerRegisterToSaveRestore(); 1563 1564 // Find the regs killed by MI, and find regmasks of preserved regs. 1565 // Max out the number of statically allocated elements in `DeadRegs`, as this 1566 // prevents fallback to std::set::count() operations. 1567 SmallSet<uint32_t, 32> DeadRegs; 1568 SmallVector<const uint32_t *, 4> RegMasks; 1569 SmallVector<const MachineOperand *, 4> RegMaskPtrs; 1570 for (const MachineOperand &MO : MI.operands()) { 1571 // Determine whether the operand is a register def. 1572 if (MO.isReg() && MO.isDef() && MO.getReg() && 1573 Register::isPhysicalRegister(MO.getReg()) && 1574 !(MI.isCall() && MO.getReg() == SP)) { 1575 // Remove ranges of all aliased registers. 1576 for (MCRegAliasIterator RAI(MO.getReg(), TRI, true); RAI.isValid(); ++RAI) 1577 // FIXME: Can we break out of this loop early if no insertion occurs? 1578 DeadRegs.insert(*RAI); 1579 } else if (MO.isRegMask()) { 1580 RegMasks.push_back(MO.getRegMask()); 1581 RegMaskPtrs.push_back(&MO); 1582 } 1583 } 1584 1585 // Tell MLocTracker about all definitions, of regmasks and otherwise. 1586 for (uint32_t DeadReg : DeadRegs) 1587 MTracker->defReg(DeadReg, CurBB, CurInst); 1588 1589 for (auto *MO : RegMaskPtrs) 1590 MTracker->writeRegMask(MO, CurBB, CurInst); 1591 } 1592 1593 void InstrRefBasedLDV::performCopy(Register SrcRegNum, Register DstRegNum) { 1594 ValueIDNum SrcValue = MTracker->readReg(SrcRegNum); 1595 1596 MTracker->setReg(DstRegNum, SrcValue); 1597 1598 // In all circumstances, re-def the super registers. It's definitely a new 1599 // value now. This doesn't uniquely identify the composition of subregs, for 1600 // example, two identical values in subregisters composed in different 1601 // places would not get equal value numbers. 1602 for (MCSuperRegIterator SRI(DstRegNum, TRI); SRI.isValid(); ++SRI) 1603 MTracker->defReg(*SRI, CurBB, CurInst); 1604 1605 // If we're emulating VarLocBasedImpl, just define all the subregisters. 1606 // DBG_VALUEs of them will expect to be tracked from the DBG_VALUE, not 1607 // through prior copies. 
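  // (Illustrative example, not from the original comments: for a copy from
  //  $eax to $ebx on x86, this emulation path def's $bx, $bl and $bh below
  //  instead of copying the corresponding subregister values across.)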
1608 if (EmulateOldLDV) { 1609 for (MCSubRegIndexIterator DRI(DstRegNum, TRI); DRI.isValid(); ++DRI) 1610 MTracker->defReg(DRI.getSubReg(), CurBB, CurInst); 1611 return; 1612 } 1613 1614 // Otherwise, actually copy subregisters from one location to another. 1615 // XXX: in addition, any subregisters of DstRegNum that don't line up with 1616 // the source register should be def'd. 1617 for (MCSubRegIndexIterator SRI(SrcRegNum, TRI); SRI.isValid(); ++SRI) { 1618 unsigned SrcSubReg = SRI.getSubReg(); 1619 unsigned SubRegIdx = SRI.getSubRegIndex(); 1620 unsigned DstSubReg = TRI->getSubReg(DstRegNum, SubRegIdx); 1621 if (!DstSubReg) 1622 continue; 1623 1624 // Do copy. There are two matching subregisters, the source value should 1625 // have been def'd when the super-reg was, the latter might not be tracked 1626 // yet. 1627 // This will force SrcSubReg to be tracked, if it isn't yet. 1628 (void)MTracker->readReg(SrcSubReg); 1629 LocIdx SrcL = MTracker->getRegMLoc(SrcSubReg); 1630 assert(SrcL.asU64()); 1631 (void)MTracker->readReg(DstSubReg); 1632 LocIdx DstL = MTracker->getRegMLoc(DstSubReg); 1633 assert(DstL.asU64()); 1634 (void)DstL; 1635 ValueIDNum CpyValue = {SrcValue.getBlock(), SrcValue.getInst(), SrcL}; 1636 1637 MTracker->setReg(DstSubReg, CpyValue); 1638 } 1639 } 1640 1641 bool InstrRefBasedLDV::isSpillInstruction(const MachineInstr &MI, 1642 MachineFunction *MF) { 1643 // TODO: Handle multiple stores folded into one. 1644 if (!MI.hasOneMemOperand()) 1645 return false; 1646 1647 if (!MI.getSpillSize(TII) && !MI.getFoldedSpillSize(TII)) 1648 return false; // This is not a spill instruction, since no valid size was 1649 // returned from either function. 1650 1651 return true; 1652 } 1653 1654 bool InstrRefBasedLDV::isLocationSpill(const MachineInstr &MI, 1655 MachineFunction *MF, unsigned &Reg) { 1656 if (!isSpillInstruction(MI, MF)) 1657 return false; 1658 1659 // XXX FIXME: On x86, isStoreToStackSlotPostFE returns '1' instead of an 1660 // actual register number. 1661 if (ObserveAllStackops) { 1662 int FI; 1663 Reg = TII->isStoreToStackSlotPostFE(MI, FI); 1664 return Reg != 0; 1665 } 1666 1667 auto isKilledReg = [&](const MachineOperand MO, unsigned &Reg) { 1668 if (!MO.isReg() || !MO.isUse()) { 1669 Reg = 0; 1670 return false; 1671 } 1672 Reg = MO.getReg(); 1673 return MO.isKill(); 1674 }; 1675 1676 for (const MachineOperand &MO : MI.operands()) { 1677 // In a spill instruction generated by the InlineSpiller the spilled 1678 // register has its kill flag set. 1679 if (isKilledReg(MO, Reg)) 1680 return true; 1681 if (Reg != 0) { 1682 // Check whether next instruction kills the spilled register. 1683 // FIXME: Current solution does not cover search for killed register in 1684 // bundles and instructions further down the chain. 1685 auto NextI = std::next(MI.getIterator()); 1686 // Skip next instruction that points to basic block end iterator. 1687 if (MI.getParent()->end() == NextI) 1688 continue; 1689 unsigned RegNext; 1690 for (const MachineOperand &MONext : NextI->operands()) { 1691 // Return true if we came across the register from the 1692 // previous spill instruction that is killed in NextI. 1693 if (isKilledReg(MONext, RegNext) && RegNext == Reg) 1694 return true; 1695 } 1696 } 1697 } 1698 // Return false if we didn't find spilled register. 
1699 return false; 1700 } 1701 1702 Optional<SpillLoc> 1703 InstrRefBasedLDV::isRestoreInstruction(const MachineInstr &MI, 1704 MachineFunction *MF, unsigned &Reg) { 1705 if (!MI.hasOneMemOperand()) 1706 return None; 1707 1708 // FIXME: Handle folded restore instructions with more than one memory 1709 // operand. 1710 if (MI.getRestoreSize(TII)) { 1711 Reg = MI.getOperand(0).getReg(); 1712 return extractSpillBaseRegAndOffset(MI); 1713 } 1714 return None; 1715 } 1716 1717 bool InstrRefBasedLDV::transferSpillOrRestoreInst(MachineInstr &MI) { 1718 // XXX -- it's too difficult to implement VarLocBasedImpl's stack location 1719 // limitations under the new model. Therefore, when comparing them, compare 1720 // versions that don't attempt spills or restores at all. 1721 if (EmulateOldLDV) 1722 return false; 1723 1724 MachineFunction *MF = MI.getMF(); 1725 unsigned Reg; 1726 Optional<SpillLoc> Loc; 1727 1728 LLVM_DEBUG(dbgs() << "Examining instruction: "; MI.dump();); 1729 1730 // First, if there are any DBG_VALUEs pointing at a spill slot that is 1731 // written to, terminate that variable location. The value in memory 1732 // will have changed. DbgEntityHistoryCalculator doesn't try to detect this. 1733 if (isSpillInstruction(MI, MF)) { 1734 Loc = extractSpillBaseRegAndOffset(MI); 1735 1736 if (TTracker) { 1737 Optional<LocIdx> MLoc = MTracker->getSpillMLoc(*Loc); 1738 if (MLoc) 1739 TTracker->clobberMloc(*MLoc, MI.getIterator()); 1740 } 1741 } 1742 1743 // Try to recognise spill and restore instructions that may transfer a value. 1744 if (isLocationSpill(MI, MF, Reg)) { 1745 Loc = extractSpillBaseRegAndOffset(MI); 1746 auto ValueID = MTracker->readReg(Reg); 1747 1748 // If the location is empty, produce a phi, signify it's the live-in value. 1749 if (ValueID.getLoc() == 0) 1750 ValueID = {CurBB, 0, MTracker->getRegMLoc(Reg)}; 1751 1752 MTracker->setSpill(*Loc, ValueID); 1753 auto OptSpillLocIdx = MTracker->getSpillMLoc(*Loc); 1754 assert(OptSpillLocIdx && "Spill slot set but has no LocIdx?"); 1755 LocIdx SpillLocIdx = *OptSpillLocIdx; 1756 1757 // Tell TransferTracker about this spill, produce DBG_VALUEs for it. 1758 if (TTracker) 1759 TTracker->transferMlocs(MTracker->getRegMLoc(Reg), SpillLocIdx, 1760 MI.getIterator()); 1761 1762 // VarLocBasedImpl would, at this point, stop tracking the source 1763 // register of the store. 1764 if (EmulateOldLDV) { 1765 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI) 1766 MTracker->defReg(*RAI, CurBB, CurInst); 1767 } 1768 } else { 1769 if (!(Loc = isRestoreInstruction(MI, MF, Reg))) 1770 return false; 1771 1772 // Is there a value to be restored? 1773 auto OptValueID = MTracker->readSpill(*Loc); 1774 if (OptValueID) { 1775 ValueIDNum ValueID = *OptValueID; 1776 LocIdx SpillLocIdx = *MTracker->getSpillMLoc(*Loc); 1777 // XXX -- can we recover sub-registers of this value? Until we can, first 1778 // overwrite all defs of the register being restored to. 1779 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI) 1780 MTracker->defReg(*RAI, CurBB, CurInst); 1781 1782 // Now override the reg we're restoring to. 1783 MTracker->setReg(Reg, ValueID); 1784 1785 // Report this restore to the transfer tracker too. 1786 if (TTracker) 1787 TTracker->transferMlocs(SpillLocIdx, MTracker->getRegMLoc(Reg), 1788 MI.getIterator()); 1789 } else { 1790 // There isn't anything in the location; not clear if this is a code path 1791 // that still runs. Def this register anyway just in case. 
1792 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI) 1793 MTracker->defReg(*RAI, CurBB, CurInst); 1794 1795 // Force the spill slot to be tracked. 1796 LocIdx L = MTracker->getOrTrackSpillLoc(*Loc); 1797 1798 // Set the restored value to be a machine phi number, signifying that it's 1799 // whatever the spills live-in value is in this block. Definitely has 1800 // a LocIdx due to the setSpill above. 1801 ValueIDNum ValueID = {CurBB, 0, L}; 1802 MTracker->setReg(Reg, ValueID); 1803 MTracker->setSpill(*Loc, ValueID); 1804 } 1805 } 1806 return true; 1807 } 1808 1809 bool InstrRefBasedLDV::transferRegisterCopy(MachineInstr &MI) { 1810 auto DestSrc = TII->isCopyInstr(MI); 1811 if (!DestSrc) 1812 return false; 1813 1814 const MachineOperand *DestRegOp = DestSrc->Destination; 1815 const MachineOperand *SrcRegOp = DestSrc->Source; 1816 1817 auto isCalleeSavedReg = [&](unsigned Reg) { 1818 for (MCRegAliasIterator RAI(Reg, TRI, true); RAI.isValid(); ++RAI) 1819 if (CalleeSavedRegs.test(*RAI)) 1820 return true; 1821 return false; 1822 }; 1823 1824 Register SrcReg = SrcRegOp->getReg(); 1825 Register DestReg = DestRegOp->getReg(); 1826 1827 // Ignore identity copies. Yep, these make it as far as LiveDebugValues. 1828 if (SrcReg == DestReg) 1829 return true; 1830 1831 // For emulating VarLocBasedImpl: 1832 // We want to recognize instructions where destination register is callee 1833 // saved register. If register that could be clobbered by the call is 1834 // included, there would be a great chance that it is going to be clobbered 1835 // soon. It is more likely that previous register, which is callee saved, is 1836 // going to stay unclobbered longer, even if it is killed. 1837 // 1838 // For InstrRefBasedImpl, we can track multiple locations per value, so 1839 // ignore this condition. 1840 if (EmulateOldLDV && !isCalleeSavedReg(DestReg)) 1841 return false; 1842 1843 // InstrRefBasedImpl only followed killing copies. 1844 if (EmulateOldLDV && !SrcRegOp->isKill()) 1845 return false; 1846 1847 // Copy MTracker info, including subregs if available. 1848 InstrRefBasedLDV::performCopy(SrcReg, DestReg); 1849 1850 // Only produce a transfer of DBG_VALUE within a block where old LDV 1851 // would have. We might make use of the additional value tracking in some 1852 // other way, later. 1853 if (TTracker && isCalleeSavedReg(DestReg) && SrcRegOp->isKill()) 1854 TTracker->transferMlocs(MTracker->getRegMLoc(SrcReg), 1855 MTracker->getRegMLoc(DestReg), MI.getIterator()); 1856 1857 // VarLocBasedImpl would quit tracking the old location after copying. 1858 if (EmulateOldLDV && SrcReg != DestReg) 1859 MTracker->defReg(SrcReg, CurBB, CurInst); 1860 1861 return true; 1862 } 1863 1864 /// Accumulate a mapping between each DILocalVariable fragment and other 1865 /// fragments of that DILocalVariable which overlap. This reduces work during 1866 /// the data-flow stage from "Find any overlapping fragments" to "Check if the 1867 /// known-to-overlap fragments are present". 1868 /// \param MI A previously unprocessed DEBUG_VALUE instruction to analyze for 1869 /// fragment usage. 1870 void InstrRefBasedLDV::accumulateFragmentMap(MachineInstr &MI) { 1871 DebugVariable MIVar(MI.getDebugVariable(), MI.getDebugExpression(), 1872 MI.getDebugLoc()->getInlinedAt()); 1873 FragmentInfo ThisFragment = MIVar.getFragmentOrDefault(); 1874 1875 // If this is the first sighting of this variable, then we are guaranteed 1876 // there are currently no overlapping fragments either. 
Initialize the set 1877 // of seen fragments, record no overlaps for the current one, and return. 1878 auto SeenIt = SeenFragments.find(MIVar.getVariable()); 1879 if (SeenIt == SeenFragments.end()) { 1880 SmallSet<FragmentInfo, 4> OneFragment; 1881 OneFragment.insert(ThisFragment); 1882 SeenFragments.insert({MIVar.getVariable(), OneFragment}); 1883 1884 OverlapFragments.insert({{MIVar.getVariable(), ThisFragment}, {}}); 1885 return; 1886 } 1887 1888 // If this particular Variable/Fragment pair already exists in the overlap 1889 // map, it has already been accounted for. 1890 auto IsInOLapMap = 1891 OverlapFragments.insert({{MIVar.getVariable(), ThisFragment}, {}}); 1892 if (!IsInOLapMap.second) 1893 return; 1894 1895 auto &ThisFragmentsOverlaps = IsInOLapMap.first->second; 1896 auto &AllSeenFragments = SeenIt->second; 1897 1898 // Otherwise, examine all other seen fragments for this variable, with "this" 1899 // fragment being a previously unseen fragment. Record any pair of 1900 // overlapping fragments. 1901 for (auto &ASeenFragment : AllSeenFragments) { 1902 // Does this previously seen fragment overlap? 1903 if (DIExpression::fragmentsOverlap(ThisFragment, ASeenFragment)) { 1904 // Yes: Mark the current fragment as being overlapped. 1905 ThisFragmentsOverlaps.push_back(ASeenFragment); 1906 // Mark the previously seen fragment as being overlapped by the current 1907 // one. 1908 auto ASeenFragmentsOverlaps = 1909 OverlapFragments.find({MIVar.getVariable(), ASeenFragment}); 1910 assert(ASeenFragmentsOverlaps != OverlapFragments.end() && 1911 "Previously seen var fragment has no vector of overlaps"); 1912 ASeenFragmentsOverlaps->second.push_back(ThisFragment); 1913 } 1914 } 1915 1916 AllSeenFragments.insert(ThisFragment); 1917 } 1918 1919 void InstrRefBasedLDV::process(MachineInstr &MI) { 1920 // Try to interpret an MI as a debug or transfer instruction. Only if it's 1921 // none of these should we interpret it's register defs as new value 1922 // definitions. 1923 if (transferDebugValue(MI)) 1924 return; 1925 if (transferRegisterCopy(MI)) 1926 return; 1927 if (transferSpillOrRestoreInst(MI)) 1928 return; 1929 transferRegisterDef(MI); 1930 } 1931 1932 void InstrRefBasedLDV::produceTransferFunctions( 1933 MachineFunction &MF, SmallVectorImpl<MLocTransferMap> &MLocTransfer, 1934 unsigned MaxNumBlocks, SmallVectorImpl<VLocTracker> &VLocs) { 1935 // Because we try to optimize around register mask operands by ignoring regs 1936 // that aren't currently tracked, we set up something ugly for later: RegMask 1937 // operands that are seen earlier than the first use of a register, still need 1938 // to clobber that register in the transfer function. But this information 1939 // isn't actively recorded. Instead, we track each RegMask used in each block, 1940 // and accumulated the clobbered but untracked registers in each block into 1941 // the following bitvector. Later, if new values are tracked, we can add 1942 // appropriate clobbers. 1943 SmallVector<BitVector, 32> BlockMasks; 1944 BlockMasks.resize(MaxNumBlocks); 1945 1946 // Reserve one bit per register for the masks described above. 1947 unsigned BVWords = MachineOperand::getRegMaskSize(TRI->getNumRegs()); 1948 for (auto &BV : BlockMasks) 1949 BV.resize(TRI->getNumRegs(), true); 1950 1951 // Step through all instructions and inhale the transfer function. 1952 for (auto &MBB : MF) { 1953 // Object fields that are read by trackers to know where we are in the 1954 // function. 
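    // (Hypothetical illustration: if a block's only definition is
    //  "$rax = MOV64ri 0", then only the locations touched by that def --
    //  $rax and any tracked aliases -- appear in the transfer map built
    //  below; every other location still holds the block's live-in PHI value
    //  and is left out as live-through.)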
1955 CurBB = MBB.getNumber(); 1956 CurInst = 1; 1957 1958 // Set all machine locations to a PHI value. For transfer function 1959 // production only, this signifies the live-in value to the block. 1960 MTracker->reset(); 1961 MTracker->setMPhis(CurBB); 1962 1963 VTracker = &VLocs[CurBB]; 1964 VTracker->MBB = &MBB; 1965 1966 // Step through each instruction in this block. 1967 for (auto &MI : MBB) { 1968 process(MI); 1969 // Also accumulate fragment map. 1970 if (MI.isDebugValue()) 1971 accumulateFragmentMap(MI); 1972 ++CurInst; 1973 } 1974 1975 // Produce the transfer function, a map of machine location to new value. If 1976 // any machine location has the live-in phi value from the start of the 1977 // block, it's live-through and doesn't need recording in the transfer 1978 // function. 1979 for (auto Location : MTracker->locations()) { 1980 LocIdx Idx = Location.Idx; 1981 ValueIDNum &P = Location.Value; 1982 if (P.isPHI() && P.getLoc() == Idx.asU64()) 1983 continue; 1984 1985 // Insert-or-update. 1986 auto &TransferMap = MLocTransfer[CurBB]; 1987 auto Result = TransferMap.insert(std::make_pair(Idx.asU64(), P)); 1988 if (!Result.second) 1989 Result.first->second = P; 1990 } 1991 1992 // Accumulate any bitmask operands into the clobberred reg mask for this 1993 // block. 1994 for (auto &P : MTracker->Masks) { 1995 BlockMasks[CurBB].clearBitsNotInMask(P.first->getRegMask(), BVWords); 1996 } 1997 } 1998 1999 // Compute a bitvector of all the registers that are tracked in this block. 2000 const TargetLowering *TLI = MF.getSubtarget().getTargetLowering(); 2001 unsigned SP = TLI->getStackPointerRegisterToSaveRestore(); 2002 BitVector UsedRegs(TRI->getNumRegs()); 2003 for (auto Location : MTracker->locations()) { 2004 unsigned ID = MTracker->LocIdxToLocID[Location.Idx]; 2005 if (ID >= TRI->getNumRegs() || ID == SP) 2006 continue; 2007 UsedRegs.set(ID); 2008 } 2009 2010 // Check that any regmask-clobber of a register that gets tracked, is not 2011 // live-through in the transfer function. It needs to be clobbered at the 2012 // very least. 2013 for (unsigned int I = 0; I < MaxNumBlocks; ++I) { 2014 BitVector &BV = BlockMasks[I]; 2015 BV.flip(); 2016 BV &= UsedRegs; 2017 // This produces all the bits that we clobber, but also use. Check that 2018 // they're all clobbered or at least set in the designated transfer 2019 // elem. 2020 for (unsigned Bit : BV.set_bits()) { 2021 unsigned ID = MTracker->getLocID(Bit, false); 2022 LocIdx Idx = MTracker->LocIDToLocIdx[ID]; 2023 auto &TransferMap = MLocTransfer[I]; 2024 2025 // Install a value representing the fact that this location is effectively 2026 // written to in this block. As there's no reserved value, instead use 2027 // a value number that is never generated. Pick the value number for the 2028 // first instruction in the block, def'ing this location, which we know 2029 // this block never used anyway. 2030 ValueIDNum NotGeneratedNum = ValueIDNum(I, 1, Idx); 2031 auto Result = 2032 TransferMap.insert(std::make_pair(Idx.asU64(), NotGeneratedNum)); 2033 if (!Result.second) { 2034 ValueIDNum &ValueID = Result.first->second; 2035 if (ValueID.getBlock() == I && ValueID.isPHI()) 2036 // It was left as live-through. Set it to clobbered. 
2037 ValueID = NotGeneratedNum; 2038 } 2039 } 2040 } 2041 } 2042 2043 std::tuple<bool, bool> 2044 InstrRefBasedLDV::mlocJoin(MachineBasicBlock &MBB, 2045 SmallPtrSet<const MachineBasicBlock *, 16> &Visited, 2046 ValueIDNum **OutLocs, ValueIDNum *InLocs) { 2047 LLVM_DEBUG(dbgs() << "join MBB: " << MBB.getNumber() << "\n"); 2048 bool Changed = false; 2049 bool DowngradeOccurred = false; 2050 2051 // Collect predecessors that have been visited. Anything that hasn't been 2052 // visited yet is a backedge on the first iteration, and the meet of it's 2053 // lattice value for all locations will be unaffected. 2054 SmallVector<const MachineBasicBlock *, 8> BlockOrders; 2055 for (auto Pred : MBB.predecessors()) { 2056 if (Visited.count(Pred)) { 2057 BlockOrders.push_back(Pred); 2058 } 2059 } 2060 2061 // Visit predecessors in RPOT order. 2062 auto Cmp = [&](const MachineBasicBlock *A, const MachineBasicBlock *B) { 2063 return BBToOrder.find(A)->second < BBToOrder.find(B)->second; 2064 }; 2065 llvm::sort(BlockOrders.begin(), BlockOrders.end(), Cmp); 2066 2067 // Skip entry block. 2068 if (BlockOrders.size() == 0) 2069 return std::tuple<bool, bool>(false, false); 2070 2071 // Step through all machine locations, then look at each predecessor and 2072 // detect disagreements. 2073 unsigned ThisBlockRPO = BBToOrder.find(&MBB)->second; 2074 for (auto Location : MTracker->locations()) { 2075 LocIdx Idx = Location.Idx; 2076 // Pick out the first predecessors live-out value for this location. It's 2077 // guaranteed to be not a backedge, as we order by RPO. 2078 ValueIDNum BaseVal = OutLocs[BlockOrders[0]->getNumber()][Idx.asU64()]; 2079 2080 // Some flags for whether there's a disagreement, and whether it's a 2081 // disagreement with a backedge or not. 2082 bool Disagree = false; 2083 bool NonBackEdgeDisagree = false; 2084 2085 // Loop around everything that wasn't 'base'. 2086 for (unsigned int I = 1; I < BlockOrders.size(); ++I) { 2087 auto *MBB = BlockOrders[I]; 2088 if (BaseVal != OutLocs[MBB->getNumber()][Idx.asU64()]) { 2089 // Live-out of a predecessor disagrees with the first predecessor. 2090 Disagree = true; 2091 2092 // Test whether it's a disagreemnt in the backedges or not. 2093 if (BBToOrder.find(MBB)->second < ThisBlockRPO) // might be self b/e 2094 NonBackEdgeDisagree = true; 2095 } 2096 } 2097 2098 bool OverRide = false; 2099 if (Disagree && !NonBackEdgeDisagree) { 2100 // Only the backedges disagree. Consider demoting the livein 2101 // lattice value, as per the file level comment. The value we consider 2102 // demoting to is the value that the non-backedge predecessors agree on. 2103 // The order of values is that non-PHIs are \top, a PHI at this block 2104 // \bot, and phis between the two are ordered by their RPO number. 2105 // If there's no agreement, or we've already demoted to this PHI value 2106 // before, replace with a PHI value at this block. 2107 2108 // Calculate order numbers: zero means normal def, nonzero means RPO 2109 // number. 2110 unsigned BaseBlockRPONum = BBNumToRPO[BaseVal.getBlock()] + 1; 2111 if (!BaseVal.isPHI()) 2112 BaseBlockRPONum = 0; 2113 2114 ValueIDNum &InLocID = InLocs[Idx.asU64()]; 2115 unsigned InLocRPONum = BBNumToRPO[InLocID.getBlock()] + 1; 2116 if (!InLocID.isPHI()) 2117 InLocRPONum = 0; 2118 2119 // Should we ignore the disagreeing backedges, and override with the 2120 // value the other predecessors agree on (in "base")? 
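      // (Hypothetical numbers for illustration: if the current live-in is a
      //  plain def (order number 0), "base" is a PHI in a block with RPO
      //  number 3 (order number 4), and this block has RPO number 7 (order
      //  number 8), then 4 > 0 and 4 < 8 both hold, so the live-in is
      //  downgraded to that PHI.)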
2121 unsigned ThisBlockRPONum = BBNumToRPO[MBB.getNumber()] + 1; 2122 if (BaseBlockRPONum > InLocRPONum && BaseBlockRPONum < ThisBlockRPONum) { 2123 // Override. 2124 OverRide = true; 2125 DowngradeOccurred = true; 2126 } 2127 } 2128 // else: if we disagree in the non-backedges, then this is definitely 2129 // a control flow merge where different values merge. Make it a PHI. 2130 2131 // Generate a phi... 2132 ValueIDNum PHI = {(uint64_t)MBB.getNumber(), 0, Idx}; 2133 ValueIDNum NewVal = (Disagree && !OverRide) ? PHI : BaseVal; 2134 if (InLocs[Idx.asU64()] != NewVal) { 2135 Changed |= true; 2136 InLocs[Idx.asU64()] = NewVal; 2137 } 2138 } 2139 2140 // Uhhhhhh, reimplement NumInserted and NumRemoved pls. 2141 return std::tuple<bool, bool>(Changed, DowngradeOccurred); 2142 } 2143 2144 void InstrRefBasedLDV::mlocDataflow( 2145 ValueIDNum **MInLocs, ValueIDNum **MOutLocs, 2146 SmallVectorImpl<MLocTransferMap> &MLocTransfer) { 2147 std::priority_queue<unsigned int, std::vector<unsigned int>, 2148 std::greater<unsigned int>> 2149 Worklist, Pending; 2150 2151 // We track what is on the current and pending worklist to avoid inserting 2152 // the same thing twice. We could avoid this with a custom priority queue, 2153 // but this is probably not worth it. 2154 SmallPtrSet<MachineBasicBlock *, 16> OnPending, OnWorklist; 2155 2156 // Initialize worklist with every block to be visited. 2157 for (unsigned int I = 0; I < BBToOrder.size(); ++I) { 2158 Worklist.push(I); 2159 OnWorklist.insert(OrderToBB[I]); 2160 } 2161 2162 MTracker->reset(); 2163 2164 // Set inlocs for entry block -- each as a PHI at the entry block. Represents 2165 // the incoming value to the function. 2166 MTracker->setMPhis(0); 2167 for (auto Location : MTracker->locations()) 2168 MInLocs[0][Location.Idx.asU64()] = Location.Value; 2169 2170 SmallPtrSet<const MachineBasicBlock *, 16> Visited; 2171 while (!Worklist.empty() || !Pending.empty()) { 2172 // Vector for storing the evaluated block transfer function. 2173 SmallVector<std::pair<LocIdx, ValueIDNum>, 32> ToRemap; 2174 2175 while (!Worklist.empty()) { 2176 MachineBasicBlock *MBB = OrderToBB[Worklist.top()]; 2177 CurBB = MBB->getNumber(); 2178 Worklist.pop(); 2179 2180 // Join the values in all predecessor blocks. 2181 bool InLocsChanged, DowngradeOccurred; 2182 std::tie(InLocsChanged, DowngradeOccurred) = 2183 mlocJoin(*MBB, Visited, MOutLocs, MInLocs[CurBB]); 2184 InLocsChanged |= Visited.insert(MBB).second; 2185 2186 // If a downgrade occurred, book us in for re-examination on the next 2187 // iteration. 2188 if (DowngradeOccurred && OnPending.insert(MBB).second) 2189 Pending.push(BBToOrder[MBB]); 2190 2191 // Don't examine transfer function if we've visited this loc at least 2192 // once, and inlocs haven't changed. 2193 if (!InLocsChanged) 2194 continue; 2195 2196 // Load the current set of live-ins into MLocTracker. 2197 MTracker->loadFromArray(MInLocs[CurBB], CurBB); 2198 2199 // Each element of the transfer function can be a new def, or a read of 2200 // a live-in value. Evaluate each element, and store to "ToRemap". 2201 ToRemap.clear(); 2202 for (auto &P : MLocTransfer[CurBB]) { 2203 if (P.second.getBlock() == CurBB && P.second.isPHI()) { 2204 // This is a movement of whatever was live in. Read it. 2205 ValueIDNum NewID = MTracker->getNumAtPos(P.second.getLoc()); 2206 ToRemap.push_back(std::make_pair(P.first, NewID)); 2207 } else { 2208 // It's a def. Just set it. 
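        // (Any def recorded in the transfer function was created while
        //  stepping through this block, so its block number must be CurBB.)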
2209 assert(P.second.getBlock() == CurBB); 2210 ToRemap.push_back(std::make_pair(P.first, P.second)); 2211 } 2212 } 2213 2214 // Commit the transfer function changes into mloc tracker, which 2215 // transforms the contents of the MLocTracker into the live-outs. 2216 for (auto &P : ToRemap) 2217 MTracker->setMLoc(P.first, P.second); 2218 2219 // Now copy out-locs from mloc tracker into out-loc vector, checking 2220 // whether changes have occurred. These changes can have come from both 2221 // the transfer function, and mlocJoin. 2222 bool OLChanged = false; 2223 for (auto Location : MTracker->locations()) { 2224 OLChanged |= MOutLocs[CurBB][Location.Idx.asU64()] != Location.Value; 2225 MOutLocs[CurBB][Location.Idx.asU64()] = Location.Value; 2226 } 2227 2228 MTracker->reset(); 2229 2230 // No need to examine successors again if out-locs didn't change. 2231 if (!OLChanged) 2232 continue; 2233 2234 // All successors should be visited: put any back-edges on the pending 2235 // list for the next dataflow iteration, and any other successors to be 2236 // visited this iteration, if they're not going to be already. 2237 for (auto s : MBB->successors()) { 2238 // Does branching to this successor represent a back-edge? 2239 if (BBToOrder[s] > BBToOrder[MBB]) { 2240 // No: visit it during this dataflow iteration. 2241 if (OnWorklist.insert(s).second) 2242 Worklist.push(BBToOrder[s]); 2243 } else { 2244 // Yes: visit it on the next iteration. 2245 if (OnPending.insert(s).second) 2246 Pending.push(BBToOrder[s]); 2247 } 2248 } 2249 } 2250 2251 Worklist.swap(Pending); 2252 std::swap(OnPending, OnWorklist); 2253 OnPending.clear(); 2254 // At this point, pending must be empty, since it was just the empty 2255 // worklist 2256 assert(Pending.empty() && "Pending should be empty"); 2257 } 2258 2259 // Once all the live-ins don't change on mlocJoin(), we've reached a 2260 // fixedpoint. 2261 } 2262 2263 bool InstrRefBasedLDV::vlocDowngradeLattice( 2264 const MachineBasicBlock &MBB, const DbgValue &OldLiveInLocation, 2265 const SmallVectorImpl<InValueT> &Values, unsigned CurBlockRPONum) { 2266 // Ranking value preference: see file level comment, the highest rank is 2267 // a plain def, followed by PHI values in reverse post-order. Numerically, 2268 // we assign all defs the rank '0', all PHIs their blocks RPO number plus 2269 // one, and consider the lowest value the highest ranked. 2270 int OldLiveInRank = BBNumToRPO[OldLiveInLocation.ID.getBlock()] + 1; 2271 if (!OldLiveInLocation.ID.isPHI()) 2272 OldLiveInRank = 0; 2273 2274 // Allow any unresolvable conflict to be over-ridden. 2275 if (OldLiveInLocation.Kind == DbgValue::NoVal) { 2276 // Although if it was an unresolvable conflict from _this_ block, then 2277 // all other seeking of downgrades and PHIs must have failed before hand. 2278 if (OldLiveInLocation.BlockNo == (unsigned)MBB.getNumber()) 2279 return false; 2280 OldLiveInRank = INT_MIN; 2281 } 2282 2283 auto &InValue = *Values[0].second; 2284 2285 if (InValue.Kind == DbgValue::Const || InValue.Kind == DbgValue::NoVal) 2286 return false; 2287 2288 unsigned ThisRPO = BBNumToRPO[InValue.ID.getBlock()]; 2289 int ThisRank = ThisRPO + 1; 2290 if (!InValue.ID.isPHI()) 2291 ThisRank = 0; 2292 2293 // Too far down the lattice? 2294 if (ThisRPO >= CurBlockRPONum) 2295 return false; 2296 2297 // Higher in the lattice than what we've already explored? 
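  // (An equal rank is rejected too: re-selecting the same lattice point
  //  would make no progress.)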
2298 if (ThisRank <= OldLiveInRank) 2299 return false; 2300 2301 return true; 2302 } 2303 2304 std::tuple<Optional<ValueIDNum>, bool> InstrRefBasedLDV::pickVPHILoc( 2305 MachineBasicBlock &MBB, const DebugVariable &Var, const LiveIdxT &LiveOuts, 2306 ValueIDNum **MOutLocs, ValueIDNum **MInLocs, 2307 const SmallVectorImpl<MachineBasicBlock *> &BlockOrders) { 2308 // Collect a set of locations from predecessor where its live-out value can 2309 // be found. 2310 SmallVector<SmallVector<LocIdx, 4>, 8> Locs; 2311 unsigned NumLocs = MTracker->getNumLocs(); 2312 unsigned BackEdgesStart = 0; 2313 2314 for (auto p : BlockOrders) { 2315 // Pick out where backedges start in the list of predecessors. Relies on 2316 // BlockOrders being sorted by RPO. 2317 if (BBToOrder[p] < BBToOrder[&MBB]) 2318 ++BackEdgesStart; 2319 2320 // For each predecessor, create a new set of locations. 2321 Locs.resize(Locs.size() + 1); 2322 unsigned ThisBBNum = p->getNumber(); 2323 auto LiveOutMap = LiveOuts.find(p); 2324 if (LiveOutMap == LiveOuts.end()) 2325 // This predecessor isn't in scope, it must have no live-in/live-out 2326 // locations. 2327 continue; 2328 2329 auto It = LiveOutMap->second->find(Var); 2330 if (It == LiveOutMap->second->end()) 2331 // There's no value recorded for this variable in this predecessor, 2332 // leave an empty set of locations. 2333 continue; 2334 2335 const DbgValue &OutVal = It->second; 2336 2337 if (OutVal.Kind == DbgValue::Const || OutVal.Kind == DbgValue::NoVal) 2338 // Consts and no-values cannot have locations we can join on. 2339 continue; 2340 2341 assert(OutVal.Kind == DbgValue::Proposed || OutVal.Kind == DbgValue::Def); 2342 ValueIDNum ValToLookFor = OutVal.ID; 2343 2344 // Search the live-outs of the predecessor for the specified value. 2345 for (unsigned int I = 0; I < NumLocs; ++I) { 2346 if (MOutLocs[ThisBBNum][I] == ValToLookFor) 2347 Locs.back().push_back(LocIdx(I)); 2348 } 2349 } 2350 2351 // If there were no locations at all, return an empty result. 2352 if (Locs.empty()) 2353 return {None, false}; 2354 2355 // Lambda for seeking a common location within a range of location-sets. 2356 typedef SmallVector<SmallVector<LocIdx, 4>, 8>::iterator LocsIt; 2357 auto SeekLocation = 2358 [&Locs](llvm::iterator_range<LocsIt> SearchRange) -> Optional<LocIdx> { 2359 // Starting with the first set of locations, take the intersection with 2360 // subsequent sets. 2361 SmallVector<LocIdx, 4> base = Locs[0]; 2362 for (auto &S : SearchRange) { 2363 SmallVector<LocIdx, 4> new_base; 2364 std::set_intersection(base.begin(), base.end(), S.begin(), S.end(), 2365 std::inserter(new_base, new_base.begin())); 2366 base = new_base; 2367 } 2368 if (base.empty()) 2369 return None; 2370 2371 // We now have a set of LocIdxes that contain the right output value in 2372 // each of the predecessors. Pick the lowest; if there's a register loc, 2373 // that'll be it. 2374 return *base.begin(); 2375 }; 2376 2377 // Search for a common location for all predecessors. If we can't, then fall 2378 // back to only finding a common location between non-backedge predecessors. 2379 bool ValidForAllLocs = true; 2380 auto TheLoc = SeekLocation(Locs); 2381 if (!TheLoc) { 2382 ValidForAllLocs = false; 2383 TheLoc = 2384 SeekLocation(make_range(Locs.begin(), Locs.begin() + BackEdgesStart)); 2385 } 2386 2387 if (!TheLoc) 2388 return {None, false}; 2389 2390 // Return a PHI-value-number for the found location. 
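  // (An instruction number of zero in the ValueIDNum marks it as a block
  //  live-in / PHI value.)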
2391 LocIdx L = *TheLoc; 2392 ValueIDNum PHIVal = {(unsigned)MBB.getNumber(), 0, L}; 2393 return {PHIVal, ValidForAllLocs}; 2394 } 2395 2396 std::tuple<bool, bool> InstrRefBasedLDV::vlocJoin( 2397 MachineBasicBlock &MBB, LiveIdxT &VLOCOutLocs, LiveIdxT &VLOCInLocs, 2398 SmallPtrSet<const MachineBasicBlock *, 16> *VLOCVisited, unsigned BBNum, 2399 const SmallSet<DebugVariable, 4> &AllVars, ValueIDNum **MOutLocs, 2400 ValueIDNum **MInLocs, 2401 SmallPtrSet<const MachineBasicBlock *, 8> &InScopeBlocks, 2402 SmallPtrSet<const MachineBasicBlock *, 8> &BlocksToExplore, 2403 DenseMap<DebugVariable, DbgValue> &InLocsT) { 2404 bool DowngradeOccurred = false; 2405 2406 // To emulate VarLocBasedImpl, process this block if it's not in scope but 2407 // _does_ assign a variable value. No live-ins for this scope are transferred 2408 // in though, so we can return immediately. 2409 if (InScopeBlocks.count(&MBB) == 0 && !ArtificialBlocks.count(&MBB)) { 2410 if (VLOCVisited) 2411 return std::tuple<bool, bool>(true, false); 2412 return std::tuple<bool, bool>(false, false); 2413 } 2414 2415 LLVM_DEBUG(dbgs() << "join MBB: " << MBB.getNumber() << "\n"); 2416 bool Changed = false; 2417 2418 // Find any live-ins computed in a prior iteration. 2419 auto ILSIt = VLOCInLocs.find(&MBB); 2420 assert(ILSIt != VLOCInLocs.end()); 2421 auto &ILS = *ILSIt->second; 2422 2423 // Order predecessors by RPOT order, for exploring them in that order. 2424 SmallVector<MachineBasicBlock *, 8> BlockOrders; 2425 for (auto p : MBB.predecessors()) 2426 BlockOrders.push_back(p); 2427 2428 auto Cmp = [&](MachineBasicBlock *A, MachineBasicBlock *B) { 2429 return BBToOrder[A] < BBToOrder[B]; 2430 }; 2431 2432 llvm::sort(BlockOrders.begin(), BlockOrders.end(), Cmp); 2433 2434 unsigned CurBlockRPONum = BBToOrder[&MBB]; 2435 2436 // Force a re-visit to loop heads in the first dataflow iteration. 2437 // FIXME: if we could "propose" Const values this wouldn't be needed, 2438 // because they'd need to be confirmed before being emitted. 2439 if (!BlockOrders.empty() && 2440 BBToOrder[BlockOrders[BlockOrders.size() - 1]] >= CurBlockRPONum && 2441 VLOCVisited) 2442 DowngradeOccurred = true; 2443 2444 auto ConfirmValue = [&InLocsT](const DebugVariable &DV, DbgValue VR) { 2445 auto Result = InLocsT.insert(std::make_pair(DV, VR)); 2446 (void)Result; 2447 assert(Result.second); 2448 }; 2449 2450 auto ConfirmNoVal = [&ConfirmValue, &MBB](const DebugVariable &Var, const DbgValueProperties &Properties) { 2451 DbgValue NoLocPHIVal(MBB.getNumber(), Properties, DbgValue::NoVal); 2452 2453 ConfirmValue(Var, NoLocPHIVal); 2454 }; 2455 2456 // Attempt to join the values for each variable. 2457 for (auto &Var : AllVars) { 2458 // Collect all the DbgValues for this variable. 2459 SmallVector<InValueT, 8> Values; 2460 bool Bail = false; 2461 unsigned BackEdgesStart = 0; 2462 for (auto p : BlockOrders) { 2463 // If the predecessor isn't in scope / to be explored, we'll never be 2464 // able to join any locations. 2465 if (BlocksToExplore.find(p) == BlocksToExplore.end()) { 2466 Bail = true; 2467 break; 2468 } 2469 2470 // Don't attempt to handle unvisited predecessors: they're implicitly 2471 // "unknown"s in the lattice. 2472 if (VLOCVisited && !VLOCVisited->count(p)) 2473 continue; 2474 2475 // If the predecessors OutLocs is absent, there's not much we can do. 
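      // (Bail out: without a live-out record for every predecessor, no
      //  joined live-in value can be formed for this variable.)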
2476 auto OL = VLOCOutLocs.find(p); 2477 if (OL == VLOCOutLocs.end()) { 2478 Bail = true; 2479 break; 2480 } 2481 2482 // No live-out value for this predecessor also means we can't produce 2483 // a joined value. 2484 auto VIt = OL->second->find(Var); 2485 if (VIt == OL->second->end()) { 2486 Bail = true; 2487 break; 2488 } 2489 2490 // Keep track of where back-edges begin in the Values vector. Relies on 2491 // BlockOrders being sorted by RPO. 2492 unsigned ThisBBRPONum = BBToOrder[p]; 2493 if (ThisBBRPONum < CurBlockRPONum) 2494 ++BackEdgesStart; 2495 2496 Values.push_back(std::make_pair(p, &VIt->second)); 2497 } 2498 2499 // If there were no values, or one of the predecessors couldn't have a 2500 // value, then give up immediately. It's not safe to produce a live-in 2501 // value. 2502 if (Bail || Values.size() == 0) 2503 continue; 2504 2505 // Enumeration identifying the current state of the predecessors values. 2506 enum { 2507 Unset = 0, 2508 Agreed, // All preds agree on the variable value. 2509 PropDisagree, // All preds agree, but the value kind is Proposed in some. 2510 BEDisagree, // Only back-edges disagree on variable value. 2511 PHINeeded, // Non-back-edge predecessors have conflicing values. 2512 NoSolution // Conflicting Value metadata makes solution impossible. 2513 } OurState = Unset; 2514 2515 // All (non-entry) blocks have at least one non-backedge predecessor. 2516 // Pick the variable value from the first of these, to compare against 2517 // all others. 2518 const DbgValue &FirstVal = *Values[0].second; 2519 const ValueIDNum &FirstID = FirstVal.ID; 2520 2521 // Scan for variable values that can't be resolved: if they have different 2522 // DIExpressions, different indirectness, or are mixed constants / 2523 // non-constants. 2524 for (auto &V : Values) { 2525 if (V.second->Properties != FirstVal.Properties) 2526 OurState = NoSolution; 2527 if (V.second->Kind == DbgValue::Const && FirstVal.Kind != DbgValue::Const) 2528 OurState = NoSolution; 2529 } 2530 2531 // Flags diagnosing _how_ the values disagree. 2532 bool NonBackEdgeDisagree = false; 2533 bool DisagreeOnPHINess = false; 2534 bool IDDisagree = false; 2535 bool Disagree = false; 2536 if (OurState == Unset) { 2537 for (auto &V : Values) { 2538 if (*V.second == FirstVal) 2539 continue; // No disagreement. 2540 2541 Disagree = true; 2542 2543 // Flag whether the value number actually diagrees. 2544 if (V.second->ID != FirstID) 2545 IDDisagree = true; 2546 2547 // Distinguish whether disagreement happens in backedges or not. 2548 // Relies on Values (and BlockOrders) being sorted by RPO. 2549 unsigned ThisBBRPONum = BBToOrder[V.first]; 2550 if (ThisBBRPONum < CurBlockRPONum) 2551 NonBackEdgeDisagree = true; 2552 2553 // Is there a difference in whether the value is definite or only 2554 // proposed? 2555 if (V.second->Kind != FirstVal.Kind && 2556 (V.second->Kind == DbgValue::Proposed || 2557 V.second->Kind == DbgValue::Def) && 2558 (FirstVal.Kind == DbgValue::Proposed || 2559 FirstVal.Kind == DbgValue::Def)) 2560 DisagreeOnPHINess = true; 2561 } 2562 2563 // Collect those flags together and determine an overall state for 2564 // what extend the predecessors agree on a live-in value. 
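      // (The checks below run from strongest to weakest agreement: full
      //  agreement, disagreement only on Def-versus-Proposed kind,
      //  disagreement only on backedges, then a genuine conflict that needs
      //  a PHI.)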
2565 if (!Disagree) 2566 OurState = Agreed; 2567 else if (!IDDisagree && DisagreeOnPHINess) 2568 OurState = PropDisagree; 2569 else if (!NonBackEdgeDisagree) 2570 OurState = BEDisagree; 2571 else 2572 OurState = PHINeeded; 2573 } 2574 2575 // An extra indicator: if we only disagree on whether the value is a 2576 // Def, or proposed, then also flag whether that disagreement happens 2577 // in backedges only. 2578 bool PropOnlyInBEs = Disagree && !IDDisagree && DisagreeOnPHINess && 2579 !NonBackEdgeDisagree && FirstVal.Kind == DbgValue::Def; 2580 2581 const auto &Properties = FirstVal.Properties; 2582 2583 auto OldLiveInIt = ILS.find(Var); 2584 const DbgValue *OldLiveInLocation = 2585 (OldLiveInIt != ILS.end()) ? &OldLiveInIt->second : nullptr; 2586 2587 bool OverRide = false; 2588 if (OurState == BEDisagree && OldLiveInLocation) { 2589 // Only backedges disagree: we can consider downgrading. If there was a 2590 // previous live-in value, use it to work out whether the current 2591 // incoming value represents a lattice downgrade or not. 2592 OverRide = 2593 vlocDowngradeLattice(MBB, *OldLiveInLocation, Values, CurBlockRPONum); 2594 } 2595 2596 // Use the current state of predecessor agreement and other flags to work 2597 // out what to do next. Possibilities include: 2598 // * Accept a value all predecessors agree on, or accept one that 2599 // represents a step down the exploration lattice, 2600 // * Use a PHI value number, if one can be found, 2601 // * Propose a PHI value number, and see if it gets confirmed later, 2602 // * Emit a 'NoVal' value, indicating we couldn't resolve anything. 2603 if (OurState == Agreed) { 2604 // Easiest solution: all predecessors agree on the variable value. 2605 ConfirmValue(Var, FirstVal); 2606 } else if (OurState == BEDisagree && OverRide) { 2607 // Only backedges disagree, and the other predecessors have produced 2608 // a new live-in value further down the exploration lattice. 2609 DowngradeOccurred = true; 2610 ConfirmValue(Var, FirstVal); 2611 } else if (OurState == PropDisagree) { 2612 // Predecessors agree on value, but some say it's only a proposed value. 2613 // Propagate it as proposed: unless it was proposed in this block, in 2614 // which case we're able to confirm the value. 2615 if (FirstID.getBlock() == (uint64_t)MBB.getNumber() && FirstID.isPHI()) { 2616 ConfirmValue(Var, DbgValue(FirstID, Properties, DbgValue::Def)); 2617 } else if (PropOnlyInBEs) { 2618 // If only backedges disagree, a higher (in RPO) block confirmed this 2619 // location, and we need to propagate it into this loop. 2620 ConfirmValue(Var, DbgValue(FirstID, Properties, DbgValue::Def)); 2621 } else { 2622 // Otherwise; a Def meeting a Proposed is still a Proposed. 2623 ConfirmValue(Var, DbgValue(FirstID, Properties, DbgValue::Proposed)); 2624 } 2625 } else if ((OurState == PHINeeded || OurState == BEDisagree)) { 2626 // Predecessors disagree and can't be downgraded: this can only be 2627 // solved with a PHI. Use pickVPHILoc to go look for one. 2628 Optional<ValueIDNum> VPHI; 2629 bool AllEdgesVPHI = false; 2630 std::tie(VPHI, AllEdgesVPHI) = 2631 pickVPHILoc(MBB, Var, VLOCOutLocs, MOutLocs, MInLocs, BlockOrders); 2632 2633 if (VPHI && AllEdgesVPHI) { 2634 // There's a PHI value that's valid for all predecessors -- we can use 2635 // it. If any of the non-backedge predecessors have proposed values 2636 // though, this PHI is also only proposed, until the predecessors are 2637 // confirmed. 
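      // (Only the non-backedge predecessors, indices [0, BackEdgesStart),
      //  are examined here.)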
2638 DbgValue::KindT K = DbgValue::Def; 2639 for (unsigned int I = 0; I < BackEdgesStart; ++I) 2640 if (Values[I].second->Kind == DbgValue::Proposed) 2641 K = DbgValue::Proposed; 2642 2643 ConfirmValue(Var, DbgValue(*VPHI, Properties, K)); 2644 } else if (VPHI) { 2645 // There's a PHI value, but it's only legal for backedges. Leave this 2646 // as a proposed PHI value: it might come back on the backedges, 2647 // and allow us to confirm it in the future. 2648 DbgValue NoBEValue = DbgValue(*VPHI, Properties, DbgValue::Proposed); 2649 ConfirmValue(Var, NoBEValue); 2650 } else { 2651 ConfirmNoVal(Var, Properties); 2652 } 2653 } else { 2654 // Otherwise: we don't know. Emit a "phi but no real loc" phi. 2655 ConfirmNoVal(Var, Properties); 2656 } 2657 } 2658 2659 // Store newly calculated in-locs into VLOCInLocs, if they've changed. 2660 Changed = ILS != InLocsT; 2661 if (Changed) 2662 ILS = InLocsT; 2663 2664 return std::tuple<bool, bool>(Changed, DowngradeOccurred); 2665 } 2666 2667 void InstrRefBasedLDV::vlocDataflow( 2668 const LexicalScope *Scope, const DILocation *DILoc, 2669 const SmallSet<DebugVariable, 4> &VarsWeCareAbout, 2670 SmallPtrSetImpl<MachineBasicBlock *> &AssignBlocks, LiveInsT &Output, 2671 ValueIDNum **MOutLocs, ValueIDNum **MInLocs, 2672 SmallVectorImpl<VLocTracker> &AllTheVLocs) { 2673 // This method is much like mlocDataflow: but focuses on a single 2674 // LexicalScope at a time. Pick out a set of blocks and variables that are 2675 // to have their value assignments solved, then run our dataflow algorithm 2676 // until a fixedpoint is reached. 2677 std::priority_queue<unsigned int, std::vector<unsigned int>, 2678 std::greater<unsigned int>> 2679 Worklist, Pending; 2680 SmallPtrSet<MachineBasicBlock *, 16> OnWorklist, OnPending; 2681 2682 // The set of blocks we'll be examining. 2683 SmallPtrSet<const MachineBasicBlock *, 8> BlocksToExplore; 2684 2685 // The order in which to examine them (RPO). 2686 SmallVector<MachineBasicBlock *, 8> BlockOrders; 2687 2688 // RPO ordering function. 2689 auto Cmp = [&](MachineBasicBlock *A, MachineBasicBlock *B) { 2690 return BBToOrder[A] < BBToOrder[B]; 2691 }; 2692 2693 LS.getMachineBasicBlocks(DILoc, BlocksToExplore); 2694 2695 // A separate container to distinguish "blocks we're exploring" versus 2696 // "blocks that are potentially in scope. See comment at start of vlocJoin. 2697 SmallPtrSet<const MachineBasicBlock *, 8> InScopeBlocks = BlocksToExplore; 2698 2699 // Old LiveDebugValues tracks variable locations that come out of blocks 2700 // not in scope, where DBG_VALUEs occur. This is something we could 2701 // legitimately ignore, but lets allow it for now. 2702 if (EmulateOldLDV) 2703 BlocksToExplore.insert(AssignBlocks.begin(), AssignBlocks.end()); 2704 2705 // We also need to propagate variable values through any artificial blocks 2706 // that immediately follow blocks in scope. 2707 DenseSet<const MachineBasicBlock *> ToAdd; 2708 2709 // Helper lambda: For a given block in scope, perform a depth first search 2710 // of all the artificial successors, adding them to the ToAdd collection. 2711 auto AccumulateArtificialBlocks = 2712 [this, &ToAdd, &BlocksToExplore, 2713 &InScopeBlocks](const MachineBasicBlock *MBB) { 2714 // Depth-first-search state: each node is a block and which successor 2715 // we're currently exploring. 2716 SmallVector<std::pair<const MachineBasicBlock *, 2717 MachineBasicBlock::const_succ_iterator>, 2718 8> 2719 DFS; 2720 2721 // Find any artificial successors not already tracked. 
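        // (Each artificial successor found here becomes a root of the
        //  depth-first search below.)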
2722 for (auto *succ : MBB->successors()) { 2723 if (BlocksToExplore.count(succ) || InScopeBlocks.count(succ)) 2724 continue; 2725 if (!ArtificialBlocks.count(succ)) 2726 continue; 2727 DFS.push_back(std::make_pair(succ, succ->succ_begin())); 2728 ToAdd.insert(succ); 2729 } 2730 2731 // Search all those blocks, depth first. 2732 while (!DFS.empty()) { 2733 const MachineBasicBlock *CurBB = DFS.back().first; 2734 MachineBasicBlock::const_succ_iterator &CurSucc = DFS.back().second; 2735 // Walk back if we've explored this blocks successors to the end. 2736 if (CurSucc == CurBB->succ_end()) { 2737 DFS.pop_back(); 2738 continue; 2739 } 2740 2741 // If the current successor is artificial and unexplored, descend into 2742 // it. 2743 if (!ToAdd.count(*CurSucc) && ArtificialBlocks.count(*CurSucc)) { 2744 DFS.push_back(std::make_pair(*CurSucc, (*CurSucc)->succ_begin())); 2745 ToAdd.insert(*CurSucc); 2746 continue; 2747 } 2748 2749 ++CurSucc; 2750 } 2751 }; 2752 2753 // Search in-scope blocks and those containing a DBG_VALUE from this scope 2754 // for artificial successors. 2755 for (auto *MBB : BlocksToExplore) 2756 AccumulateArtificialBlocks(MBB); 2757 for (auto *MBB : InScopeBlocks) 2758 AccumulateArtificialBlocks(MBB); 2759 2760 BlocksToExplore.insert(ToAdd.begin(), ToAdd.end()); 2761 InScopeBlocks.insert(ToAdd.begin(), ToAdd.end()); 2762 2763 // Single block scope: not interesting! No propagation at all. Note that 2764 // this could probably go above ArtificialBlocks without damage, but 2765 // that then produces output differences from original-live-debug-values, 2766 // which propagates from a single block into many artificial ones. 2767 if (BlocksToExplore.size() == 1) 2768 return; 2769 2770 // Picks out relevants blocks RPO order and sort them. 2771 for (auto *MBB : BlocksToExplore) 2772 BlockOrders.push_back(const_cast<MachineBasicBlock *>(MBB)); 2773 2774 llvm::sort(BlockOrders.begin(), BlockOrders.end(), Cmp); 2775 unsigned NumBlocks = BlockOrders.size(); 2776 2777 // Allocate some vectors for storing the live ins and live outs. Large. 2778 SmallVector<DenseMap<DebugVariable, DbgValue>, 32> LiveIns, LiveOuts; 2779 LiveIns.resize(NumBlocks); 2780 LiveOuts.resize(NumBlocks); 2781 2782 // Produce by-MBB indexes of live-in/live-outs, to ease lookup within 2783 // vlocJoin. 2784 LiveIdxT LiveOutIdx, LiveInIdx; 2785 LiveOutIdx.reserve(NumBlocks); 2786 LiveInIdx.reserve(NumBlocks); 2787 for (unsigned I = 0; I < NumBlocks; ++I) { 2788 LiveOutIdx[BlockOrders[I]] = &LiveOuts[I]; 2789 LiveInIdx[BlockOrders[I]] = &LiveIns[I]; 2790 } 2791 2792 for (auto *MBB : BlockOrders) { 2793 Worklist.push(BBToOrder[MBB]); 2794 OnWorklist.insert(MBB); 2795 } 2796 2797 // Iterate over all the blocks we selected, propagating variable values. 2798 bool FirstTrip = true; 2799 SmallPtrSet<const MachineBasicBlock *, 16> VLOCVisited; 2800 while (!Worklist.empty() || !Pending.empty()) { 2801 while (!Worklist.empty()) { 2802 auto *MBB = OrderToBB[Worklist.top()]; 2803 CurBB = MBB->getNumber(); 2804 Worklist.pop(); 2805 2806 DenseMap<DebugVariable, DbgValue> JoinedInLocs; 2807 2808 // Join values from predecessors. Updates LiveInIdx, and writes output 2809 // into JoinedInLocs. 2810 bool InLocsChanged, DowngradeOccurred; 2811 std::tie(InLocsChanged, DowngradeOccurred) = vlocJoin( 2812 *MBB, LiveOutIdx, LiveInIdx, (FirstTrip) ? 
      std::tie(InLocsChanged, DowngradeOccurred) =
          vlocJoin(*MBB, LiveOutIdx, LiveInIdx,
                   FirstTrip ? &VLOCVisited : nullptr, CurBB, VarsWeCareAbout,
                   MOutLocs, MInLocs, InScopeBlocks, BlocksToExplore,
                   JoinedInLocs);

      auto &VTracker = AllTheVLocs[MBB->getNumber()];
      bool FirstVisit = VLOCVisited.insert(MBB).second;

      // Always explore the transfer function if the in-locs changed, or if
      // we've not visited this block before.
      InLocsChanged |= FirstVisit;

      // If a downgrade occurred, book us in for re-examination on the next
      // iteration.
      if (DowngradeOccurred && OnPending.insert(MBB).second)
        Pending.push(BBToOrder[MBB]);

      // Patch up the variable value transfer function to use the live-in
      // machine values, now that that problem is solved.
      if (FirstVisit) {
        for (auto &Transfer : VTracker.Vars) {
          if (Transfer.second.Kind == DbgValue::Def &&
              Transfer.second.ID.getBlock() == CurBB &&
              Transfer.second.ID.isPHI()) {
            LocIdx Loc = Transfer.second.ID.getLoc();
            Transfer.second.ID = MInLocs[CurBB][Loc.asU64()];
          }
        }
      }

      if (!InLocsChanged)
        continue;

      // Do transfer function.
      for (auto &Transfer : VTracker.Vars) {
        // Is this a variable we're mangling in this scope?
        if (VarsWeCareAbout.count(Transfer.first)) {
          // Erase on empty transfer (DBG_VALUE $noreg).
          if (Transfer.second.Kind == DbgValue::Undef) {
            JoinedInLocs.erase(Transfer.first);
          } else {
            // Insert new variable value; or overwrite.
            auto NewValuePair =
                std::make_pair(Transfer.first, Transfer.second);
            auto Result = JoinedInLocs.insert(NewValuePair);
            if (!Result.second)
              Result.first->second = Transfer.second;
          }
        }
      }

      // Did the live-out locations change?
      bool OLChanged = JoinedInLocs != *LiveOutIdx[MBB];

      // If they haven't changed, there's no need to explore further.
      if (!OLChanged)
        continue;

      // Commit to the live-out record.
      *LiveOutIdx[MBB] = JoinedInLocs;

      // We should visit all successors. Ensure we'll visit any non-backedge
      // successors during this dataflow iteration; book backedge successors
      // to be visited next time around.
      for (auto s : MBB->successors()) {
        // Ignore out-of-scope / not-to-be-explored successors.
        if (LiveInIdx.find(s) == LiveInIdx.end())
          continue;

        if (BBToOrder[s] > BBToOrder[MBB]) {
          if (OnWorklist.insert(s).second)
            Worklist.push(BBToOrder[s]);
        } else if (OnPending.insert(s).second && (FirstTrip || OLChanged)) {
          Pending.push(BBToOrder[s]);
        }
      }
    }
    Worklist.swap(Pending);
    std::swap(OnWorklist, OnPending);
    OnPending.clear();
    assert(Pending.empty());
    FirstTrip = false;
  }

  // Dataflow done. Now what? Save live-ins. Ignore any that are still marked
  // as being variable-PHIs, because those did not have their machine-PHI
  // value confirmed. Such variable values are places that could have been
  // PHIs, but are not.
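  // Output is indexed by block number; the surviving live-ins recorded here
  // are what emitLocations later turns into block-entry DBG_VALUEs.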
  for (auto *MBB : BlockOrders) {
    auto &VarMap = *LiveInIdx[MBB];
    for (auto &P : VarMap) {
      if (P.second.Kind == DbgValue::Proposed ||
          P.second.Kind == DbgValue::NoVal)
        continue;
      Output[MBB->getNumber()].push_back(P);
    }
  }

  BlockOrders.clear();
  BlocksToExplore.clear();
}

void InstrRefBasedLDV::dump_mloc_transfer(
    const MLocTransferMap &mloc_transfer) const {
  for (auto &P : mloc_transfer) {
    std::string LocName = MTracker->LocIdxToName(P.first);
    std::string ValueName = MTracker->IDAsString(P.second);
    dbgs() << "Loc " << LocName << " --> " << ValueName << "\n";
  }
}

void InstrRefBasedLDV::emitLocations(
    MachineFunction &MF, LiveInsT SavedLiveIns, ValueIDNum **MInLocs,
    DenseMap<DebugVariable, unsigned> &AllVarsNumbering) {
  TTracker = new TransferTracker(TII, MTracker, MF, *TRI, CalleeSavedRegs);
  unsigned NumLocs = MTracker->getNumLocs();

  // For each block, load in the machine value locations and variable value
  // live-ins, then step through each instruction in the block. New DBG_VALUEs
  // to be inserted will be created along the way.
  for (MachineBasicBlock &MBB : MF) {
    unsigned bbnum = MBB.getNumber();
    MTracker->reset();
    MTracker->loadFromArray(MInLocs[bbnum], bbnum);
    TTracker->loadInlocs(MBB, MInLocs[bbnum], SavedLiveIns[MBB.getNumber()],
                         NumLocs);

    CurBB = bbnum;
    CurInst = 1;
    for (auto &MI : MBB) {
      process(MI);
      ++CurInst;
    }
  }

  // We have to insert DBG_VALUEs in a consistent order, otherwise they appear
  // in DWARF in different orders. Use the order in which they appear when
  // walking through each block / each instruction, stored in AllVarsNumbering.
  auto OrderDbgValues = [&](const MachineInstr *A,
                            const MachineInstr *B) -> bool {
    DebugVariable VarA(A->getDebugVariable(), A->getDebugExpression(),
                       A->getDebugLoc()->getInlinedAt());
    DebugVariable VarB(B->getDebugVariable(), B->getDebugExpression(),
                       B->getDebugLoc()->getInlinedAt());
    return AllVarsNumbering.find(VarA)->second <
           AllVarsNumbering.find(VarB)->second;
  };

  // Go through all the transfers recorded in the TransferTracker -- this is
  // both the live-ins to a block, and any movements of values that happen
  // in the middle.
  for (auto &P : TTracker->Transfers) {
    // Sort them according to appearance order.
    llvm::sort(P.Insts.begin(), P.Insts.end(), OrderDbgValues);
    // Insert either before or after the designated point...
    if (P.MBB) {
      MachineBasicBlock &MBB = *P.MBB;
      for (auto *MI : P.Insts) {
        MBB.insert(P.Pos, MI);
      }
    } else {
      MachineBasicBlock &MBB = *P.Pos->getParent();
      for (auto *MI : P.Insts) {
        MBB.insertAfter(P.Pos, MI);
      }
    }
  }
}

void InstrRefBasedLDV::initialSetup(MachineFunction &MF) {
  // Build some useful data structures.
  auto hasNonArtificialLocation = [](const MachineInstr &MI) -> bool {
    if (const DebugLoc &DL = MI.getDebugLoc())
      return DL.getLine() != 0;
    return false;
  };
  // Collect a set of all the artificial blocks.
  for (auto &MBB : MF)
    if (none_of(MBB.instrs(), hasNonArtificialLocation))
      ArtificialBlocks.insert(&MBB);

  // Compute mappings of block <=> RPO order.
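  // Three views of the same numbering are kept: OrderToBB (RPO number to
  // block), BBToOrder (block to RPO number) and BBNumToRPO (MBB number to
  // RPO number). The dataflow worklists are min-priority-queues of RPO
  // numbers, so blocks earlier in RPO are always drained first. For example,
  // in a diamond CFG entry -> {then, else} -> exit, one valid numbering is
  // entry=0, then=1, else=2, exit=3: each block is numbered after all of its
  // non-backedge predecessors.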
  ReversePostOrderTraversal<MachineFunction *> RPOT(&MF);
  unsigned int RPONumber = 0;
  for (auto RI = RPOT.begin(), RE = RPOT.end(); RI != RE; ++RI) {
    OrderToBB[RPONumber] = *RI;
    BBToOrder[*RI] = RPONumber;
    BBNumToRPO[(*RI)->getNumber()] = RPONumber;
    ++RPONumber;
  }
}

/// Calculate the liveness information for the given machine function and
/// extend ranges across basic blocks.
bool InstrRefBasedLDV::ExtendRanges(MachineFunction &MF,
                                    TargetPassConfig *TPC) {
  // No subprogram means this function contains no debuginfo.
  if (!MF.getFunction().getSubprogram())
    return false;

  LLVM_DEBUG(dbgs() << "\nDebug Range Extension\n");
  this->TPC = TPC;

  TRI = MF.getSubtarget().getRegisterInfo();
  TII = MF.getSubtarget().getInstrInfo();
  TFI = MF.getSubtarget().getFrameLowering();
  TFI->getCalleeSaves(MF, CalleeSavedRegs);
  LS.initialize(MF);

  MTracker =
      new MLocTracker(MF, *TII, *TRI, *MF.getSubtarget().getTargetLowering());
  VTracker = nullptr;
  TTracker = nullptr;

  SmallVector<MLocTransferMap, 32> MLocTransfer;
  SmallVector<VLocTracker, 8> vlocs;
  LiveInsT SavedLiveIns;

  int MaxNumBlocks = -1;
  for (auto &MBB : MF)
    MaxNumBlocks = std::max(MBB.getNumber(), MaxNumBlocks);
  assert(MaxNumBlocks >= 0);
  ++MaxNumBlocks;

  MLocTransfer.resize(MaxNumBlocks);
  vlocs.resize(MaxNumBlocks);
  SavedLiveIns.resize(MaxNumBlocks);

  initialSetup(MF);

  produceTransferFunctions(MF, MLocTransfer, MaxNumBlocks, vlocs);

  // Allocate and initialize two array-of-arrays for the live-in and live-out
  // machine values. The outer dimension is the block number, while the inner
  // dimension is a LocIdx from MLocTracker.
  ValueIDNum **MOutLocs = new ValueIDNum *[MaxNumBlocks];
  ValueIDNum **MInLocs = new ValueIDNum *[MaxNumBlocks];
  unsigned NumLocs = MTracker->getNumLocs();
  for (int i = 0; i < MaxNumBlocks; ++i) {
    MOutLocs[i] = new ValueIDNum[NumLocs];
    MInLocs[i] = new ValueIDNum[NumLocs];
  }

  // Solve the machine value dataflow problem using the MLocTransfer function,
  // storing the computed live-ins / live-outs into the array-of-arrays. We use
  // both live-ins and live-outs for decision making in the variable value
  // dataflow problem.
  mlocDataflow(MInLocs, MOutLocs, MLocTransfer);

  // Number all variables in the order that they appear, to be used as a stable
  // insertion order later.
  DenseMap<DebugVariable, unsigned> AllVarsNumbering;

  // Map from one LexicalScope to all the variables in that scope.
  DenseMap<const LexicalScope *, SmallSet<DebugVariable, 4>> ScopeToVars;

  // Map from one lexical scope to all blocks in that scope.
  DenseMap<const LexicalScope *, SmallPtrSet<MachineBasicBlock *, 4>>
      ScopeToBlocks;

  // Store a DILocation that describes a scope.
  DenseMap<const LexicalScope *, const DILocation *> ScopeToDILocation;

  // To mirror old LiveDebugValues, enumerate variables in RPOT order.
  // Otherwise the order is unimportant; it just has to be stable.
  for (unsigned int I = 0; I < OrderToBB.size(); ++I) {
    auto *MBB = OrderToBB[I];
    auto *VTracker = &vlocs[MBB->getNumber()];
    // Collect each variable with a DBG_VALUE in this block.
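    // Each variable is given a stable number in AllVarsNumbering and bucketed
    // by lexical scope: ScopeToVars collects the variables, ScopeToBlocks the
    // blocks containing their assignments, and ScopeToDILocation a
    // representative DILocation for the scope. vlocDataflow is then run once
    // per scope, over just that subset of variables and blocks.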
    for (auto &idx : VTracker->Vars) {
      const auto &Var = idx.first;
      const DILocation *ScopeLoc = VTracker->Scopes[Var];
      assert(ScopeLoc != nullptr);
      auto *Scope = LS.findLexicalScope(ScopeLoc);

      // No insts in scope -> shouldn't have been recorded.
      assert(Scope != nullptr);

      AllVarsNumbering.insert(std::make_pair(Var, AllVarsNumbering.size()));
      ScopeToVars[Scope].insert(Var);
      ScopeToBlocks[Scope].insert(VTracker->MBB);
      ScopeToDILocation[Scope] = ScopeLoc;
    }
  }

  // OK. Iterate over scopes: there might be something to be said for
  // ordering them by size/locality, but that's for the future. For each scope,
  // solve the variable value problem, producing a map of variables to values
  // in SavedLiveIns.
  for (auto &P : ScopeToVars) {
    vlocDataflow(P.first, ScopeToDILocation[P.first], P.second,
                 ScopeToBlocks[P.first], SavedLiveIns, MOutLocs, MInLocs,
                 vlocs);
  }

  // Using the computed value locations and variable values for each block,
  // create the DBG_VALUE instructions representing the extended variable
  // locations.
  emitLocations(MF, SavedLiveIns, MInLocs, AllVarsNumbering);

  for (int Idx = 0; Idx < MaxNumBlocks; ++Idx) {
    delete[] MOutLocs[Idx];
    delete[] MInLocs[Idx];
  }
  delete[] MOutLocs;
  delete[] MInLocs;

  // Did we actually make any changes? If we created any DBG_VALUEs, then yes.
  bool Changed = TTracker->Transfers.size() != 0;

  delete MTracker;
  // TTracker is allocated in emitLocations; free it here too so it isn't
  // leaked when the pointer is cleared below.
  delete TTracker;
  VTracker = nullptr;
  TTracker = nullptr;

  ArtificialBlocks.clear();
  OrderToBB.clear();
  BBToOrder.clear();
  BBNumToRPO.clear();

  return Changed;
}

LDVImpl *llvm::makeInstrRefBasedLiveDebugValues() {
  return new InstrRefBasedLDV();
}