| /linux-6.15/Documentation/driver-api/thermal/ |
| cpu-idle-cooling.rst |
    25: because of the OPP density, we can only choose an OPP with a power
    35: If we can remove the static and the dynamic leakage for a specific
    38: injection period, we can mitigate the temperature by modulating the
    49: idle state target residency, we lead to dropping the static and the
    132: - It is less than or equal to the latency we tolerate when the
    143: When we reach the thermal trip point, we have to sustain a specified
    144: power for a specific temperature but at this time we consume::
    151: because we don’t want to change the OPP. We can group the
    172: the idle injection we need. Alternatively if we have the idle
    173: injection duration, we can compute the running duration with::
    [all …]
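The excerpts above point at the duty-cycle relation behind idle injection. As a hedged sketch (the symbols below are mine, not the document's): if each injection cycle alternates a running span t_run at power P_run with an idle span t_idle at power P_idle, the sustained average power is::

    \bar{P} = \frac{t_{run} P_{run} + t_{idle} P_{idle}}{t_{run} + t_{idle}}

and, neglecting the idle power, hitting a target \bar{P} fixes the running duration as::

    t_{run} = t_{idle} \cdot \frac{\bar{P}}{P_{run} - \bar{P}}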
|
| /linux-6.15/Documentation/devicetree/bindings/pinctrl/ |
| sprd,pinctrl.txt |
    12: to choose one function (like: UART0) for which system, since we
    15: There are too much various configuration that we can not list all
    16: of them, so we can not make every Spreadtrum-special configuration
    18: global configuration in future. Then we add one "sprd,control" to
    19: set these various global control configuration, and we need use
    22: Moreover we recognise every fields comprising one bit or several
    23: bits in one global control register as one pin, thus we should
    32: Now we have 4 systems for sleep mode on SC9860 SoC: AP system,
    52: kernel on SC9860 platform), then we can not select "sleep" state
    53: when the PUBCP system goes into deep sleep mode. Thus we introduce
    [all …]
|
| /linux-6.15/Documentation/arch/x86/ |
| entry_64.rst |
    58: so. If we mess that up even slightly, we crash.
    60: So when we have a secondary entry, already in kernel mode, we *must
    61: not* use SWAPGS blindly - nor must we forget doing a SWAPGS when it's
    87: If we are at an interrupt or user-trap/gate-alike boundary then we can
    89: whether SWAPGS was already done: if we see that we are a secondary
    90: entry interrupting kernel mode execution, then we know that the GS
    91: base has already been switched. If it says that we interrupted
    92: user-space execution then we must do the SWAPGS.
    94: But if we are in an NMI/MCE/DEBUG/whatever super-atomic entry context,
    96: stack but before we executed SWAPGS, then the only safe way to check
    [all …]
|
| /linux-6.15/Documentation/filesystems/ |
| directory-locking.rst |
    10: When taking the i_rwsem on multiple non-directory objects, we
    22: * lock the directory we are accessing (shared)
    26: * lock the directory we are accessing (exclusive)
    84: to another and we run into it when we do a lookup.
    99: attach to our directory, under the name we are looking for.
    149: For example, if we have NFS filesystem caching on a local one, we have
    174: In other words, we have a cycle of threads, T1,..., Tn,
    197: we would have a loop.
    204: In other words, we have a cross-directory rename that locked
    248: the locks) and voila - we have a deadlock.
    [all …]
|
| path-lookup.txt |
    49: the path given by the name's starting point (which we know in advance -- eg.
    55: A parent, of course, must be a directory, and we must have appropriate
    81: in that bucket is then walked, and we do a full comparison of each entry
    148: However, when inserting object 2 onto a new list, we end up with this:
    206: With this two parts of the puzzle, we can do path lookups without taking
    256: | children:"npiggin" | we now recheck the d_seq of dentry0. Then we
    270: | children:NULL | its refcount because we're holding d_lock.
    284: When we reach a point where sleeping is required, or a filesystem callout
    295: * synchronize_rcu is called when unregistering a filesystem, so we can
    302: so we can load this tuple atomically, and also check whether any of its
    [all …]
|
| idmappings.rst |
    49: Given that we are dealing with order isomorphisms plus the fact that we're
    50: dealing with subsets we can embed idmappings into each other, i.e. we can
    85: for simplicity. After that if we want to know what ``id`` maps to we can do
    88: - If we want to map from left to right::
    93: - If we want to map from right to left::
    155: with user namespaces. Since we mainly care about how idmappings work we're not
    203: If we've been given ``k11000`` from one idmapping we can map that id up in
    385: idmappings. This will exhibit some problems we can hit. After that we will
    387: they can solve the problems we observed before.
    790: As we can see, we end up with an invertible and therefore information
    [all …]
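The ``k11000`` hit refers to translating an id through an idmapping written in the ``u<id>:k<id>:r<range>`` notation that idmappings.rst uses throughout. A minimal worked example (the concrete mapping below is mine, chosen to match the id quoted above)::

    idmapping: u0:k10000:r10000

    left to right:  k = 10000 + (u - 0)      e.g. u1000  maps to k11000
    right to left:  u = 0 + (k - 10000)      e.g. k11000 maps to u1000

Ids outside the range of 10000 (here u10000/k20000 and above) have no mapping, and the translation fails.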
|
| /linux-6.15/Documentation/dev-tools/kunit/ |
| run_wrapper.rst |
    10: As long as we can build the kernel, we can run KUnit.
    44: kunit_tool. This is useful if we have several different groups of
    45: tests we want to run independently, or if we want to use pre-defined
    64: If we want to run a specific set of tests (rather than those listed
    65: in the KUnit ``defconfig``), we can provide Kconfig options in the
    96: This means that we can use other tools
    104: If we want to make manual changes to the KUnit build process, we
    120: If we already have built UML kernel with built-in KUnit tests, we
    143: If we have KUnit results in the raw TAP format, we can parse them and
    160: example: if we only want to run KUnit resource tests, use:
    [all …]
|
| /linux-6.15/Documentation/filesystems/ext4/ |
| orphan.rst |
    9: would leak. Similarly if we truncate or extend the file, we need not be able
    10: to perform the operation in a single journalling transaction. In such case we
    17: inode (we overload i_dtime inode field for this). However this filesystem
    36: When a filesystem with orphan file feature is writeably mounted, we set
    38: be valid orphan entries. In case we see this feature when mounting the
    39: filesystem, we read the whole orphan file and process all orphan inodes found
    40: there as usual. When cleanly unmounting the filesystem we remove the
|
| /linux-6.15/tools/lib/perf/Documentation/ |
| libperf-counting.txt |
    73: Once the setup is complete we start by defining specific events using the `struct perf_event_attr`.
    97: In this case we will monitor current process, so we create threads map with single pid (0):
    110: Now we create libperf's event list, which will serve as holder for the events we want:
    121: We create libperf's events for the attributes we defined earlier and add them to the list:
    156: so we need to enable the whole list explicitly (both events).
    158: From this moment events are counting and we can do our workload.
    160: When we are done we disable the events list.
    171: Now we need to get the counts from events, following code iterates through the
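Taken together, those hits outline the whole counting flow of libperf-counting.txt: fill in a ``struct perf_event_attr``, build a thread map for pid 0, add evsels to an evlist, open and enable the list, run the workload, disable it, then read the counts. Below is a condensed sketch of that flow using the libperf API the document describes; it counts a single hardware event (the document's full example uses two) and omits most error handling, so treat it as an illustration rather than the document's exact listing::

    /* Build (roughly): cc counting.c -o counting -lperf */
    #include <linux/perf_event.h>
    #include <perf/evlist.h>
    #include <perf/evsel.h>
    #include <perf/threadmap.h>
    #include <stdio.h>

    int main(void)
    {
            struct perf_event_attr attr = {
                    .type        = PERF_TYPE_HARDWARE,
                    .config      = PERF_COUNT_HW_CPU_CYCLES,
                    .read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
                                   PERF_FORMAT_TOTAL_TIME_RUNNING,
                    .disabled    = 1,
            };
            struct perf_counts_values counts = { 0 };
            struct perf_thread_map *threads;
            struct perf_evlist *evlist;
            struct perf_evsel *evsel;

            /* thread map with a single entry: pid 0 means the current process */
            threads = perf_thread_map__new_dummy();
            perf_thread_map__set_pid(threads, 0, 0);

            /* the event list holds the events we want to count */
            evlist = perf_evlist__new();
            evsel = perf_evsel__new(&attr);
            perf_evlist__add(evlist, evsel);
            perf_evlist__set_maps(evlist, NULL, threads);

            perf_evlist__open(evlist);
            perf_evlist__enable(evlist);
            /* ... the workload being measured runs here ... */
            perf_evlist__disable(evlist);

            /* fetch the value plus enabled/running times for the single event */
            perf_evsel__read(evsel, 0, 0, &counts);
            printf("cycles: %llu (enabled %llu, running %llu)\n",
                   (unsigned long long)counts.val,
                   (unsigned long long)counts.ena,
                   (unsigned long long)counts.run);

            perf_evlist__close(evlist);
            perf_evlist__delete(evlist);
            perf_thread_map__put(threads);
            return 0;
    }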
|
| /linux-6.15/Documentation/gpu/amdgpu/display/ |
| index.rst |
    22: DC case, we maintain a tree to centralize code from different parts. The shared
    23: repository has integration tests with our Internal Linux CI farm, and we run a
    28: When we upstream a new feature or some patches, we pack them in a patchset with
    40: * Finally, developers wait a few days for community feedback before we merge
    43: It is good to stress that the test phase is something that we take extremely
    44: seriously, and we never merge anything that fails our validation. Follows an
    62: In terms of test setup for CI and manual tests, we usually use:
    65: #. In terms of userspace, we only use fully updated open-source components
    67: #. Regarding IGT, we use the latest code from the upstream.
    68: #. Most of the manual tests are conducted in the GNome but we also use KDE.
|
| dcn-overview.rst |
    8: (DCN) works, we need to start with an overview of the hardware pipeline. Below
    10: generic diagram, and we have variations per ASIC.
    14: Based on this diagram, we can pass through each block and briefly describe
    60: setup or ignored accordingly with userspace demands. For example, if we
    80: we have dc_stream, and the output (DIO) is handled by dc_link. Keep in mind
    125: depth format), bit-depth reduction/dithering would kick in. In OPP, we would
    127: Eventually, we output data in integer format at DIO.
    134: when we say **pipeline**. In the DCN driver, we use the term **hardware
    168: Now, if we inspect the DTN log again we can see some interesting changes::
    180: From the above example, we now split the display pipeline into two vertical
    [all …]
|
| /linux-6.15/Documentation/scheduler/ |
| schedutil.rst |
    8: we know this is flawed, but it is the best workable approximation.
    14: With PELT we track some metrics across the various scheduler entities, from
    16: we use an Exponentially Weighted Moving Average (EWMA), each period (1024us)
    35: Using this we track 2 key metrics: 'running' and 'runnable'. 'Running'
    50: a big CPU, we allow architectures to scale the time delta with two ratios, one
    60: For more dynamic systems where the hardware is in control of DVFS we use
    62: For Intel specifically, we use::
    84: of DVFS and CPU type. IOW. we can transfer and compare them between CPUs.
    141: XXX IO-wait: when the update is due to a task wakeup from IO-completion we
    165: suppose we have a CPU saturated with 4 tasks, then when we migrate a task
    [all …]
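The EWMA hit is easier to follow with the sum written out. A hedged sketch in my own notation: with u_i the sample contributed by the i-th most recent 1024us period and a decay factor y chosen so that roughly the last 32ms contribute half of the total, the tracked average is::

    u := \frac{\sum_{i \ge 0} u_i \, y^{i}}{\sum_{i \ge 0} y^{i}}, \qquad y^{32} = 0.5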
|
| /linux-6.15/Documentation/filesystems/xfs/ |
| xfs-delayed-logging-design.rst |
    180: so that when we come to write the dirty metadata into the log we don't run out
    205: means we can roll the transaction multiple times before we have to re-reserve
    260: available, as we may end up on the end of the FIFO queue and the items we have
    290: pins the tail of the log when we sleep on the write reservation, then we will
    479: Hence we avoid the need to lock items when we need to flush outstanding
    516: If we don't keep the vector around, we do not know where the region boundaries
    694: the log vector chaining. If we track by the log vectors, then we only need to
    735: To ensure that we can do this, we need to track all the checkpoint contexts
    750: are also committed to disk before the one we need to wait for. Therefore we
    773: amount of log space required as we add items to the commit item list, but we
    [all …]
|
| /linux-6.15/Documentation/hid/ |
| hid-bpf.rst |
    39: only load the custom API when we have a user.
    92: With eBPF, we can intercept any HID command emitted to the device and
    96: kernel/bpf program because we can intercept any incoming command.
    101: The last usage is tracing events and all the fun we can do we BPF to summarize
    107: 1. if the driver doesn't export a hidraw node, we can't trace anything
    110: means that we have cases where we need to add printks to the kernel
    164: And given that we are in IRQ context, we can not talk back to the device.
    236: integer, we can then have a pointer to that value only::
    336: even if we change its report descriptor.
    370: For that, we can create a basic skeleton for our BPF program::
    [all …]
|
| /linux-6.15/Documentation/arch/powerpc/ |
| vmemmap_dedup.rst |
    14: With 2M PMD level mapping, we require 32 struct pages and a single 64K vmemmap
    18: With 1G PUD level mapping, we require 16384 struct pages and a single 64K
    19: vmemmap page can contain 1024 struct pages (64K/sizeof(struct page)). Hence we
    47: 4K vmemmap page contains 64 struct pages(4K/sizeof(struct page)). Hence we
    74: With 1G PUD level mapping, we require 262144 struct pages and a single 4K
    75: vmemmap page can contain 64 struct pages (4K/sizeof(struct page)). Hence we
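The counts quoted above are straightforward divisions, assuming sizeof(struct page) == 64 bytes as the excerpts do. A worked check::

    64K base pages:  2M / 64K =    32 struct pages;  1G / 64K =  16384 struct pages
                     one 64K vmemmap page holds 64K / 64 = 1024 struct pages
     4K base pages:  1G / 4K  = 262144 struct pages
                     one 4K vmemmap page holds 4K / 64 = 64 struct pages

which is where the struct page counts and the per-vmemmap-page capacities in the hits come from.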
|
| kasan.txt |
    39: checks can be delayed until after the MMU is set is up, and we can just not
    44: linear mapping, using the same high-bits trick we use for the rest of the linear
    47: - We'd like to place it near the start of physical memory. In theory we can do
    48: this at run-time based on how much physical memory we have, but this requires
    51: is hopefully something we can revisit once we get KASLR for Book3S.
    53: - Alternatively, we can place the shadow at the _end_ of memory, but this
|
| pci_iov_resource_on_powernv.rst |
    40: The following section provides a rough description of what we have on P8
    52: For DMA, MSIs and inbound PCIe error messages, we have a table (in
    91: reserved for MSIs but this is not a problem at this point; we just
    93: ignores that however and will forward in that space if we try).
    100: Now, this is the "main" window we use in Linux today (excluding
    116: bits which are not conveyed by PowerBus but we don't use this.
    134: Then we do the same thing as with M32, using the bridge alignment
    137: Since we cannot remap, we have two additional constraints:
    150: the best we found. So when any of the PEs freezes, we freeze the
    158: sense, but we haven't done it yet.
    [all …]
|
| /linux-6.15/drivers/scsi/aic7xxx/ |
| aic79xx.seq |
    183: * we detect case 1, we will properly defer the post of the SCB
    376: * order is preserved even if we batch.
    910: * out before we can test SDONE, we'll think that
    1109: * If we get one, we use the tag returned to find the proper
    1424: * line, or we just want to acknowledge the byte, then we do a dummy read
    1466: * Do we have any prefetch left???
    1475: /* Did we just finish fetching segs? */
    1613: * Since we've are entering a data phase, we will
    1642: * unless we already know that we should be bitbucketing.
    1882: * FIFO. This status is the only way we can detect if we
    [all …]
|
| aic7xxx.seq |
    211: /* The Target ID we were selected at */
    362: * when we have outstanding transactions, so we can safely
    364: * we start sending out transactions again.
    486: * we properly identified ourselves.
    735: /* Did we just finish fetching segs? */
    738: /* Are we actively fetching segments? */
    742: * Do we have any prefetch left???
    1407: * we aren't going to touch host memory.
    1874: * If we get one, we use the tag returned to find the proper
    1964: * using SCSIBUSL. When we have pulled the ATN line, or we just want to
    [all …]
|
| /linux-6.15/tools/testing/selftests/net/packetdrill/ |
| tcp_close_close-remote-fin-then-close.pkt |
    2: // Verify behavior for the sequence: remote side sends FIN, then we close().
    3: // Since the remote side (client) closes first, we test our LAST_ACK code path.
    26: // Then we close.
    33: // Verify that we send RST in response to any incoming segments
|
| tcp_inq_server.pkt |
    20: // Now we have 10K of data ready on the socket.
    24: // We read 2K and we should have 8K ready to read.
    31: // We read 8K and we should have no further data ready to read.
    42: // We read 10K and we should have one "fake" byte because the connection is
|
| /linux-6.15/Documentation/sound/designs/ |
| jack-injection.rst |
    10: validate ALSA userspace changes. For example, we change the audio
    11: profile switching code in the pulseaudio, and we want to verify if the
    13: in this case, we could inject plugin or plugout events to an audio
    14: jack or to some audio jacks, we don't need to physically access the
    26: To inject events to audio jacks, we need to enable the jack injection
    28: change the state by hardware events anymore, we could inject plugin or
    30: ``status``, after we finish our test, we need to disable the jack
|
| /linux-6.15/Documentation/block/ |
| deadline-iosched.rst |
    20: service time for a request. As we focus mainly on read latencies, this is
    49: When we have to move requests from the io scheduler queue to the block
    50: device dispatch queue, we always give a preference to reads. However, we
    52: how many times we give preference to reads over writes. When that has been
    53: done writes_starved number of times, we dispatch some writes based on the
    68: that comes at basically 0 cost we leave that on. We simply disable the
|
| /linux-6.15/Documentation/driver-api/firmware/ |
| lookup-order.rst |
    9: * The ''Built-in firmware'' is checked first, if the firmware is present we
    11: * The ''Firmware cache'' is looked at next. If the firmware is found we
    13: * The ''Direct filesystem lookup'' is performed next, if found we
    16: firmware_request_platform() is used, if found we return it immediately
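That ordered walk is what a driver triggers implicitly when it asks for firmware. A minimal, hypothetical caller-side sketch (the device handling, function name and firmware path are made up; only request_firmware()/release_firmware() come from the kernel firmware API)::

    #include <linux/device.h>
    #include <linux/firmware.h>

    static int example_load_firmware(struct device *dev)
    {
            const struct firmware *fw;
            int ret;

            /*
             * request_firmware() walks the lookup order described above
             * (built-in firmware, then the firmware cache, then the direct
             * filesystem lookup) before resorting to any fallback mechanism.
             */
            ret = request_firmware(&fw, "example/example-fw.bin", dev);
            if (ret)
                    return ret;

            /* fw->data points to fw->size bytes to hand to the device. */
            /* ... program the device here ... */

            release_firmware(fw);
            return 0;
    }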
|
| /linux-6.15/Documentation/filesystems/bcachefs/ |
| errorcodes.rst |
    6: In bcachefs, as a hard rule we do not throw or directly use standard error
    7: codes (-EINVAL, -EBUSY, etc.). Instead, we define private error codes as needed
    19: At the module boundary, we use bch2_err_class() to convert to a standard error
    24: be thrown in one place. That means that when we see it in a log message we can
|