Lines Matching refs:compression
18 `zstd` is a fast lossless compression algorithm and data compression tool,
21 `zstd` offers highly configurable compression speed,
23 and strong modes nearing lzma compression ratios.
92 Benchmark file(s) using compression level #
104 `#` compression level \[1-19] (default: 3)
106 unlocks high compression levels 20+ (maximum 22), using a lot more memory.
109 switch to ultra-fast compression levels.
111 The higher the value, the faster the compression speed,
112 at the cost of some compression ratio.
113 This setting overrides the compression level if one was set previously.
114 Similarly, if a compression level is set after `--fast`, it overrides it.
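Taken together, the level-related flags interact as sketched below (the file name `data.txt` is illustrative):

```sh
seq 1 5000 > data.txt            # create a sample input

# Default level is 3; -19 is the strongest regular level.
zstd -19 data.txt -o data-19.zst

# Levels 20-22 require --ultra and use considerably more memory.
zstd --ultra -22 data.txt -o data-22.zst

# --fast=5 selects an ultra-fast (negative) level; whichever of
# --fast and -# appears last on the command line wins.
zstd --fast=5 data.txt -o data-fast.zst
```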
121 Does not spawn a thread for compression; uses a single thread for both I/O and compression.
122 In this mode, compression is serialized with I/O, which is slightly slower.
123 (This is different from `-T1`, which spawns 1 compression thread in parallel with I/O).
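The distinction between worker threads and single-thread mode can be sketched as follows (file name illustrative):

```sh
seq 1 5000 > input.txt           # create a sample input

# -T0 asks for as many worker threads as there are cores.
zstd -T0 input.txt -o mt.zst

# --single-thread uses one thread for both I/O and compression,
# unlike -T1, which still runs one worker alongside the I/O thread.
zstd --single-thread input.txt -o st.zst
```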
128 `zstd` will dynamically adapt compression level to perceived I/O conditions.
142 This setting is designed to improve the compression ratio for files with
151 This is effectively dictionary compression with some convenient parameter
159 to improve compression ratio at the cost of speed
160 Note: for level 19, you can get increased compression ratio at the cost
164 `zstd` will periodically synchronize the compression state to make the
166 compression ratio, and the faster compression levels will see a small
167 compression speed hit.
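A minimal sketch of the two modes discussed above, long-distance matching and rsync-friendly output (file names and the window log value are illustrative, not recommendations):

```sh
seq 1 20000 > big.txt            # create a sample input

# --long=27 enables long-distance matching with a 2^27-byte window;
# a window log of 27 is still accepted by decompressors by default.
zstd --long=27 big.txt -o big.zst

# --rsyncable periodically synchronizes compression state so tools
# such as rsync can resynchronize on unchanged regions; it requires
# multithreaded mode, hence -T0 here.
zstd -T0 --rsyncable big.txt -o sync.zst
```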
178 do not store dictionary ID within frame header (dictionary compression).
187 This is also used during compression when used with `--patch-from=`. In this case,
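A short sketch of `--patch-from` delta compression, where an old version of a file serves as the reference for a new one (file names are illustrative):

```sh
seq 1 5000 > old.txt                       # reference version
{ seq 1 5000; echo extra line; } > new.txt # slightly changed version

zstd --patch-from=old.txt new.txt -o delta.zst
zstd --patch-from=old.txt -d delta.zst -o rebuilt.txt
cmp new.txt rebuilt.txt                    # round-trip is lossless
```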
192 This information will be used to better optimize compression parameters, resulting in
193 better and potentially faster compression, especially for smaller source sizes.
196 will be when optimizing compression parameters. If the stream size is relatively
197 small, this guess may be a poor one, resulting in a higher compression ratio than
199 Exact guesses result in better compression ratios. Overestimates result in slightly
200 degraded compression ratios, while underestimates may result in significant degradation.
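When input arrives on a pipe, `zstd` cannot see its size, which is the situation the guidance above addresses. A sketch using `--size-hint` (the hinted value is an estimate; file names are illustrative):

```sh
seq 1 5000 > in.txt              # create a sample input (~29 KB)

# --size-hint passes an approximate size so compression parameters
# can be tuned; --stream-size would pass an exact size instead.
zstd --size-hint=30000 -o piped.zst < in.txt
```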
216 remove source file(s) after successful compression or decompression. If used in combination with
219 keep source file(s) after successful compression or decompression.
245 support, zstd can compress to or decompress from other compression algorithm
263 Shows the default compression parameters that will be used for a
275 They set the compression level and number of threads to use during compression, respectively.
279 `ZSTD_CLEVEL` just replaces the default compression level (`3`).
281 …_NBTHREADS` can be used to set the number of threads `zstd` will attempt to use during compression.
287 `-#` for compression level and `-T#` for number of compression threads.
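The precedence described above can be sketched as follows (file names illustrative):

```sh
seq 1 5000 > env-in.txt          # create a sample input

# Environment defaults: compression level 19, 2 worker threads.
ZSTD_CLEVEL=19 ZSTD_NBTHREADS=2 zstd env-in.txt -o env.zst

# Command-line flags take precedence over the environment:
# this compresses at level 1, not 19.
ZSTD_CLEVEL=19 zstd -1 env-in.txt -o cli.zst
```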
292 `zstd` offers _dictionary_ compression,
296 Then during compression and decompression, reference the same dictionary,
316 Use `#` compression level during training (optional).
317 Will generate statistics more tuned for selected compression level,
318 resulting in a _small_ compression ratio improvement for this level.
352 in size until the compression ratio of the truncated dictionary is at most
353 _shrinkDictMaxRegression%_ worse than the compression ratio of the largest dictionary.
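The dictionary workflow above, training then referencing the same dictionary on both sides, can be sketched as follows (paths, sample contents, and the `--maxdict` value are illustrative):

```sh
# Build a corpus of many small, similar samples to train on.
mkdir -p samples
for i in $(seq 1 100); do
  { seq 1 40 | sed 's/^/shared boilerplate line /'
    echo "unique payload $i"
  } > "samples/s$i.txt"
done

# Train a dictionary, then use it for compression and decompression.
zstd --train samples/*.txt --maxdict=16384 -o dict.bin
zstd -D dict.bin samples/s1.txt -o s1.zst
zstd -D dict.bin -d s1.zst -o s1.out
cmp samples/s1.txt s1.out        # round-trip is lossless
```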
409 benchmark file(s) using compression level #
411 benchmark file(s) using multiple compression levels, from `-b#` to `-e#` (inclusive)
421 **Methodology:** For both compression and decompression speed, the entire input is compressed/decom…
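The benchmark flags above combine as in this sketch (file name illustrative):

```sh
seq 1 5000 > bench.txt           # create a sample input

# -b# benchmarks one level; adding -e# sweeps the inclusive range.
zstd -b1 -e3 bench.txt
```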
426 `zstd` provides 22 predefined compression levels.
427 The selected or default predefined compression level can be changed with
428 advanced compression options.
431 taken from the selected or default compression level.
446 improves compression ratio.
457 Bigger hash tables cause fewer collisions, which usually makes compression
458 faster, but requires more memory during compression.
466 improves compression ratio.
467 It also slows down compression speed and increases memory requirements for
468 compression.
479 compression ratio but decreases compression speed.
486 Larger search lengths usually decrease compression ratio but improve
496 A larger `targetLength` usually improves compression ratio
497 but decreases compression speed.
501 Impact is reversed: a larger `targetLength` increases compression speed
502 but decreases compression ratio.
511 Reloading more data improves compression ratio, but decreases speed.
526 Bigger hash tables usually improve compression ratio at the expense of more
527 memory during compression and a decrease in compression speed.
536 Values that are too large or too small usually decrease compression ratio.
546 Larger bucket sizes improve collision resolution but decrease compression
557 Larger values will improve compression speed. Deviating far from the
558 default value will likely result in a decrease in compression ratio.
563 The following parameters set advanced compression options to something
569 Select the size of each compression job.
571 Default value is `4 * windowSize`, which means it varies depending on compression level.
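Advanced parameters are passed as comma-separated `key=value` pairs to `--zstd=`; a sketch (the `wlog`/`clog` values are illustrative, not a recommendation):

```sh
seq 1 5000 > adv.txt             # create a sample input

# wlog and clog are the short aliases for windowLog and chainLog.
zstd --zstd=wlog=24,clog=23 adv.txt -o adv.zst
```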