Q: I've heard that NTFS compression can reduce performance due to extra CPU usage, but I've also read reports that it may actually increase performance because of reduced disk reads. How exactly does NTFS compression affect system performance?

A: Both reports are correct. Assume your CPU, using some compression algorithm, can compress at C MB/s and decompress at D MB/s, while your hard drive has write speed W and read speed R. So long as C > W, you get a performance gain when writing, and so long as D > R, you get a performance gain when reading. How is a gain possible at all? Exactly by relying on those inequalities: so long as your CPU can sustain a compression/decompression rate above your HDD's transfer rate, you should experience a speed gain. This is a drastic assumption in the write case, since the Lempel-Ziv algorithm (as implemented in software) has a non-deterministic compression rate (although it can be constrained with a limited dictionary size). This matters for large files, which may experience heavy fragmentation, or may not be compressed at all: the Lempel-Ziv algorithm slows down as compression proceeds, since the dictionary keeps growing and each incoming run of bits requires more comparisons. Decompression, by contrast, runs at almost the same rate regardless of file size, since entries in the dictionary can simply be addressed with a base + offset scheme.

Compression also affects how files are laid out on the disk. By default, a single "compression unit" is 16 times the cluster size (so most NTFS filesystems with 4 kB clusters store compressed files in 64 kB chunks), and the unit does not grow past 64 kB.
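The inequalities above can be put into a back-of-envelope model. This sketch assumes compression and disk I/O are pipelined (so the slower stage is the bottleneck) and uses illustrative, made-up speeds; `C`, `D`, `W`, `R`, and the 2:1 ratio are assumptions, not measurements.

```python
# Back-of-envelope model of the C > W / D > R argument above.
# All numbers are illustrative assumptions, not benchmarks.

def effective_write_speed(C, W, ratio):
    """Effective MB/s of original data written when compressing on the fly.
    C: CPU compression speed (MB/s of input), W: disk write speed (MB/s),
    ratio: compressed size / original size (0.5 means 2:1 compression).
    Assumes the CPU and disk work in a pipeline, so throughput is
    limited by the slower stage."""
    return min(C, W / ratio)

def effective_read_speed(D, R, ratio):
    """Same idea for reads: the disk delivers compressed bytes at R MB/s,
    which expand by 1/ratio; the CPU decompresses at D MB/s of output."""
    return min(D, R / ratio)

# Hypothetical fast CPU, slow HDD, data that compresses 2:1.
print(effective_write_speed(C=300, W=100, ratio=0.5))  # 200.0 MB/s vs. 100 raw
print(effective_read_speed(D=800, R=120, ratio=0.5))   # 240.0 MB/s vs. 120 raw
```

With these numbers both effective speeds beat the raw disk speeds, which is the "reduced disk reads" gain; with an incompressible file (ratio near 1.0) the model collapses back to the raw disk speed minus the CPU overhead.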
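The compression-unit sizing described above is simple arithmetic; this sketch just makes it concrete (the 16-cluster multiplier and the 64 kB cap come from the text above; the function name is mine).

```python
# Compression-unit size as described above: 16 clusters, never past 64 kB.

def compression_unit_bytes(cluster_bytes):
    """Return the compression-unit size for a given NTFS cluster size,
    or None where 16 clusters would exceed the 64 kB ceiling."""
    unit = 16 * cluster_bytes
    return unit if unit <= 64 * 1024 else None

print(compression_unit_bytes(4096))  # 65536 -> the common 64 kB chunk
print(compression_unit_bytes(8192))  # None  -> past the 64 kB ceiling
```

This is why a file on a 4 kB-cluster volume is compressed in 64 kB chunks, and why larger cluster sizes fall outside the scheme.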