## Files

The implementation of leveldb is similar in spirit to the representation of a
single [Bigtable tablet (section 5.3)](http://research.google.com/archive/bigtable.html).
However the organization of the files that make up the representation is
somewhat different and is explained below.

Each database is represented by a set of files stored in a directory. There are
several different types of files as documented below:

### Log files

A log file (*.log) stores a sequence of recent updates. Each update is appended
to the current log file. When the log file reaches a pre-determined size
(approximately 4MB by default), it is converted to a sorted table (see below)
and a new log file is created for future updates.

A copy of the current log file is kept in an in-memory structure (the
`memtable`). This copy is consulted on every read so that read operations
reflect all logged updates.

### Sorted tables

A sorted table (*.ldb) stores a sequence of entries sorted by key. Each entry is
either a value for the key, or a deletion marker for the key. (Deletion markers
are kept around to hide obsolete values present in older sorted tables.)

The set of sorted tables is organized into a sequence of levels. The sorted
table generated from a log file is placed in a special **young** level (also
called level-0). When the number of young files exceeds a certain threshold
(currently four), all of the young files are merged together with all of the
overlapping level-1 files to produce a sequence of new level-1 files (we create
a new level-1 file for every 2MB of data).

Files in the young level may contain overlapping keys. However, files in other
levels have distinct non-overlapping key ranges. Consider level number L where
L >= 1.
When the combined size of files in level-L exceeds (10^L) MB (i.e., 10MB
for level-1, 100MB for level-2, ...), one file in level-L and all of the
overlapping files in level-(L+1) are merged to form a set of new files for
level-(L+1). These merges have the effect of gradually migrating new updates
from the young level to the largest level using only bulk reads and writes
(i.e., minimizing expensive seeks).

### Manifest

A MANIFEST file lists the set of sorted tables that make up each level, the
corresponding key ranges, and other important metadata. A new MANIFEST file
(with a new number embedded in the file name) is created whenever the database
is reopened. The MANIFEST file is formatted as a log, and changes made to the
serving state (as files are added or removed) are appended to this log.

### Current

CURRENT is a simple text file that contains the name of the latest MANIFEST
file.

### Info logs

Informational messages are printed to files named LOG and LOG.old.

### Others

Other files used for miscellaneous purposes may also be present (LOCK, *.dbtmp).

## Level 0

When the log file grows above a certain size (1MB by default):
create a brand new memtable and log file and direct future updates here.

In the background:

1. Write the contents of the previous memtable to an sstable.
2. Discard the memtable.
3. Delete the old log file and the old memtable.
4. Add the new sstable to the young (level-0) level.

## Compactions

When the size of level L exceeds its limit, we compact it in a background
thread. The compaction picks a file from level L and all overlapping files from
the next level L+1. Note that if a level-L file overlaps only part of a
level-(L+1) file, the entire file at level-(L+1) is used as an input to the
compaction and will be discarded after the compaction.
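The input-selection rule just described can be sketched in a few lines. This is an illustration, not leveldb's actual code; the file names and `(name, smallest_key, largest_key)` tuples are assumptions made for the example.

```python
# Sketch of compaction input selection: one file from level L, plus every
# level-(L+1) file whose key range overlaps it. A partially overlapping
# level-(L+1) file is still included in full (and discarded afterwards).

def overlaps(a_min, a_max, b_min, b_max):
    """True if the closed key ranges [a_min, a_max] and [b_min, b_max] intersect."""
    return a_min <= b_max and b_min <= a_max

def pick_compaction_inputs(level_file, next_level_files):
    """level_file: (name, smallest_key, largest_key) tuple for the level-L input.
    next_level_files: list of the same tuples for level L+1.
    Returns the level-(L+1) files that must join the compaction."""
    _, lo, hi = level_file
    return [f for f in next_level_files if overlaps(lo, hi, f[1], f[2])]

inputs = pick_compaction_inputs(
    ("000123.ldb", "banana", "kiwi"),
    [("000200.ldb", "apple", "cherry"),   # partial overlap: included in full
     ("000201.ldb", "date", "fig"),       # fully contained: included
     ("000202.ldb", "lemon", "peach")])   # disjoint: excluded
```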
Aside: because level-0
is special (files in it may overlap each other), we treat compactions from
level-0 to level-1 specially: a level-0 compaction may pick more than one
level-0 file in case some of these files overlap each other.

A compaction merges the contents of the picked files to produce a sequence of
level-(L+1) files. We switch to producing a new level-(L+1) file after the
current output file has reached the target file size (2MB). We also switch to a
new output file when the key range of the current output file has grown enough
to overlap more than ten level-(L+2) files. This last rule ensures that a later
compaction of a level-(L+1) file will not pick up too much data from
level-(L+2).

The old files are discarded and the new files are added to the serving state.

Compactions for a particular level rotate through the key space. In more detail,
for each level L, we remember the ending key of the last compaction at level L.
The next compaction for level L will pick the first file that starts after this
key (wrapping around to the beginning of the key space if there is no such
file).

Compactions drop overwritten values. They also drop deletion markers if there
are no higher numbered levels that contain a file whose range overlaps the
current key.

### Timing

Level-0 compactions will read up to four 1MB files from level-0, and at worst
all the level-1 files (10MB). I.e., we will read 14MB and write 14MB.

Other than the special level-0 compactions, we will pick one 2MB file from level
L. In the worst case, this will overlap ~ 12 files from level L+1 (10 because
level-(L+1) is ten times the size of level-L, and another two at the boundaries
since the file ranges at level-L will usually not be aligned with the file
ranges at level-(L+1)). The compaction will therefore read 26MB and write 26MB.
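The worst-case figures above can be reproduced with a little arithmetic. A sketch, using only the constants stated in this section (all sizes in MB):

```python
# Level-0 compaction: up to four ~1MB level-0 files plus all of level-1 (10MB).
level0_file_mb = 1         # level-0 files come from ~1MB memtables
level0_trigger = 4         # compaction starts at four level-0 files
level1_mb = 10             # level-1 holds 10^1 = 10MB

level0_read = level0_trigger * level0_file_mb + level1_mb  # 14MB read
level0_write = level0_read                                 # 14MB written

# Other levels: one 2MB file from level L overlaps ~12 files of 2MB each
# at level L+1 (10 from the 10x size ratio, plus 2 at the boundaries).
target_file_mb = 2
growth_factor = 10
boundary_files = 2

overlap_files = growth_factor + boundary_files                 # ~12 files
other_read = target_file_mb + overlap_files * target_file_mb   # 26MB read
other_write = other_read                                       # 26MB written
```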
Assuming a disk IO rate of 100MB/s (ballpark range for modern drives), the worst
compaction cost will be approximately 0.5 seconds.

If we throttle the background writing to something small, say 10% of the full
100MB/s speed, a compaction may take up to 5 seconds. If the user is writing at
10MB/s, we might build up lots of level-0 files (~50 to hold the 5*10MB). This
may significantly increase the cost of reads due to the overhead of merging more
files together on every read.

Solution 1: To reduce this problem, we might want to increase the log switching
threshold when the number of level-0 files is large. The downside is that
the larger this threshold, the more memory we will need to hold the
corresponding memtable.

Solution 2: We might want to decrease the write rate artificially when the number
of level-0 files goes up.

Solution 3: We work on reducing the cost of very wide merges. Perhaps most of
the level-0 files will have their blocks sitting uncompressed in the cache and
we will only need to worry about the O(N) complexity in the merging iterator.

### Number of files

Instead of always making 2MB files, we could make larger files for larger levels
to reduce the total file count, though at the expense of more bursty
compactions. Alternatively, we could shard the set of files into multiple
directories.

An experiment on an ext3 filesystem on Feb 04, 2011 shows the following timings
to do 100K file opens in directories with varying numbers of files:

| Files in directory | Microseconds to open a file |
|-------------------:|----------------------------:|
|               1000 |                           9 |
|              10000 |                          10 |
|             100000 |                          16 |

So maybe even the sharding is not necessary on modern filesystems?
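To see why file count becomes a concern at all, the 10^L MB level sizes and uniform 2MB files described earlier make the per-level file counts easy to tabulate (a sketch; the choice of seven levels is an illustrative assumption):

```python
# Files needed per level with 10^L MB levels and uniform 2MB files.
file_mb = 2
counts = {level: (10 ** level) // file_mb for level in range(1, 8)}

# level-1 needs 5 files, level-2 needs 50, ... and level-7 needs 5,000,000,
# which is why larger per-file sizes (or sharding across directories) start
# to matter once the deeper levels fill up.
```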

## Recovery

* Read CURRENT to find the name of the latest committed MANIFEST
* Read the named MANIFEST file
* Clean up stale files
* We could open all sstables here, but it is probably better to be lazy...
* Convert log chunk to a new level-0 sstable
* Start directing new writes to a new log file with recovered sequence#

## Garbage collection of files

`DeleteObsoleteFiles()` is called at the end of every compaction and at the end
of recovery. It finds the names of all files in the database. It deletes all log
files that are not the current log file. It deletes all table files that are not
referenced from some level and are not the output of an active compaction.
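The garbage-collection policy above can be sketched as follows. This is a minimal illustration, not leveldb's actual implementation; the function name, file names, and data structures here are assumptions made for the example.

```python
# Sketch of the DeleteObsoleteFiles() policy: keep the current log file and
# every table file that is referenced by some level or is the output of an
# active compaction; everything else is deleted.

def obsolete_files(all_files, current_log, live_tables, compaction_outputs):
    """Return the files the garbage collector would delete."""
    doomed = []
    for name in all_files:
        if name.endswith(".log"):
            if name != current_log:            # every non-current log is stale
                doomed.append(name)
        elif name.endswith(".ldb"):
            if name not in live_tables and name not in compaction_outputs:
                doomed.append(name)            # unreferenced, not being written
    return doomed

doomed = obsolete_files(
    all_files=["000007.log", "000005.log", "000004.ldb", "000006.ldb"],
    current_log="000007.log",
    live_tables={"000006.ldb"},
    compaction_outputs=set())
# The old log (000005.log) and the unreferenced table (000004.ldb) are deleted.
```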