Revision tags: 1.6.32, 1.6.31, 1.6.30, 1.6.29, 1.6.28, 1.6.27, 1.6.26, 1.6.25, 1.6.24, 1.6.23, 1.6.22, 1.6.21, 1.6.20, 1.6.19, 1.6.18, 1.6.17, 1.6.16, 1.6.15, 1.6.14, 1.6.13, 1.6.12, 1.6.11, 1.6.10, 1.6.9, 1.6.8, 1.6.7, 1.6.6, 1.6.5, 1.6.4, 1.6.3, 1.6.2, 1.6.1, 1.6.0, 1.5.22, 1.5.21, 1.5.20, 1.5.19, 1.5.18, 1.5.17, 1.5.16, 1.5.15, 1.5.14, 1.5.13, 1.5.12, 1.5.11, 1.5.10, 1.5.9, 1.5.8, 1.5.7, 1.5.6, 1.5.5, 1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5.0, 1.4.39, 1.4.38, 1.4.37, flash-with-wbuf-stack, 1.4.36, 1.4.35, 1.4.34, 1.4.33, 1.4.32, 1.4.31, 1.4.30, 1.4.29
# ee461d11 | 12-Jul-2016 | dormando <[email protected]>
slab reassign now works with chunks and chunked items.
also fixes the new LRU algorithm to balance by total bytes used rather than total chunks used, since chunk counts aren't tracked for multi-chunk items.
also fixes a bug where the LRU limit wasn't being applied to HOT_LRU.
also some cleanup from previous commits.
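As an aside for readers, a minimal sketch in C of the balancing idea, with hypothetical names and knobs (this is not memcached's actual code): the sub-LRU decision compares bytes used against a byte budget instead of counting chunks.

    #include <stdint.h>

    /* Hypothetical sketch: decide whether a sub-LRU (e.g. HOT_LRU) is over
     * its share by comparing bytes, since chunk counts aren't tracked for
     * multi-chunk items. The pct knob is illustrative. */
    static int lru_over_limit(uint64_t bytes_used, uint64_t class_bytes_limit,
                              unsigned int pct) {
        return bytes_used > (class_bytes_limit / 100) * pct;
    }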
# 51a828b9 | 07-Jul-2016 | dormando <[email protected]>
startup options for chunked items.
Has spent some time under performance testing. For larger items there's less than 5% extra CPU usage; however, the max usable CPU when using large items is 1/10th or less before you run out of bandwidth. Mixed small/large items will still balance out.
Comments out debugging (which must be removed for release).
Restores defaults and ensures only t/chunked-items.t is affected.
dyn-maxbytes and item_size_max tests still fail.
append/prepend aren't implemented; SASL needs to be guarded.
The slab mover needs to be updated.
Revision tags: 1.4.28
# b05653f9 | 01-Jul-2016 | dormando <[email protected]>
chunked item second checkpoint
Can actually fetch items now; fixed a few bugs with storage/freeing.
Added fetching for binprot, plus some basic tests.
Many tests still fail for various reasons, and append/prepend isn't fixed yet.
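For orientation, a minimal sketch of the layout the chunked-item work implies, with hypothetical field names (memcached's real chunk struct differs): a large item becomes a linked chain of chunks, and a fetch walks the chain.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical chunked item: a chain of chunks walked in order. */
    typedef struct chunk {
        struct chunk *next; /* next chunk in the chain, NULL at the end */
        int used;           /* payload bytes stored in this chunk */
        char data[];        /* payload */
    } chunk_t;

    /* Copy a chunked item's payload into a flat buffer sized by the caller. */
    static size_t read_chunked(const chunk_t *c, char *out) {
        size_t total = 0;
        for (; c != NULL; c = c->next) {
            memcpy(out + total, c->data, (size_t)c->used);
            total += (size_t)c->used;
        }
        return total;
    }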
# 0567967a | 30-Jun-2016 | dormando <[email protected]>
chunked items checkpoint commit
Can set and store large items via the ASCII protocol. gets/append/prepend/binprot are not implemented yet.
# cb01d504 | 27-Jun-2016 | dormando <[email protected]>
clean up global stats code a little.
Tons of stats that are no longer used were left in the global stats structure, and it looks like we kept accidentally adding new ones in there. There's also an unused mutex.
Split global stats into `stats` and `stats_state`. Initialize via memset, reset only `stats` via memset, removing several places where stats values were repeated. Looks much cleaner and should be less error-prone.
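A minimal sketch of the split, with illustrative fields only (the real structs carry many more counters): resettable counters live in `stats`, ongoing state in `stats_state`, so a stats reset can memset one without clobbering the other.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative fields only. */
    struct stats       { uint64_t total_items; uint64_t expired_unfetched; };
    struct stats_state { uint64_t curr_items;  uint64_t curr_conns; };

    static struct stats stats;
    static struct stats_state stats_state;

    static void stats_reset(void) {
        /* resettable counters are wiped wholesale... */
        memset(&stats, 0, sizeof(struct stats));
        /* ...while live state in stats_state is left alone */
    }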
Revision tags: 1.4.27
# 31541b37 | 23-Jun-2016 | dormando <[email protected]>
cache_memlimit command for tuning runtime maxbytes
Allows dynamically increasing the memory limit of a running system, if memory isn't being preallocated.
If `-o modern` is in use, it can also dynamically lower memory usage. Pages are free()'d back to the OS via the slab rebalancer as memory is freed up. This does not guarantee the OS will actually give the memory back for other applications to use; that depends on how the OS handles memory.
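A hedged example of driving it over the text protocol; the argument is the new limit in megabytes, and the `OK` response is assumed to be the typical success reply (exact syntax lives in protocol.txt):

    cache_memlimit 1024
    OK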
# d7fb022d | 22-Jun-2016 | dormando <[email protected]>
allow manually specifying slab class sizes
"-o slab_sizes=100-200-300-400-500" will create 5 slab classes of the specified sizes, with the final class being item_max_size.
Using the new online "stats sizes" command, it's possible to determine whether the typical factor-based slab class growth aligns well with how items are stored.
This is dangerous unless you really know what you're doing. If your items have an exact or very predictable size this makes a lot of sense. If they do not, the defaults are safer.
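A minimal sketch of parsing such a dash-separated list, using a hypothetical helper rather than memcached's actual parser: sizes must be positive and ascending, and the class count is returned.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical parser for "100-200-300-400-500"; modifies s in place. */
    static int parse_slab_sizes(char *s, uint32_t *out, int max_classes) {
        int n = 0;
        for (char *tok = strtok(s, "-"); tok != NULL && n < max_classes;
             tok = strtok(NULL, "-")) {
            uint32_t size = (uint32_t)strtoul(tok, NULL, 10);
            if (size == 0 || (n > 0 && size <= out[n - 1]))
                return -1; /* sizes must be positive and strictly ascending */
            out[n++] = size;
        }
        return n;
    }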
# ae6f4267 | 22-Jun-2016 | dormando <[email protected]>
online hang-free "stats sizes" command.
"stats sizes" is one of the last cache-hanging commands. With millions of items it can hang for many seconds.
This commit changes the command to be dynamic. A histogram is tracked as items are linked and unlinked from the cache. The tracking is enabled or disabled at runtime via "stats sizes_enable" and "stats sizes_disable".
This presently "works" but isn't accurate. Giving it some time to think over before switching to requiring that CAS be enabled; otherwise the values could underflow if items that existed before the sizes tracker was enabled are removed. This attempts to work around that by using it->time, which gets updated on fetch, and is thus inaccurate.
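A minimal sketch of the link/unlink bookkeeping, with a hypothetical bucket width and array size (not the actual implementation): the histogram stays current without walking the cache, and an unlink of an item that predates tracking is exactly the underflow described above.

    /* Hypothetical 32-byte buckets covering item sizes up to 1MB;
     * locking elided. */
    #define BUCKET_WIDTH 32
    static int size_hist[(1024 * 1024) / BUCKET_WIDTH];

    static void sizes_on_link(int item_size)   { size_hist[item_size / BUCKET_WIDTH]++; }
    static void sizes_on_unlink(int item_size) { size_hist[item_size / BUCKET_WIDTH]--; }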
Revision tags: 1.4.26
# 9517c656 | 31-May-2016 | dormando <[email protected]>
bump some global stats to 64-bit uints
total_items is pretty easy to overflow. Upped some of the others just in case.
# e688e97d | 18-Jan-2016 | Natanael Copa <[email protected]>
fix build with musl libc
musl libc will warn if you include sys/signal.h instead of signal.h as specified by POSIX. The build then fails due to -Werror explicitly being set.
Fix it by using the POSIX location.
fixes #138
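The fix in miniature, per the description above:

    /* POSIX specifies <signal.h>; <sys/signal.h> is a legacy path that musl
     * warns about, and -Werror turns the warning fatal. */
    #include <signal.h>   /* was: #include <sys/signal.h> */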
Revision tags: 1.4.25
# ec937e5e | 21-Oct-2015 | dormando <[email protected]>
fix over-inflation of total_malloced
mem_alloced was getting increased every time a page was assigned out of either malloc or the global page pool. This means total_malloced inflates forever as pages are reused, and once limit_maxbytes is surpassed it stops attempting to malloc more memory.
The result is that we would stop malloc'ing new memory too early if page reclaim happens before the whole thing fills. The test already caused this condition, so adding the extra checks was trivial.
# b1debc4c | 10-Oct-2015 | dormando <[email protected]>
try harder to save items
Previously the slab mover would evict items if the new chunk was within the slab page being moved. Now it will do an inline reclaim of the chunk and retry until it runs out of memory.
# 8fa54f7e | 08-Oct-2015 | dormando <[email protected]>
split rebal_evictions into _nomem and _samepage
Gross oversight putting two conditions into the same variable. Now we can tell whether we're evicting because we're hitting the bottom of the free memory pool, or because we keep trying to rescue items into the same page as the one being cleared.
# 186509c2 | 07-Oct-2015 | dormando <[email protected]>
stop using slab class 255 for page mover
Class 255 is now a legitimate class, used by the NOEXP LRU when the expirezero_does_not_evict flag is enabled. Instead, we now force a single bit, ITEM_SLABBED, when a chunk is returned to the slabber, and ITEM_SLABBED|ITEM_FETCHED means it's been cleared for a page move.
item_alloc overwrites the chunk's flags on set. The only weirdness was slab_free |='ing in the ITEM_SLABBED bit. I tracked that down to a commit in 2003 titled "more debugging" and can't come up with a good enough excuse for preserving an item's flags once it's been returned to the free memory pool. So now we overload the flag meaning.
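A sketch of the flag arithmetic described above; the bit values are assumptions modeled on memcached's item flags of this era, so treat them as illustrative:

    #include <stdint.h>

    /* Assumed bit values, for illustration only. */
    #define ITEM_LINKED  1
    #define ITEM_SLABBED 4
    #define ITEM_FETCHED 8

    /* A freed chunk carries exactly ITEM_SLABBED; the page mover also sets
     * ITEM_FETCHED to mark a chunk it has cleared for the move. */
    static inline int cleared_for_page_move(uint16_t it_flags) {
        return (it_flags & (ITEM_SLABBED | ITEM_FETCHED))
                == (ITEM_SLABBED | ITEM_FETCHED);
    }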
# 6ee8daef | 06-Oct-2015 | dormando <[email protected]>
call STATS_LOCK() less in slab mover.
Uses the slab_rebal struct to accumulate stats, grabbing the global lock only occasionally to fill them in instead.
# 11eb3f23 | 06-Oct-2015 | dormando <[email protected]>
"mem_requested" from "stats slabs" is now accurate
During an item rescue, item size was being added to the slab class when the new chunk requested, and then not removed again from the total if the i
"mem_requested" from "stats slabs" is now accurate
During an item rescue, item size was being added to the slab class when the new chunk requested, and then not removed again from the total if the item was successfully rescued. Now just always remove from the total.
# a836eabc | 05-Oct-2015 | dormando <[email protected]>
fix memory corruption in slab page mover
If an item has neither the ITEM_SLABBED bit nor the ITEM_LINKED bit, the logic was falling through and defaulting to MOVE_PASS. An item that has had storage allocated via item_alloc() but hasn't completed its data upload sits in this state. With MOVE_PASS for an item in this state, if no other items trip the busy re-scan of the page, the mover will consider the page completely wiped even with the outstanding item.
The hilarious bit is I'd clearly thought this through: the top comment states "if this, then this, or that"... with the "or that" logic completely missing. One added line of code survived a 5-hour torture test, where before it crashed after 30-60 minutes.
Leaves some handy debug code #ifdef'ed out. Also moves the memset wipe on page move completion to only happen if the page isn't being returned to the global page pool, as the page allocator does a memset and chunk-split anyway.
Thanks to Scott Mansfield for the initial information eventually leading to this discovery.
# fa51ad84 | 30-Sep-2015 | dormando <[email protected]>
fix off-by-one in slab shuffling
Thanks Devon :)
# d6e96467 | 29-Sep-2015 | dormando <[email protected]>
first half of new slab automover
If any slab classes have more than two pages' worth of free chunks, attempt to free one page back to a global pool.
Creates the new concept of a slab page move destination of "0", which is a global page pool. Pages can be re-assigned out of that pool during allocation.
Combined with item rescuing from the previous patch, we can safely shuffle pages back to the reassignment pool as chunks free up naturally. This should be a safe default going forward. Users should also be able to decide to free or move pages based on eviction pressure; that is coming up in another commit.
This also fixes a calculation of the NOEXP LRU size, and completely removes the old slab automover thread. Slab automove decisions will now be part of the LRU maintainer thread.
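A minimal sketch of the release heuristic, under the assumption of simplified types (memcached's real slabclass_t and mover API carry more detail): a class holding more than two pages' worth of free chunks hands one page to destination 0, the global pool.

    /* Illustrative, pared-down types. */
    typedef struct {
        unsigned int sl_curr; /* free chunks currently available */
        unsigned int perslab; /* chunks per page in this class */
        unsigned int slabs;   /* pages assigned to this class */
    } slabclass_t;

    int slabs_reassign(int src, int dst); /* simplified mover prototype */

    static void maybe_release_page(slabclass_t *p, int class_id) {
        /* more than two pages' worth of free chunks? give one back */
        if (p->slabs > 1 && p->sl_curr > 2 * p->perslab)
            slabs_reassign(class_id, 0 /* global page pool */);
    }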
# d5185f9c | 29-Sep-2015 | dormando <[email protected]>
properly shuffle page list after slab move
We used to take the newest page of the page list and replace the oldest page with it, so only the first page we move from a slab class would actually be "old". Instead, actually burn the slight CPU to shuffle all of the pointers down one. Now we always chew the oldest page.
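The shuffle in miniature, as a hedged sketch (the array name is illustrative): sliding the remaining pointers down one slot keeps slot 0 holding the oldest page.

    #include <string.h>

    /* After giving up the oldest page (slot 0), slide the rest down so
     * slot 0 is again the oldest page in the class. */
    static void shuffle_page_list(void **slab_list, unsigned int count) {
        if (count > 1)
            memmove(&slab_list[0], &slab_list[1], sizeof(void *) * (count - 1));
    }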
# 004e2211 | 29-Sep-2015 | dormando <[email protected]>
slab mover rescues valid items with free chunks
During a slab page move, items are typically ejected regardless of their validity. Now, if an item is valid and free chunks are available in the same slab class, copy the item over and replace it.
It's up to external systems to try to ensure free chunks are available before moving a slab page. If there is no free memory it will simply evict items as normal.
Also adds counters so we can finally tell how often these cases happen.
Revision tags: 1.4.24, 1.4.23
# a2fc8e93 | 20-Apr-2015 | dormando <[email protected]>
fix off-by-one with slab management
Data sticking into the highest slab class was unallocated. Thanks to pyry for the repro case, run in a loop against ./memcached -v -m 32 -p 11212 -f 1.012:
perl -e 'use Cache::Memcached; $memd = new Cache::Memcached { servers=>["127.0.0.1:11212"] }; for (20..1000) { print "$_\n"; $memd->set("fo2$_", "a"x1024) };'
This serves as a note to turn this into a test.
# dc272ba5 | 08-Jan-2015 | dormando <[email protected]>
another lock fix for slab mover
Wasn't holding LRU locks while unlinking an item. The options were either to never hold the slabs lock underneath the LRU locks, which is doable but annoying, or to drop the slabs lock for the unlink step. It's not very clear, but I think it's safe.
# 62415f16 | 07-Jan-2015 | dormando <[email protected]>
make slab mover lock safe again.
Given that mutex locks act as memory barriers, this should work.
This does not yet fix being able to eject hot items from the fetch path.
# e708513a | 07-Jan-2015 | dormando <[email protected]>
LRU maintainer thread now fires LRU crawler
... if available. Very simple starter heuristic for how often to run the crawler.
At this point, this patch series should have a significant impact on hit ratio.