History log of /redis-3.2.3/src/replication.c (Results 1 – 25 of 178)
Revision    Date    Author    Comments
Revision tags: 8.0-m02, 6.2.16, 7.2.6, 7.4.1, 8.0-m01, 7.4.0, 7.4-rc2, 7.4-rc1, 7.2.5, 7.2.4, 7.0.15, 7.2.3, 7.2.2, 7.0.14, 6.2.14, 6.2.15, 7.2.1, 7.0.13, 7.2.0, 7.2-rc3, 7.0.12, 6.2.13, 6.0.20, 7.2-rc2, 6.0.19, 6.2.12, 7.0.11, 7.2-rc1, 7.0.10, 7.0.9, 6.2.11, 6.0.18, 6.2.10, 6.0.17, 6.2.9, 7.0.8, 7.0.7, 7.0.6, 6.2.8, 7.0.5, 7.0.4, 7.0.3, 7.0.2, 7.0.1, 7.0.0, 6.2.7, 7.0-rc3, 7.0-rc2, 7.0-rc1, 6.2.6, 6.0.16, 5.0.14, 5.0.13, 6.0.15, 6.2.5, 6.0.14, 6.2.4, 6.2.3, 6.0.13, 6.2.2, 6.2.1, 6.0.12, 5.0.12, 6.0.11, 6.2.0, 5.0.11, 6.2-rc3, 6.0.10, 6.2-rc2, 6.2-rc1, 6.0.9, 5.0.10, 6.0.8, 6.0.7, 6.0.6, 6.0.5, 6.0.4, 6.0.3, 6.0.2, 6.0.1, 6.0.0, 5.0.9, 6.0-rc4, 6.0-rc3, 5.0.8, 6.0-rc2, 6.0-rc1, 5.0.7, 5.0.6, 5.0.5, 3.2.13, 4.0.14, 5.0.4, 4.0.13, 5.0.3, 4.0.12, 5.0.2, 5.0.1, 5.0.0, 5.0-rc6, 5.0-rc5, 4.0.11, 5.0-rc4, 5.0-rc3, 5.0-rc2, 4.0.10, 3.2.12, 5.0-rc1, 4.0.9, 4.0.8, 4.0.7, 4.0.6, 4.0.5, 4.0.4, 4.0.3, 3.2.11, 4.0.2, 3.2.10, 4.0.1, 4.0.0, 3.2.9, 4.0-rc3, 3.2.8, 3.2.7, 3.2.6, 4.0-rc2, 4.0-rc1, 3.2.5, 3.2.4, 3.2.3
# e67ad1d1 01-Aug-2016 Qu Chen <[email protected]>

Fix a bug: delay BGSAVE for replication while an AOF rewrite is in progress


Revision tags: 3.2.2
# 0a45fbc3 27-Jul-2016 antirez <[email protected]>

Ability of slave to announce arbitrary ip/port to master.

This feature is useful, especially in deployments using Sentinel to
set up Redis HA, where the slave runs behind NAT or port forwarding,
so that the auto-detected port/ip addresses, as listed in
the "INFO replication" output of the master, or as provided by the
"ROLE" command, don't match the real addresses at which the slave is
reachable for connections.

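A minimal sketch of the idea above (hypothetical names, not the actual replication.c code): when reporting a slave's address, prefer the values the slave explicitly announced over the ones auto-detected from the connection. On the configuration side this corresponds to the slave-announce-ip and slave-announce-port directives introduced with 3.2.2.

    #include <stdio.h>

    /* Hypothetical slave record: the auto-detected peer address plus the
     * optional values the slave announced to the master. */
    typedef struct {
        char peer_ip[46];        /* auto-detected from the connection */
        int  peer_port;
        char announced_ip[46];   /* empty string if nothing was announced */
        int  announced_port;     /* 0 if nothing was announced */
    } slave_info;

    /* Address to expose in "INFO replication" / ROLE style output. */
    static void format_slave_addr(const slave_info *s, char *buf, size_t len) {
        const char *ip = s->announced_ip[0] ? s->announced_ip : s->peer_ip;
        int port = s->announced_port ? s->announced_port : s->peer_port;
        snprintf(buf, len, "%s:%d", ip, port);
    }

    int main(void) {
        slave_info s = { "10.0.0.5", 40231, "203.0.113.7", 6379 };
        char buf[64];
        format_slave_addr(&s, buf, sizeof(buf));
        printf("%s\n", buf);   /* prints 203.0.113.7:6379, not the NATted peer */
        return 0;
    }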


# a1bfe22a 22-Jul-2016 antirez <[email protected]>

Replication: when possible start RDB saving ASAP.

In a previous commit the replication code was changed in order to
centralize the BGSAVE for replication trigger in replicationCron(),
however after further testing, the 1 second delay imposed by this
change is not acceptable.

So now the BGSAVE is only delayed if the AOF rewriting process is
active. However past commits made sure that replicationCron() is always
able to trigger the BGSAVE when needed, making the code generally more
robust.

The new code is more similar to the initial @oranagra patch where the
BGSAVE was delayed only if an AOF rewrite was in progress.

Trivia: delaying the BGSAVE uncovered a minor Sentinel issue that is now
fixed.

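A hedged sketch of the trigger decision described above (illustrative names only, not the actual replicationCron() code): start the BGSAVE for waiting slaves right away, unless an AOF rewrite child is running, in which case the cron simply retries on a later iteration.

    #include <stdio.h>

    /* Illustrative state: when should a BGSAVE for replication start? */
    typedef struct {
        int slaves_waiting_bgsave;   /* slaves in WAIT_BGSAVE_START */
        int rdb_child_running;       /* a BGSAVE is already in progress */
        int aof_rewrite_running;     /* an AOF rewrite child is running */
    } repl_state;

    static int should_start_bgsave_now(const repl_state *s) {
        if (s->slaves_waiting_bgsave == 0) return 0; /* nobody is waiting */
        if (s->rdb_child_running) return 0;   /* reuse or wait for that one */
        if (s->aof_rewrite_running) return 0; /* delay; retry on the next run */
        return 1;                             /* otherwise start ASAP */
    }

    int main(void) {
        repl_state during_rewrite = { 1, 0, 1 }, idle = { 1, 0, 0 };
        printf("%d %d\n", should_start_bgsave_now(&during_rewrite),
                          should_start_bgsave_now(&idle));   /* prints 0 1 */
        return 0;
    }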


# 017378ec 21-Jul-2016 antirez <[email protected]>

Replication: start BGSAVE for replication always in replicationCron().

This makes the replication code conceptually simpler by removing the
synchronous BGSAVE trigger in syncCommand(). This also means that
socket and disk BGSAVE targets are handled by the same code.



Revision tags: 3.2.1, 3.2.0, 3.2.0-rc3, 3.0.7, 3.2.0-rc2, 3.2-rc1, 3.0.6, 2.8.24
# c5f8c80a 03-Dec-2015 antirez <[email protected]>

Centralize slave replication handshake aborting.

Now we have a single function to call in any state of the slave
handshake, instead of using different functions for different states
which is error prone. This change was performed in the context of issue
#2479 but does not fix it, since it should be functionally identical to
the past behavior. It is just an attempt to make replication.c simpler to
follow.



Revision tags: 3.0.5, 2.8.23
# e3344b80 15-Oct-2015 antirez <[email protected]>

PR 2813 fix ported to unstable.


# 47e6cf11 30-Sep-2015 antirez <[email protected]>

Refactoring: unlinkClient() added to lower freeClient() complexity.


# 2d21af45 30-Sep-2015 antirez <[email protected]>

Refactoring: new function to test if client has pending output.


# 23e7710c 28-Sep-2015 antirez <[email protected]>

Avoid installing the client write handler when possible.


Revision tags: 2.8.22, 3.0.4
# d036abe2 21-Aug-2015 antirez <[email protected]>

Log client details on SLAVEOF command having an effect.


# f18e5b63 20-Aug-2015 antirez <[email protected]>

startBgsaveForReplication(): handle waiting slaves state change.

Before this commit, after triggering a BGSAVE it was up to the caller of
startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in
order to update them accordingly. However when the replication target is
the socket, this is not possible since the process of updating the
slaves and sending the FULLRESYNC reply must be coupled with the process
of starting an RDB save (the reason is, we need to send the FULLRESYNC
reply and spawn a child that will start to send RDB data to the slaves
ASAP).

This commit moves the responsibility of handling slaves in
WAIT_BGSAVE_START to startBgsaveForReplication() so that for both
diskless and disk-based replication we have the same chain of
responsibility. In order to accommodate this change, syncCommand() also
needs to put the client in the slave list ASAP (just after the initial
checks) and not at the end, so that startBgsaveForReplication() can find
the new slave already in the list.

Another related change is what happens if the BGSAVE fails because of
fork() or other errors: we now remove the slave from the list of slaves
and send an error, scheduling the slave connection to be terminated.

As a side effect of this change the following errors found by
Oran Agra are fixed (thanks!):

1. rdbSaveToSlavesSockets() on a failed fork() will get the slaves cleaned
up, otherwise they remain in a wrong state forever since we set them up
for full resync before actually trying to fork.

2. updateSlavesWaitingBgsave() with replication target set as "socket"
was broken since the function changed the slaves' state from
WAIT_BGSAVE_START to WAIT_BGSAVE_END via
replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets()
would not find any slave in the right state (WAIT_BGSAVE_START) to feed.



# bea12591 07-Aug-2015 antirez <[email protected]>

slaveTryPartialResynchronization and syncWithMaster: better synergy.

It is simpler if removing the read event handler from the FD is up to
slaveTryPartialResynchronization, after all it is only called in the
context of syncWithMaster.

This commit also makes sure that on error all the event handlers are
removed from the socket before closing it.



# 88c716a0 06-Aug-2015 antirez <[email protected]>

syncWithMaster(): non blocking state machine.


# ce5761e0 06-Aug-2015 antirez <[email protected]>

startBgsaveForReplication(): log what you really do.


# 3e6d4d59 06-Aug-2015 antirez <[email protected]>

Replication: add REPLCONF CAPA EOF support.

Add the concept of slave capabilities to Redis: the slave now presents
itself to the Redis master with a set of capabilities in the form:

REPLCONF capa SOMECAPA capa OTHERCAPA ...

This has the effect of setting slave->slave_capa with the corresponding
SLAVE_CAPA macros that the master can test later to understand if the
slave will understand certain formats and protocols of the replication
process. This makes it much simpler to introduce new replication
capabilities in the future in a way that doesn't break old slaves or
masters.

This patch was designed and implemented together with Oran Agra
(@oranagra).

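A hedged sketch of the capability negotiation described above (the constants and names here are assumptions for illustration, not the actual server code): the master folds every recognized "capa <name>" pair into a bitmask and silently skips names it does not know, which is what lets newer slaves keep talking to older masters.

    #include <stdio.h>
    #include <strings.h>   /* strcasecmp */

    #define CAPA_NONE 0
    #define CAPA_EOF  (1 << 0)   /* slave understands the diskless EOF-marker format */

    /* Parse "capa NAME capa NAME ..." argument pairs into a bitmask. */
    static int parse_capa(int argc, const char **argv) {
        int capa = CAPA_NONE;
        for (int i = 0; i + 1 < argc; i += 2) {
            if (strcasecmp(argv[i], "capa") != 0) continue;
            if (strcasecmp(argv[i + 1], "eof") == 0) capa |= CAPA_EOF;
            /* Unknown capability names are ignored on purpose. */
        }
        return capa;
    }

    int main(void) {
        const char *args[] = { "capa", "eof", "capa", "some-future-capa" };
        printf("capa mask: %d\n", parse_capa(4, args));   /* prints 1 */
        return 0;
    }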


# 55ba7727 05-Aug-2015 antirez <[email protected]>

Fix replication slave pings period.

For PINGs we use the period configured by the user, but for the newlines
of slaves waiting for an RDB to be created (including slaves waiting for
the FULLRESYNC reply) we need to ping with a frequency of 1 second, since
the timeout is fixed and needs to be refreshed.

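A hedged sketch of the two periods involved (hypothetical names; the real logic lives in replicationCron()): real PINGs follow the user-configured period, while the keepalive newlines for slaves still waiting on the RDB go out on every one-second cron run, because the timeout they refresh is fixed.

    #include <stdio.h>

    /* Illustrative 1 Hz replication cron tick. */
    static void replication_cron_tick(long cron_loops, int ping_period_secs,
                                      int slaves_waiting_rdb) {
        /* PING online slaves only every ping_period_secs seconds. */
        if (ping_period_secs > 0 && (cron_loops % ping_period_secs) == 0)
            printf("t=%ld: PING -> online slaves\n", cron_loops);
        /* A bare newline is a harmless inline byte that refreshes the timeout
         * of slaves still waiting for the FULLRESYNC reply or the RDB payload,
         * so it is sent on every run. */
        for (int i = 0; i < slaves_waiting_rdb; i++)
            printf("t=%ld: '\\n' -> waiting slave #%d\n", cron_loops, i);
    }

    int main(void) {
        for (long t = 0; t < 3; t++) replication_cron_tick(t, 10, 1);
        return 0;
    }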


# 15de6b10 05-Aug-2015 antirez <[email protected]>

Make sure we re-emit SELECT after each new slave full sync setup.

In previous commits we moved the FULLRESYNC to the moment we start the
BGSAVE, so that the offset we provide is the right one. However this
also means that we need to re-emit the SELECT statement every time a new
slave starts to accumulate the changes.

To obtain this effect in a cleaner way, the function that sends the
FULLRESYNC reply was overloaded with the more important role of also doing
this and changing the slave state. So it was renamed to
replicationSetupSlaveForFullResync() to better reflect what it does now.



# a5a06a8e 05-Aug-2015 antirez <[email protected]>

Don't send SELECT to slaves in WAIT_BGSAVE_START state.


# 62b5c60e 05-Aug-2015 antirez <[email protected]>

syncCommand() comments improved.


# 292fec05 04-Aug-2015 antirez <[email protected]>

PSYNC initial offset fix.

This commit attempts to fix a bug involving PSYNC and diskless
replication (currently experimental), found by Yuval Inbar from Redis Labs,
and later found to have even more far-reaching effects (the bug also
exists when diskless replication is off).

The gist of the bug is that a Redis master replies with +FULLRESYNC to
a PSYNC attempt that fails and requires a full resynchronization.
However, the baseline offset sent along with FULLRESYNC was always the
current master replication offset. This is not ok, because there are
many reasons that may delay the RDB file creation. And... guess what,
the master offset we communicate must be the one at the time the RDB
was created. So for example:

1) When the BGSAVE for replication is delayed since there is one
already in progress, but it is not good for replication.
2) When the BGSAVE is not needed because we attach to one currently ongoing.
3) When, because of diskless replication, the BGSAVE is delayed.

In all the above cases the PSYNC reply is wrong and the slave may
reconnect later claiming to need a wrong offset: this may cause
data corruption later.

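A hedged sketch of the fix's core idea (hypothetical names, heavily simplified): the offset placed in the +FULLRESYNC reply has to be captured at the moment the RDB snapshot is started, not read from the ever-advancing master offset at whatever later time the reply happens to be written.

    #include <stdio.h>

    typedef struct {
        char run_id[41];              /* 40 hex chars + NUL */
        long long master_repl_offset; /* keeps advancing as writes arrive */
    } master_state;

    /* Capture the offset when the BGSAVE child is started for this slave. */
    static long long offset_at_rdb_start(const master_state *m) {
        return m->master_repl_offset;
    }

    static void send_fullresync(const master_state *m, long long offset_at_fork) {
        /* Reply format used for a PSYNC full resynchronization. */
        printf("+FULLRESYNC %s %lld\r\n", m->run_id, offset_at_fork);
    }

    int main(void) {
        master_state m = { "0123456789abcdef0123456789abcdef01234567", 1000 };
        long long at_fork = offset_at_rdb_start(&m);  /* capture now...         */
        m.master_repl_offset += 250;                  /* ...writes keep flowing */
        send_fullresync(&m, at_fork);                 /* announces 1000, not 1250 */
        return 0;
    }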


# c1e94b6b 28-Jul-2015 antirez <[email protected]>

Force slaves to resync after unsuccessful PSYNC.

Using chained replication where C is slave of B which is in turn slave of
A, if B reconnects the replication link with A but discovers it is no
longer possible to PSYNC, slaves of B must be disconnected and PSYNC
not allowed, since the new B dataset may be completely different after
the synchronization with the master.

Note that there are various semantic differences in the way this is
handled now compared to the past. In the past the semantics were:

1. When a slave lost the connection with its master, it disconnected the
chained slaves ASAP. This is not needed, since after a successful PSYNC
with the master, the slaves can continue and don't need to resync in turn.

2. However after a failed PSYNC the replication backlog was not reset, so a
slave was able to PSYNC successfully even if the instance did a full
sync with its master, containing now an entirely different data set.

Now instead chained slaves are not disconnected when the slave loses the
connection with its master, but only when it is forced to do a full SYNC
with its master. This means that if the slave having chained slaves does a
successful PSYNC all its slaves can continue without trouble.

See issue #2694 for more details.



# 278ea9d1 28-Jul-2015 antirez <[email protected]>

replicationHandleMasterDisconnection() belongs to replication.c.


# 32f80e2f 27-Jul-2015 antirez <[email protected]>

RDMF: More consistent define names.


# 40eb548a 26-Jul-2015 antirez <[email protected]>

RDMF: REDIS_OK REDIS_ERR -> C_OK C_ERR.


# 2d9e3eb1 26-Jul-2015 antirez <[email protected]>

RDMF: redisAssert -> serverAssert.

