This is extremely useful to simulate a high load of requests for
different keys, forcing Redis to track a lot of information about
several clients, as happens with real world workloads.
Now that the call also invalidates client side caching slots, it is
important that after an internal flush operation we both send the
invalidation notifications to the clients and, at the same time, reclaim
the memory of the tracking table. This may even fix a few edge cases
related to MULTI/EXEC + WATCH during resync, not sure, but in general
it looks more correct.
Otherwise the tracking table would never get garbage collected once
there are no longer clients with tracking enabled.
Now the invalidation function immediately checks if there is any table
allocated, otherwise it returns ASAP, so the overhead when the feature
is not used should be near zero.
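A minimal sketch of that behaviour, with hypothetical names (the real
tracking code and its data structures are more involved than this): the
flush hook returns immediately when no tracking table was ever
allocated, otherwise it notifies the tracking clients and reclaims the
table memory.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical names, for illustration only. */
    typedef struct trackingTable { int dummy; } trackingTable;
    static trackingTable *TrackingTable = NULL;  /* NULL while tracking unused. */

    static void sendInvalidationToTrackingClients(void) {
        /* Here the real code would push invalidation messages to every
         * client that enabled CLIENT TRACKING. */
        printf("notifying tracking clients of the flush\n");
    }

    /* Called after FLUSHALL/FLUSHDB and similar internal flush operations. */
    static void trackingInvalidateOnFlush(void) {
        if (TrackingTable == NULL) return;  /* Feature unused: near zero overhead. */
        sendInvalidationToTrackingClients();
        free(TrackingTable);                /* Reclaim the tracking table memory. */
        TrackingTable = NULL;               /* Recreated lazily if needed again. */
    }

    int main(void) {
        trackingInvalidateOnFlush();        /* No table allocated: returns ASAP. */
        TrackingTable = malloc(sizeof(*TrackingTable));
        trackingInvalidateOnFlush();        /* Table present: notify and free. */
        return 0;
    }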
Without this change, diskless replicas loading RDB files from the
socket would not abort when a broken RDB file gets loaded. This is
potentially unsafe, because right now Redis is not able to guarantee
that encoding errors are safe from the point of view of memory
corruption (for instance the LZF library may not be safe against
untrusted data?), so it is better to abort when the RDB file we are
going to load is corrupted. I/O errors, instead, are still returned to
the caller without aborting, so that in case of a short read the
diskless replica can try again.
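A hedged sketch of the resulting policy, not the actual rdb.c code (the
enum and function names here are invented for illustration): decoding
errors abort the process, while I/O errors are reported back so that
the diskless replica can retry the synchronization.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical classification of load failures, for illustration. */
    typedef enum { LOAD_OK, LOAD_ERR_IO, LOAD_ERR_ENCODING } loadResult;

    static void handleLoadResult(loadResult r) {
        switch (r) {
        case LOAD_OK:
            break;
        case LOAD_ERR_IO:
            /* Short read / network error: report it to the caller so the
             * replica can attempt a new sync instead of exiting. */
            fprintf(stderr, "RDB I/O error, sync will be retried\n");
            break;
        case LOAD_ERR_ENCODING:
            /* Corrupted payload: decoding it further is potentially unsafe
             * (e.g. LZF on untrusted data), so terminate immediately. */
            fprintf(stderr, "Corrupted RDB payload, aborting\n");
            abort();
        }
    }

    int main(void) {
        handleLoadResult(LOAD_OK);
        handleLoadResult(LOAD_ERR_IO);
        /* handleLoadResult(LOAD_ERR_ENCODING); would abort the process. */
        return 0;
    }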
Now that the replica can read the RDB directly from the socket, it
should avoid exiting on a short read and instead try to re-sync.
This commit tries to have minimal effects on non-diskless RDB reading,
and includes a test that tries to trigger this scenario in various read cases.
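To illustrate the intent only, with hypothetical names rather than the
real replication code: a short read is treated as retryable only when
the RDB is being read from the socket, so file-based loading keeps its
previous behaviour.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical load context, for illustration. */
    typedef struct {
        bool from_socket;   /* true for a diskless (socket) load, false for a file. */
    } rdbLoadContext;

    /* Only the socket path treats a short read as a transient network
     * problem to recover from by re-syncing; a short read from a local
     * RDB file is still handled as before. */
    static bool shortReadIsRetryable(const rdbLoadContext *ctx) {
        return ctx->from_socket;
    }

    int main(void) {
        rdbLoadContext diskless = { .from_socket = true };
        rdbLoadContext fromfile = { .from_socket = false };
        printf("socket: %d, file: %d\n",
               shortReadIsRetryable(&diskless), shortReadIsRetryable(&fromfile));
        return 0;
    }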
The implementation of diskless replication was so far diskless only on the master side.
The slave side was still storing the received RDB file on disk before loading it back in and parsing it.
This commit adds two modes to load the RDB directly from the socket:
1) when-empty
2) using "swapdb"
A third mode, using a diskless slave with flushdb, is risky and currently not included (see the configuration sketch below).
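For orientation, a minimal redis.conf sketch of how the load mode is
selected, assuming the repl-diskless-load directive and the values
documented in the shipped configuration file:

    # How the replica loads the RDB received from the master:
    # disabled    - store the RDB on disk first (classic behaviour, default)
    # on-empty-db - parse it straight from the socket, only when the
    #               replica's dataset is empty (the "when-empty" mode above)
    # swapdb      - keep a copy of the current dataset in RAM while parsing
    #               from the socket, as a safety net if the transfer fails
    repl-diskless-load disabled

The default, disabled, preserves the pre-existing behaviour of storing
the RDB on disk before loading it.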
other changes:
--------------
Distinguish between AOF configuration and AOF state, so that we can re-enable AOF only when the sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); also, without this distinction,
CONFIG GET and INFO during RDB loading would have lied (see the sketch after this list)
When loading the RDB from the network, don't kill the server on a short read (it can be a network error)
Fix the RDB check when performed on an AOF file with an RDB preamble
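A minimal sketch of the configuration/state split mentioned above,
using hypothetical field names (the real server structure differs): the
user's setting is kept apart from the runtime state, so AOF can be
stopped during the load and turned back on only once the sync really
succeeds.

    #include <stdbool.h>

    /* Hypothetical fields, illustrating the configuration/state split only. */
    typedef enum { AOF_OFF, AOF_ON, AOF_WAIT_REWRITE } aofState;

    typedef struct {
        bool aof_configured;  /* What the user asked for: appendonly yes/no. */
        aofState aof_state;   /* What is actually happening at runtime. */
    } serverSketch;

    /* AOF is stopped for the duration of the load without touching the
     * configuration flag... */
    static void stopAofForLoading(serverSketch *s) { s->aof_state = AOF_OFF; }

    /* ...and re-enabled only after a successful sync, and only if the user
     * actually configured it; a failed sync attempt leaves it off. */
    static void maybeRestartAofAfterSync(serverSketch *s, bool sync_succeeded) {
        if (sync_succeeded && s->aof_configured) s->aof_state = AOF_WAIT_REWRITE;
    }

    int main(void) {
        serverSketch s = { .aof_configured = true, .aof_state = AOF_ON };
        stopAofForLoading(&s);
        maybeRestartAofAfterSync(&s, true);
        return s.aof_state == AOF_WAIT_REWRITE ? 0 : 1;
    }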
tests:
Run the replication tests for the diskless slave too
Make the replication tests a bit more aggressive
Add a test for diskless load with swapdb