Because "keymiss" is "special" compared to the rest of
the notifications (Trying not to break existing apps
using the 'A' format for notifications)
Also updated redis.conf and module.c docs
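A minimal notify-keyspace-events sketch of the distinction (assuming the 'm' class character used for key-miss events):
# Key-miss events have their own class and are NOT part of the 'A' alias,
# so they must be enabled explicitly, e.g. keyspace + keyevent + key-miss:
notify-keyspace-events "KEm"
# "KEA" still means "all classes except key-miss", as before.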
Reduce the default minimum effort, so that when fragmentation is just detected the impact on latency is minor.
Reduce the default maximum effort, mainly so that a sudden massive deletion doesn't trigger an aggressive
defrag that causes latency spikes.
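The knobs involved are the effort percentages in redis.conf (the values below are illustrative, not the new defaults):
# Minimal effort for defrag in CPU percentage, used when the lower
# fragmentation threshold has just been reached.
active-defrag-cycle-min 5
# Maximal effort for defrag in CPU percentage, used when the upper
# threshold is reached; kept modest so a sudden mass deletion doesn't
# turn into a latency spike.
active-defrag-cycle-max 25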
When activedefrag is disabled mid-run, reset the 'running' info field and clear the scan cursor, so that
when it is re-enabled a fresh scan starts.
Clearing the 'running' variable is important, since lowering the defragger tunables mid-scan won't help:
the defragger only considers the new thresholds when a new scan starts, and during a scan it can only
become more aggressive (when more severe fragmentation is detected), never less aggressive.
So by temporarily disabling activedefrag, one can lower the tunables.
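In practice the sequence could look like this (values are illustrative):
redis-cli config set activedefrag no             # stops the scan and resets 'running'
redis-cli config set active-defrag-cycle-max 25  # lower the tunables while it's off
redis-cli config set activedefrag yes            # a fresh scan starts with the new thresholds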
Removing the experimental warning.
The directive tls-prefer-server-cipher in redis.conf is actually tls-prefer-server-ciphers in config.c, which makes the directive fail as shown below. This pull request adds the missing "s" in ciphers so that the directive is properly recognized by config.c:
ubuntu@ip-172-31-16-31:~/redis$ src/redis-server ./redis.conf
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 200
>>> 'tls-prefer-server-cipher yes'
Bad directive or wrong number of arguments
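With the added "s" the directive is accepted:
tls-prefer-server-ciphers yes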
This adds support for explicit configuration of a CA certs directory (in
addition to the previously supported bundle file). For redis-cli, if no
explicit CA configuration is supplied, the system-wide default
configuration will be adopted.
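A sketch of the resulting options (paths are examples; redis-cli exposes matching --cacert/--cacertdir flags):
# Either a bundle file, a directory of CA certificates (hash-named,
# e.g. prepared with OpenSSL's c_rehash), or both:
tls-ca-cert-file /etc/redis/ca.crt
tls-ca-cert-dir /etc/ssl/certs
# redis-cli with neither --cacert nor --cacertdir falls back to the
# system-wide defaults:
redis-cli --tls -h example.redis.host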
* Introduce a connection abstraction layer for all socket operations and
integrate it across the code base.
* Provide an optional TLS connections implementation based on OpenSSL.
* Pull a newer version of hiredis with TLS support.
* Tests and redis-cli updates for TLS support.
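A minimal sketch of turning TLS on (paths and port are illustrative; the build needs BUILD_TLS=yes):
# Build: make BUILD_TLS=yes
# redis.conf:
tls-port 6379
port 0                                 # optionally disable the plaintext port
tls-cert-file /etc/redis/redis.crt
tls-key-file /etc/redis/redis.key
tls-ca-cert-file /etc/redis/ca.crt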
Until now, the implementation of diskless replication was diskless only on the master side:
the slave side still stored the received rdb file on disk before loading it back in and parsing it.
This commit adds two modes to load the rdb directly from the socket:
1) when-empty
2) using "swapdb"
A third mode, using a diskless slave with flushdb, is risky and currently not included.
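Replica-side configuration sketch (the final directive/value names may differ slightly from the mode names above):
# How to use the RDB coming straight from the socket:
#   disabled     - keep the old behavior (store the rdb on disk first)
#   on-empty-db  - parse directly only if the replica's dataset is empty
#   swapdb       - keep a copy of the current dataset in RAM while parsing
repl-diskless-load disabled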
other changes:
--------------
Distinguish between aof configuration and state, so that we can re-enable aof only when the sync eventually
succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); previously, CONFIG GET and
INFO during rdb loading would also have lied about the aof state.
When loading an rdb from the network, don't kill the server on a short read (that can be a network error).
Fix the rdb check when performed on a preamble AOF.
tests:
Run replication tests for diskless slave too.
Make the replication test a bit more aggressive.
Add a test for diskless load with swapdb.
There are too many advantages in doing this: RDB is faster to persist,
more compact, and much faster to load back. The main issues are that the
code is less tested, because this was not the old default (so we are
enabling it for the new 5.0 release), and that the AOF is no longer a
trivially parsable format from now on. However, the non-preamble mode
will still be supported in the future as well, even as new data types
are added.
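For reference, the redis.conf directive controlling this (the old format can still be selected by turning it off):
# Rewrite the AOF with an RDB preamble for faster rewrites and loads;
# set to "no" to keep the plain, trivially parsable AOF format.
aof-use-rdb-preamble yes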
This commit, in some parts derived from PR #3041 which is no longer
possible to merge (because the user deleted the original branch),
implements the ability of slaves to have a special configuration that
prevents them from trying to start a failover when the master is failing.
There are multiple reasons for wanting this, and the feature was
requested in issue #3021 some time ago.
The differences between this patch and the original PR are the
following:
1. The flag is saved/loaded in the nodes configuration.
2. The 'myself' node is now flag-aware; the flag is updated as needed
when the configuration is changed via CONFIG SET.
3. The flag name uses NOFAILOVER instead of NO_FAILOVER, to be consistent
with the existing NOADDR.
4. The redis.conf documentation was rewritten.
Thanks to @deep011 for the original patch.
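A sketch of how the flag is set (assuming the cluster-slave-no-failover directive name):
# In redis.conf, or at runtime via CONFIG SET:
cluster-slave-no-failover yes
# redis-cli config set cluster-slave-no-failover yes
# The node should then report the nofailover flag in CLUSTER NODES output.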
- big keys are not defragged in one go from within the dict scan;
instead they are scanned in parts after the main dict hash bucket is done.
- add a latency monitor sample for defrag (see the example after this list)
- change the default active-defrag-cycle-min to induce lower latency
- make active defrag start a new scan right away if needed, so it's easier
(for the test suite) to detect when it's done
- make active defrag quit the current cycle after each db / big key
- defrag some non-key long-term global allocations
- some refactoring for smaller functions and more reusable code
- during dict rehashing, one scan iteration of the dict can end up scanning
one bucket in the smaller dict and many, many buckets in the larger dict,
so waiting for 16 scan iterations before checking the time may be much too long.
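To see the new samples, the latency monitor can be queried like this (the event name active-defrag-cycle is an assumption here; the threshold value is illustrative):
redis-cli config set latency-monitor-threshold 50
redis-cli latency latest
redis-cli latency history active-defrag-cycle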