1. Rename 18-cluster-nodes-slots.tcl to 19-cluster-nodes-slots.tcl.
It was conflicting with another test prefixed by 18.
2. Release memory on exit in redis-cli.c.
3. Fix freeConvertedSds indentation.
* For consistency, use tclsh for the script as well
* Ignore leaked fds that originate from the grandparent process, since we
only care about fds that redis-sentinel itself is responsible for
* Check every test iteration to catch problems early
* Some cleanups, e.g. parameterization of file name, etc.
The test failed from time to time on GitHub Actions.
We think it's possible that in the module's blocking timeout
time tracking test, the timeout happens before we issue
RedisModule_BlockedClientMeasureTimeStart(bc) on the
background thread. If that is the case, one possible solution
is to increase the timeout.
Increasing the timeout from 200ms to 500ms to see if the nightly stops failing.
When (remaining == (total_size - index)), the element should definitely be picked.
But when rand() == RAND_MAX, the element is missed, which triggers the assert
in serverAssert(ziplistRandomPairsUnique(zsetobj->ptr, count, keys, vals) == count).
It is inefficient to repeatedly pick a single random element from a
ziplist.
For CASE 4, which is when the user requested a low number of unique
random picks from the collection, we used that pattern.
Now we use a different algorithm that picks unique elements from a
ziplist, and guarantees no duplicates, but doesn't provide random order
(which is only needed in the non-unique random picks case).
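As an illustration of the single-pass unique-pick idea, here is a minimal sketch (not the actual ziplistRandomPairsUnique code) of integer-based selection sampling; it picks `count` unique indexes in one pass and avoids the rand() == RAND_MAX edge case described above:

```c
/* A sketch, not the actual ziplistRandomPairsUnique() code: single-pass
 * selection sampling that picks `count` unique indexes out of `total`
 * (precondition: count <= total). Integer arithmetic sidesteps the
 * rand() == RAND_MAX edge case a floating-point probability test can hit:
 * when remaining == total - index, the test below is always true, so no
 * mandatory element can be missed. */
#include <stdio.h>
#include <stdlib.h>

static void pick_unique(unsigned int total, unsigned int count, unsigned int *out) {
    unsigned int remaining = count;
    for (unsigned int index = 0; index < total && remaining > 0; index++) {
        if ((unsigned int)rand() % (total - index) < remaining) {
            out[count - remaining] = index;
            remaining--;
        }
    }
}

int main(void) {
    unsigned int picks[3];
    pick_unique(10, 3, picks);
    /* Indexes come out in ascending order: unique, but not randomly ordered. */
    for (int i = 0; i < 3; i++) printf("%u\n", picks[i]);
    return 0;
}
```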
Unrelated changes:
* change ziplist count and indexes variables to unsigned
* solve compilation warnings about uninitialized vars in gcc 10.2
Co-authored-by: xinluton <xinluton@qq.com>
Disable certificate validation, making it possible to connect to servers
without configuring a full trust chain.
The use of this option is insecure and makes the connection vulnerable
to man-in-the-middle attacks.
Without this fix, RM_ZsetRem can leave empty sorted sets which are
not allowed to exist.
Removing from a sorted set while iterating seems to work (while
inserting causes failed assertions). RM_ZsetRangeEndReached is
modified to return 1 if the key doesn't exist, to terminate
iteration when the last element has been removed.
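A minimal sketch of the pattern this fix is about, assuming a hypothetical module command name (myzset.remneg): remove elements while iterating; once the last element is removed and the key disappears, RM_ZsetRangeEndReached returning 1 ends the loop.

```c
/* Sketch of the pattern this fix is about, using a hypothetical module
 * command "myzset.remneg": delete every member with a negative score while
 * iterating. RM_ZsetRem may leave the zset empty (which deletes the key),
 * and RM_ZsetRangeEndReached now returns 1 in that case, ending the loop. */
#include "redismodule.h"

int RemNeg_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    RedisModule_AutoMemory(ctx);
    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
                                              REDISMODULE_READ | REDISMODULE_WRITE);
    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_ZSET &&
        RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY) {
        RedisModule_CloseKey(key);
        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);
    }
    long long removed = 0;
    /* Iterate over scores in (-inf, 0), i.e. all negative scores. */
    RedisModule_ZsetFirstInScoreRange(key, REDISMODULE_NEGATIVE_INFINITE, 0, 0, 1);
    while (!RedisModule_ZsetRangeEndReached(key)) {
        double score;
        RedisModuleString *ele = RedisModule_ZsetRangeCurrentElement(key, &score);
        (void)score;                         /* not needed in this sketch */
        RedisModule_ZsetRangeNext(key);      /* advance before removing */
        int deleted;
        RedisModule_ZsetRem(key, ele, &deleted);
        removed += deleted;
    }
    RedisModule_ZsetRangeStop(key);
    RedisModule_CloseKey(key);
    return RedisModule_ReplyWithLongLong(ctx, removed);
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "myzset", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "myzset.remneg", RemNeg_RedisCommand,
                                     "write", 1, 1, 1);
}
```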
Changes to HRANDFIELD and ZRANDMEMBER:
* Fix risk of OOM panic when a client queries a very large negative count (avoid allocating a huge temporary buffer).
* Fix uneven random distribution in HRANDFIELD with negative count (it wasn't using dictGetFairRandomKey).
* Add tests to check an even random distribution (HRANDFIELD, SRANDMEMBER, ZRANDMEMBER).
Co-authored-by: Oran Agra <oran@redislabs.com>
Fix errors in the GEOSEARCH bybox search due to:
1. Projection of the box to a trapezoid (when the box given in meters is converted to lon/lat, it's no longer a box).
2. A width and height mismatch.
Changes:
- New GEOSEARCH point-in-rectangle algorithm (a sketch of the idea follows this list)
- Fix GEOSEARCH bybox width and height mismatch bug
- Add GEOSEARCH bybox testing to the existing "GEOADD + GEORANGE randomized test"
- Add new fuzzy test to stress test the bybox corners and edges
- Add some tests for edge cases of the bybox algorithm
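A hedged, self-contained sketch of the point-in-rectangle idea (not the actual geohash_helper.c code; the function names and the example coordinates are illustrative): instead of converting the box to lon/lat deltas, measure the great-circle distance from the candidate point to the box center separately along the longitude and latitude axes, and compare each to half of the width/height given in meters.

```c
/* Sketch of the point-in-box test: distances measured along each axis. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define EARTH_RADIUS_M 6372797.560856   /* approximate earth radius in meters */
#define DEG_TO_RAD(d) ((d) * M_PI / 180.0)

/* Great-circle (haversine) distance in meters between two lon/lat points. */
static double haversine(double lon1d, double lat1d, double lon2d, double lat2d) {
    double lat1 = DEG_TO_RAD(lat1d), lat2 = DEG_TO_RAD(lat2d);
    double u = sin((lat2 - lat1) / 2.0);
    double v = sin(DEG_TO_RAD(lon2d - lon1d) / 2.0);
    return 2.0 * EARTH_RADIUS_M * asin(sqrt(u * u + cos(lat1) * cos(lat2) * v * v));
}

/* Returns 1 if (lon,lat) falls inside the width x height (meters) box
 * centered at (clon,clat). */
static int in_box(double clon, double clat, double width_m, double height_m,
                  double lon, double lat) {
    double lon_dist = haversine(lon, lat, clon, lat);   /* east-west offset   */
    double lat_dist = haversine(clon, lat, clon, clat); /* north-south offset */
    return lon_dist <= width_m / 2.0 && lat_dist <= height_m / 2.0;
}

int main(void) {
    /* Roughly 1 km east of the center should be inside a 4 km x 4 km box. */
    printf("%d\n", in_box(2.0, 41.0, 4000, 4000, 2.012, 41.0));
    return 0;
}
```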
Co-authored-by: Oran Agra <oran@redislabs.com>
* Add bash temporarily to allow sentinel fd leaks test to run.
* Use vmactions-freebsd rdist sync to work around bind permission denied
and slow execution issues.
* Upgrade to tcl8.6 to be aligned with latest Ubuntu envs.
* Concat all command executions to avoid ignoring failures.
* Skip the intensive fuzzer on FreeBSD. For some yet unknown reason, generate_fuzzy_traffic_on_key causes TCL memory usage to bloat significantly on FreeBSD, resulting in out of memory.
This commit provides an optimization, in terms of time, for all GEORADIUS*
and GEOSEARCH* searches which utilize the default, sorted, COUNT clause.
This is commonly used for nearest-neighbor (top-K points closest to a given lat/lon)
searches. While the previous implementation appended all matching points to the
geoPoint array and pruned after the fact via a full sort and a [0, count)
for-loop, this PR sorts only the required number of elements.
This optimization provides a 5-20% improvement in runtime depending on the
density of points of interest (POI) as well as the radius searched.
No performance degradation has been observed.
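The PR sorts only the required number of elements; as one illustration of the general idea (a sketch, not the actual geo.c implementation), the following keeps only the K closest points seen so far in a bounded max-heap keyed by distance, so processing N matches costs O(N log K) instead of O(N log N). The point struct and names are invented for the example.

```c
/* Illustration of "sort only what you need": keep the K closest points in a
 * bounded max-heap; a final small sort of K elements gives them in order. */
#include <stdio.h>

typedef struct { double dist; int id; } point;

static void sift_down(point *heap, int size, int i) {
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < size && heap[l].dist > heap[largest].dist) largest = l;
        if (r < size && heap[r].dist > heap[largest].dist) largest = r;
        if (largest == i) break;
        point tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
        i = largest;
    }
}

/* Offer one candidate to a heap holding at most k points. */
static void offer(point *heap, int *size, int k, point p) {
    if (*size < k) {
        /* Fill phase: just append; heapify once the heap becomes full. */
        heap[(*size)++] = p;
        if (*size == k)
            for (int i = k / 2 - 1; i >= 0; i--) sift_down(heap, k, i);
    } else if (p.dist < heap[0].dist) {
        heap[0] = p;               /* replace the current farthest point */
        sift_down(heap, k, 0);
    }
}

int main(void) {
    point heap[3]; int size = 0;
    double dists[] = {5.0, 1.0, 9.0, 3.0, 7.0, 2.0};
    for (int i = 0; i < 6; i++)
        offer(heap, &size, 3, (point){dists[i], i});
    for (int i = 0; i < size; i++)      /* heap order, not sorted order */
        printf("id=%d dist=%.1f\n", heap[i].id, heap[i].dist);
    return 0;
}
```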
addReplyLongLongWithPrefix has a check against negative length, and the code
path removed in this commit bypassed that check.
addReplyAggregateLen has an assertion for negative length, but addReplyBulkLen
does not, so this commit fixes a theoretical case of access violation (probably
unreachable though).
* The corrupt dump fuzzer found a division by zero.
* In some cases the random fields from the HRANDFIELD tests produced
fields with newlines and other special chars (due to the \ char). This caused
the TCL tests to see a bulk response with a newline in it and add {}
around it, and later mistake it for a nested list. In fact the `alpha` random
string generator doesn't use spaces and newlines, so it should not use `\`
either.
This commit fixes the "sentinel announces hostnames" test error in certain Linux environments.
Before this commit, we only checked that localhost resolves to 127.0.0.1; however, on Ubuntu
and some other Linux environments, "localhost" resolves to the ::1 IPv6 address first if
the network stack is IPv6-capable.
This commit enables tracking the time of background tasks and of replies,
opening the door to properly tracking commands that rely on blocking / background
work via the slowlog, latency history, and commandstats.
Some notes:
- The time spent blocked waiting for key changes, or blocked on synchronous
replication is not accounted for.
- **This commit does not affect latency tracking of commands that are non-blocking
or do not have background work.** (Meaning that it all stays the same, with the exception of
`BZPOPMIN`, `BZPOPMAX`, `BRPOP`, `BLPOP`, etc. and module commands that rely
on background threads.)
- Specifically for the latency history command, we've added a new event class named
`command-unblocking` that enables latency monitoring of commands that spawn
background threads to do the work.
- For blocking commands we're now considering the total time of a command as the
time spent on call() + the time spent on replying when unblocked.
- For module commands that rely on background threads, we're now considering the
total time of a command as the time spent on call() (main thread) + the time spent on
the background thread (if marked within `RedisModule_BlockedClientMeasureTimeStart()` and
`RedisModule_BlockedClientMeasureTimeEnd()`) + the time spent on replying (main thread).
To test this feature we've added a `unit/moduleapi/blockonbackground` test that relies on
a module that blocks the client and sleeps in the background for a given time; a sketch of
the pattern appears after the list of checks below.
- Check that a blocked command that uses RedisModule_BlockedClientMeasureTimeStart() is tracking background time.
- Check that a blocked command that uses RedisModule_BlockedClientMeasureTimeStart() is tracking background time even on timeout.
- Check that a blocked command with multiple RedisModule_BlockedClientMeasureTimeStart() calls is tracking the total background time.
- Check that a blocked command that doesn't call RedisModule_BlockedClientMeasureTimeStart() does not report background time.
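A hedged sketch of the pattern exercised by the blockonbackground test (module and command names here are illustrative, not the test module's own): block the client, do the real work on a background thread, and bracket that work with RedisModule_BlockedClientMeasureTimeStart/End so the background time is charged to the command's latency/slowlog stats.

```c
#include "redismodule.h"
#include <pthread.h>
#include <unistd.h>

static int Block_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

static int Block_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithNull(ctx);
}

static void *Block_Thread(void *arg) {
    RedisModuleBlockedClient *bc = arg;
    RedisModule_BlockedClientMeasureTimeStart(bc); /* start counting background time */
    usleep(100 * 1000);                            /* simulate 100ms of work */
    RedisModule_BlockedClientMeasureTimeEnd(bc);   /* stop counting */
    RedisModule_UnblockClient(bc, NULL);
    return NULL;
}

static int Block_Command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    RedisModuleBlockedClient *bc =
        RedisModule_BlockClient(ctx, Block_Reply, Block_Timeout, NULL, 500);
    pthread_t tid;
    if (pthread_create(&tid, NULL, Block_Thread, bc) != 0) {
        RedisModule_AbortBlock(bc);
        return RedisModule_ReplyWithError(ctx, "ERR can't start thread");
    }
    pthread_detach(tid);
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "blockbg", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return RedisModule_CreateCommand(ctx, "blockbg.work", Block_Command, "", 0, 0, 0);
}
```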
New commands:
`HRANDFIELD <key> [<count> [WITHVALUES]]`
`ZRANDMEMBER <key> [<count> [WITHSCORES]]`
Algorithms are similar to the one in SRANDMEMBER.
Both return a simple bulk response when no arguments are given, and an array otherwise.
In case values/scores are requested, RESP2 returns a flat array, and RESP3 a nested array.
Note: in all 3 commands, the only option that also provides random order is the one with a negative count.
Changes to SRANDMEMBER:
* Optimization: when count is 1, we can use the more efficient non-unique random algorithm.
* Optimization: work with sds strings rather than robj.
Other changes:
* zzlGetScore: when zset needs to convert string to double, we use safer memcpy (in
case the buffer is too small)
* Solve a "bug" in SRANDMEMBER test: it intended to test a positive count (case 3 or
case 4) and by accident used a negative count
Co-authored-by: xinluton <xinluton@qq.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
The key point is how to recover from the last AOF write error, for example:
1. Start redis with appendonly yes, and append some write commands.
2. A short write or some other error happens; `server.aof_last_write_status` changes to `C_ERR`, and now redis doesn't accept write commands.
3. Execute `CONFIG SET appendonly no` to avoid the above problem; now redis can accept write commands again.
4. The disk error is resolved, and we execute `CONFIG SET appendonly yes` to reopen AOF, but `server.aof_last_write_status` cannot be changed back to `C_OK` (if the background AOF rewrite runs for less than 1 second, it will free `server.aof_buf` and then serverCron cannot fix `aof_last_write_status`), so redis can never accept write commands again.
This PR uses a simple way to fix it:
1. Just free `server.aof_buf` when appendonly is stopped, to save memory. If an error happened in `flushAppendOnlyFile(1)`, `server.aof_buf` may contain some data that has not been written to the AOF, but we can ignore it because we are turning appendonly off.
2. Reset the fsync status after stopping appendonly, and call `flushAppendOnlyFile` only when `aof_state` is ON.
3. Reset `server.aof_last_write_status` when the AOF is reopened, to accept write commands again.
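A hedged, self-contained sketch of the recovery steps above (not the real aof.c code; the struct below mimics only the fields mentioned in this commit):

```c
#include <stdlib.h>

enum { AOF_OFF, AOF_ON, AOF_WAIT_REWRITE };   /* aof_state values */
enum { C_OK, C_ERR };                         /* status codes     */

struct {
    int aof_state;
    int aof_last_write_status;
    char *aof_buf;            /* stands in for the sds write buffer */
} server = { AOF_OFF, C_OK, NULL };

/* CONFIG SET appendonly no: drop any unflushed buffer. Losing that data is
 * acceptable because AOF persistence is being turned off anyway. */
void stop_append_only(void) {
    free(server.aof_buf);
    server.aof_buf = NULL;
    server.aof_state = AOF_OFF;
}

/* CONFIG SET appendonly yes: clear the sticky error status so that write
 * commands are accepted again once the background rewrite completes. */
void start_append_only(void) {
    server.aof_last_write_status = C_OK;
    server.aof_state = AOF_WAIT_REWRITE;
}
```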
With an AOF fsync policy of "always", redis should respect the contract with the user
that an acknowledged write is already synced on disk.
Redis was already exiting on AOF write errors, but did not care about fsync failures.
So to guarantee data safety, redis should exit on fsync errors too (when the AOF fsync
policy is 'always').
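A hedged, simplified sketch of the rule (not the exact flushAppendOnlyFile code; the function and message below are illustrative): with appendfsync always, an fsync failure must abort the server, because the client may already consider the write acknowledged.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

enum { AOF_FSYNC_NO, AOF_FSYNC_ALWAYS, AOF_FSYNC_EVERYSEC };

void aof_fsync_or_die(int fd, int fsync_policy) {
    if (fsync(fd) == -1) {
        if (fsync_policy == AOF_FSYNC_ALWAYS) {
            fprintf(stderr, "Can't persist AOF for fsync error, exiting.\n");
            exit(1);            /* never acknowledge writes that aren't synced */
        }
        /* Other policies: remember the error and let serverCron retry later. */
    }
}
```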
APIs added for these stream operations: add, delete, iterate and
trim (by ID or max length). The functions are prefixed by RM_Stream; a usage sketch follows below.
* RM_StreamAdd
* RM_StreamDelete
* RM_StreamIteratorStart
* RM_StreamIteratorStop
* RM_StreamIteratorNextID
* RM_StreamIteratorNextField
* RM_StreamIteratorDelete
* RM_StreamTrimByLength
* RM_StreamTrimByID
The type RedisModuleStreamID is added, along with functions for converting
it to and from RedisModuleString:
* RM_CreateStringFromStreamID
* RM_StringToStreamID
Whenever the stream functions return REDISMODULE_ERR, errno is set to
provide additional error information.
Refactoring: The zset iterator fields in the RedisModuleKey struct
are wrapped in a union, to allow the same space to be used for type-
specific info for streams and allow future use for other key types.
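A hedged usage sketch of the new APIs (command registration omitted; the command name and arity are illustrative): append one field/value pair with an auto-generated ID and reply with that ID.

```c
#include "redismodule.h"

int StreamAppend_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 4) return RedisModule_WrongArity(ctx); /* key field value */
    RedisModule_AutoMemory(ctx);
    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);
    RedisModuleStreamID id;
    /* argv[2], argv[3] form a single field/value pair, so numfields == 1. */
    if (RedisModule_StreamAdd(key, REDISMODULE_STREAM_ADD_AUTOID, &id,
                              &argv[2], 1) != REDISMODULE_OK) {
        RedisModule_CloseKey(key);
        return RedisModule_ReplyWithError(ctx, "ERR StreamAdd failed (check errno)");
    }
    RedisModule_CloseKey(key);
    return RedisModule_ReplyWithString(ctx,
        RedisModule_CreateStringFromStreamID(ctx, &id));
}
```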
This is both a bugfix and an enhancement.
Internally, Sentinel relies entirely on IP addresses to identify
instances. When configured with a new master, it also requires users to
specify an IP and not a hostname.
However, replicas may use the replica-announce-ip configuration to
announce a hostname. When that happens, Sentinel fails to match the
announced hostname with the expected IP and considers that a different
instance, triggering reconfiguration, etc.
Another use case is where TLS is used and clients are expected to match
the hostname to connect to with the certificate's SAN attribute. To
properly implement this configuration, it is necessary for Sentinel to
redirect clients to a hostname rather than an IP address.
The new 'resolve-hostnames' configuration parameter determines if
Sentinel is willing to accept hostnames. It is set by default to no,
which maintains backwards compatibility and avoids unexpected DNS
resolution delays on systems with DNS configuration issues.
Internally, Sentinel continues to identify instances by their resolved
IP address and will also report the IP by default. The new
'announce-hostnames' parameter determines if Sentinel should prefer to
announce a hostname, when available, rather than an IP address. This
applies to addresses returned to clients, as well as their
representation in the configuration file, REPLICAOF configuration
commands, etc.
This commit also introduces SENTINEL CONFIG GET and SENTINEL CONFIG SET,
which can be used to introspect or configure global Sentinel
configuration that was previously only possible by directly
accessing the configuration file and possibly restarting the instance.
Co-authored-by: myl1024 <myl92916@qq.com>
Co-authored-by: sundb <sundbcn@gmail.com>
If the option `set-proc-title` is no, then do nothing with the proc title.
The reason was explained long ago, see the following:
We updated redis to 2.8.8, then found there are some side effects when
redis always changes the process title.
We run several slave instances on one computer, and all these slaves
listen on a unix socket only, then ps will show:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server *:0
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server *:0
for redis 2.6 the output of ps is like following:
1 S redis 18036 1 0 80 0 - 56130 ep_pol 14:02 ? 00:00:31 /usr/sbin/redis-server /etc/redis/a.conf
1 S redis 23949 1 0 80 0 - 11074 ep_pol 15:41 ? 00:00:00 /usr/sbin/redis-server /etc/redis/b.conf
The latter is more informative in our case. The situation is worse when we
manage the config and process running state with salt. Salt checks the
process by running "ps | grep SIG" (on Gentoo systems) to check the running
state, where SIG is the string to search for when looking for the service
process with ps. Previously, we defined SIG as "/usr/sbin/redis-server
/etc/redis/a.conf". Since the ps output is identical in our case, we have
no way to check the state of a specific redis instance.
So, for our case, we prefer the old behavior, i.e., do not change the
process title for the main redis process, or add an option such as
"set-proc-title [yes|no]" to control this behavior.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
At least in one case, the arm64 COW kernel bug test triggers an assert, which is a problem because it cannot be ignored like the cases where the bug is found.
On older systems (Linux <4.5) madvise fails because MADV_FREE is not supported. We treat these failures as an indication that the system is not affected.
Fixes #8351, #8406
This commit introduces two new commands and two new options for an existing command:
GETEX <key> [PERSIST] [EX seconds] [PX milliseconds] [EXAT seconds-timestamp]
[PXAT milliseconds-timestamp]
The getexCommand() function implements extended options and variants of the GET
command. Unlike the GET command, this command is not read-only. Only one of the options
can be used at a given time.
1. PERSIST removes any TTL associated with the key.
2. EX sets the expiry TTL in seconds.
3. PX sets the expiry TTL in milliseconds.
4. EXAT is the same as EX, but instead of specifying the number of seconds for the
TTL (time to live), it takes an absolute Unix timestamp (in seconds).
5. PXAT is the same as PX, but instead of specifying the number of milliseconds for the
TTL (time to live), it takes an absolute Unix timestamp (in milliseconds).
The command returns either a bulk string, an error, or nil.
GETDEL <key>
Deletes the key after getting its value.
SET key value [NX] [XX] [KEEPTTL] [GET] [EX <seconds>] [PX <milliseconds>]
[EXAT <seconds-timestamp>] [PXAT <milliseconds-timestamp>]
Two new options added here are EXAT and PXAT.
Key implementation notes
- `SET` with `PX/EX/EXAT/PXAT` is always translated to `PXAT` in `AOF`. When relative time is
specified (`PX/EX`), replication will always use `PX`.
- `setexCommand` and `psetexCommand` no longer need translation in `feedAppendOnlyFile`,
as they are modified to invoke `setGenericCommand` with the appropriate flags, which takes care of
the correct AOF translation.
- `GETEX` without any optional argument behaves like `GET`.
- The `GETEX` command is never propagated as-is; it is propagated as `PEXPIRE[AT]` or `PERSIST`.
- The `GETDEL` command is propagated as `DEL`.
- Combined the validation of `SET` and `GETEX` arguments.
- Added test cases to validate AOF/replication propagation.
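As an illustration of the relative-to-absolute translation described in the notes above (a simplified sketch, not the actual t_string.c code; the helper names are hypothetical): EX/PX are turned into an absolute millisecond timestamp so that replaying the AOF later does not extend the key's lifetime.

```c
#include <stdio.h>
#include <sys/time.h>

/* Current wall-clock time in milliseconds (stand-in for mstime()). */
static long long now_ms(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* EX <seconds> or PX <milliseconds> becomes PXAT <unix-time-milliseconds>. */
static long long relative_ttl_to_pxat(long long ttl, int unit_is_seconds) {
    long long ttl_ms = unit_is_seconds ? ttl * 1000 : ttl;
    return now_ms() + ttl_ms;
}

int main(void) {
    /* SET key val EX 100 would be rewritten as SET key val PXAT <abs-ms>. */
    printf("SET key val PXAT %lld\n", relative_ttl_to_pxat(100, 1));
    return 0;
}
```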
In activeDefragSdsListAndDict, when dict_val_type is DEFRAG_SDS_DICT_VAL_VOID_PTR, it should update de->v.val, not ln->value.
Because this code path is never executed, this bug never actually happened.
In some scenarios, such as remote backup, we only want to get a snapshot of the
remote redis server's db. Currently, redis-cli acts as a replica and
sends SYNC to redis, but redis still accumulates the replication stream
in the replica client's output buffer, which may use vast amounts of
memory or fail to transfer the RDB because of client-output-buffer-limit.
In this commit, we add 'replconf rdb-only 0|1'; redis doesn't send the
incremental replication stream to replicas that send 'replconf rdb-only 1',
so we can reduce memory usage and improve the chances of successfully getting the RDB.
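A hedged sketch of the idea (not the actual replication.c code; the flag and struct names below are illustrative): when feeding the replication stream, skip replicas that asked for the RDB only, so no incremental data piles up in their output buffers.

```c
#define REPLICA_FLAG_RDB_ONLY (1 << 0)   /* set when "replconf rdb-only 1" is received */

typedef struct replica {
    int flags;
    struct replica *next;
} replica;

void feed_replication_stream(replica *replicas, const char *buf, long len) {
    for (replica *r = replicas; r != NULL; r = r->next) {
        if (r->flags & REPLICA_FLAG_RDB_ONLY)
            continue;                /* RDB-only replica: don't queue the stream */
        /* append buf[0..len) to this replica's output buffer (omitted) */
        (void)buf; (void)len;
    }
}
```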
It was confusing as to why these don't return a map type.
The reason is that order matters, so we need to make sure the client
library knows to respect it.
Added comments in the implementation and tests to cover it.
This commit fixes a well-known and annoying issue in Sentinel mode.
Cause of this issue:
Currently, the Redis config rewrite process works well in server mode; however, in sentinel mode,
the sentinel config has different semantics for different configurations (in the example configuration,
https://github.com/redis/redis/blob/unstable/sentinel.conf, we put comments on these).
The rewrite process treats the whole sentinel config as a single option, so during the
rewrite it messes up the lines and comments.
Approach:
In order to solve this issue, we differentiate the sentinel sub-config options separately.
For example, sentinel monitor <master-name> <ip> <redis-port> <quorum>
is treated as the sentinel monitor option, instead of a generic sentinel option.
This commit also fixes the dependency issue when putting configurations in sentinel.conf.
For example, before this commit we had to put
`sentinel monitor <master-name> <ip> <redis-port> <quorum>` before
`sentinel auth-pass <master-name> <password>` for a single master,
otherwise the server could not start and returned an error. This commit fixes this: as long as
the monitored master is configured, no matter the order, the sentinel can start and run properly.
Some tests use attach_to_replication_stream to watch what's propagated
to replicas, but in some cases the periodic ping may slip in and fail
the test.
We disable that ping by setting the period to once an hour (tests should
not run for that long).
The other change is so that the next time the oom-score-adj test fails,
we'll see the value (assert_equal prints it).
1. Fix a Valgrind leak in a recent change in a module api test
2. Increase the threshold of a RESTORE TTL test
3. Change assertions to use assert_range, which prints the values
BLPOP and other blocking list commands can only block on empty keys
and LPUSH only wakes up clients when the list is created.
Using the module API, it's possible to block on a non-empty key.
Unblocking a client blocked on a non-empty list (or zset) can only
be done using RedisModule_SignalKeyAsReady(). This commit tests it.
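A hedged sketch of the pattern this commit tests (command registration omitted; the command names are illustrative): one command blocks on a key regardless of its contents, and a second command wakes it up explicitly with RedisModule_SignalKeyAsReady().

```c
#include "redismodule.h"

static int BlockOnKey_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "WOKEN");
}

static int BlockOnKey_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv); REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithNull(ctx);
}

/* mymod.blockonkey <key>: blocks even if <key> is a non-empty list or zset. */
int BlockOnKey_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    RedisModule_BlockClientOnKeys(ctx, BlockOnKey_Reply, BlockOnKey_Timeout,
                                  NULL, 0 /* no timeout */, &argv[1], 1, NULL);
    return REDISMODULE_OK;
}

/* mymod.signal <key>: the only way to wake up the client blocked above. */
int Signal_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    RedisModule_SignalKeyAsReady(ctx, argv[1]);
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}
```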