The Append Only File is an alternative durability option for Redis. What does this mean? Let's start with some facts:
By default Redis saves snapshots of the dataset on disk, in a binary file called dump.rdb (at least by default). For instance you can configure Redis to save the dataset every 60 seconds if there are at least 100 changes in the dataset, or every 1000 seconds if there is at least a single change in the dataset. This is known as "Snapshotting".
Snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you type killall -9 redis-server by mistake, the latest data written to Redis will be lost. There are applications where this is not a big deal. There are also applications where this is not acceptable, and for these applications Redis was not an option.
What is the solution? To use the Append Only File as an alternative to snapshotting. How does it work?
It is a feature available only since Redis 1.1.
You have to turn it on by editing the configuration file. Just make sure you have "appendonly yes" somewhere.
Append only files work this way: every time Redis receives a command that changes the dataset (for instance a SET or LPUSH command) it appends this command to the append only file. When you restart Redis it will first re-play the append only file to rebuild the state.
As you can guess, the append log file gets bigger and bigger every time there is a new operation changing the dataset. Even if you always set the same key "mykey" to the values "1", "2", "3", ... up to 10000000000, in the end you'll have just a single key in the dataset, just a few bytes! But how big will the append log file be? Very, very big.
So Redis supports an interesting feature: it is able to rebuild the append log file, in the background, without stopping the processing of client commands. The key is the BGREWRITEAOF command. This command is basically able to use the dataset in memory in order to rewrite the shortest sequence of commands able to rebuild the exact dataset that is currently in memory.
So from time to time, when the log gets too big, try this command. It's safe: if it fails you will not lose your old log (but you can make a backup copy, given that 1.1 is currently still in beta!).
Basically it uses the same fork() copy-on-write trick that snapshotting already uses. This is how the algorithm works:
Redis forks, so now we have a child and a parent.
The child starts writing the new append log file in a temporary file.
The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes in the old append only file, so if the rewriting fails, we are safe).
When the child has finished rewriting the file, the parent gets a signal and appends the in-memory buffer to the end of the file generated by the child.
Profit! Now Redis atomically renames the new file into the old one, and starts appending new data into the new file.
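For example, you can trigger a rewrite by hand from redis-cli (the exact reply text may vary between versions):
$ ./redis-cli bgrewriteaof
Background append only file rewriting started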
Check redis.conf: you can configure how often Redis will fsync() data on disk. There are three options:
Fsync() every time a new command is appended to the append log file. Very very slow, very safe.
Fsync() one time every second. Fast enough, and you can lose 1 second of data if there is a disaster.
Never fsync(), just put your data in the hands of the Operating System. The fastest and least safe method.
Warning: by default Redis will fsync() after every command! This is because the Redis authors want to ship a default configuration that is the safest pick. But the best compromise for most datasets is to fsync() one time every second.
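As a sketch, the relevant redis.conf fragment might look like this (pick exactly one appendfsync line; "everysec" is the compromise described above):
appendonly yes
# fsync() after every write: very slow, very safe
# appendfsync always
# fsync() once per second: a good compromise
appendfsync everysec
# let the Operating System decide when to flush: fastest, least safe
# appendfsync no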
LRANGE key start end - Return a range of elements from the List at key
LTRIM key start end - Trim the list at key to the specified range of elements
LINDEX key index - Return the element at index position from the List at key
LSET key index value - Set a new value as the element at index position of the List at key
LREM key count value - Remove the first-N, last-N, or all the elements matching value from the List at key
LPOP key - Return and remove (atomically) the first element of the List at key
RPOP key - Return and remove (atomically) the last element of the List at key
RPOPLPUSH srckey dstkey - Return and remove (atomically) the last element of the source List stored at _srckey_ and push the same element to the destination List stored at _dstkey_
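As a quick illustration of a few of the commands above, using redis-cli (output formatting may differ slightly between versions):
$ ./redis-cli rpush mylist a
OK
$ ./redis-cli rpush mylist b
OK
$ ./redis-cli rpoplpush mylist myotherlist
b
$ ./redis-cli lrange mylist 0 -1
1. a
$ ./redis-cli lrange myotherlist 0 -1
1. b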
ZADD key score member - Add the specified member to the Sorted Set value at key or update the score if it already exists
ZREM key member - Remove the specified member from the Sorted Set value at key
ZINCRBY key increment member - If the member already exists increment its score by _increment_, otherwise add the member setting _increment_ as score
ZRANK key member - Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from low to high
ZREVRANK key member - Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from high to low
ZRANGE key start end - Return a range of elements from the sorted set at key
ZREVRANGE key start end - Return a range of elements from the sorted set at key, exactly like ZRANGE, but the sorted set is traversed in reverse order, from the greatest to the smallest score
ZRANGEBYSCORE key min max - Return all the elements with score >= min and score <= max (a range query) from the sorted set
ZCARD key - Return the cardinality (number of elements) of the sorted set at key
ZSCORE key element - Return the score associated with the specified element of the sorted set at key
ZREMRANGEBYRANK key min max - Remove all the elements with rank >= min and rank <= max from the sorted set
ZREMRANGEBYSCORE key min max - Remove all the elements with score >= min and score <= max from the sorted set
ZUNION / ZINTER dstkey N key1 ... keyN [WEIGHTS w1 ... wN] [AGGREGATE SUM|MIN|MAX] - Perform a union or intersection over a number of sorted sets with optional weight and aggregate
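For instance, a tiny sorted set session might look like this (the data and the output format are just illustrative):
$ ./redis-cli zadd hackers 1940 "Alan Kay"
(integer) 1
$ ./redis-cli zadd hackers 1953 "Richard Stallman"
(integer) 1
$ ./redis-cli zrange hackers 0 -1
1. Alan Kay
2. Richard Stallman
$ ./redis-cli zrevrange hackers 0 -1
1. Richard Stallman
2. Alan Kay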
redis> set a 100
OK
redis> expire a 360
(integer) 1
redis> incr a
(integer) 1
I set a key to the value of 100, then set an expire of 360 seconds, and then incremented the key (before the 360 second timeout expired, of course). The obvious result would be 101; instead the key is set to the value of 1. Why?
There is a very important reason involving the Append Only File and replication. Let's rework our example a bit, adding the notion of time to the mix:
SET a 100
EXPIRE a 5
... wait 10 seconds ...
INCR a
Imagine a Redis version that does not implement the "Delete keys with an expire set on write operation" semantic.
Running the above example with the 10 seconds pause will lead to 'a' being set to the value of 1, as it no longer exists when INCR is called 10 seconds later.
Instead if we drop the 10 seconds pause, the result is that 'a' is set to 101.
And in practice timing changes! For instance the client may wait 10 seconds before INCR, but the sequence written in the Append Only File (and later replayed back as fast as possible when Redis is restarted) will not have the pause. Even if we add a timestamp in the AOF, when the time difference is smaller than our timer resolution, we have a race condition.
The same happens with master-slave replication. Again, consider the example above: the client will use the same sequence of commands without the 10 seconds pause, but the replication link will slow down for a few seconds due to a network problem. Result? The master will contain 'a' set to 101, the slave 'a' set to 1.
The only way to avoid this, but at the same time have reliable, non time dependent timeouts on keys, is to destroy volatile keys when a write operation is attempted against them.
After all Redis is one of the rare fully persistent databases that will give you EXPIRE. This comes at a cost :)
The data on disk is very compact, and the same dataset may use a lot more memory once loaded into Redis for the same objects. This happens because when data is in memory it is full of pointers, reference counters and other metadata. Add to this malloc fragmentation and the need to return word-aligned chunks of memory and you have a clear picture of what happens.
This may happen and it's perfectly ok. Redis objects are small C structures that are allocated and freed many times. This costs a lot of CPU, so instead of being freed, released objects are placed into a free list and reused when needed. This memory is taken exactly by these free objects, ready to be reused.
With modern operating systems malloc() returning NULL is not common; usually the server will start swapping and Redis performance will be disastrous, so you'll know it's time to use more Redis servers or get more RAM.
The INFO command (a work in progress these days) reports the amount of memory Redis is using, so you can write scripts that monitor your Redis servers checking for critical conditions.
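For example (the field name is real, the value shown is just illustrative):
$ ./redis-cli info | grep used_memory
used_memory:3187896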
You can also use the "maxmemory" option in the config file to put a limit on the memory Redis can use. If this limit is reached, Redis will start replying with an error to write commands (but will continue to accept read-only commands).
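A minimal redis.conf sketch (the limit is expressed in bytes here, roughly 1 GB, and is just an example):
# maxmemory <bytes>
maxmemory 1073741824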
Just an example on normal hardware: it takes about 45 seconds to restore a 2 GB database on a fairly standard system, no RAID. This can give you some feeling for the order of magnitude of the time needed to load data when you restart the server.
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
And now the long one:
Redis background saving schema relies on the copy-on-write semantics of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent, being a copy, but actually, thanks to the copy-on-write semantics implemented by most modern operating systems, the parent and child processes will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the overcommit_memory setting is set to zero the fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages. The result is that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.
Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
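To make the setting survive a reboot you can also add it to /etc/sysctl.conf (a common approach on Linux, shown here as a sketch):
# apply immediately, as root
echo 1 > /proc/sys/vm/overcommit_memory
# make it persistent across reboots
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf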
Yes, the Redis background saving process is always fork(2)ed when the server is outside of the execution of a command, so every command that is reported to be atomic in RAM is also atomic from the point of view of the disk snapshot.
Simply start multiple instances of Redis on different ports on the same box and treat them as different servers! Given that Redis is a distributed database anyway, in order to scale you need to think in terms of multiple computational units. At some point a single box may not be enough anyway.
In general key-value databases are very scalable because of the property that different keys can stay on different servers independently.
In Redis there are client libraries, such as Redis-rb (the Ruby client), that are able to handle multiple servers automatically using consistent hashing. We are going to implement consistent hashing in all the other major client libraries. If you use a different language you can implement it yourself, or otherwise just hash the key before you SET / GET it from a given server. For example, imagine you have N Redis servers, server-0, server-1, ..., server-N. You want to store the key "foo": what's the right server where to put "foo" in order to distribute keys evenly among different servers? Just compute crc = CRC32("foo"), then servernum = crc % N (the remainder of the division by N). This will give a number between 0 and N-1 for every key. Connect to this server and store the key. The same applies for GETs.
This is a basic way of performing key partitioning; consistent hashing is much better, and this is why after Redis 1.0 is released we'll try to implement this in every widely used client library, starting from Python and PHP (Ruby already implements this support).
With SORT BY you need all the weight keys to be in the same Redis instance as the list/set you are trying to sort. In order to make this possible we developed a concept called key tags. A key tag is a special pattern inside a key that, if present, is the only part of the key hashed in order to select the server for this key. For example, in order to hash the key "foo" I simply perform the CRC32 checksum of the whole string, but if this key has a pattern in the form of the characters {...} I only hash this substring. So for example for the key "foo{bared}" the key hashing code will simply perform the CRC32 of "bared". This way, using key tags, you can ensure that related keys will be stored on the same Redis instance, just by using the same key tag for all these keys. Redis-rb already implements key tags.
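A hypothetical sketch of the idea: all the keys below share the {user:42} tag, so they hash to the same instance and SORT BY can find its weight keys locally (the key names are made up for illustration):
RPUSH mylist{user:42} 1234
RPUSH mylist{user:42} 5678
SET weight_1234{user:42} 10
SET weight_5678{user:42} 3
SORT mylist{user:42} BY weight_*{user:42}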
In theory Redis can handle up to 2^32 keys, and was tested in practice to handle at least 150 million keys per instance. We are working to experiment with larger values.
Every list, set, and sorted set can hold 2^32 elements.
Actually Redis internals are ready to allow up to 2^64 elements, but the current disk dump format doesn't support this, and there is plenty of time to fix this issue in the future, as currently even with 128 GB of RAM it's impossible to reach 2^32 elements.
Yes, try to compile it with a 32 bit target if you are using a 64 bit box.
If you are using Redis >= 1.3, try using the Hash data type, it can save a lot of memory.
If you are using hashes or any other type with values bigger than 128 bytes, try also this to lower the RSS usage (Resident Set Size): export MMAP_THRESHOLD=4096
Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference.
You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32".
If your application is already able to perform application-level sharding, it is very advisable to run N instances of 32 bit Redis against a big 64 bit box (with more than 4 GB of RAM) rather than a single 64 bit instance, as this is much more memory efficient.
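On a Debian/Ubuntu-style Linux box the steps described above would look roughly like this:
# 32 bit libc headers, as mentioned above
sudo apt-get install libc6-dev-i386
# latest Git version; for Redis <= 1.2.2 edit the Makefile and replace "-arch i386" with "-m32" instead
make 32bit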
In order to scale LLOOGG. But after I got the basic server working I liked the idea of sharing the work with other people, and so Redis was turned into an open source project.
Time complexity: O(1)
Increment or decrement the number stored at key by one. If the key does not exist or contains a value of a wrong type, set the key to the value of "0" before performing the increment or decrement operation.
INCRBY and DECRBY work just like INCR and DECR, but instead of incrementing/decrementing by 1 they increment/decrement by the given integer.
INCR commands are limited to 64 bit signed integers.
Note: this is actually a string operation, that is, in Redis there are no "integer" types. Simply the string stored at the key is parsed as a base 10 64 bit signed integer, incremented, and then converted back to a string.
Integer reply, this command will reply with the new value of key after the increment or decrement.
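A short redis-cli session showing the string-to-integer behaviour (output format may vary slightly by version):
$ ./redis-cli set counter 10
OK
$ ./redis-cli incr counter
(integer) 11
$ ./redis-cli incrby counter 5
(integer) 16
$ ./redis-cli get counter
16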
= A fifteen minutes introduction to Redis data types =
As you probably already know, Redis is not a plain key-value store, it is actually a data structures server, supporting different kinds of values. That is, you are not limited to strings as values of keys. All the following data types are supported as values:
Binary-safe strings.
Lists of binary-safe strings.
Sets of binary-safe strings, that are collections of unique unsorted elements. You can think of this as a Ruby hash where all the keys are set to the 'true' value.
Sorted sets, similar to Sets but where every element is associated with a floating point number score. The elements are taken sorted by score. You can think of this as Ruby hashes where the key is the element and the value is the score, but where elements are always taken in order without requiring a sorting operation.
It's not always trivial to grasp how these data types work and what to use in order to solve a given problem from the Redis command reference, so this document is a crash course on Redis data types and their most used patterns.
For all the examples we'll use the redis-cli utility, a simple but handy command line utility to issue commands against the Redis server.
Before starting to talk about the different kinds of values supported by Redis, it is better to say that keys are not binary safe strings in Redis, but just strings not containing a space or a newline character. For instance "foo" or "123456789" or "foo_bar" are valid keys, while "hello world" or "hello\n" are not.
Actually there is nothing inside the Redis internals preventing the use of binary keys, it's just a matter of protocol, and the new protocol introduced with Redis 1.2 (1.2 betas are 1.1.x) in order to implement commands like MSET is totally binary safe. Still, for now, consider this a hard limit, as the database is only tested with "normal" keys.
A few other rules about keys:
Too long keys are not a good idea. For instance a key of 1024 bytes is not a good idea not only memory-wise, but also because the lookup of the key in the dataset may require several costly key comparisons.
Too short keys are not a good idea either. There is no point in writing "u:1000:pwd" as a key if you can write "user:1000:password" instead; the latter is more readable and the added space is very little compared to the space used by the key object itself.
Try to stick with a schema. For instance "object-type:id:field" can be a nice idea, like in "user:1000:password". I like to use dots for multi-word fields, like in "comment:1234:reply.to".
The INCR command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like INCRBY, DECR and DECRBY. Internally it's actually always the same command, acting in a slightly different way.
What does it mean that INCR is atomic? That even multiple clients issuing INCR against the same key will never incur in a race condition. For instance it can never happen that client 1 reads "10" and client 2 reads "10" at the same time, both increment it to 11, and set the new value to 11. The final value will always be 12, as the read-increment-set operation is performed while all the other clients are not executing a command at the same time.
Another interesting operation on strings is the GETSET command, that does just what its name suggests: it sets a key to a new value and returns the old value as the result. Why is this useful? Example: you have a system that increments a Redis key using the INCR command every time your web site receives a new visit. You want to collect this information once every hour, without losing a single increment. You can GETSET the key, assigning it the new value of "0" and reading the old value back.
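A sketch of that pattern with redis-cli (the key name and values are illustrative):
$ ./redis-cli incr visits.hour
(integer) 1
$ ./redis-cli incr visits.hour
(integer) 2
$ ./redis-cli getset visits.hour 0
2
$ ./redis-cli get visits.hour
0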
To explain the List data type it's better to start with a little theory, as the term List is often used in an improper way by information technology folks. For instance "Python Lists" are not what the name may suggest (Linked Lists); they are actually Arrays (the same data type is called Array in Ruby, actually).
From a very general point of view a List is just a sequence of ordered elements: 10,20,1,2,3 is a list, but when a list of items is implemented using an Array and when instead a Linked List is used for the implementation, the properties change a lot.
Redis lists are implemented via Linked Lists. This means that even if you have millions of elements inside a list, the operation of adding a new element at the head or at the tail of the list is performed in constant time. Adding a new element with the LPUSH command to the head of a ten elements list is the same speed as adding an element to the head of a 10 million elements list.
What's the downside? That accessing an element by index is very fast in lists implemented with an Array and not so fast in lists implemented by linked lists.
Redis Lists are implemented with linked lists because for a database system it is crucial to be able to add elements to a very long list in a very fast way. Another strong advantage is, as you'll see in a moment, that Redis Lists can be capped at a constant length in constant time.
The LPUSH command adds a new element to a list, on the left (at the head), while the RPUSH command adds a new element to a list, on the right (at the tail). Finally the LRANGE command extracts ranges of elements from lists:
$ ./redis-cli rpush messages "Hello how are you?"
OK
$ ./redis-cli rpush messages "Fine thanks. I'm having fun with Redis"
$ ./redis-cli zrangebyscore hackers -inf 1950
$ ./redis-cli zremrangebyscore hackers 1940 1960
(integer) 2
ZREMRANGEBYSCORE is not the best command name, but it can be very useful, and returns the number of removed elements.
For the last time, back to the Reddit example. Now we have a decent plan to populate a sorted set in order to generate the home page. A sorted set can contain all the news that are not older than a few days (we remove old entries from time to time using ZREMRANGEBYSCORE). A background job gets all the elements from this sorted set, gets the user votes and the time of the news, and computes the score to populate the reddit.home.page sorted set with the news IDs and associated scores. To show the home page we just have to perform a blazingly fast call to ZRANGE.
From time to time we'll remove news that are too old from the reddit.home.page sorted set as well, in order for our system to always work against a limited set of news.
Just a final note before finishing this tutorial. Sorted set scores can be updated at any time. Just calling ZADD again against an element already included in the sorted set will update its score (and position) in O(log(N)), so sorted sets are suitable even when there are tons of updates.
This tutorial is in no way complete, this is just the basics to get started with Redis, read the Command Reference to discover a lot more.
Time complexity: O(start+n) (with n being the length of the range and start being the start offset)
Return the specified elements of the list stored at the specified key. start and end are zero-based indexes. 0 is the first element of the list (the list head), 1 the next element and so on.
For example LRANGE foobar 0 2 will return the first three elements of the list.
start and end can also be negative numbers indicating offsets from the end of the list. For example -1 is the last element of the list, -2 the penultimate element and so on.
Note that if you have a list of numbers from 0 to 100, LRANGE 0 10 will return 11 elements, that is, the rightmost item is included. This may or may not be consistent with the behavior of range-related functions in your programming language of choice (think of Ruby's Range.new, Array#slice or Python's range() function).
Indexes out of range will not produce an error: if start is over the end of the list, or start > end, an empty list is returned. If end is over the end of the list Redis will treat it just like the last element of the list.
Multi bulk reply, specifically a list of elements in the specified range.
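A quick illustration of the inclusive range behaviour (output format is indicative):
$ ./redis-cli rpush mylist a
OK
$ ./redis-cli rpush mylist b
OK
$ ./redis-cli rpush mylist c
OK
$ ./redis-cli lrange mylist 0 1
1. a
2. b
$ ./redis-cli lrange mylist -2 -1
1. b
2. c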
Time complexity: O(N) (with N being the length of the list)
Set the list element at index (see LINDEX for information about the _index_ argument) to the new value. Out of range indexes will generate an error. Note that setting the first or last element of the list is O(1).
Similarly to other list commands accepting indexes, the index can be negative to access elements starting from the end of the list. So -1 is the last element, -2 is the penultimate, and so forth.
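For example (assuming mylist currently holds exactly two elements; replies are indicative):
$ ./redis-cli lset mylist 0 first
OK
$ ./redis-cli lset mylist -1 last
OK
$ ./redis-cli lrange mylist 0 -1
1. first
2. last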
The Append Only File is an alternative way to save your data in Redis that is fully durable! Unlike the snapshotting (default) persistence mode, where the database is saved asynchronously from time to time, the Append Only File saves every change ASAP in a text-only file that works like a journal. Redis will play back this file again at startup reloading the whole dataset back in memory. Redis Append Only File supports background Log compaction. For more info read the Append Only File HOWTO.
Sorted sets are collections of elements (like Sets) with an associated score (in the form of a double precision floating point number). Elements in a sorted set are taken in order, so for instance taking the greatest element is an O(1) operation. Insertion and deletion are O(log(N)). Sorted sets are implemented using a dual ported data structure consisting of a hash table and a skip list. For more information please read the Introduction To Redis Data Types.
Redis 1.2 will use less memory than Redis 1.0 for values in Strings, Lists or Sets elements that happen to be representable as 32 or 64 bit signed integers (it depends on your arch bits for the long C type). This is totally transparent from the point of view of the user, but will save a lot of memory (30% less in datasets where there are many integers).
100 times faster SAVE and BGSAVE! There was a problem in the LZF lib configuration that is now resolved. The effect is this impressive speedup. Also the saving child will no longer use 100% of the CPU.
Glue output buffer and writev(). Many commands producing large outputs, like LRANGE, will now be even 10 times faster, thanks to the new output buffer gluing algorithm and the (optional) use of writev(2) syscall.
Support for epoll and kqueue / kevent. 10,000 clients scalability.
Much better EXPIRE support: now it's possible to work with very large sets of keys expiring in a very short time without incurring memory problems (the new algorithm expires keys in an adaptive way, so it will get more aggressive if there are a lot of expiring keys).
Redis will now compile and work on Solaris without problems. Warning: the Solaris user base is very small, so Redis running on Solaris may not be as tested and stable as it is on Linux and Mac OS X.
Redis is now able to accept commands in a new fully binary safe way: with the new protocol keys are binary safe, not only values, and there is no distinction between bulk commands and inline commands. This new protocol is currently used only for MSET and MSETNX but at some point it will hopefully replace the old one. See the Multi Bulk Commands section in the Redis Protocol Specification for more information.
The SORT command now supports the STORE and GET # forms; the first can be used to save sorted lists, sets or sorted sets into keys for caching. Check the manual page for more information about the GET # form.
The new RPOPLPUSH command can do many interesting magics, and a few of these are documented in the wiki page of the command.
Of course, many bugs are now fixed, and I bet, a few others introduced: this is how software works after all, so make sure to report issues in the Redis mailing list or in the Google Code issues tracker.
Redis replication is a very simple to use and configure master-slave replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:
A master can have multiple slaves.
Slaves are able to accept connections from other slaves, so instead of connecting a number of slaves to the same master it is also possible to connect some of the slaves to other slaves in a graph-like structure.
Redis replication is non-blocking on the master side, this means that the master will continue to serve queries while one or more slaves are performing the first synchronization. Instead replication is blocking on the slave side: while the slave is performing the first synchronization it can't reply to queries.
Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example heavy SORT operations can be launched against slaves), or simply for data redundancy.
It is possible to use replication to avoid the saving process on the master side: just configure your master redis.conf to avoid saving at all (just comment out all the "save" directives), then connect a slave configured to save from time to time.
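To turn a Redis server into a slave you only need one directive in its redis.conf (the IP address and port here are examples):
slaveof 192.168.1.100 6379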
In order to start the replication, or to resynchronize with the master after the connection closes, the slave connects to the master and issues the SYNC command.
The master starts a background saving, and at the same time starts collecting all the new commands received that had the effect of modifying the dataset. When the background saving is completed the master transfers the database file to the slave, which saves it on disk and then loads it into memory. At this point the master starts sending all the accumulated commands, and all the new commands received from clients that had the effect of a dataset modification, to the slave, as a stream of commands, in the same format as the Redis protocol itself.
You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the SYNC command. You'll see a bulk transfer and then every command received by the master will be re-issued in the telnet session.
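The telnet experiment looks roughly like this (the bulk payload is binary and is abbreviated here):
$ telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
SYNC
... binary dump of the dataset ...
... followed by every write command received by the master, as it arrives ...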
Slaves are able to automatically reconnect when the master <-> slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests, it performs a single background saving in order to serve them all.
Save the whole dataset on disk (this means that all the databases are saved, as well as keys with an EXPIRE set; the expire is preserved). The server hangs while the saving is not completed, no connection is served in the meanwhile. An OK code is returned when the DB was fully stored on disk.
The background variant of this command is BGSAVE that is able to perform the saving in the background while the server continues serving other clients.
SORT mylist BY weight_* GET object_* GET #
SORT mylist BY weight_* STORE resultkey
An interesting pattern using SORT ... STORE consists in associating an EXPIRE timeout to the resulting key, so that in applications where the result of a sort operation can be cached for some time, other clients will use the cached list instead of calling SORT for every request. When the key times out, an updated version of the cache can be created using SORT ... STORE again.
Note that when implementing this pattern it is important to avoid that multiple clients try to rebuild the cached version of the cache at the same time, so some form of locking should be implemented (for instance using SETNX).
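A sketch of the caching pattern described above (key names and the timeout are arbitrary; only the client that receives 1 from SETNX rebuilds the cache, the others keep serving the old copy):
SETNX cache.mylist.lock 1
SORT mylist BY weight_* STORE cache.mylist
EXPIRE cache.mylist 600
DEL cache.mylist.lock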
It's possible to use BY and GET options against Hash fields using the following syntax:
SORT mylist BY weight_*->fieldname
SORT mylist GET object_*->fieldname
The two-character string -> is used to signal the name of the Hash field. The key is substituted as documented above for SORT BY and GET against normal keys, and the Hash stored at the resulting key is accessed in order to retrieve the specified field.
Important notice: since 15 March 2010 I joined VMware, which is sponsoring all my work on Redis. Thank you to all the companies and people donating in the past. No further donations are accepted.
This is a list of companies that sponsored Redis development, with details about the sponsored features. Thanks for helping the project!
15 January 2010, provided Virtual Machines for Redis testing in a virtualized environment.
14 January 2010, provided Virtual Machines for Redis testing in a virtualized environment.
18 Dec 2009, part of Virtual Memory.
15 Dec 2009, part of Redis Cluster.
13 Dec 2009, for blocking POP (BLPOP) and part of the Virtual Memory implementation.
Also thanks to the following people or organizations that donated to the project:
I'm accepting sponsorships for Redis development: the idea is that a company using Redis and willing to donate some money can receive something back: visibility in the Redis site, and prioritization of features planned in the TODO list that are somewhat more important for this company compared to other features.
In the last year I spent 50% of my time working on Redis. At the same time Redis is released under the very liberal BSD license. This is very important for users, as it prevents the project from being killed in the future, but at the same time it makes it impossible to build a business model around selling licenses like it happens for MySQL. The alternative is to run a consultancy company, but this would mean spending more time working with customers than working on the Redis code base itself; or sponsorship, which I think is currently the best option to ensure fast development of the project.
So, if you are considering a donation, thank you! This is a set of simple rules I developed in order to make sure I'm fair with everybody willing to help the project:
1. Every company can donate any amount of money, even 10$, in order to support Redis development.
2. Every company donating an amount equal or greater than 1000$ will be featured in the home page for at least 6 months, and anyway for all the time the sponsored feature takes to reach a stable release of Redis.
3. Every company donating at least 100$ will anyway be featured in the "Sponsors" page forever; this page is linked near the logos of the current sponsors on the front page (the logos from point 2 of this list).
4. A sponsoring company can donate for sponsorship of a feature already in the TODO list. If a feature that is not planned is needed, we should first get in touch, discuss if this is a good idea, put it in the TODO list, and then the sponsorship can start, but I have to be genuinely convinced this feature will be good and of general interest ;)
5. Not really a sponsorship/donation, but in the rare case of a vertical, self-contained feature, I could develop it as a patch for the current stable Redis distribution for a "donation" proportional to the work needed to develop the feature; but in order to have the patch for the next release of Redis a further donation will be needed for the porting work, and so forth.
6. Features for which I receive a good sponsorship (proportionally to the work required to implement the sponsored feature) are prioritized and will get developed faster than other features, possibly changing the development roadmap.
7. Sponsoring a specific feature is not a must; a company can just donate to the project as a whole.
If you want to get in touch with me about these issues please drop me an email to my gmail account (username is antirez) or direct-message me @antirez on Twitter. Thanks in advance for the help!
If you just feel like donating a small amount to Redis the simplest way is to use paypal; my paypal address is antirez@invece.org. Please specify in the donation if you prefer not to have your name / company name published in the donations history (the amount will not be published anyway).
"string" if the key contains a String value
"list" if the key contains a List value
"set" if the key contains a Set value
+"zset" if the key contains a Sorted Set value
+"hash" if the key contains a Hash value
Time complexity: O(log(N))+O(M) with N being the number of elements in the sorted set and M the number of elements returned by the command, so if M is constant (for instance you always ask for the first ten elements with LIMIT) you can consider it O(log(N))
Return all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max).
The elements having the same score are returned sorted lexicographically as ASCII strings (this follows from a property of Redis sorted sets and does not involve further computation).
Using the optional LIMIT it's possible to get only a range of the matching elements in an SQL-alike way. Note that if offset is large the command needs to traverse the list for offset elements and this adds up to the O(M) figure.
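For example, reusing the hackers sorted set from the data types tutorial (the output depends on the data stored):
$ ./redis-cli zrangebyscore hackers -inf 1950
1. Alan Kay
$ ./redis-cli zrangebyscore hackers -inf +inf LIMIT 0 2
1. Alan Kay
2. Richard Stallman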
= Redis Documentation =
Hello! The following are pointers to different parts of the Redis Documentation.
The README is the best starting point to know more about the project.
This short Quick Start provides five-minute step-by-step instructions on how to download, compile, run and test the basic workings of a Redis server.
The command reference is a description of all the Redis commands with links to command specific pages. You can also download the Redis Commands Cheat-Sheet provided by Mason Jones (note that some commands may be missing; the primary source is the wiki).
The Redis Replication HOWTO is what you need to read in order to understand how Redis master <-> slave replication works.
The Append Only File HOWTO explains how the alternative Redis durability mode works. AOF is an alternative to snapshotting on disk from time to time (the default).
Virtual Memory User Guide. A simple to understand guide about using and configuring the Redis Virtual Memory.
The Protocol Specification is all you need in order to implement a Redis client library for a missing language. PHP, Python, Ruby and Erlang are already supported.
Look at Redis Internals if you are interested in the implementation details of the Redis server.