https://github.com/aCoder2013/blog/issues/26
https://stackoverflow.com/questions/26406303/redis-key-expire-notification-with-jedis
http://www.xiongxiangming.com/?hmsr=toutiao.io&p=195
Rate limiting with Redis and the pitfalls under high concurrency
http://fivezh.github.io/2017/05/24/Redis-cas/
Redis's transaction mechanism (Transaction) is built on four commands:
https://www.quora.com/Which-is-better-Redis-cluster-or-Cassandra
http://bigdataconsultants.blogspot.com/2013/12/difference-between-cassandra-and-redis.html
https://redis.io/topics/data-types-intro
http://stackoverflow.com/questions/29531056/how-to-get-both-keys-and-values-when-using-rediss-keys-command
how to get both keys and values when using redis's 'keys' command
http://stackoverflow.com/questions/19098079/how-to-get-all-keys-from-redis-using-redis-template
How to get all Keys from Redis using redis template
In Redis, how do I get the expiration date of a key?
https://redis.io/commands/flushall
https://github.com/antirez/redis/issues/2864
https://medium.com/@petehouston/install-and-config-redis-on-mac-os-x-via-homebrew-eb8df9a4f298
https://github.com/redis-store/testing/issues/1
http://www.rediscookbook.org/create_unique_ids.html
Create Unique IDs
http://www.rediscookbook.org/implement_a_fifo_queue.html
Using Redis for session sharing in a distributed setup
http://blog.jobbole.com/91870/
http://coligo.io/nodejs-api-redis-cache/
http://blog.jobbole.com/91874/
http://blog.jobbole.com/91877/
http://blog.csdn.net/murderxchip/article/details/47954351
http://antirez.com/news/88
https://timyang.net/data/cassandra-vs-redis/
http://stackoverflow.com/questions/35471666/how-to-config-redis-cluster-when-use-spring-data-redis-1-7-0-m1
redis-cli -h master1 -p 6379 -c
https://ilyabylich.svbtle.com/redis-cluster-quick-overview
For example, take this scenario: API A limits a user to 3 calls within 30 seconds, but a strange thing happened: even after that window had passed, the user still could not call the API, and neither the application logs nor the external dependencies showed anything unusual.
Jedis redis = getRedis();
try {
    // If the key does not exist, initialize it to def with an expiration of exp seconds (SET key def NX EX exp).
    redis.set(SafeEncoder.encode(key), SafeEncoder.encode(def + ""), "nx".getBytes(),
            "ex".getBytes(), exp);
    // Then increment by val (passing val = 0 just reads the current value).
    Long count = redis.incrBy(key.getBytes(), val);
} finally {
    redis.close();
}
What it does is simple: the first SET command says that if the key does not exist, set its value to def and give it an expiration time; the incrBy command then increments by val, so passing val = 0 simply reads the current value. There is, however, a problem here; it is not easy to reproduce, but once it happens the user can no longer call the API at all.
Suppose the application calls this method, executes the SET command at time t1, and finds that the key already exists, so it sets neither the expiration time nor the default value; it then calls incrBy at time t2. If the key happens to expire between t1 and t2, that key will then live forever, which causes the problem described above.
- The client executes the SET command; at that moment the key has not yet expired, so SET sets neither the value nor the expiration time.
- The SET command completes; at that moment the key expires.
- The client executes the incrBy command; because the key expired in the previous step, incrBy effectively increments a brand-new key, and the crucial point is that no expiration time is set on it, so the key lives forever.
Here is one solution. First consider what this code is trying to do: it is given a key, a default value, and an expiration time, and the requirement is simply a counter that increments and can expire. Once you see that, the SET command is not needed at all; a fix is given below:
try (Jedis redis = getRedis()) {
    Long count = redis.incrBy(key.getBytes(), val);
    if (count == val) {
        // First increment on this key, so attach the expiration here.
        redis.expire(key, exp);
    }
}
First call incrBy to increment; if the value incrBy returns equals val, this is the first call, so set the expiration time at that point. This costs two network round trips, so it can be turned into a Lua script, which needs only one. To optimize further, switch to the EVALSHA command so the Lua script body does not have to be sent on every call, avoiding that extra network overhead.
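A minimal single-round-trip sketch of that idea, assuming a plain Jedis client; the helper name, script text, and parameter handling are illustrative, not the article's code:
import java.util.Arrays;
import java.util.Collections;
import redis.clients.jedis.Jedis;

// Increment and, only on the first increment, attach the expiry -- all inside
// one atomic Lua call, so the counter can never be left without a TTL.
static final String INCR_EXPIRE_LUA =
        "local c = redis.call('incrby', KEYS[1], ARGV[1]) " +
        "if c == tonumber(ARGV[1]) then redis.call('expire', KEYS[1], ARGV[2]) end " +
        "return c";

static long incrWithExpire(Jedis redis, String key, long val, int expSeconds) {
    Object reply = redis.eval(INCR_EXPIRE_LUA,
            Collections.singletonList(key),
            Arrays.asList(String.valueOf(val), String.valueOf(expSeconds)));
    return (Long) reply;
}
To avoid resending the script body on every call, redis.scriptLoad(INCR_EXPIRE_LUA) returns a SHA1 that can later be passed to redis.evalsha(sha, keys, args), which is the EVALSHA optimization mentioned above.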
Lua scripts were also used a lot at my previous company. But Lua scripts have one problem: they are not a good fit for a multi-master Redis cluster, because a single Lua script may operate on several keys at once, and those keys may be spread across different masters. A script that only touches one key is of course still fine.
http://marjavamitjava.com/redis-keyspace-notifications-get-notified-keys-get-expired-using-jedis/
# By default all notifications are disabled because most users don’t need
# this feature and the feature has some overhead. Note that if you don’t
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events KEA
By default value of notify-keyspace-events would be an empty string which means disabled.
Implementing a listener in Java that is triggered when a Redis key expires
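A rough Jedis sketch of such a listener, assuming notify-keyspace-events has been enabled as shown above; database 0, host, and port are placeholders:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class ExpiredKeyListener {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // psubscribe blocks this thread and calls onPMessage once per expired key in DB 0.
        jedis.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String message) {
                // message carries the name of the key that just expired
                System.out.println("expired key: " + message);
            }
        }, "__keyevent@0__:expired");
    }
}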
https://redis.io/topics/notifications
https://stackoverflow.com/questions/26406303/redis-key-expire-notification-with-jedis
http://www.xiongxiangming.com/?hmsr=toutiao.io&p=195
Rate limiting with Redis and the pitfalls under high concurrency
As the code above shows, the rate limiting is implemented with two steps:
- op1: read the current value of the counter
- op2: if the rate limit is exceeded, return failure; otherwise increment by 1 atomically inside a transaction
From the analysis above, Redis cannot guarantee that op1 and op2 execute atomically, so the end result can be very different from what we intended.
In fact, guaranteeing the atomicity of op1 and op2 is all that is needed to solve the high-concurrency problem. There are currently two approaches:
(1) Implement the rate limiting inside Redis with a Lua script. Redis guarantees that a Lua script executes atomically, and it also collapses several network round trips into a single one, which is much faster.
(2) Redis 4.0 and later support modules, and there are already mature third-party rate-limiting modules; see the modules page on the Redis website for details.
Redis's transaction mechanism (Transaction) is built on four commands:
MULTI, EXEC, DISCARD and WATCH
- A transaction is defined from MULTI to EXEC.
- The commands inside one transaction are atomic and will not be interrupted: "It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation."
- CAS (Check-And-Set) support: a key under WATCH may only be modified between the current client's MULTI and EXEC; a modification under any other circumstances makes the WATCH, and therefore the transaction, fail.
- CAS is implemented mainly through the WATCH command; in other words, once a key has been WATCHed, any other client modifying that key's value causes the current transaction to fail.
Redis lets a client execute multiple commands without interruption: after sending MULTI, the client enters several commands; sending EXEC then executes all of the previously entered commands as one uninterrupted sequence. Because Redis works as a single process with a single thread, the execution of those commands cannot be interrupted.
The internal implementation is not hard: when the Redis server receives MULTI from a client, it keeps a command-queue structure for that client and only starts executing the queued commands once it receives EXEC.
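A small Jedis sketch of the WATCH / MULTI / EXEC check-and-set pattern described above, assuming a Jedis version where exec() returns null when the watched key was modified; the key name and the doubling operation are only illustrative:
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

// Atomically double a numeric value, retrying when another client
// touches the key between WATCH and EXEC. Assumes the key already holds a number.
static void casDouble(Jedis jedis, String key) {
    while (true) {
        jedis.watch(key);                          // start watching the key
        long current = Long.parseLong(jedis.get(key));
        Transaction tx = jedis.multi();            // commands below are only queued
        tx.set(key, String.valueOf(current * 2));
        List<Object> result = tx.exec();           // null: watched key changed, retry
        if (result != null) {
            return;
        }
    }
}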
By combining the INCR and EXPIRE commands, you can build a counter that only records a user's number of accesses within a specified time interval.
A client can use the GETSET command to read the current counter value and reset it to 0 in the same step.
With atomic increment/decrement commands such as DECR or INCRBY, values can be raised or lowered according to the user's actions, for example in an online game that must track a player's score in real time, where the score may go up or down.
A rate limiter is a special pattern for limiting the rate at which certain operations may be performed.
The classic example is limiting the number of requests to some public API.
Suppose we want to solve the following problem: limit an API to at most 10 requests per second per IP.
We can solve this problem in two different ways using the INCR command.
Here we will use the Redis INCR feature from Java to build control code that allows only 100 requests within 1 minute; key stands for the controlled key stored in Redis.
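A hedged Java sketch of that control code, reusing the increment-then-expire-on-first-hit pattern endorsed earlier; the 100-per-60-seconds numbers come from the text above, while the method and key handling are illustrative:
import redis.clients.jedis.Jedis;

// Returns true if this request is allowed: at most 100 calls per key per minute.
static boolean allow(Jedis jedis, String key) {
    long count = jedis.incr(key);
    if (count == 1) {
        jedis.expire(key, 60);   // first hit opens the 60-second window
    }
    return count <= 100;
}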
A recent project had a requirement around concurrent SMS requests. The business rule is that a phone number may obtain a verification code only once per minute. The previous implementation was: when an SMS request comes in, first query the send log in the database and compare the last send time with the current time; if the difference is less than one minute, report that SMS requests are too frequent; if it is more than one minute, send the SMS and write a record to the send log.
Problem analysis
SMS sending is a very sensitive operation, and the implementation above has a concurrency problem: when many requests arrive at the same time and all query the database together, they all see either no previous send time or one that is more than a minute old, and the SMS ends up being sent repeatedly.
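One way to close that race is to let Redis itself be the gate instead of the database check. A hedged Jedis 3.x sketch using SET NX EX, so that only the first request in any 60-second window wins; the key naming is illustrative, not the original project's code:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// Returns true if this phone number may receive a code now. Because SET NX EX
// is a single atomic command, concurrent requests cannot all pass the check.
static boolean tryAcquireSmsSlot(Jedis jedis, String phone) {
    String reply = jedis.set("sms:rate:" + phone, "1",
            SetParams.setParams().nx().ex(60));
    return "OK".equals(reply);
}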
http://bigdataconsultants.blogspot.com/2013/12/difference-between-cassandra-and-redis.html
Coming to your question specifically -
1) You have key-value relationship - Redis wins
2) Reads outnumber writes - neutral
3) Fault tolerance - Cassandra scores over redis (but not much)
4) Persistence of session data - Redis wins
Let's start with Redis:
Pros:
- Fast in-memory updates and reads;
- Good with fat reads from memory;
Cons:
- In case of wanting to use Redis cluster, you are risking exposure to initial release issues as an early adopter. You probably don't want to have this risk in your data layer;
- Not the best persistent storage;
- Master/Slave setup will have a single point of failure if you lose your master; you have to handle slave promotion separately to handle this scenario;
- Your maintenance will be more difficult;
- No commercial support;
Cassandra:
Pros:
- No single point of failure; all nodes act the same;
- Works well for a write heavy load;
- Multi-Data center support is built in;
- Maintenance will be easier;
- Has commercial support;
Cons:
- Depending on how fat your reads are and how your data is stored (aka fat rows), you may not get the best read performance; you can, however, tune your memtables to be larger so they absorb as many reads and writes in memory as possible before being flushed to persistent storage;
https://redis.io/topics/data-types-intro
Redis hashes look exactly how one might expect a "hash" to look, with field-value pairs:
> hmset user:1000 username antirez birthyear 1977 verified 1
OK
> hget user:1000 username
"antirez"
> hget user:1000 birthyear
"1977"
> hgetall user:1000
http://stackoverflow.com/questions/29942541/how-to-get-keys-which-does-not-match-a-particular-pattern-in-redis
Always use SCAN instead of (the evil) KEYS.
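A hedged Jedis sketch of iterating keys with SCAN instead of KEYS; the pattern and count are illustrative, and it assumes a reasonably recent Jedis where ScanResult#getCursor returns the string cursor:
import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

// Walk the keyspace page by page; unlike KEYS, this never blocks the server for long.
static List<String> scanKeys(Jedis jedis, String pattern) {
    List<String> keys = new ArrayList<>();
    ScanParams params = new ScanParams().match(pattern).count(100);
    String cursor = ScanParams.SCAN_POINTER_START;     // "0"
    do {
        ScanResult<String> page = jedis.scan(cursor, params);
        keys.addAll(page.getResult());
        cursor = page.getCursor();                      // "0" again means the scan is complete
    } while (!"0".equals(cursor));
    return keys;
}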
http://stackoverflow.com/questions/29531056/how-to-get-both-keys-and-values-when-using-rediss-keys-command
how to get both keys and values when using redis's 'keys' command
hashes is the right way to do it.
https://redis.io/commands/keys
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.
Supported glob-style patterns:
- h?llo matches hello, hallo and hxllo
- h*llo matches hllo and heeeello
- h[ae]llo matches hello and hallo, but not hillo
- h[^e]llo matches hallo, hbllo, ... but not hello
- h[a-b]llo matches hallo and hbllo
Use \ to escape special characters if you want to match them verbatim.
http://stackoverflow.com/questions/19098079/how-to-get-all-keys-from-redis-using-redis-template
How to get all Keys from Redis using redis template
1. Directly from RedisTemplate
Set<String> redisKeys = template.keys("samplekey*");
// Store the keys in a List
List<String> keysList = new ArrayList<>();
Iterator<String> it = redisKeys.iterator();
while (it.hasNext()) {
    String data = it.next();
    keysList.add(data);
}
Note: you should have configured redisTemplate with StringRedisSerializer in your bean. If you use Java-based bean configuration:
redisTemplate.setDefaultSerializer(new StringRedisSerializer());
2. From JedisConnectionFactory
RedisConnection redisConnection = template.getConnectionFactory().getConnection();
Set<byte[]> redisKeys = redisConnection.keys("samplekey*".getBytes());
List<String> keysList = new ArrayList<>();
Iterator<byte[]> it = redisKeys.iterator();
while (it.hasNext()) {
    byte[] data = it.next();
    keysList.add(new String(data, 0, data.length));
}
redisConnection.close();
If you don't close this connection explicitly, you will exhaust the underlying Jedis connection pool, as explained in http://stackoverflow.com/a/36641934/3884173.
http://stackoverflow.com/questions/6935468/in-redis-how-do-i-get-the-expiration-date-of-a-key
In Redis, how do I get the expiration date of a key?
TTL key
See the documentation of the TTL command.
There is also a PTTL command since Redis 2.6 that returns the amount of time in milliseconds instead of seconds.
https://www.tutorialspoint.com/redis/redis_hashes.htm
HMSET tutorialspoint name "redis tutorial" description "redis basic commands for caching" likes 20 visitors 23000
http://stackoverflow.com/questions/16375188/redis-strings-vs-redis-hashes-to-represent-json-efficiency
I want to store a JSON payload into Redis. There are really two ways I can do this:
- One using simple string keys and values:
key:user, value:payload (the entire JSON blob, which can be 100-200 KB)
SET user:1 payload
- Using hashes
HSET user:1 username "someone"
HSET user:1 location "NY"
HSET user:1 bio "STRING WITH OVER 100 lines"
Keep in mind that if I use a hash, the value length isn't predictable. They're not all short such as the bio example above.
It depends on how you access the data:
Go for Option 1:
- If you use most of the fields on most of your accesses.
- If there is variance in the possible keys
Go for Option 2:
- If you use just single fields on most of your accesses.
- If you always know which fields are available
P.S.: As a rule of thumb, go for the option which requires fewer queries on most of your use cases.
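The two options side by side in Jedis, as a rough sketch; the key names, fields, and JSON string are made up, and the two options use different keys because a Redis key can only hold one type (mixing them produces the WRONGTYPE error quoted further down):
import java.util.Map;
import redis.clients.jedis.Jedis;

public class StringVsHash {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);

        // Option 1: the whole JSON payload under a single string key.
        jedis.set("user:json:1", "{\"username\":\"someone\",\"location\":\"NY\"}");
        String wholeBlob = jedis.get("user:json:1");             // every read returns the full blob

        // Option 2: one hash per user, one field per attribute.
        jedis.hset("user:hash:1", "username", "someone");
        jedis.hset("user:hash:1", "location", "NY");
        String location = jedis.hget("user:hash:1", "location"); // read a single field
        Map<String, String> all = jedis.hgetAll("user:hash:1");  // or all fields at once

        jedis.close();
    }
}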
exists(key): check whether a key exists
del(key): delete a key
type(key): return the type of the value
keys(pattern): return all keys matching the given pattern
randomkey: return a random key from the keyspace
flushdb: delete all keys in the currently selected database
flushall: delete all keys in all databases
6. Commands that operate on zset (sorted set)
https://redis.io/commands/flushall
Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
With redis-cli:
- FLUSHDB - Removes data from your connection's CURRENT database.
- FLUSHALL - Removes data from ALL databases.
DEL will allow you to remove the key entirely: http://redis.io/commands/del
ZREM will allow you to remove members from the set: http://redis.io/commands/zrem
There are additional ZREM* commands that allow the removal of ranges of members - see ZREMRANGEBYLEX, ZREMRANGEBYRANK and ZREMRANGEBYSCORE.
You could delete the set altogether with DEL:
DEL metasyn
https://github.com/antirez/redis/issues/2864
WRONGTYPE Operation against a key holding the wrong kind of value.
You're in luck, as zrange does not take scores, but indices. 0 is the first index, and -1 will be interpreted as the last index:
zrange key 0 -1
redis 127.0.0.1:6379> ZRANGE 'ages' 0 1
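The same commands issued from Jedis, as a hedged sketch; the key, members, and scores are made up:
import redis.clients.jedis.Jedis;

public class SortedSetDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.zadd("ages", 25, "alice");           // add members with their scores
        jedis.zadd("ages", 32, "bob");
        jedis.zrange("ages", 0, -1);               // index 0 through -1 (the last) = everything
        jedis.zrem("ages", "alice");               // remove a single member
        jedis.zremrangeByScore("ages", 0, 30);     // remove every member with score 0..30
        jedis.del("ages");                         // drop the whole sorted set
        jedis.close();
    }
}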
brew install redis
Start Redis server using configuration file
redis-server /usr/local/etc/redis.conf
Test if Redis server is running.
redis-cli ping
redis-cli
https://github.com/redis-store/testing/issues/1
'dbfilename "/var/db/sync_app/app_discovery/user.rdb"' - dbfilename can't be a path, just a filename:
dbfilename dump.rdb
Create Unique IDs
The use of INCR to provide unique IDs is one of the core concepts in Redis. It is often used in the 'primary key' style, replacing the same functionality used in relational databases.
Sometimes when we use sharding in MySQL or another SQL database, we cannot rely on MySQL auto-increment, simply because we have several tables into which we insert the data.
In Postgres and Oracle there are special objects called "sequences" that may give you a unique id which you can use in such a case. However, MySQL has no sequence objects.
One of the ways is to simulate a sequence with table like this:
CREATE TABLE serial(
id int unsigned not null auto_increment primary key,
tag tinyint unsigned
);
Then we need to insert a record there and get the last-insert-id, which is then used in the real table.
This works well, but sometimes on high-traffic websites MySQL auto-increment slows down the entire database; the slowdown may be 1000% or even more. In that case we can do the job with Redis, using the following script.
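The script referenced above is not reproduced in these notes; a minimal Java/Jedis sketch of the same idea, with an illustrative sequence key name:
import redis.clients.jedis.Jedis;

// Every caller, on any app server or shard, gets a distinct, monotonically
// increasing id: INCR is atomic, so two callers can never receive the same value.
static long nextUserId(Jedis jedis) {
    return jedis.incr("seq:user_id");
}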
You want to use Redis to implement a simple abstract first-in, first-out queue, with basic push and pop operations.
Solution
Redis' built-in List datatype is a natural-born queue. To effectively implement a simple queue, all you need to do is utilize a limited set of List operations.
redis> LPUSH queue1 tom
(integer) 1
redis> LPUSH queue1 dick
(integer) 2
redis> LPUSH queue1 harry
(integer) 3
redis> RPOP queue1
tom
redis> RPOP queue1
dick
redis> RPOP queue1
harry
Redis comes with four basic list push and pop operations (RPUSH, LPUSH, LPOP, RPOP), as well as blocking pop operations. They are all O(1) operations, so the time complexity of the commands does not depend upon the length of the list.
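A hedged Jedis sketch of the same queue, including the blocking variant mentioned above; the queue name and timeout are illustrative:
import java.util.List;
import redis.clients.jedis.Jedis;

// Producer: push work onto the left end of the list.
static void enqueue(Jedis jedis, String task) {
    jedis.lpush("queue1", task);
}

// Consumer: pop from the right end; BRPOP blocks for up to 5 seconds waiting for work.
static String dequeue(Jedis jedis) {
    List<String> reply = jedis.brpop(5, "queue1");   // returns [key, value], or null on timeout
    return reply == null ? null : reply.get(1);
}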
Using Redis for session sharing in a distributed setup
http://blog.jobbole.com/91870/
Redis is a key-value store. Like Memcached, but it supports more value types: string, list, set, zset (sorted set), and hash. These data types support push/pop, add/remove, intersection, union, difference, and many richer operations, and all of these operations are atomic. On top of that, Redis supports several kinds of sorting. Like memcached, data is cached in memory for efficiency; the difference is that Redis periodically writes updated data to disk or appends modifications to a log file, and on top of this implements master-slave replication.
- Redis lists have a great many use cases and are one of Redis's most important data structures.
- We can easily implement features such as a "latest news" ranking.
- Another use of lists is as a message queue: push tasks onto the list with PUSH, and have worker threads POP them off for execution.
What is a cookie? A cookie is a small piece of text that travels between the web server and the browser along with the user's requests and pages. It contains information the web application can read on every visit. (Cookies are sent to the server with every HTTP request, excluding static files such as js, css, and images; you can observe this in Fiddler or IE's built-in network monitor. If you care about performance, start by keeping cookies as small as possible.)
httpOnly means the cookie cannot be touched by JavaScript in the browser, which prevents tampering with the session id.
What is a session? A session lets us conveniently keep conversation-related state on the server side, such as the usual login information.
How are sessions implemented? HTTP is stateless, so the web server cannot tell whether multiple requests come from the same browser. To distinguish them, the server uses a session id. How does the session id reach the server? As described above, cookies are sent with every request, and cookies are essentially invisible to the user.
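A rough sketch of the session-sharing idea with Jedis: the session id from the cookie becomes the Redis key, the attributes live in a hash, and EXPIRE provides the rolling timeout. The key prefix and the 30-minute TTL are illustrative choices, not a prescribed layout:
import java.util.Map;
import redis.clients.jedis.Jedis;

static final int SESSION_TTL_SECONDS = 30 * 60;

// Called on login: store the session attributes under the session id.
static void saveSession(Jedis jedis, String sessionId, Map<String, String> attributes) {
    String key = "session:" + sessionId;
    jedis.hmset(key, attributes);
    jedis.expire(key, SESSION_TTL_SECONDS);
}

// Called on every request: any app server behind the load balancer sees the
// same state, and touching the session refreshes its expiry.
static Map<String, String> loadSession(Jedis jedis, String sessionId) {
    String key = "session:" + sessionId;
    Map<String, String> attributes = jedis.hgetAll(key);
    jedis.expire(key, SESSION_TTL_SECONDS);
    return attributes;
}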
Instead of just plain string values, Redis can store any of the following:
- Binary-safe strings which can be up to 512MB in size
- Lists which are a collection of strings
- Sets (sorted and unsorted)
- Hashes
- Bit arrays and HyperLogLogs
http://blog.jobbole.com/91874/
http://blog.jobbole.com/91877/
http://blog.csdn.net/murderxchip/article/details/47954351
Requirement: the same QQ account may cast at most 5 votes within 10 minutes.
Problem: you have to consider concurrency on the very first vote, and how to handle votes after the 5th; there is a trap here.
https://timyang.net/data/cassandra-vs-redis/
Several scenarios where Redis fits:
- High traffic
- Key-value or key-list data structures
- Small, bounded data volume that can fit entirely in memory. Because Redis is single-threaded, a large value will cause some blocking of subsequent requests; likewise, HGETALL on a hash involves a traversal, so overly large collections are not a good fit. If the data exceeds the capacity of a single machine, the usual sharding approaches can spread it across several machines.
- Scenarios that need persistence
The four points above should normally all be satisfied; typical website data such as user profiles and friend lists is therefore well suited to Redis. Since Redis has every feature memcached has, there has been debate about whether memcached can be retired. In the following cases I would still lean toward memcached rather than Redis:
- Simple key-value data that needs no persistence, say under 100 bytes. memcached is more space-efficient here and simpler to maintain.
- Rolling expiration requirements, such as website sessions, where each newly logged-in user expires after a fixed period.
Either connect to the node instance and use the SHUTDOWN command, or, if you are on Ubuntu, you can try to restart the Redis server through init.d:
/etc/init.d/redis-server restart
or stop/start it:
/etc/init.d/redis-server stop
/etc/init.d/redis-server start
On Mac
redis-cli shutdown
Basically all that is needed is setting the initial collection of cluster nodes in RedisClusterConfiguration and providing that to JedisConnectionFactory or LettuceConnectionFactory.
@Configuration
class Config {

    List<String> clusterNodes = Arrays.asList("127.0.0.1:30001", "127.0.0.1:30002", "127.0.0.1:30003");

    @Bean
    RedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory(new RedisClusterConfiguration(clusterNodes));
    }

    @Bean
    RedisTemplate<String, String> redisTemplate(RedisConnectionFactory factory) {
        // just used StringRedisTemplate for simplicity here.
        return new StringRedisTemplate(factory);
    }
}
http://docs.spring.io/spring-data/redis/docs/current/reference/html/#cluster
spring.redis.cluster.nodes[0] = 127.0.0.1:7379
spring.redis.cluster.nodes[1] = 127.0.0.1:7380
@Autowired ClusterConfigurationProperties clusterProperties;
public @Bean RedisConnectionFactory connectionFactory() {
return new JedisConnectionFactory(
new RedisClusterConfiguration(clusterProperties.getNodes()));
}
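The snippet above assumes a ClusterConfigurationProperties bean; the Spring Data Redis reference defines it roughly like this, with the same spring.redis.cluster prefix used in the properties above:
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "spring.redis.cluster")
public class ClusterConfigurationProperties {

    // bound from spring.redis.cluster.nodes[0], spring.redis.cluster.nodes[1], ...
    List<String> nodes;

    public List<String> getNodes() {
        return nodes;
    }

    public void setNodes(List<String> nodes) {
        this.nodes = nodes;
    }
}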
https://www.javacodegeeks.com/2015/09/redis-clustering.html
- cluster-enabled: enables Redis cluster mode for this instance - yes (default: no)
- cluster-config-file: the path to a file where the configuration of this instance is stored. This file should never be touched; it is simply generated at startup by the Redis Cluster instances and updated every time it is needed (see the section Redis Clustering in a Nutshell) - (default: nodes.conf)
- cluster-node-timeout - 5000
- upon start, each node generates its unique name (node ID), as we discussed in Redis Clustering in a Nutshell; please note that this value is generated only at the first run and then reused
- every instance is running in cluster mode
- also, for every running instance there is a file created with the current node name and some additional information
At this moment we have three Redis master nodes running in cluster mode but not actually forming a cluster yet (every Redis master node sees only itself, not the others). To verify that, we can run the cluster status command on each node.
https://ilyabylich.svbtle.com/redis-cluster-quick-overview