https://scalegrid.io/blog/high-availability-with-redis-sentinels-connecting-to-redis-masterslave-sets/
in order to achieve High Availability (HA), you need to deploy a master and slave(s)
Redis Sentinels also act as configuration providers for master-slave sets. That is, a Redis client can connect to the Redis Sentinels to find out the current master and general health of the master/slave replica set
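As a rough sketch of what this service discovery looks like from a client, here is a minimal example using the redis-py library; the Sentinel address (192.168.1.3:26379) and the service name myHAsetup are assumptions borrowed from the example configuration later in these notes:

# Minimal sketch of Sentinel-based discovery with redis-py.
# Address and service name are assumptions from the config below.
from redis.sentinel import Sentinel

sentinel = Sentinel([("192.168.1.3", 26379)], socket_timeout=0.5)

# Ask the Sentinels which instance currently acts as master.
host, port = sentinel.discover_master("myHAsetup")
print(f"current master: {host}:{port}")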
High availability in Redis is achieved through master-slave replication. A master Redis server can have multiple Redis servers as slaves, preferably deployed on different nodes across multiple data centers. When the master is unavailable, one of the slaves can be promoted to become the new master and continue to serve data with little or no interruption.
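One way to see this replication topology from a client is to ask an instance for its replication role, which tells you whether it is currently the master or a slave. A small sketch with redis-py (host and port are placeholders):

# Sketch: inspecting an instance's replication role with redis-py.
# Host and port are placeholders.
import redis

r = redis.Redis(host="192.168.1.3", port=6379)
info = r.info("replication")
print(info["role"])                      # "master" or "slave"
print(info.get("connected_slaves", 0))   # number of attached slaves (on a master)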
Redis Sentinels run as a set of separate processes that in combination monitor Redis master-slave sets and provide automatic failover and reconfiguration.
https://www.inovex.de/blog/redis-sentinel-make-your-dataset-highly-available/
The Sentinel process is a Redis instance started with the --sentinel option (or via the redis-sentinel binary). It needs a configuration file that tells the Sentinel which Redis master it should monitor.
In short, these are the benefits of using Sentinel:
- Monitoring: Sentinel constantly checks if the Redis master and its slave instances are working.
- Notifications: It can notify the system administrators or other tools via an API if something happens to your Redis instances.
- Automatic Failover: When Sentinel detects a failure of the master node, it starts a failover in which a slave is promoted to master. The remaining slaves are reconfigured automatically to use the new master, and the applications/clients using the Redis setup are informed of the new address to connect to (see the subscription sketch after this list).
- Configuration provider: Sentinel can be used for service discovery. That means clients can connect to the Sentinel in order to ask for the current address of your Redis master. After a failover Sentinel will provide the new address.
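To make the notification and failover points concrete, here is a hedged sketch of listening for Sentinel's +switch-master event with redis-py. Sentinel publishes this event on its pub/sub interface whenever a new master is elected; the payload carries the master name plus the old and new address. The Sentinel address is a placeholder.

# Sketch: subscribing to failover notifications on a Sentinel (placeholder address).
import redis

sentinel_conn = redis.Redis(host="192.168.1.3", port=26379)
pubsub = sentinel_conn.pubsub()
pubsub.subscribe("+switch-master")

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip the subscribe confirmation
    # Payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
    name, old_ip, old_port, new_ip, new_port = message["data"].split()
    print(f"{name.decode()}: new master at {new_ip.decode()}:{new_port.decode()}")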
- At least three Sentinel instances are needed for a robust deployment.
- Run your Sentinel instances on separate VMs or servers.
- Because Redis replication is asynchronous, the distributed setup does not guarantee that acknowledged writes are retained during failures.
- Your client library needs to support Sentinel (see the client sketch after this list).
- Test your high availability setup from time to time, in your test environment and even in production.
- Mix Sentinel with Docker or other NAT/port-mapping technologies only with care.
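The client-library point above deserves an example. The sketch below uses redis-py, one library with Sentinel support; the three Sentinel addresses and the service name myHAsetup are assumptions matching the configuration below. The connection returned by master_for() re-runs discovery, so it keeps pointing at the current master across failovers.

# Sketch of a Sentinel-aware client with redis-py (placeholder addresses).
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("192.168.1.3", 26379), ("192.168.1.4", 26379), ("192.168.1.5", 26379)],
    socket_timeout=0.5,
)

master = sentinel.master_for("myHAsetup", socket_timeout=0.5)
master.set("foo", "bar")            # writes go to the current master

replica = sentinel.slave_for("myHAsetup", socket_timeout=0.5)
print(replica.get("foo"))           # reads may be served by a slave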
On each of your Redis hosts you create the same Sentinel config (e.g. /etc/redis-sentinel.conf, depending on your Linux distribution), like so:
sentinel monitor myHAsetup 192.168.1.3 6379 2
sentinel down-after-milliseconds myHAsetup 6001
sentinel failover-timeout myHAsetup 60000
sentinel parallel-syncs myHAsetup 1
bind ip-of-interface
- sentinel monitor: Tells Sentinel which master to monitor (myHAsetup). The Sentinel will find the host at 192.168.1.3:6379. The quorum for this setup is 2. The quorum is only used to detect failures: this number of Sentinels must agree that the master is not reachable. When a failure is detected, one Sentinel is elected as the leader that authorizes the failover; this happens when a majority of the Sentinel processes vote for the leader.
- sentinel down-after-milliseconds: The time in milliseconds an instance is allowed to be unreachable for a Sentinel (not answering PINGs, or replying with an error). After this time, the master is considered to be down.
- sentinel failover-timeout: The time in milliseconds that Sentinel will wait after a failover before initiating a new failover.
- sentinel parallel-syncs: The number of slaves that may sync with the new master at the same time after a failover. The lower the number, the longer the failover takes to complete. If you use the slaves to serve (possibly stale) data to clients, you may not want to re-synchronize all slaves with the new master at once, because there is a short window in which a slave stops serving while it loads the bulk data from the master; in that case, set it to 1. If this does not matter, set it to the maximum number of slaves that might be connected to the master.
- bind: The listen IP, limiting Sentinel to one interface.
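After rolling out the config, one way to verify the deployment is to query a Sentinel directly. A hedged sketch with redis-py, which wraps the SENTINEL commands (the address is a placeholder; exact keys in the returned state dict may vary by redis-py version):

# Sketch: querying a Sentinel about the monitored master (placeholder address).
import redis

s = redis.Redis(host="192.168.1.3", port=26379)

print(s.sentinel_get_master_addr_by_name("myHAsetup"))  # (ip, port) of the master
state = s.sentinel_master("myHAsetup")                  # full state of the master
print(state.get("num-other-sentinels"))                 # should be 2 with three Sentinels
print(s.sentinel_slaves("myHAsetup"))                   # state of the attached slaves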