Redis 4.0 Database (II): Redis 4.0 Persistence (Part 2)
10. Redis RDB Persistence
10.1 How Redis can be run
If Redis only provides a cache service, all persistence can be disabled; in that case all data is lost when Redis restarts.
Enabling RDB or AOF persistence lets Redis write its data to disk.
Both RDB and AOF affect performance, so it is recommended to run persistence on a slave.
10.2 Redis RDB persistence: enable or disable RDB with the save directive
[ ~]# cat /usr/local/redis/conf/redis.conf
#RDB-related settings in redis.conf
~]# cat -n /usr/local/redis/conf/redis.conf | sed -n "14p;15p;16p;18p;20p;21p"
14 save 900 1 #trigger an RDB save if at least 1 key changed within 900 s
15 save 300 10 #trigger an RDB save if at least 10 keys changed within 300 s
16 save 60 10000 #trigger an RDB save if at least 10000 keys changed within 60 s
18 rdbcompression no #better to disable RDB compression; it costs CPU
20 dbfilename "dump.rdb" #name of the RDB dump file
21 dir "/data/redis" #dir is the directory where the RDB file is stored
Notes:
If saves are triggered too often, performance obviously suffers. Redis serves clients with a single process and a single thread (an asynchronous, non-blocking model), so every client is handled by the same thread and contention is reduced to a minimum. The downside is that this single thread is very sensitive to large, time-consuming jobs: saving too frequently easily blocks it and badly hurts write performance. RDB is a persistence strategy that balances performance against data safety: when Redis is mostly idle, a save triggers after 900 seconds with at least 1 change; when it gets busier, after 300 seconds with 10 changes; and under heavy write load, only after 60 seconds with 10,000 changes.
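To make the trigger logic concrete, here is a minimal Python sketch of how the "save <seconds> <changes>" rules combine. It is illustrative only (Redis implements this check internally in C), and the function and variable names are invented for this example:

import time

save_rules = [(900, 1), (300, 10), (60, 10000)]  # the three rules from redis.conf above

def should_trigger_bgsave(last_save_time, changes_since_last_save, now=None):
    """Return True if any rule's window has elapsed with enough key changes."""
    now = now or time.time()
    elapsed = now - last_save_time
    return any(elapsed >= seconds and changes_since_last_save >= changes
               for seconds, changes in save_rules)

# 200 s since the last save and only 15 changed keys: no rule is satisfied yet.
print(should_trigger_bgsave(time.time() - 200, 15))  # False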
10.3 Enabling or disabling RDB at runtime
Note: RDB persistence is enabled by default.
[ ~]# redis-cli config set save "" #disable RDB
OK
[ ~]# redis-cli config rewrite #persist the change to the config file
OK
[ ~]# redis-cli config set save "180 1 120 10 60 10000" #enable RDB
OK
[ ~]# redis-cli config rewrite
OK
The officially recommended persistence to keep enabled is RDB, because it affects performance the least.
Redis is a database cache whose purpose is to reduce the read load on the database slaves. Why has memcached largely been replaced by Redis? Essentially, persistence. A freshly started memcached holds no data, so a flood of user requests all fall through to the database slaves, which can easily bring the database down. That is why, back in the memcached days, a restarted memcached needed a "warm-up" step before it could go back into service. Redis does not need this: it has persistence, and even if data is lost, it is only the most recent small portion.
Some data is stored only in Redis and never in the database, for example a website's real-time count of online users.
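As an aside, a common way to implement such an online-user counter with Redis is to give each active user a short-lived key. The Python sketch below is a hypothetical illustration only; the key prefix and TTL are made up for the example:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

def mark_online(user_id, ttl=300):
    # refresh this on every request; the key disappears after 5 idle minutes
    r.set('online:%s' % user_id, 1, ex=ttl)

def online_count():
    # SCAN-based count; fine for an illustration, slow for very large keyspaces
    return sum(1 for _ in r.scan_iter(match='online:*'))

mark_online('alice'); mark_online('bob')
print(online_count())  # 2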
10.4 Write data and watch the RDB save log
#write 10,000 keys
[ ~]# for line in `seq -w 10000`;do redis-cli set key_${line} value1_${line};done
#check the log
[ ~]# cat /data/redis/redis.log
26094:M 14 Apr 21:36:20.247 # Server initialized
26094:M 14 Apr 21:36:20.248 * DB loaded from disk: 0.000 seconds
26094:M 14 Apr 21:36:20.248 * Ready to accept connections
26094:M 14 Apr 21:37:16.399 # CONFIG REWRITE executed with success.
26094:M 14 Apr 21:37:30.709 # CONFIG REWRITE executed with success.
26094:M 14 Apr 21:39:50.374 * 1 changes in 180 seconds. Saving...#at least 1 key changed within 180 seconds, so the "save 180 1" rule triggers an RDB save
26094:M 14 Apr 21:39:50.375 * Background saving started by pid 26289 #a child process is forked to write the RDB file
26289:C 14 Apr 21:39:50.382 * DB saved on disk
26289:C 14 Apr 21:39:50.382 * RDB: 0 MB of memory used by copy-on-write
26094:M 14 Apr 21:39:50.475 * Background saving terminated with success
#check how many keys Redis holds
[ ~]# redis-cli info
......output omitted......
# Keyspace
db0:keys=10055,expires=0,avg_ttl=0
#how much memory Redis is using
[ ~]# redis-cli info memory
# Memory
used_memory:1702592 #total memory allocated by Redis, in bytes
used_memory_human:1.62M #the same value in human-readable form
used_memory_rss:3239936 #memory the operating system sees the Redis process using (RSS)
used_memory_rss_human:3.09M #the same value in human-readable form
used_memory_peak:1703568
used_memory_peak_human:1.62M
used_memory_peak_perc:99.94%
used_memory_overhead:1369526
used_memory_startup:786592
used_memory_dataset:333066
used_memory_dataset_perc:36.36%
total_system_memory:1021906944
total_system_memory_human:974.57M
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.90 #Redis fragmentation ratio (frequent writes and deletes create fragmentation; the more you delete, the higher it climbs); a value between 1 and 2 is generally acceptable, while a value below 1 means memory has been swapped out because the allocated memory is not enough
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
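The same figures can be read programmatically. A minimal Python sketch (assuming a local, unauthenticated instance) that pulls the fields above via INFO memory and computes the fragmentation ratio:

import redis

r = redis.Redis(host='127.0.0.1', port=6379, db=0)
mem = r.info('memory')
used = mem['used_memory']       # bytes allocated by Redis
rss = mem['used_memory_rss']    # bytes the OS reports for the process
print('used_memory=%d used_memory_rss=%d' % (used, rss))
print('fragmentation ratio: %.2f' % (float(rss) / used))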
10.5 The bgsave command triggers an RDB save immediately; watch the save log
[ ~]# redis-cli save #runs in the main process and blocks all other client commands
OK
[ ~]# redis-cli bgsave #forks a background child to write the RDB (does not block clients)
Background saving started
#check the log
[ ~]# cat /data/redis/redis.log
26094:M 14 Apr 21:53:08.882 * DB saved on disk #log entry from the save command
26094:M 14 Apr 21:53:13.450 * Background saving started by pid 36961 #log entries from bgsave
36961:C 14 Apr 21:53:13.464 * DB saved on disk #log entries from bgsave
36961:C 14 Apr 21:53:13.465 * RDB: 0 MB of memory used by copy-on-write #log entries from bgsave
26094:M 14 Apr 21:53:13.474 * Background saving terminated with success #log entries from bgsave
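The same thing can be driven from Python. A small sketch (assuming a local instance with no password) that triggers BGSAVE and waits for it to complete by watching LASTSAVE advance:

import time
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
before = r.lastsave()           # timestamp of the previous successful save
r.bgsave()                      # forks a child; the main thread keeps serving clients
while r.lastsave() == before:   # poll until the child has finished writing dump.rdb
    time.sleep(0.1)
print('RDB saved at %s' % r.lastsave())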
11. Redis AOF Persistence
Redis append-only file (AOF) persistence records every write operation to a file (similar to the MySQL binlog).
11.1 Enabling or disabling AOF at runtime
[ ~]# redis-cli config set appendonly yes #enable
OK
[ ~]# redis-cli config rewrite
OK
[ ~]# redis-cli config get appendonly #check the current state
1) "appendonly"
2) "yes"
[ ~]# redis-cli config set appendonly no
OK
[ ~]# redis-cli config rewrite
OK
11.2 Write data and watch the AOF; after several runs the AOF keeps growing while the RDB file size stays the same
#check the sizes of the AOF and RDB files
[ ~]# du -sh /data/redis/appendonly.aof
464K /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/dump.rdb
236K /data/redis/dump.rdb
#write data
[ ~]# for line in `seq -w 100`;do redis-cli set key_${line} value_${line};done
#check the sizes of the AOF and RDB files again
[ ~]# redis-cli config set appendonly yes
OK
[ ~]# redis-cli config rewrite
OK
[ ~]# du -sh /data/redis/appendonly.aof
468K /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/dump.rdb
236K /data/redis/dump.rdb
11.3 Rewriting the AOF: consolidate duplicate keys and keep only the final effective values
BGREWRITEAOF
Performs an AOF rewrite, creating a size-optimized version of the current AOF file.
Even if BGREWRITEAOF fails, no data is lost, because the old AOF file is not modified until the rewrite succeeds.
A rewrite is only started when no other persistence job is running in the background.
Since Redis 2.4, AOF rewrites are triggered by Redis itself; BGREWRITEAOF is only needed to trigger a rewrite manually.
#truncate the AOF file
[ ~]# > /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/appendonly.aof
0 /data/redis/appendonly.aof
[ ~]# redis-cli bgrewriteaof #trigger an AOF rewrite manually
Background append only file rewriting started
[ ~]# du -sh /data/redis/appendonly.aof #all data currently in Redis is rewritten into the AOF
468K /data/redis/appendonly.aof
#truncate the AOF file again
[ ~]# > /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/appendonly.aof
0 /data/redis/appendonly.aof
[ ~]# redis-cli set yunjisuan benet
OK
[ ~]# du -sh /data/redis/appendonly.aof
4.0K /data/redis/appendonly.aof
[ ~]# cat /data/redis/appendonly.aof
*2
$6
SELECT #select 0 switches to db0
$1
0
*3
$3
set #the set yunjisuan benet command
$9
yunjisuan
$5
benet
[ ~]# redis-cli del yunjisuan benet
(integer) 1
[ ~]# cat /data/redis/appendonly.aof
*3
$3
set
$9
yunjisuan
$5
benet
*3
$3
del #the del yunjisuan benet command
$9
yunjisuan
$5
benet
Important note
Notice that we added a key to Redis and then deleted it: in net terms the database gained no data at all, yet the AOF still recorded both operations. Over time this makes the AOF file very large. The optimization is therefore to rewrite the AOF so that it only keeps the commands needed to rebuild the current data set, which makes the file much smaller.
11.4 Configuring the automatic AOF rewrite mechanism
#these settings are present in the default config file
[ ~]# redis-cli config get auto-aof-rewrite* #read the aof-rewrite settings
1) "auto-aof-rewrite-percentage"
2) "100" #默认100%,也就是aof增加一倍后考虑rewrite,两个条件要同时满足
3) "auto-aof-rewrite-min-size"
4) "67108864" #默认64mb,也就是aof达到64M后考虑rewirte,两个条件要同时满足
#test the automatic rewrite
[ ~]# redis-cli config set auto-aof-rewrite-min-size 100000
OK
[ ~]# redis-cli config get auto-aof-rewrite*
1) "auto-aof-rewrite-percentage"
2) "100"
3) "auto-aof-rewrite-min-size"
4) "100000"
[ ~]# redis-cli config rewrite
OK
[ ~]# > /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/appendonly.aof
4.0K /data/redis/appendonly.aof
[ ~]# for line in `seq -w 1000`;do redis-cli set key2_${line} value2_${line};done
[ ~]# du -sh /data/redis/appendonly.aof
48K /data/redis/appendonly.aof
[ ~]# for line in `seq -w 1000`;do redis-cli set key2_${line} value2_${line};done
[ ~]# du -sh /data/redis/appendonly.aof
128K /data/redis/appendonly.aof
[ ~]# du -sh /data/redis/appendonly.aof
92K /data/redis/appendonly.aof #the automatic AOF rewrite was triggered, so the file shrank
12. Redis Max Memory and Eviction Policies
[ ~]# redis-cli flushall #manually flush all data in Redis
OK
[ ~]# redis-cli
127.0.0.1:6379> keys *
(empty list or set)
12.1 Setting a TTL on keys; expired keys are removed automatically
The way Redis handles expired data depends on its memory eviction policy (described below).
[ ~]# redis-cli set name yunjisuan
OK
[ ~]# redis-cli ttl name
(integer) -1 #-1 means the key never expires
[ ~]# redis-cli expire name 10 #give the key a 10-second TTL
(integer) 1
[ ~]# redis-cli ttl name #check the remaining time to live
(integer) 3
[ ~]# redis-cli ttl name
(integer) 1
[ ~]# redis-cli ttl name
(integer) 0
[ ~]# redis-cli ttl name
(integer) -2 #-2 means the key has already expired and been removed
[ ~]# redis-cli get name
(nil) #the key has expired and been removed
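The same behaviour from Python, as a quick sketch (assuming a local instance without a password):

import time
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
r.set('name', 'yunjisuan')
print(r.ttl('name'))    # -1: no expiry set
r.expire('name', 10)    # the key becomes volatile and expires in 10 seconds
print(r.ttl('name'))    # remaining seconds, e.g. 10
time.sleep(11)
print(r.ttl('name'))    # -2: the key no longer exists
print(r.get('name'))    # None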
12.2 Checking and setting the maximum memory
#check and set the max memory limit
[ ~]# redis-cli config get maxmemory
1) "maxmemory"
2) "0" #默认对内存无限制
[ ~]# redis-cli config set maxmemory 1M #限制1M
OK
[ ~]# redis-cli config get maxmemory
1) "maxmemory"
2) "1000000"
12.3 Available eviction policies
volatile-lru: evict keys using LRU (only keys with an expire time set)
volatile-random: evict keys at random (only keys with an expire time set)
volatile-ttl: evict the keys with the smallest TTL (only keys with an expire time set)
allkeys-lru: evict keys using LRU (any key)
allkeys-random: evict keys at random (any key)
noeviction: evict nothing and just return an error on writes; this is the default policy
[ ~]# redis-cli config get maxmemory-policy #check the eviction policy
1) "maxmemory-policy"
2) "noeviction" #默认noeviction
12.4 模拟超过内存
[ ~]# for line in `seq -w 2000`;do redis-cli set key_${line} value_${line};done
#the test fails with an error
(error) OOM command not allowed when used memory > 'maxmemory'.
12.5 Setting the eviction policy
#set the eviction policy to volatile-lru
[ ~]# redis-cli config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
[ ~]# redis-cli config set maxmemory-policy volatile-lru
OK
[ ~]# redis-cli config get maxmemory-policy
1) "maxmemory-policy"
2) "volatile-lru"
[ ~]# redis-cli config rewrite
OK
#test the policy
[ ~]# redis-cli get key_0011
"value_0011"
[ ~]# redis-cli expire key_0011 3600
(integer) 1
[ ~]# redis-cli ttl key_0011
(integer) -2
[ ~]# redis-cli get key_0011
(nil)
Note: the test above shows that with the volatile-lru policy, once memory reaches maxmemory, Redis preferentially evicts keys that have an expire time set.
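A quick way to see this from Python is the sketch below. It assumes a local instance and a small maxmemory; exact numbers will vary with how much memory is already in use, so treat it as an illustration rather than a precise test:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)
r.config_set('maxmemory-policy', 'volatile-lru')
r.config_set('maxmemory', '2mb')
r.set('persistent_key', 'stays')                   # no TTL: not a candidate for volatile-lru
for i in range(5000):
    r.set('volatile_%d' % i, 'x' * 100, ex=3600)   # TTL set: eligible for eviction
print(r.get('persistent_key'))                     # still present
print(r.dbsize())                                  # likely fewer than expected: some volatile keys were evicted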
13. Disabling Dangerous Redis Commands
13.1 Commands to disable
FLUSHALL and FLUSHDB wipe all of Redis's data and are dangerous; KEYS blocks other client requests when the keyspace is large.
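Because KEYS can stall the single-threaded server, the usual alternative is the incremental SCAN command, which redis-py exposes as scan_iter(). A minimal sketch (assuming a local instance):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)
for key in r.scan_iter(match='key_*', count=500):
    pass  # process each key in small batches instead of one huge blocking KEYS call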
13.2 Configuration for disabling the dangerous commands (just add it to the config file; this setting cannot be applied without a restart)
#append the settings to redis.conf
[ ~]# echo 'rename-command FLUSHALL ""' >> /usr/local/redis/conf/redis.conf
[ ~]# echo 'rename-command FLUSHDB ""' >> /usr/local/redis/conf/redis.conf
[ ~]# echo 'rename-command KEYS ""' >> /usr/local/redis/conf/redis.conf
[ ~]# tail -3 /usr/local/redis/conf/redis.conf
rename-command FLUSHALL "" #rename the command to an empty string
rename-command FLUSHDB "" #rename the command to an empty string
rename-command KEYS "" #rename the command to an empty string
13.3 Log in to Redis and test the disabled commands
#restart redis-server
[ ~]# redis-cli shutdown
[ ~]# redis-server /usr/local/redis/conf/redis.conf
[ ~]# netstat -antup | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 24399/redis-server
#test the disabled commands
[ ~]# redis-cli flushall
(error) ERR unknown command `flushall`, with args beginning with:
[ ~]# redis-cli flushdb
(error) ERR unknown command `flushdb`, with args beginning with:
[ ~]# redis-cli
127.0.0.1:6379> keys *
(error) ERR unknown command `keys`, with args beginning with: `*`,
14. Setting Up a Redis Master-Slave Environment
In production, the Redis master has no persistence enabled: RDB and AOF are both switched off, as is anything else that would hurt master performance.
Persistence is done on the slaves.
Hostname | IP | Role |
redis01 | 192.168.200.158 | redis-master |
redis02 | 192.168.200.181 | redis-slaveA |
redis03 | 192.168.200.178 | redis-slaveB |
14.1 Environment requirements and basic Redis build, deployment, and tuning
#operating system environment
[ ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[ ~]# uname -r
3.10.0-862.el7.x86_64
[ ~]# systemctl stop firewalld
[ ~]# systemctl disable firewalld
[ ~]# setenforce 0
setenforce: SELinux is disabled
[ ~]# sestatus
SELinux status: disabled
Build Redis the same way on all three machines (installation steps omitted):
yum -y install wget gcc gcc-c++ make tar openssl openssl-devel cmake
tar xf redis-4.0.11.tar.gz -C /usr/src/
cd /usr/src/redis-4.0.11/
make
make MALLOC=jemalloc
make PREFIX=/usr/local/redis install
cd /usr/local/redis/
ls
mkdir -p /usr/local/redis/conf
cp /usr/src/redis-4.0.11/redis.conf /usr/local/redis/conf/
cp /usr/src/redis-4.0.11/sentinel.conf /usr/local/redis/conf/
ln -s /usr/local/redis/bin/* /usr/local/bin/
which redis-server
On all three machines, clean up the config file and apply simple base tuning:
cd /usr/local/redis
cp conf/redis.conf{,.bak}
egrep -v "^$|^#" conf/redis.conf.bak > conf/redis.conf
mkdir -p /data/redis/
#create the Redis data directory
Edit the config file
#change it to the following settings
[ conf]# cat -n redis.conf | sed -n '1p;3p;4p;7p;9p;11p;21p'
1 bind 0.0.0.0 #listen address
3 port 6379 #listen port
4 tcp-backlog 1024 #TCP backlog length
7 daemonize yes #run as a background daemon
9 pidfile /data/redis/redis.pid #pid file location
11 logfile "/data/redis/redis.log" #log file location
21 dir /data/redis/ #working directory
Apply basic system tuning
echo "* - nofile 10240" >> /etc/security/limits.conf
echo "net.core.somaxconn = 10240" >> /etc/sysctl.conf
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
Start redis-server on all three machines (redis-master was already started in the experiments above)
[ conf]# redis-server /usr/local/redis/conf/redis.conf
[ conf]# netstat -antup | grep redis
14.2 Setting up Redis master-slave replication
Redis replication requires no changes to the master's configuration;
you only need to point each redis-slave at the master's IP address.
Start redis-master first, then run the following on both redis-slaves:
redis-cli shutdown
echo "SLAVEOF 192.168.200.158 6379" >> /usr/local/redis/conf/redis.conf
> /data/redis/redis.log
redis-server /usr/local/redis/conf/redis.conf
netstat -antup | grep redis
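As an optional check from Python (a sketch; it assumes neither instance has a password yet at this point), the replication role and link status can be verified with INFO replication:

import redis

master = redis.Redis(host='192.168.200.158', port=6379)
slave = redis.Redis(host='192.168.200.181', port=6379)
m = master.info('replication')
s = slave.info('replication')
print('master role=%s connected_slaves=%s' % (m['role'], m['connected_slaves']))  # e.g. master 2
print('slave role=%s master_link_status=%s' % (s['role'], s['master_link_status']))  # e.g. slave up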
14.3 Replication log analysis (full resync)
#check the redis-slave replication log
[ conf]# cat /data/redis/redis.log
......omitted......
18342:S 11 Aug 13:56:54.897 # Server initialized #server initialized
18342:S 11 Aug 13:56:54.897 * DB loaded from disk: 0.000 seconds #data loaded from disk in 0 seconds
18342:S 11 Aug 13:56:54.897 * Ready to accept connections #ready to accept connections
18342:S 11 Aug 13:56:54.897 * Connecting to MASTER 192.168.200.158:6379 #connecting to master 192.168.200.158:6379
18342:S 11 Aug 13:56:54.897 * MASTER <-> SLAVE sync started #master-slave sync started
18342:S 11 Aug 13:56:54.897 * Non blocking connect for SYNC fired the event. #non-blocking connect for SYNC fired the event
18342:S 11 Aug 13:56:54.898 * Master replied to PING, replication can continue... #the master replied, replication can continue
18342:S 11 Aug 13:56:54.898 * Partial resynchronization not possible (no cached master) #partial resync not possible (this slave has no cached master)
18342:S 11 Aug 13:56:54.899 * Full resync from master: e3adc85bd644e66bd1ee17b49c25e5e0491084d5:0 #performing a full resync
18342:S 11 Aug 13:56:54.917 * MASTER <-> SLAVE sync: receiving 43606 bytes from master #receiving 43606 bytes from the master
18342:S 11 Aug 13:56:54.917 * MASTER <-> SLAVE sync: Flushing old data #flushing old data
18342:S 11 Aug 13:56:54.917 * MASTER <-> SLAVE sync: Loading DB in memory #loading the DB into memory
18342:S 11 Aug 13:56:54.918 * MASTER <-> SLAVE sync: Finished with success #sync finished successfully
#check the redis-master replication log
[ conf]# cat /data/redis/redis.log
......omitted......
4676:M 15 Apr 00:51:57.366 # Server initialized
4676:M 15 Apr 00:51:57.366 * Ready to accept connections
4676:M 15 Apr 00:54:37.000 * Slave 192.168.200.181:6379 asks for synchronization #slave 192.168.200.181:6379 asks for synchronization
4676:M 15 Apr 00:54:37.000 * Full resync requested by slave 192.168.200.181:6379 #slave 192.168.200.181:6379 requests a full resync
4676:M 15 Apr 00:54:37.000 * Starting BGSAVE for SYNC with target: disk #the master starts a BGSAVE for the sync, writing to disk
4676:M 15 Apr 00:54:37.000 * Background saving started by pid 4702 #the background RDB save runs as pid 4702
4702:C 15 Apr 00:54:37.002 * DB saved on disk #the RDB file has been saved to disk
4702:C 15 Apr 00:54:37.003 * RDB: 0 MB of memory used by copy-on-write #the RDB save used 0 MB of copy-on-write memory
4676:M 15 Apr 00:54:37.080 * Background saving terminated with success #background save finished successfully
4676:M 15 Apr 00:54:37.081 * Synchronization with slave 192.168.200.181:6379 succeeded #synchronization with slave 192.168.200.181:6379 succeeded
14.4 Replication log analysis (partial resync)
#clear the master log
[ conf]# > /data/redis/redis.log
#clear the slave log, then shut the slave down and start it again
[ conf]# > /data/redis/redis.log
[ conf]# redis-cli shutdown
[ conf]# redis-server /usr/local/redis/conf/redis.conf
[ conf]# netstat -antup | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 15365/redis-server
tcp 0 0 192.168.200.181:36703 192.168.200.158:6379 ESTABLISHED 15365/redis-server
#check the redis-slave log
[ conf]# cat /data/redis/redis.log
......omitted......
15365:S 15 Apr 00:38:31.768 # Server initialized #server initialized
15365:S 15 Apr 00:38:31.768 * DB loaded from disk: 0.000 seconds #old data loaded from disk in 0.000 seconds
15365:S 15 Apr 00:38:31.768 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer. #since this instance was a slave before, it uses its saved master parameters to synthesize a cached master, so it may only need a partial transfer from the new master
15365:S 15 Apr 00:38:31.768 * Ready to accept connections #ready to accept connections
15365:S 15 Apr 00:38:31.768 * Connecting to MASTER 192.168.200.158:6379 #connecting to the master
15365:S 15 Apr 00:38:31.769 * MASTER <-> SLAVE sync started #master-slave sync started
15365:S 15 Apr 00:38:31.770 * Non blocking connect for SYNC fired the event. #non-blocking connect for SYNC fired the event
15365:S 15 Apr 00:38:31.770 * Master replied to PING, replication can continue... #the master replied, replication can continue
15365:S 15 Apr 00:38:31.771 * Trying a partial resynchronization (request 06bae065010514a7adab4aa07d67c1bbb8ae64e2:1499). #trying a partial resync (requesting 06bae065010514a7adab4aa07d67c1bbb8ae64e2:1499)
15365:S 15 Apr 00:38:31.772 * Successful partial resynchronization with master. #partial resync with the master succeeded
15365:S 15 Apr 00:38:31.772 * MASTER <-> SLAVE sync: Master accepted a Partial Resynchronization.
#the master accepted the partial resynchronization request
#check the redis-master log
[ conf]# cat /data/redis/redis.log
4676:M 15 Apr 01:12:29.132 # Connection with slave 192.168.200.181:6379 lost. #the connection with slave 192.168.200.181:6379 was lost
4676:M 15 Apr 01:12:29.150 * Slave 192.168.200.181:6379 asks for synchronization #slave 192.168.200.181:6379 asks for synchronization
4676:M 15 Apr 01:12:29.150 * Partial resynchronization request from 192.168.200.181:6379 accepted. Sending 0 bytes of backlog starting from offset 1499. #the partial resync request from 192.168.200.181:6379 is accepted; the master sends 0 bytes of backlog starting from offset 1499
14.5 Stopping replication
#clear the slave log and stop replication (this can only be done on the slave)
[ conf]# > /data/redis/redis.log
[ conf]# redis-cli slaveof no one
OK
#check the log again
[ conf]# cat /data/redis/redis.log
15365:M 15 Apr 00:47:49.587 # Setting secondary replication ID to 06bae065010514a7adab4aa07d67c1bbb8ae64e2, valid up to offset: 2269. New replication ID is 9dccfc15852e1a1d5f1f07410a624b2f5d97d97f #the old replication ID becomes the secondary ID, valid up to offset 2269; a new replication ID is generated
15365:M 15 Apr 00:47:49.587 # Connection with master lost. #connection with the master lost
15365:M 15 Apr 00:47:49.587 * Caching the disconnected master state. #caching the state of the disconnected master
15365:M 15 Apr 00:47:49.587 * Discarding previously cached master state. #discarding the previously cached master state
15365:M 15 Apr 00:47:49.588 * MASTER MODE enabled (user request from 'id=4 addr=127.0.0.1:52224 fd=8 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=slaveof') #master mode enabled
#clear the Redis log and resume replication on the slave
[ conf]# > /data/redis/redis.log
[ conf]# redis-cli slaveof 192.168.200.158 6379
OK
#check the slave log
[ conf]# cat /data/redis/redis.log
15365:S 15 Apr 00:51:25.111 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
15365:S 15 Apr 00:51:25.112 * SLAVE OF 192.168.200.158:6379 enabled (user request from 'id=5 addr=127.0.0.1:52226 fd=7 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=slaveof')
15365:S 15 Apr 00:51:26.118 * Connecting to MASTER 192.168.200.158:6379
15365:S 15 Apr 00:51:26.119 * MASTER <-> SLAVE sync started
15365:S 15 Apr 00:51:26.119 * Non blocking connect for SYNC fired the event.
15365:S 15 Apr 00:51:26.120 * Master replied to PING, replication can continue...
15365:S 15 Apr 00:51:26.121 * Trying a partial resynchronization (request 9dccfc15852e1a1d5f1f07410a624b2f5d97d97f:2269).
15365:S 15 Apr 00:51:26.122 * Full resync from master: 06bae065010514a7adab4aa07d67c1bbb8ae64e2:2562
15365:S 15 Apr 00:51:26.122 * Discarding previously cached master state.
15365:S 15 Apr 00:51:26.142 * MASTER <-> SLAVE sync: receiving 177 bytes from master
15365:S 15 Apr 00:51:26.142 * MASTER <-> SLAVE sync: Flushing old data
15365:S 15 Apr 00:51:26.142 * MASTER <-> SLAVE sync: Loading DB in memory
15365:S 15 Apr 00:51:26.142 * MASTER <-> SLAVE sync: Finished with success
14.6 Replication with authentication
(1) Set a connection password on redis-master without restarting it
[ conf]# redis-cli config get requirepass
1) "requirepass"
2) ""
[ conf]# redis-cli config set requirepass 'yunjisuan'
OK
[ conf]# redis-cli config get requirepass
(error) NOAUTH Authentication required.
[ conf]# redis-cli -a yunjisuan config get requirepass
Warning: Using a password with '-a' option on the command line interface may not be safe.
1) "requirepass"
2) "yunjisuan"
[ conf]# redis-cli -a yunjisuan config rewrite
Warning: Using a password with '-a' option on the command line interface may not be safe.
OK
#check the slave log
[ conf]# cat /data/redis/redis.log
15365:S 15 Apr 00:56:06.605 * (Non critical) Master does not understand REPLCONF capa: -NOAUTH Authentication required. #the master now requires authentication before replication can continue
15365:S 15 Apr 00:56:06.605 * Partial resynchronization not possible (no cached master) #partial resync not possible, no cached master
15365:S 15 Apr 00:56:06.606 # Unexpected reply to PSYNC from master: -NOAUTH Authentication required. #unexpected reply to PSYNC from the master: authentication required
15365:S 15 Apr 00:56:06.606 * Retrying with SYNC... #retrying with SYNC
15365:S 15 Apr 00:56:06.607 # MASTER aborted replication with an error: NOAUTH Authentication required. #the master aborted replication: authentication required
(2) Give the slave the replication password
#the slave needs the authentication password for replication
[ conf]# redis-cli config get masterauth
1) "masterauth"
2) ""
[ conf]# redis-cli config set masterauth "yunjisuan"
OK
[ conf]# redis-cli config get masterauth
1) "masterauth"
2) "yunjisuan"
[ conf]# redis-cli config rewrite
OK
[ conf]# tail -1 /usr/local/redis/conf/redis.conf
masterauth "yunjisuan"
#check the slave log
[ conf]# cat /data/redis/redis.log
15365:S 15 Apr 00:59:04.079 * Connecting to MASTER 192.168.200.158:6379
15365:S 15 Apr 00:59:04.080 * MASTER <-> SLAVE sync started
15365:S 15 Apr 00:59:04.080 * Non blocking connect for SYNC fired the event.
15365:S 15 Apr 00:59:04.081 * Master replied to PING, replication can continue...
15365:S 15 Apr 00:59:04.084 * Partial resynchronization not possible (no cached master)
15365:S 15 Apr 00:59:04.085 * Full resync from master: 06bae065010514a7adab4aa07d67c1bbb8ae64e2:2786
15365:S 15 Apr 00:59:04.167 * MASTER <-> SLAVE sync: receiving 177 bytes from master
15365:S 15 Apr 00:59:04.167 * MASTER <-> SLAVE sync: Flushing old data
15365:S 15 Apr 00:59:04.167 * MASTER <-> SLAVE sync: Loading DB in memory
15365:S 15 Apr 00:59:04.167 * MASTER <-> SLAVE sync: Finished with success
15. Using Python to Work with a Single Redis Instance
15.1 Install the Python redis module
yum -y install epel-release
yum -y install python2-pip
pip install redis
15.2 Reading and writing Redis data with Python
[ conf]# python
Python 2.7.5 (default, Apr 11 2018, 07:36:10)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import redis
>>> r = redis.Redis(host='127.0.0.1',port=6379,password='yunjisuan',db=0)
>>> r.set('key_test','value_test')
True
>>> value = r.get('key_test')
>>> print (value)
value_test
>>> exit()
[ conf]# redis-cli -a yunjisuan get key_test